
Study on the Conversion of Enrollment Slots from Head Start to Early Head Start (HS2EHS Study)

OMB: 0970-0595


Alternative Supporting Statement for Information Collections Designed for

Research, Public Health Surveillance, and Program Evaluation Purposes



Study on the Conversion of Enrollment Slots from Head Start to Early Head Start (HS2EHS Study)


OMB Information Collection Request

0970 – New Collection





Supporting Statement

Part B



June 2022








Submitted By:

Office of Planning, Research, and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services


4th Floor, Mary E. Switzer Building

330 C Street, SW

Washington, D.C. 20201


Project Officer:

Jenessa Malin






Part B


B1. Objectives

Study Objectives

The Study on the Conversion of Enrollment Slots from Head Start to Early Head Start (HS2EHS Study) has three primary objectives:

  1. To understand how and why grant recipients convert enrollment slots from Head Start to Early Head Start;

  2. To gather information about planning for and implementation of high-quality Early Head Start services following conversion; and

  3. To learn about barriers and facilitators to the provision of high-quality Early Head Start services that meet community needs.


To achieve these objectives, we will conduct data collection with up to six case studies. Each case study will focus on a Head Start grant recipient that has recently converted enrollment slots from Head Start to Early Head Start.


Generalizability of Results

The study is intended to present an internally valid description of up to six Head Start grant recipients that have converted enrollment slots to Early Head Start. It is not designed to yield statistical generalization to other grant recipients or populations. The study is intended to provide the Office of Head Start, ACF-funded training and technical assistance (T/TA) providers, and Head Start grant recipients with information about the conversion process, as well as about facilitators of and barriers to successfully converting enrollment slots and providing high-quality Early Head Start services to infants, toddlers, and pregnant women.


Appropriateness of Study Design and Methods for Planned Uses

We will use a qualitative multi-case study design with purposive selection of grant recipients to achieve the study’s objectives. A purposive sample will ensure we include cases relevant to (and respondents with perspectives on) the range of approaches currently used to convert enrollment slots from Head Start to Early Head Start. A purposive sample also allows us to select grant recipients that vary along certain dimensions that existing research suggests may affect the conversion process. Because this project aims to learn about approaches, processes, challenges, and facilitators to converting enrollment slots, qualitative methods will promote in-depth examination of constructs of interest, using flexible instruments that can respond to local conditions.


As noted in Supporting Statement A, this information is not intended to be used as the principal basis for public policy decisions and is not expected to meet the threshold of influential or highly influential scientific information. The data collected are not meant to be representative. This study does not include an impact evaluation and will not be used to assess participants’ outcomes. All publicly available products associated with this study will clearly describe key limitations.


B2. Methods and Design

Target Population

Across up to six grant recipients, the target population may include Head Start directors and staff, Head Start policy council members, Head Start Training and Technical Assistance (T/TA) staff, and state and local early care and education leaders and community partners. The research team will use non-probability, purposive sampling to identify potential respondents who can speak to the study’s key questions. Because participants will be purposively selected, they will not be representative of the population of Head Start staff, Head Start T/TA staff, state or local early care and education leaders, or community partners. Instead, we aim to obtain variation in the perspectives and experiences of the respondents we interview from each grant recipient to understand that recipient’s conversion process.


Sampling and Site Selection

Our site selection approach will yield up to six Head Start grant recipients identified as having a promising approach to conversion and as having converted enrollment slots recently enough that interviewees can recall details about the process of applying for and converting enrollment slots. We will prioritize grant recipients that had a request for conversion approved 12-18 months prior to data collection and that still employ a majority of the staff who were present for the conversion. We will also prioritize grant recipients that had their application reviewed by the Office of Head Start’s Central Office.1 We will aim for variation in the agency types of the Head Start grant recipients selected for participation.2 Our team will use data from the Head Start Enterprise System (HSES) to identify grant recipients that had a request for conversion approved in the past 12-18 months and had their application reviewed by the Central Office. We will also request, via email, nominations of grant recipients with promising approaches to conversion from federal staff. A promising approach could be one that was particularly effective or innovative; the determination will be largely left to federal staff.


Among eligible recipients, we will seek variation along the following dimensions: history of providing EHS, prior conversion experience, geographic region, urbanicity, population served, agency type, proportion of slots converted to home- versus center-based EHS, length of time from conversion request to approval, policy context (e.g., presence of state-funded pre-K), and size of the recipient’s grant. This information will be drawn from the Program Information Report (PIR) and HSES. We will be mindful of these differences throughout data collection, analysis, and reporting.


We will identify potential grant recipients to invite to participate in the multi-case study (six priority grant recipients as well as backups), and our team will conduct a preparatory interview with each Head Start grant recipient’s Director and Coordinator (Instrument 1) to review study goals and collect additional information to assess eligibility. We will send a follow-up preparatory email (Instrument 2) to the Director after their call requesting the names and contact information of grant recipient staff, technical assistance providers, and other individuals who participated in the conversion and may be appropriate for an interview. We will also request the names and contact information of state or local early care and education leaders who can provide contextual information to better understand the recipient’s conversion. Based on analysis of information collected from the up to 12 grant recipients (up to 18 total Directors and Coordinators), our team will make a final determination about which grant recipients to invite to participate in the multi-case study. For each participating grant recipient, we will use the list of names provided by the Director and/or Coordinator (Instrument 2) to identify potential interviewees.


B3. Design of Data Collection Instruments

Development of Data Collection Instruments

We designed the study protocols to efficiently address the research questions and capture constructs of interest, both of which were informed by a review of existing published documents, existing instruments, key informant interviews, and expert and stakeholder input. Interviews are structured to follow, in chronological order, the process of conversion, from grant recipients’ motivation to their implementation of Early Head Start services.


The instruments each address some, if not all, of the study objectives described in B1.


Table 1. Alignment of Study Objectives and Instruments

Study Objective

Instruments Addressing Study Objective

  1. To understand how and why grant recipients convert enrollment slots from Head Start to Early Head Start

  • Instrument 3 – Full Interview for Head Start Staff and T/TA Staff Protocol (relevant sections: Admin/Management Staff; Staff Who Oversee Fiscal Operations; Staff Working Directly with Families)

  • Instrument 4 – Full Interview for Non-Head Start Staff Protocol

  2. To gather information about planning for and implementation of high-quality Early Head Start services following conversion

  • Instrument 1 – Preparatory Interview with Head Start Directors and Coordinators (relevant section: Head Start Directors)

  • Instrument 2 – Preparatory Email Request

  • Instrument 3 – Full Interview for Head Start Staff and T/TA Staff Protocol (relevant sections: Admin/Management Staff; Staff Who Oversee Fiscal Operations; Staff Working Directly with Families)

  • Instrument 4 – Full Interview for Non-Head Start Staff Protocol

  3. To learn about barriers and facilitators to the provision of high-quality Early Head Start services that meet community needs

  • Instrument 3 – Full Interview for Head Start Staff and T/TA Staff Protocol (relevant sections: Admin/Management Staff; Staff Who Oversee Fiscal Operations; Staff Working Directly with Families)

  • Instrument 4 – Full Interview for Non-Head Start Staff Protocol



In drafting these instruments, we considered whether we could gather information from other sources to minimize the burden placed on participants. For instance, for each recipient participating in the multi-case study, we will review all information about the conversion of interest from HSES and PIR data. We will seek to confirm some of what we find but will collect detailed background information from HSES and PIR to help us avoid seeking redundant information during the interviews. We have also carefully considered which interviewee(s) are best suited to address which question(s) to avoid asking multiple people for the same information. As such, respondents will respond only to the subset of modules or questions in the master protocols that align with their own areas of knowledge/expertise.


We sought feedback on drafts of the instruments from federal staff and experts consulting on the project. Each round of input yielded feedback that helped us streamline and refine the protocols.


B4. Collection of Data and Quality Control

ACF has contracted with the Urban Institute and MEF Associates to collect all necessary data. Table 2 in Supporting Statement A summarizes the data collection that will be conducted for the study.

We will assign a two-person case liaison team to each grant recipient and aim to have that team perform all screening, recruiting, and interviewing for that particular grant recipient. Every team member will participate in a training prior to data collection. Training will focus on strategies to ensure we collect high-quality data in the least burdensome way for respondents, including: (1) how to prepare for the interview (for example, narrowing to the selected modules and identifying where documents we have already gathered provide the needed information); (2) how to efficiently move through the interview protocol while collecting high-quality information (for example, how to make decisions about which probes are critical, based on answers received to that point in the interview); and (3) how to synthesize notes after each interview to confirm completeness of the data.

Recruitment

We will engage federal staff to help identify potential grant recipients for participation. Once we identify eligible grant recipients (described in B2) and OMB has approved the information collection request, we will begin recruiting grant recipients. The project team will follow up directly with grant recipient directors to collect information that will help us to confirm eligibility (described in B2). The project team will then work with eligible grant recipients to confirm participation and begin scheduling interviews with the help of a Coordinator. Below we describe in more detail how we will select the Coordinator.


Outreach to Grant Recipients

The project team will send an email (Appendix B) introducing the study and asking for a time to schedule and conduct preparatory interviews with the Director of each of the grant recipients. The project team will attach a document that includes responses to anticipated frequently asked questions (FAQs; Appendix C). During the preparatory interviews with Directors, we will use the OMB-approved protocol (Instrument 1) to review study goals and expectations for participation, identify the process for confirming eligibility (e.g., policy council approval), and answer any questions. We will then send a follow-up request by email to obtain contact information about which staff were involved in the conversion process (Instrument 2). This information will help the team to identify potential participants to interview during data collection.


Based on the information provided during the preparatory interview (Instrument 1) and follow-up email (Instrument 2), the study team will make a final eligibility determination (for more information, see B2).


If the grant recipient is eligible and willing to participate, we will ask Program Directors to identify a Coordinator (Appendix D), who will be responsible for scheduling and coordinating interviews and handling day-of logistics. We will then email Coordinators (Appendix E) and again attach a document with responses to anticipated frequently asked questions (FAQs). During the preparatory interview with Coordinators (Instrument 1), we will review study goals and expectations for the Coordinator role, confirm willingness to serve in this role, and begin planning for data collection.


Mode of Data Collection

We will conduct data collection entirely virtually, over the phone and using a video conferencing platform such as Zoom.


Quality and Consistency in Data Collection Activities

The research team will leverage information collected as part of the preparatory interview and/or through public sources related to the roles and responsibilities of particular respondents to identify the appropriate subset of items to ask in each interview (Instruments 1, 3, and 4). For all interviews (Instruments 1, 3, and 4) one member of the case liaison team will conduct the interview and one will take notes. With the permission of respondents, we will also record the interviews. The grant recipient liaison team will confer after each interview (using interview recordings as needed) to ensure completeness of data. Throughout the data collection period, the study research team will conduct weekly meetings to share information and strategies, help troubleshoot challenges, and ensure that all data are collected as uniformly as possible.


B5. Response Rates and Potential Nonresponse Bias

Response Rates

The interviews are not designed to produce statistically generalizable findings and participation is wholly at the respondent’s discretion. Response rates will not be calculated or reported.


Nonresponse

Based on previous experience with similar methods and respondents, we do not expect substantial nonresponse on the interviews. Because participants will not be randomly sampled and findings are not intended to be representative, we will not calculate nonresponse bias. As part of study reporting, we will present information about characteristics of the participating agencies and grant recipients.


B6. Production of Estimates and Projections

This study seeks to present internally valid descriptions of the conversion process for grant recipients that recently converted enrollment slots from Head Start to Early Head Start, not to support statistical generalization to all grant recipients. We do not plan to make policy decisions based on data that are not representative, nor to publish biased population estimates. Information reported will clearly state that results are not meant to be generalizable.


B7. Data Handling and Analysis

Data Handling

To ensure that interview notes are complete and consistently prepared, we will use a standard note template for each data collection protocol. The notetaker may use audio recordings (with the permission of respondents) to ensure that notes are complete. Data collectors will review the interview notes, and a senior member of the team will review a subset of the notes to ensure that data are complete and error-free.

Data Analysis

Qualitative data from both the interviews and the documents gathered will provide comprehensive, rich information for analyzing factors associated with each research question. We will build a structure for organizing and coding the data across all data collection sources to facilitate efficient analysis. For each question, we will examine each grant recipient independently (within-case analysis) as well as in comparison with other grant recipients in the study (cross-case analysis).


Coding field notes and documents. To facilitate secure data management and efficient coding, we will use a qualitative coding and analysis software program. All data will be loaded into the program, including interview recordings, interview notes, and documents provided by grant recipients. If transcriptions of the interviews are created, these will be loaded into the program. Members of the research team will conduct two rounds of coding using a codebook that a senior researcher will develop. The codebook will include codes linked to the multi-case study research questions and other domains of interest identified by the research team, experts, and stakeholders in preceding project activities. We will update the codebook iteratively as analysis proceeds. A senior researcher will train coders on the codebook and coding procedures.


Coders will begin by conducting a first round of coding based on the codebook. We will permit some amount of open coding to allow emergent themes to surface, but, through group discussion, we will work to link these emergent themes back to one of the primary research questions. During this first round of coding, the senior researcher will meet with the coding team to discuss any challenges that are arising, clarify codes, and update the codebook. Following the first round of coding, the team will discuss their preliminary interpretation of findings and identify specific themes to explore in greater depth through a second round of coding.


When coding is complete, we will assign each research question to a member of the team; that team member will be responsible for examining relevant coded excerpts. These team members will first examine excerpts by grant recipient, then compare themes across grant recipients participating in the multi-case study, and finally will compare what is emerging across the different categories of grant recipients that we prioritized in sampling (e.g., urbanicity and size of conversion). The team will meet several times to discuss findings and themes that lead to the emerging “story” of each grant recipient, and place those findings in the larger, cross-site narrative about conversion of enrollment slots. The team will also consider the ways in which urbanicity and conversion size are or are not related to the conversion process for grant recipients in the multi-case study.


Data Use

When the data are coded, the team will be able to retrieve and sort data linked to specific research questions and constructs. We will synthesize data pertaining to a specific research question across respondents or for specific types of respondents. We will then share the findings in public-facing products targeted to federal staff, Head Start grant recipients, ACF-funded Training and Technical Assistance providers, and researchers. Possible products from the multi-case study include:

  • A study report

  • A brief or series of research briefs

  • Presentations or briefings



B8. Contact Persons

Jenessa Malin

Senior Social Science Research Analyst

U.S. Department of Health and Human Services

Administration for Children and Families

Office of Planning, Research & Evaluation

Switzer Building

Washington, DC 20201

(202) 401-5560

[email protected]


Diane Schilder

Urban Institute

(202) 261-5544
[email protected]


Kate Stepleton

MEF Associates

(206) 653-0222

[email protected]


Attachments

Instruments:

Instrument 1: Preparatory Interview with Head Start Directors and Coordinators

Instrument 2: Preparatory Email Request
Instrument 3: Full Interview for Head Start Staff and T/TA Staff Protocol
Instrument 4: Full Interview for Non-Head Start Staff Protocol


Appendices:

Appendix A. Study Research Questions

Appendix B. Recruitment Letter for Head Start Directors (Project Team will send)

Appendix C. Responses to FAQs

Appendix D. Eligibility and Logistics Email to Head Start Directors

Appendix E. Recruitment Letter for Coordinator


1 All conversion requests are reviewed at the Regional Office level. The Central Office also reviews requests when a grant recipient converts more than 10% of funded enrollment slots, more than one classroom, or more than one home-based caseload (10-12 slots) in a program of fewer than 10 caseloads; when the grant recipient is planning to offer EHS services for the first time; or when the grant recipient is currently on an underenrollment plan.

2 Per the 2019 Program Information Report (OMB #0970-0427), four agency types represent 92% of recipients. These agency types are: Community Action Agency, School System, Government Agencies, and Private/Public Nonprofit.


