
OMB: 0970-0615


Alternative Supporting Statement for Information Collections Designed for Research, Public Health Surveillance, and Program Evaluation Purposes


Building and Sustaining the Child Care and Early Education Workforce (BASE)



OMB Information Collection Request

New Collection





Supporting Statement

Part B






JANUARY 2024








Submitted By:

Office of Planning, Research, and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services


4th Floor, Mary E. Switzer Building

330 C Street, SW

Washington, D.C. 20201





Project Officer: Ann Rivera



Part B


B1. Objectives

The BASE project aims to understand factors that affect child care and early education (CCEE) workforce dynamics, including employment recruitment, retention, and advancement, as well as to build the evidence base about strategies that may help to recruit, retain, and advance the CCEE workforce. This research project is sponsored by the Office of Planning, Research, and Evaluation (OPRE) through a contract with MDRC and its partners, MEF Associates, Chapin Hall, Erikson Institute, Butler Institute, and Decision Information Resources, Inc.



Study Objectives

The proposed information collection aims to fill gaps in existing knowledge by building the evidence base about the implementation, costs, and impacts of strategies aimed at improving the compensation and economic well-being of educators in center-based and home-based child care settings. The BASE project will do so by leveraging two pilot initiatives being implemented by the Colorado Department of Early Childhood (CDEC) that provide additional funding and supports to center-based and home-based child care settings to improve compensation for lead and assistant teachers and for home-based child care providers and assistants.


The specific objectives are broken out into four related studies:

  1. The impact study: To estimate the impacts of the center-based pilot initiative on teachers’ employment, economic, and psychological well-being outcomes in center-based child care settings;

  2. The descriptive study: To describe the experiences of home-based child care providers and assistants with the home-based pilot initiative;

  3. The implementation study: To describe the implementation of the two pilot initiatives, participant reach and engagement in the pilot initiatives, the system and infrastructure supports for implementation of the pilot initiatives, and the context in which the two pilot initiatives are being offered; and,

  4. The cost study: To describe the costs associated with delivering and receiving the two pilot initiatives from the perspectives of the implementing agencies and center-based and home-based child care settings.


The research team seeks to achieve these objectives by collecting data about CCEE settings, center directors, center lead and assistant teachers, and home-based providers and assistants who do and do not receive the pilot initiatives over time to address research questions about CCEE workforce supports, working conditions and worker experiences, and to measure hypothesized inputs, activities, outputs, and short- and longer-term outcomes in line with the underlying theory of change of the pilot initiatives (see Appendix A).



Generalizability of Results

The impact study is intended to produce internally valid estimates of the center-based pilot initiative’s causal impact among eligible center-based child care settings. This study is not intended to promote statistical generalization to other sites or service populations.


For the descriptive study of the home-based pilot initiative in home-based child care settings, the study is intended to present an internally valid description of the pilot initiative among eligible home-based child care settings, not to promote statistical generalization to other sites or service populations.


For the implementation and cost studies of the pilot initiatives in center-based and home-based child care settings, the studies are intended to present internally valid descriptions of the eligible center-based and home-based settings and the implementation of the intervention in chosen sites, not to promote statistical generalization to other sites or service populations.


Appropriateness of Study Design and Methods for Planned Uses

The impact study leverages the lottery process used by Colorado to randomly select which centers receive the additional funds and supports. This lottery process essentially serves as random assignment of centers to an intervention or control condition, which allows us to test the effectiveness of the pilot initiative on hypothesized outcomes for center-based child care settings and for the lead and assistant teachers who participate in the pilot initiative.


The descriptive study leverages the lottery process used by the state of Colorado to randomly select which home-based settings receive the additional funds and supports. The lottery process essentially serves as random assignment of home-based settings to an intervention or control condition. However, because the sample of eligible home-based child care settings is not sufficiently large to rigorously test the impacts of the pilot initiative, the study is descriptive in nature and will explore comparisons in the experiences and outcomes for home-based settings, providers, and assistants. These data are intended to generate associative relationships among these constructs. The data are not intended to be representative or to be used for evaluative purposes.


The implementation and cost studies of the pilot initiatives in center-based and home-based child care settings leverage the sample, data, and study designs of the impact and descriptive studies. Data aim to describe implementation and costs associated with the pilot initiatives. The data are not intended to be representative or to be used for evaluative purposes.


For all of the data discussed above, as noted in Supporting Statement A, this information is not intended to be used as the principal basis for public policy decisions and is not expected to meet the threshold of influential or highly influential scientific information. Findings will inform ACF efforts to promote and implement workforce development strategies that effectively support the CCEE workforce and will inform future research agendas concerning the recruitment, development and sustainability of the CCEE workforce. Limitations will be clearly stated in all written products about the study.


B2. Methods and Design

Target Population

The target population for the impact and descriptive studies is center-based and home-based child care settings in Colorado that are eligible for the pilot initiatives, along with the center directors, lead and assistant teachers, and home-based child care providers and assistants who work in those settings. The center-based and home-based child care settings that are eligible for the pilot initiatives have a Colorado Shines rating (quality rating and improvement system [QRIS] rating) of 3, 4, or 5, and in these settings about 40% or more of the children served receive state child care subsidies (Colorado Child Care Assistance Program [CCCAP]).

We will collect survey instruments from the center directors and lead and assistant teachers and home-based child care providers and assistants in up to 75 center-based child care settings and 50 home-based child care settings. This is expected to include instruments collected from 75 center directors, 1,000 lead and assistant teachers, 50 home-based child care providers, and 45 home-based child care assistants. These instruments will be used to capture setting-level and individual-level information. For analyses about setting-level characteristics, practices, and operations, the unit of analysis will be the center-based or home-based child care setting. For analysis about characteristics, experiences, and outcomes for center directors, lead and assistant teachers, and home-based child care providers and assistants, the unit of analysis will be the individual level.

The target population for the implementation study includes the same center-based and home-based child care settings, as well as center directors, lead and assistant teachers, and home-based child care providers and assistants as the impact and descriptive studies. The implementation study also includes a smaller number of center- and home-based settings that were eligible for the pilot initiatives but did not complete an application, as well as the staff in the implementing agencies for the pilot initiatives. We will collect one-on-one semi-structured interview instruments from up to 15 center directors, 25 lead and assistant teachers, and 25 home-based child care providers. We also anticipate conducting one-on-one interviews with up to 5 staff at the Colorado Department of Early Childhood and its subcontractors that are implementing the pilot initiatives. The unit of analysis for the one-on-one interviews captured from center-based and home-based child care staff and the staff in the implementing agencies of the pilot initiatives will be the individual level.

The target population for the cost study includes the same center-based child care settings as the impact and descriptive studies. We will collect the cost workbooks from up to 16 center-based child care directors or administrators. The unit of analysis for the cost workbooks will be the setting level.

Sampling and Site Selection

The potential sites for the impact, descriptive, implementation, and cost studies will include all center-based child care settings and home-based child care settings that apply to CDEC’s pilot initiatives. The sampling frame will consist of the settings that apply and the roster of directors and lead and assistant teachers or the roster of home-based providers and assistants employed at each site (i.e., when the site applies for the pilot initiative and over the follow-up period, regardless of whether they are still employed by the site when the research team fields the data collection). As such, the respondent sample is expected to be representative of the directors/providers and teaching/caregiving staff in child care settings that participate in the pilot initiatives.


In addition, for the implementation study the sampling frame also will include the center-based settings that were eligible for the pilot initiative but did not end up submitting a full application.


Table B2.1. Summary of Minimum Detectable Effect Sizes for Experimental and Non-experimental Analyses for the Impact Study, Descriptive Study, and Implementation Study

Study Design | Sample | MDES: Administrative Data | MDES: Survey Data (assuming 80% response rate)
Impact Study: Center-based settings randomly assigned to intervention or control condition | 75 centers; 1,000 lead and assistant teachers | 0.238 | 0.255
Descriptive Study: Home-based settings randomly assigned to intervention or control condition | 50 home-based settings; 95 providers and assistants | 0.571 | 0.633
Implementation Study: Center-based settings assigned to the intervention condition only | 25 centers; 325 lead and assistant teachers | 0.429 | 0.460
Implementation Study: Home-based settings assigned to the intervention condition only | 25 home-based settings; 48 providers and assistants | 0.827 | 0.917

Notes: Assumes 30% of the center-based sites are randomly assigned to the intervention condition; 75 center-based child care settings with an average of 13 teachers per setting, for a sample of about 1,000 individuals; 50% of the home-based sites randomly assigned to the intervention condition; 50 home-based child care settings, about 90 percent of which have another assistant on average; intraclass correlation = .05; between-setting R² = .10; within-setting R² = .10. All designs assume measurement of multi-level factors with administrative data sources.



As shown in Table B2.1, for the impact study, a sample of about 75 center-based settings and 1,000 lead and assistant teachers will allow for detecting a minimum detectable effect size (MDES) of .238 on continuous outcomes. For dichotomous outcomes, such as retention in the job, assuming a control group average of 60 percent, the minimum detectable effect (MDE) is 12 percentage points. For the center-based survey sample (assuming an 80 percent response rate), the corresponding MDES and MDE are .255 and 13 percentage points, respectively, assuming a 60 percent control group average.


For the descriptive study, as shown in Table B2.1, a sample of about 50 home-based settings and 95 providers and assistants will allow for detecting an MDES of .571 on continuous outcomes. For dichotomous outcomes, such as retention in the job, assuming a control group average of 70 percent, the MDE is 26 percentage points. For the home-based survey sample (assuming an 80 percent response rate), the corresponding MDES and MDE are .633 and 29 percentage points, respectively, assuming a control group average of 70 percent.
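
To make the table's assumptions concrete, the following is a minimal sketch of how MDES values like those in Table B2.1 can be approximated for a two-level design in which settings are randomly assigned. The function name, the 2.8 multiplier (roughly 80 percent power with a two-tailed alpha of .05, ignoring degrees-of-freedom corrections), and the conversion to percentage points are illustrative assumptions, not the study team's actual power calculations.

    import math

    def mdes_two_level(n_settings, staff_per_setting, p_treat, icc, r2_between, r2_within, multiplier=2.8):
        """Approximate MDES for a design that randomly assigns settings (clusters).

        Inputs mirror the Table B2.1 notes: intraclass correlation (icc),
        between- and within-setting R-squared, the share of settings assigned
        to the intervention condition (p_treat), and the sample sizes.
        """
        var_between = icc * (1 - r2_between) / (p_treat * (1 - p_treat) * n_settings)
        var_within = ((1 - icc) * (1 - r2_within)
                      / (p_treat * (1 - p_treat) * n_settings * staff_per_setting))
        return multiplier * math.sqrt(var_between + var_within)

    # Impact study row of Table B2.1 (administrative data): 75 centers,
    # ~13 teachers per center, 30% of centers assigned to the intervention.
    mdes = mdes_two_level(75, 1000 / 75, 0.30, icc=0.05, r2_between=0.10, r2_within=0.10)

    # Convert to a minimum detectable effect in percentage points for a
    # dichotomous outcome with a 60 percent control group mean.
    mde_pp = mdes * math.sqrt(0.60 * 0.40) * 100
    print(round(mdes, 3), round(mde_pp, 1))  # roughly 0.23 and 11-12 points, close to the table's 0.238 and 12

Under these simplified assumptions the calculation approximately reproduces the impact study values reported above; the remaining difference reflects design details (such as degrees-of-freedom adjustments) not captured in this sketch.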


Findings from other studies of earnings supplements suggest that these MDEs are within the range of potential effects. For example, an experimental evaluation of an earnings supplement in Virginia found that the payment reduced turnover by 14 percentage points for lead teachers in centers and by over 20 percentage points for assistant teachers.1 That program offered teachers $1,500 for staying with a given employer. Although it was tied to retention specifically, it also represented a smaller increase in earnings than that offered by Colorado's center-based pilot initiative. A pre-post study of a wage increase provided to teachers in Denver child care centers found differences in two-year retention of 16 to 20 percentage points.2 Although based on a fairly small sample, the findings suggest that wage increases ranging from about $1.50 to $3.50 per hour can have sizable effects on teacher retention. This suggests that the impact study will be appropriately powered to detect impacts of the center-based pilot initiative that are of interest. The descriptive study will be less likely to detect the magnitude of impacts that are of interest; therefore, the analysis is considered exploratory and aims to generate associations among descriptive study constructs.


The implementation study leverages survey and administrative data, as well as qualitative data, to describe the implementation of the pilot initiatives and the experiences of the CCEE workforce with them. As shown in Table B2.1, for the implementation study leveraging the survey and administrative data in center-based settings, the sample will focus on about 25 centers and 325 lead and assistant teachers assigned to the intervention condition. This will allow for detecting non-experimental comparisons of implementation outcomes measured with administrative data of about .429 standard deviations on continuous outcomes, or about 20 percentage points on dichotomous outcomes (assuming a comparison group average of 70 percent), in center-based settings. For the center-based survey sample (assuming an 80 percent response rate), the corresponding comparison effect sizes are .460 and 21 percentage points. The analysis of the administrative and survey data for the implementation study in center-based settings is considered exploratory and aims to generate associations among implementation study constructs; nevertheless, the implementation study in center-based settings appears positioned to detect comparisons of the magnitude of interest.


In addition, for the implementation study leveraging the survey and administrative data in home-based settings, as shown in Table B2.1, the sample will focus on about 25 home-based settings and 48 providers and assistants assigned to the intervention condition. This will allow for detecting non-experimental comparisons of implementation outcomes measured with administrative data of about .827 standard deviations on continuous outcomes, or about 38 percentage points on dichotomous outcomes (assuming a comparison group average of 70 percent), in home-based settings. For the home-based survey sample (assuming an 80 percent response rate), the corresponding comparison effect sizes are .917 and 42 percentage points. This suggests that the implementation study in home-based settings assigned to the intervention condition will be less likely to detect comparisons of the magnitude of interest. The analysis of the administrative and survey data for the implementation study in home-based settings is considered exploratory and aims to generate associations among implementation study constructs.


For the qualitative data leveraged in the implementation study, sites will include center-based and home-based child care settings that did not complete the applications for the pilot initiatives, as well as those that participated in the lottery processes and were assigned to the intervention conditions. For center-based settings, we will collect qualitative interview data from up to 15 center directors, selected either from centers in the intervention condition or from centers that were eligible but did not start or complete the application for the pilot initiative, and from up to 25 lead and assistant teachers in centers assigned to the intervention condition who opted in or out of receiving the pilot initiative. For home-based settings, we will collect qualitative interview data from up to 25 home-based child care providers. The research team will utilize a non-probability, purposive respondent recruitment approach to identify potential respondents in center-based and home-based settings who can provide information on the implementation study's key constructs as captured by the qualitative interview data. Because participants will be purposefully selected, they will not be representative of the eligible population of center-based and home-based child care settings and individuals working in those settings. Instead, we aim to obtain variation in the experiences of respondents to understand how the pilot initiatives' activities are experienced.


For the implementation study, the research team will also collect qualitative interview data from up to 5 staff in the implementing agencies who were involved in the design, start-up, and implementation of the two pilot initiatives. The research team will utilize a non-probability, purposive respondent recruitment approach to identify potential respondents who can provide information on the implementation study's key constructs. Because participants will be purposefully selected, they will not be representative of the staff at the implementing agencies. Instead, we aim to capture perspectives reflecting the different functions that implementing agency staff fulfilled, in order to understand how the pilot initiatives' activities were designed, launched, and implemented.


For the cost study, the sites will include center-based child care settings that participate in the lottery processes and are assigned to the intervention or control conditions. For center-based settings, we will collect cost workbooks from up to 16 center directors/administrators. The cost study will also leverage data from the follow-up surveys, implementation qualitative interviews, and administrative data for this subset of centers. A non-probability, purposive recruitment approach will be used to identify center directors and administrators (as well as home-based child care providers and implementing agency staff) who can provide information on the cost study’s key constructs. Because participants will be purposefully selected, they will not be representative of the population of eligible center-based settings and individuals working in those sites.


B3. Design of Data Collection Instruments

Development of Data Collection Instruments

Each set of instruments aims to collect unique, but complementary, information about the context and characteristics of center- and home-based child care; the experiences, economic and psychological well-being, and activities of educators (center directors, home-based providers/assistants, lead teachers, assistant teachers); educators' experiences with the pilot initiatives; and the implementation of and costs associated with the pilot initiatives. Because limited existing data can inform these constructs of interest in CCEE programming, we plan to collect data from multiple sources to enhance our ability to measure these constructs. See Table B3.1 for information on which project objectives are addressed by each data collection instrument.

All survey-based data collection instruments draw from existing scales and measures (e.g., Maslach Burnout Inventory, Supportive Environmental Quality Underlying Adult Learning [SEQUAL]) whenever possible to capture key outcomes of interest and the range of potential multilevel factors that may moderate the effects of the pilot initiatives, and to maximize the potential that this study's findings can be compared to other studies examining workforce support strategies and collectively contribute to the evidence on the effectiveness of such strategies. When no existing item or scale was available for the population of interest, the team crafted an item or set of items, with the aim of minimizing the number of questions asked of any one respondent type. To minimize measurement error, multiple-item scales were chosen when available.


The semi-structured interview protocols were developed by reviewing the study’s research questions, along with the theory of change for the pilot initiatives (see Appendix A: Colorado Pilot Initiatives’ Theory of Change for Center-based and Home-based Child Care and Early Education settings), to determine what constructs the protocols should capture to best answer the research questions. The research team then tailored interview protocols with phrasing appropriate for different respondent types (e.g., directors who did and did not fully apply for the center-based pilot initiative; teachers who did and did not opt in to the pilot initiatives) and organized the items in a logical way so the interview could flow in a conversational manner.


The cost study will utilize measures from a center-based setting costs workbook. This workbook has been informed by the Implementation and Cost of High Quality (ICHQ) Early Care and Education Project (OMB # 0970-0499). The costs workbook gathers detailed information about the costs and time spent on activities related to educator vacancies, recruitment and hiring, and training new educators. The costs workbook also includes a section to collect detailed information about salaries and fringe benefits for staff who work on these activities. This information will allow us to monetize the time spent on activities.



Table B3.1: Study Objectives Addressed by Each Data Collection Instrument 

Study Objective 

Instruments 

Impact Study: Provide causal evidence of the effects of the center-based pilot initiative on employment, economic, and psychological well-being outcomes for lead and assistant teachers in center-based settings

  • Instrument 1: Follow-up Center Director Survey

  • Instrument 2: Follow-up Lead and Assistant Teacher Survey

Descriptive Study: Describe the experiences and employment, economic, and psychological well-being outcomes of home-based providers and assistants with the home-based pilot initiative

  • Instrument 3: Follow-up Home-Based Provider and Assistant Survey

Implementation Study: Describe the implementation, infrastructure, and context under which the pilot initiatives targeting center-based and home-based child care settings are offered and how they are experienced by participants

  • Instrument 4: One-on-One Center Director Interview

  • Instrument 5: One-on-One Lead and Assistant Teacher Interview

  • Instrument 6: One-on-One Home-Based Provider Interview

  • Instrument 7: One-on-One Key Informant Interview

Cost Study: Elucidate the resources required to implement the center-based pilot initiative, as well as the potential avoided costs that may be associated with the employment outcomes for center-based lead and assistant teachers

  • Instrument 8: Center-based Setting Costs Workbook

 


B4. Collection of Data and Quality Control

The data for this project are being collected by MDRC, MEF Associates, and Decision Information Resources, Inc. (DIR), depending on the instrument. Data collection will begin upon OMB approval and is expected to take place over an 18-month period. The team expects to begin data collection by having CDEC send study announcements via email to all potential respondents for each setting in order to reintroduce the pilot initiatives and describe upcoming data collection plans. Following that, the research team will send a study announcement by email (and/or mail if appropriate) to all potential respondents to begin communications and to confirm the correct respondent for the costs workbook. Subsequent communications will follow what is described below for each data collection activity. See Appendix B for example recruitment materials.

Follow-up Survey Collection Procedures

This section focuses on the data collection procedures for Instrument 1 (Follow-up Center Director Survey), Instrument 2 (Follow-up Lead and Assistant Teacher Survey), and Instrument 3 (Follow-up Home-based Provider and Assistant Survey). MDRC will lead collection of the follow-up center director survey and home-based provider and assistant survey with a web-based option, followed by telephone field follow-up, and then in-person field follow-up, if necessary. DIR will lead collection of the follow-up lead and assistant teacher surveys beginning with a web-based option, followed by Computer-Assisted Telephone Interview (CATI) dialing, and then in-person field follow-up, if necessary. These three protocols are described further below.

Survey Web-based Collection Protocols. First, CDEC will send an introductory email to all sample members re-introducing the pilot initiatives, explaining the study, and emphasizing its importance. Then, the survey team will send study participants an initial invitation (via email or USPS mail) to participate in the self-administered web version of the survey. The initial invitation will include information about the survey, the respondent's rights as a participant, contact information for a study-specific toll-free number, and a web link (and password if needed) for accessing the online version of the survey. For lead and assistant teachers, the invitation, as well as any subsequent hardcopy outreach, will include a QR code to make it easier for teachers to participate via the web. To further improve ease of access, the web survey will be optimized for participation via cell phone.

In addition to the components discussed above, the survey invitations for center directors, lead and assistant teachers, and home-based providers and assistants will also inform study participants that if they complete the survey by the end of a specified two-week window, they will receive an "early-bird" token of appreciation of $10. This is in addition to the $40 honorarium and one professional development credit that we propose to offer all respondents. The survey team will send a reminder postcard or email approximately seven days before the end of the early-bird period, reminding participants that the window to receive this additional token of appreciation is ending. The self-administered web option will remain available to all respondents for the duration of the data collection period.

Survey Telephone Interview Collection Protocols. Participants who do not complete the survey during the two-week early-bird period will become eligible for telephone field follow-up to encourage survey completion in the web-based format. In addition, lead and assistant teacher participants will become eligible for outbound CATI dialing and will be given the option to complete the survey via web or CATI, depending on the participant's preference. During this phase of data collection, participants will receive telephone calls from trained interviewers. Calls to participants will be distributed across times of day and days of the week in an effort to find a time most convenient for the participants. Lead and assistant teacher participants will also be able to call in to complete the survey via CATI at their convenience. All interviewers will attend a 2- to 4-day training covering details about the study and associated procedures. The training agenda will include:

  • An overview of the overall study

  • Proper administration of the survey

  • Respondent confidentiality and quality assurance

  • Frequently asked questions and answers

  • Practice sessions

  • Administrative procedures, responsibilities, and compensation

  • Review of the certification process and criteria


A manual including information from the training will be provided to attendees. Only interviewers who meet the certification requirements will be allowed to conduct interviews.

Survey Field Interview Collection Protocols. Site-based field interviewers, who will work to locate sample members and either interview them or facilitate web participation, will be assigned to all study participants who have not completed the survey after 10 unsuccessful outbound call attempts, within 6 weeks after survey launch. Each interviewer will be assigned a geographically clustered group of cases and, beginning with existing contact information, seek to find the sampled respondent. Once the sample member is contacted, the field interviewer will arrange for him or her to complete the survey in person, on the web, or via CATI if that is the participant’s preference. All field interviewers will attend a 2- to 4-day training that includes the same topics described above for the telephone interviewer training. In addition, field interviewer training will include modules on community-based interviewing and locating respondents.

In addition, center directors, lead and assistant teachers, and home-based providers and assistants who have not responded to the survey with only two weeks of data collection remaining will receive a notification (sent via email or USPS mail) making them aware of a "late-bird" $10 token of appreciation and reminding them that they can participate in the self-administered web version of the survey. Lead and assistant teachers will also be given the option to complete the survey by telephone with a CATI interviewer. The letter will include information about the survey, the respondent's rights as a participant, contact information for a study-specific toll-free number, and a web link and password for accessing the online version of the survey.

One-on-One Interview Procedures

This section focuses on the data collection procedures for Instrument 4 (One-on-One Center Director Interview), Instrument 5 (One-on-One Lead and Assistant Teacher Interview), Instrument 6 (One-on-One Home-based Provider Interview), and Instrument 7 (One-on-One Key Informant Interview). MDRC and MEF Associates will be collecting these instruments.

To identify interview participants, center-based and home-based child care settings will be stratified by county and setting size, and a random subset of settings will be selected to participate in the one-on-one interviews. It is assumed that one center director per setting will be present, and this director will be selected to participate in the one-on-one interviews. The lead and assistant teachers in these settings will be sorted by their level of engagement in the pilot initiative, and the team will then use a purposeful sampling approach to select a subset of lead and assistant teachers that maximizes the variation in levels of engagement represented in the interview participant sample.
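
A minimal sketch of the selection logic described above, assuming a simple settings roster and a teacher roster with an engagement measure; the column names, strata, selection fractions, and cut points are illustrative only and do not reflect the study's actual selection procedures.

    import pandas as pd

    # Hypothetical roster of intervention-condition settings (illustrative columns).
    settings = pd.DataFrame({
        "setting_id": range(1, 26),
        "county": ["Denver", "El Paso", "Larimer", "Pueblo", "Weld"] * 5,
        "size": (["small", "large"] * 13)[:25],
    })

    # Stratify by county and setting size, then draw a random subset per stratum.
    selected_settings = (
        settings.groupby(["county", "size"], group_keys=False)
        .apply(lambda g: g.sample(frac=0.5, random_state=7))
    )

    # Hypothetical teacher roster with an engagement measure (0 = low, 9 = high).
    teachers = pd.DataFrame({
        "teacher_id": range(1, 101),
        "setting_id": [(i % 25) + 1 for i in range(100)],
        "engagement": [i % 10 for i in range(100)],
    })
    in_selected = teachers[teachers["setting_id"].isin(selected_settings["setting_id"])]

    # Purposefully pick teachers from the low and high ends of engagement
    # to maximize variation among interview participants.
    ordered = in_selected.sort_values("engagement")
    interview_picks = pd.concat([ordered.head(8), ordered.tail(8)])
    print(interview_picks[["teacher_id", "setting_id", "engagement"]])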

Invitations to participate in these data collection activities will include a series of standard emails, a study information sheet in electronic format, and customized outreach by field interviewers. The recruitment materials will describe the purpose of the project, describe the nature and content of the interviews, and extend invitations to participate in the interviews. In addition, the interview invitation will inform study participants that all respondents will receive a $50 honorarium. The interview team will first contact center directors and home-based child care providers via email to notify them about the study and data collection components before contacting any lead and assistant teachers within center-based child care settings to ask whether they would like to participate in the interviews. Separate emails with recruitment materials will be sent to each potential participant to introduce the study and data collection activities. The interview team will also contact potential participants by email or phone, if needed, to prompt responses, reduce non-response, and schedule the interviews. The interview team will facilitate the semi-structured one-on-one interviews by phone or via a video conferencing platform (e.g., Zoomgov).

Additionally, the team will interview center directors at centers that did not apply for the pilot initiative to understand the hesitations and barriers that prevented some centers from participating. When selecting these center directors, the team will use purposeful sampling and aim for maximum variation based on setting characteristics such as size, urban/rural location, and local labor market conditions.

To ensure the quality and consistency of the interviews conducted, the interview team will engage in a 1- to 2-day interviewer training that goes over the details about the study and associated procedures. The training agenda will include:

  • An overview of the overall study

  • Interviewer procedures and practices

  • Respondent confidentiality and quality assurance

  • Frequently asked questions and answers

  • Practice sessions

  • Administrative procedures and responsibilities

  • Review of the certification process and criteria

Interview data will be recorded and transcribed. The interview team will conduct checks on the quality of the recordings immediately after the interviews. The interview team will also meet to discuss and monitor progress over the course of the fielding period to ensure that any challenges and issues in conducting interviews are promptly addressed. We expect that recruitment and interviews will be conducted through an iterative process until a sufficient number of individuals agree to participate and the targeted sample size for the data collection is achieved.

Costs Workbook Procedures

This section focuses on the data collection procedures for Instrument 8 (Center-based Setting Costs Workbook). MDRC will be collecting this instrument.

To identify costs workbook respondents, center-based child care settings will be stratified by county and setting size, and a random subset of settings will be selected across research conditions to participate.

Invitations to participate in these data collection activities will be sent to center directors and will include a series of standard emails, a study information sheet in electronic format, and customized outreach and technical assistance by the costs workbook team. The recruitment materials will describe the purpose of the project, describe the nature and content of the costs workbook and the technical assistance support available to complete it, and extend invitations to participate in the costs workbook data collection. In addition, the invitation will inform center directors that the center will receive a $250 honorarium for completing the costs workbook. The costs workbook team will contact center directors to identify the individuals with the required knowledge to complete the costs workbook. The costs workbook team will then work with the center directors to contact the identified individuals via email or phone. In most cases, it is assumed that the center director or other finance administrators will be the appropriate respondents for the costs workbook. The time required to identify the appropriate respondents for the costs workbook is included in the burden estimates for Instrument 8.

The costs workbook team will first contact potential participants by email, including the costs workbook in an electronic format (e.g., Microsoft Excel spreadsheet) to help orient respondents to the type of information being collected via the workbook, and will ask them to prepare the information requested in the workbook. The costs workbook team may follow up with additional prompt emails or phone calls to encourage responses and to reduce non-response through customized technical assistance outreach and support. Upon response, the costs workbook team will follow up with respondents by email or phone to schedule a time to provide technical assistance and answer questions to support completion of the costs workbook through one or more discussions. The purpose of the follow-up discussion(s) is to ensure the costs workbooks are completed correctly and to work through any challenges experienced by the respondents in completing them. The costs workbook team will conduct the discussion(s) with respondents by phone or via a videoconferencing platform (e.g., Zoomgov). The costs workbook will be completed in an electronic format in Qualtrics or an Excel workbook.

The team will also offer participants the option of providing cost information via a 30-minute phone call with a research team staff member to complete key portions of the workbook together. The highest-priority cost information will be collected during these calls.



To ensure the quality and consistency of the costs workbook data collection, the costs workbook team will participate in a 2- to 3-day training that goes over the details about the study and associated procedures. The training agenda will include:

  • An overview of the overall study

  • Costs workbook collection and technical assistance support procedures

  • Respondent confidentiality and quality assurance

  • Frequently asked questions and answers

  • Practice sessions

  • Administrative procedures and responsibilities

  • Review of the certification process and criteria

The costs workbook team will conduct checks on the completeness of the costs workbook data and will meet weekly to discuss and monitor progress over the course of the fielding period to ensure that any challenges and issues in collecting the costs workbook data are promptly addressed.


B5. Response Rates and Potential Nonresponse Bias

Response Rates

For the follow-up surveys (Instruments 1 – 3) to be collected for the impact and descriptive studies, the research team expects to obtain a 90% response rate for the survey of center directors and home-based child care providers, with less than a 5-percentage-point differential in response rates across research conditions. We expect to obtain at least an 80% response rate for center-based lead and assistant teachers and for home-based assistants, with less than a 5-percentage-point differential in response rates across research conditions each time the follow-up survey is fielded with these targeted populations. The research team has been able to achieve similar response rates to surveys in CCEE settings. In the Variations in Implementation of Quality Interventions (VIQI) project (OMB # 0970-0508), for example, similar methods were used to obtain over a 90% response rate for surveys fielded with center directors and over a 75% response rate for surveys fielded with lead and assistant teachers.


During data collection, we will monitor response rates as well as the refusal, non-contact, and other rates that make up the nonresponse rate. Each of these components serves as a process indicator for survey operations. Item-level response rates will not be calculated or monitored while the surveys are being fielded, as respondents can refuse to answer any item asked on the surveys. However, item-level non-response rates will be examined as part of assessing the quality of the data collected and the analysis for the impact and descriptive studies. Item non-response rates will be reported as simple percentages (percent missing among responses), breaking out refusals, "don't know" responses, and invalid responses.
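
A minimal sketch of how item-level non-response might be summarized as simple percentages with refusals, "don't know," and invalid responses broken out. The response codes, column names, and choice of denominator are illustrative assumptions, not the study's actual coding scheme.

    import pandas as pd

    # Hypothetical item-level codes: -1 = refused, -2 = don't know,
    # -3 = invalid, NaN = item skipped, other values = valid responses.
    responses = pd.DataFrame({
        "q1": [3, -1, 4, None, -2, 5],
        "q2": [1, 2, -3, 4, None, -1],
    })

    def item_nonresponse(col):
        # Denominator here is all fielded cases; the report may use a different base.
        n = len(col)
        return pd.Series({
            "refused_pct": (col == -1).sum() / n * 100,
            "dont_know_pct": (col == -2).sum() / n * 100,
            "invalid_pct": (col == -3).sum() / n * 100,
            "missing_pct": col.isna().sum() / n * 100,
        })

    print(responses.apply(item_nonresponse).round(1))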


For the one-on-one interviews to be collected for the implementation study, the interviews are not designed to produce statistically generalizable findings and participation is wholly at the discretion of the respondents. Response rates will not be calculated or reported.


For the center-based setting costs workbooks to be collected for the cost study, the costs workbook data are not designed to produce statistically generalizable findings and participation is wholly at the discretion of the respondents. Response rates will not be calculated or reported.


To ensure sufficient power to address the research questions of interest for different phases of the project, it will be important to reach the expected response rates described above. We fully recognize potential challenges and have structured a data collection plan accordingly. Our plan draws upon our extensive experience managing and collecting similar sets of data in CCEE settings in multiple large-scale, longitudinal, and experimental studies, and includes the following strategies to maximize response rates:


  • Minimizing burden. We draw upon our expertise and experience to put in place mixed-mode administration for the instruments whenever possible to minimize burden on study participants. Further, the instruments and protocols will be developed to be streamlined, cleanly formatted, and as brief as possible. We will draw upon principles from behavioral economics to tailor contact and communication with study participants to encourage responses. We will also aim to balance the breadth of data collected against burden and disruption to CCEE settings and staff by optimizing the amount of data collected at each point. Last, we will be flexible in accommodating the schedules of CCEE settings when collecting data, while still adhering to the planned timeline for data collection activities.


  • Multi-mode outreach and reminders to follow-up on non-response. We plan for outreach with multiple modalities, including email, postcards, phone, and in-person follow-up prompting to encourage responses from study participants. These iterative modes of contact are intended to reduce non-response for each data collection.


  • Conversion and avoidance of refusals. We will train fielding staff of the instruments in conversion and avoidance of refusals, including training on distinguishing “soft” refusals from “hard” ones. Soft refusals often occur when a study participant has been reached at an inopportune time. In these cases, it is important to back off gracefully and to establish a convenient time to follow up with the study participant, rather than to persist at that moment. Hard refusals do occur and must also be accepted gracefully by the fielding staff.


  • Oversight. Our team also includes DIR, the organization that will lead teacher survey data collection efforts in center-based settings and has extensive experience collecting high-quality data in large-scale studies. DIR’s senior data collection manager will provide centralized oversight of the collection of surveys. Field supervisors will hire and train local field staff to conduct each data collection activity. DIR will design, implement, maintain, and document an integrated study database that will provide oversight of all data collection activities. Such a system is critical for allowing project staff to monitor the flow of information and ensure that each designated sample unit (setting, type of respondent, etc.) is properly surveyed and that all required information is obtained, identified, and stored. Further, MDRC will have a dedicated data collection coordinator who will work closely with DIR, will have oversight over all of their data collection activities, and will meet at regular intervals during fielding periods, allowing us to follow up on challenges on a frequent basis to ensure high response rates.


  • Monitoring. The research team will closely monitor data collection and response rates by data source to ensure high response rates and no differential response rates by research condition. Weekly meetings will address any issues that arise during preparations for data collection and data collection itself. The team will produce internal bi-weekly progress reports, which will document any issues and the solutions for correcting them. The research team will also review early files of data collected from each instrument to assess whether there are any issues in the completeness or quality of the data being collected, so that issues can be quickly identified and solved early in the fielding stages of each instrument.


  • Technical assistance. To maximize responses, the research team will also provide technical assistance to center-based settings as they complete the center-based setting costs workbook. This includes allowing respondents to ask questions and work through challenges by reviewing their existing accounting records and providing guidance on how to complete the sections of the costs workbook to fill in any gaps after reviewing those records.



  • "Early bird"/"late bird" tokens of appreciation. To maximize responses to the follow-up surveys by center directors, lead and assistant teachers, and home-based providers and assistants, additional tokens of appreciation will be offered. Respondents who complete the survey via the web-based format within the first two weeks of data collection (and, for teachers, prior to the start of outbound CATI dialing) will receive a $10 "early bird" token of appreciation. Respondents who have not completed the survey with only two weeks of data collection remaining in the fielding window will be offered a $10 "late bird" token of appreciation for completing the survey prior to the end of the data collection period. Early-bird tokens of appreciation such as these have been shown to be effective in boosting initial response rates and thus reducing costs, as fewer cases require phone and field follow-up.3



Nonresponse

We will conduct a response analysis for the follow-up surveys prior to using the data to estimate program impacts and descriptive findings. Steps in this analysis are to: 1) compare respondents to non-respondents on a range of background characteristics; 2) compare pilot initiative group respondents with control/comparison group respondents on a range of background characteristics; and 3) compare impacts calculated using administrative records data, available from the Colorado Early Care and Education Professional Development System, for respondents versus non-respondents. If these analyses suggest that the findings from the respondent sample cannot be generalized to the full sample, we will consider weighting (using the inverse predicted probability of response) or multiple imputation. However, these adjustment methods are not a complete fix, since both assume that respondents and non-respondents are similar on unobservable characteristics. For this reason, results from any such adjustment will be presented with the appropriate caveats.
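
A minimal sketch of the inverse predicted probability of response weighting mentioned above, assuming a respondent-level file with baseline characteristics; the model specification, variable names, and illustrative data are assumptions for demonstration only.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Hypothetical sample file: response indicator plus baseline characteristics.
    sample = pd.DataFrame({
        "responded":          [1, 0, 1, 1, 0, 1, 1, 0],
        "years_experience":   [2, 10, 5, 1, 7, 3, 12, 4],
        "intervention_group": [1, 1, 0, 0, 1, 0, 1, 0],
    })

    # Model the probability of response as a function of baseline characteristics.
    X = sample[["years_experience", "intervention_group"]]
    model = LogisticRegression().fit(X, sample["responded"])
    sample["p_respond"] = model.predict_proba(X)[:, 1]

    # Weight respondents by the inverse of their predicted probability of response.
    respondents = sample["responded"] == 1
    sample.loc[respondents, "weight"] = 1 / sample.loc[respondents, "p_respond"]
    print(sample.loc[respondents, ["p_respond", "weight"]])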


B6. Production of Estimates and Projections

Impact study. The impact study will estimate the effectiveness of the pilot initiative in improving outcomes in center-based settings. Any observed differences in outcomes between the intervention and control group members can be attributed to the effectiveness of the pilot initiative; in statistical terms, the differences are internally valid estimates of the mean impacts of the pilot initiative, as offered, on the corresponding outcomes for similar populations in the same environment. For the impact study, the center-based pilot initiative's impacts on key outcomes will be estimated using an impact model that compares average outcomes of teachers in intervention group centers with those of teachers in control group centers, controlling for selected background characteristics and dummy variables for random assignment blocks.4 The pilot initiative's impacts on outcomes for center-based settings will also be explored using a similar impact analytic model.
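
The impact model equation itself does not appear in this excerpt. As a point of reference, a standard specification consistent with the description above (teacher outcomes regressed on an intervention indicator, baseline covariates, and random assignment block dummies) can be sketched as follows; this is an illustrative form, not necessarily the exact model the study team will estimate:

    Y_{ij} = \alpha + \beta T_{j} + \gamma' X_{ij} + \sum_{b} \delta_{b} B_{bj} + \varepsilon_{ij}

where Y_{ij} is the outcome for teacher i in center j, T_{j} indicates assignment of center j to the intervention condition, X_{ij} is a vector of selected background characteristics, B_{bj} are dummy variables for random assignment blocks, and \varepsilon_{ij} is an error term that allows for clustering of teachers within centers; \beta is the estimated impact of the pilot initiative.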


The center-based child care settings participating in the center-based pilot initiative are not a representative sample of the population of all centers. For this reason, the average impact estimate will not be used to make statistical inferences about the impact of the center-based pilot initiative in some larger population of centers. The data will not be used to generate population estimates, either for internal use or dissemination.

Descriptive study. For the descriptive study, to quantitatively describe how the pilot initiative is experienced by home-based child care settings, providers, and assistants, we plan to use basic descriptive analyses (e.g., means and proportions) and descriptive statistical analyses (e.g., unadjusted correlations and regression models, controlling for selected background characteristics) that estimate the relation between characteristics of home-based child care settings, providers, and assistants and their levels of engagement in the home-based pilot initiative. In addition, using models similar to those typically used for impact analyses, we will regress outcomes of interest on an intervention group indicator while controlling for selected background characteristics and dummy variables for random assignment blocks. However, we will not use this average estimate to make statistical inferences about the potential effects of the pilot initiative in some larger population of home-based child care settings. The home-based child care settings are not a representative sample of all home-based settings. The data will not be used to generate population estimates, either for internal use or dissemination.

The information collected is meant to contribute to the body of knowledge on CCEE settings and the workforce. It is not intended to be used as the principal basis for a decision by a federal decision-maker, and is not expected to meet the threshold of influential or highly influential scientific information.

The estimates from this project will be released publicly following ACF review. ACF anticipates that a wide range of policy makers and policy analysts will use these reports in deciding whether to fund programs such as Colorado’s pilot initiatives aimed at improving the compensation and economic well-being of the CCEE workforce, how to implement such programs, and what guidance to provide for agencies implementing such initiatives.


B7. Data Handling and Analysis

Data Handling

To mitigate errors, the follow-up surveys will confirm respondents’ identifying information and will be programmed to allow only certain responses. This will help guard against inconsistent or incorrect values. All of the surveys will be programmed and tested prior to fielding to ensure accurate administration and to minimize errors during data processing. Data quality will be assessed early on, as well as on an ongoing basis, through checks of item frequencies and cross tabulations. The survey team will also closely track response rates. In addition, all data processing will be carefully programmed by a team member with knowledge of the data and survey instruments. It will also be checked by another staff member to minimize errors.


All interviews will be audio recorded and stored on secure, password-protected audio recorders or recorded via Zoomgov. Electronic notes will be taken during interviews and will be stored in a secure, password-protected location.


All costs workbook data will be electronically programmed and tested prior to fielding to ensure accurate administration and to minimize errors during data processing. The costs workbooks will be programmed to allow only specified ranges of responses, which will help guard against unexpected outliers and inconsistent or incorrect values. The follow-up discussions to facilitate completion of the costs workbooks will also help guard against inconsistent or incorrect values. Data quality will be assessed early on, as well as on an ongoing basis, through checks of item frequencies and cross tabulations. The costs workbook team will also closely track response rates. In addition, all data processing will be carefully programmed by a team member with knowledge of the data and the costs workbook instrument. It will also be checked by another staff member to minimize errors.


Access to the data collected will be granted on a need-to-know basis; only the Data Manager and the study team members with a need to know will have access to the data.


Data Analysis

For the impact and descriptive studies, this information is provided under Production of Estimates. The analysis plan is designed primarily for quantitative, descriptive, and causal data analysis. Using survey data, we will conduct basic descriptive analyses (e.g., means, proportions, standard deviations, variation) and descriptive statistical analyses (e.g., unadjusted correlations and regression models, controlling for selected background characteristics) that estimate the relation between characteristics of settings and respondents and their levels of engagement in the pilot initiatives. When necessary, psychometric work will be conducted to create scales. To explore whether there are any differences between those assigned to the pilot initiative condition and those who are not, we will use the impact analysis models described in the Production of Estimates section, regressing outcomes of interest on the intervention condition indicator while controlling for selected background characteristics and dummy variables for random assignment blocks.


These data will be analyzed in conjunction with the CO LINC data to explore associations with other characteristics of settings and respondents. These data can also be used to define additional outcomes, such as total earnings and employment outcomes, which will also be examined as part of the impact and descriptive analyses.


For the implementation study, the same process will be used to code and analyze each set of interview data. Interview audio recordings will be transcribed, and a team of trained research staff will code each interview in NVivo. The coding team will develop a codebook, starting with an initial set of structured, a priori codes based on the content of the questions in the protocol, and will also develop sub-codes based on the interview protocol.


The coding team will pilot the initial codebook, meeting regularly to compare and discuss their use of the initial codebook, including any inconsistencies. The coding team will also discuss the need for creating and using more detailed codes. These detailed codes are “emergent” in that they arise from the content of the interview transcript and are generated when an existing code does not seem to sufficiently capture the meaning of a text excerpt or when more specificity is needed to help with analysis on the back end. For example, if a topic not already included as a code comes up often across transcripts, it might be added as a sub-code to identify excerpts that go into more depth on a particular topic.


The lead researcher will be in charge of compiling coders' suggestions and deciding whether to add new codes to the codebook, with an eye toward minimizing duplication or confusion about when to apply which code. Any changes are updated in the codebook (which is a living document), and a discussion of the changes and rationale is communicated to the coding team. Both the existing and newly generated codes are organized into a detailed codebook, which lays out the code name, the definition, when and when not to use the code, and illustrative quotes. Coders will take this revised, detailed codebook and revisit the previous transcripts to apply the more comprehensive menu of codes.


The codebook will be refined throughout the coding process and team discussions. Because coders’ individual interpretation of narratives and responses to open-ended questions inherently involves some subjectivity, a lead coder will review each coder’s early transcripts. When reliability is found to be low, the coder will meet with the lead researcher to discuss sources of discrepancy, and the lead researcher will provide concrete guidance on how to resolve these disagreements. Once there are no fundamental disagreements between the coders about code definitions and application, the coders will independently code the remaining transcripts. Random spot checks (reviewing how coders coded a transcript) will continue to be done until the coding process is completed.


The next stage of analysis involves the creation of “themes,” which are the main outcomes of the coding process. Themes are a way to categorize a set of similar codes that highlight repeated ideas and can uncover significant patterns. Exploration of potential patterns will be guided by descriptors of the interview respondents (for example, county, setting, and research group), by other descriptors hypothesized to be related to patterns, such as staff role (lead or assistant teacher), and by other factors that emerge as salient. Typically, “salience” is based on two factors: (1) the prevalence (frequency) with which something was noted during interviews, which supports the concept of saturation (that what one is reporting is a common occurrence in the sample); or (2) when uncommon or infrequently mentioned, a theme may be salient because it highlights a more unusual but illustrative circumstance for the field. By examining similarities and differences in themes, and then exploring potential reasons for these similarities and differences, the findings of the qualitative analysis can be used to shed light on variation in experiences with the pilot initiatives, using the respondents’ own narratives to guide the story.
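As one possible way to support the prevalence assessment described above, the sketch below tallies how many transcripts contain each code, broken out by respondent descriptors. The export file and column names are assumptions for illustration, since coded-excerpt export formats vary.

```python
# Hypothetical tally of code prevalence by respondent descriptors,
# built from a coded-excerpt export. Column names are placeholders.
import pandas as pd

excerpts = pd.read_csv("coded_excerpts_export.csv")  # hypothetical export

# Count the number of distinct transcripts in which each code appears,
# broken out by respondent role and research group.
prevalence = (
    excerpts.groupby(["code", "role", "research_group"])["transcript_id"]
    .nunique()
    .reset_index(name="n_transcripts")
    .sort_values("n_transcripts", ascending=False)
)
print(prevalence.head())
```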


Themes will be introduced and explained if included in the final publication. Once a theme is introduced, it will be supported with illustrative quotes and a qualifying discussion of how frequently the theme was discussed. When there are conflicting or unanticipated themes, these will be explored and noted through a deeper review of the underlying codes and the patterns between them.


For the cost study, we will use information from the Center-based Setting Costs Workbook to first assess the costs to CCEE center-based settings of participating in the pilot initiative. To do this, we will calculate the costs associated with one “average” turnover event for both intervention and control conditions. Then, we will calculate the total costs of all turnover events for both intervention and control settings, using these averages together with total turnover information drawn from the required monthly reporting on staffing structures that informs the allocation of resources to centers under the pilot initiative for the study period. We will then estimate the net costs incurred or avoided by participating CCEE center-based settings as a result of the pilot initiative’s estimated impacts on staff recruitment, turnover, and open positions/vacancies. To do this, we will calculate the net benefits to intervention settings by (1) calculating the benefits of operating an intervention setting, net of intervention costs, and then (2) comparing this to the benefits of operating a control setting.
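To make the arithmetic of this comparison concrete, the sketch below walks through the calculation with entirely hypothetical figures; the actual averages, turnover counts, and intervention costs will come from the costs workbook and the monthly staffing reports.

```python
# Hypothetical arithmetic sketch of the net-cost comparison described
# above. All dollar figures and counts are placeholders, not study data.

# Average cost of one turnover event, estimated from the costs workbook.
avg_turnover_cost_intervention = 4_000.0
avg_turnover_cost_control = 4_500.0

# Total turnover events over the study period, from monthly staffing reports.
turnover_events_intervention = 20
turnover_events_control = 30

# Total turnover-related costs under each condition.
total_cost_intervention = avg_turnover_cost_intervention * turnover_events_intervention
total_cost_control = avg_turnover_cost_control * turnover_events_control

# Net costs to intervention settings after adding the costs of
# participating in the pilot initiative itself.
intervention_participation_cost = 10_000.0
net_cost_intervention = total_cost_intervention + intervention_participation_cost

# Positive values indicate costs avoided by intervention settings
# relative to control settings.
costs_avoided = total_cost_control - net_cost_intervention
print(f"Estimated net costs avoided: ${costs_avoided:,.0f}")
```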

To answer the additional research questions about variation in costs by setting type and staff role, we will use more descriptive and qualitative methods. For example, we may explore associations between cost variation and characteristics such as staff role. We may also look for themes in the responses to open-ended questions in the costs workbook. These analyses will be largely exploratory and descriptive in nature and will depend on the level of detail that participating programs are able to provide during data collection.


The analysis plan will be registered with an appropriate registry (e.g., Open Science Framework) prior to the initiation of data collection.


Data Use

A report published by ACF will present the estimated effects of the pilot initiative for center-based child care settings in Colorado on teachers’ employment, economic, and psychological well-being outcomes; describe the experiences of home-based child care providers and assistants in Colorado with the home-based pilot initiative, including their employment, economic, and psychological well-being outcomes; and describe the implementation and costs of both pilot initiatives. This report will be written to inform ACF as well as the broader CCEE field. Limitations of the data will be noted in the published report.


We also plan to archive the data at an appropriate restricted-access data repository. The team will prepare extensive documentation – including instruments and protocols, codebooks, and user manuals with guidance about how to use and interpret the data – to support secondary analysis of the archived data by other researchers.


B8. Contact Persons

The individuals listed in Table B8.1 below contributed to this information collection request.

Table B8.1. List of Individuals Contributing to This Information Collection Request

Name | Organization | Role in Study | Contact information
Ann Rivera | OPRE | Project Officer | Ann.Rivera@ACF.hhs.gov
Krystal Bichay-Awadalla | OPRE | Project Officer | [email protected]
Dianna Tran | OPRE | Project Officer | [email protected]
Cynthia Miller | MDRC | Project Director and Co-Principal Investigator | [email protected]
JoAnn Hsueh | MDRC | Co-Principal Investigator | [email protected]
Alexandra Bernardi | MDRC | Project Manager and Operations Lead | [email protected]
Michelle Maier | MDRC | Impact and Implementation Study Lead | [email protected]
Rebecca Davis | MDRC | Cost Study Lead | [email protected]
Electra Small | MDRC | Data Manager | [email protected]
Victor Porcelli | MDRC | Data Analyst | [email protected]
Erin Bumgarner | MEF Associates | Implementation Qualitative Analysis Lead and Cost Analyst | [email protected]
Lisa Rau | MEF Associates | Implementation Qualitative Analysis Lead and Cost Analyst | [email protected]


Attachments

Appendix A: Colorado Pilot Initiatives’ Theory of Change for Center-based and Home-based Child Care and Early Education (CCEE) Settings

Appendix B: Example of Recruitment Materials

Appendix C: Consent Forms Compilation

Appendix D: BASE Project IRB approval


Instrument 1: Follow-up Center Director Survey

Instrument 2: Follow-up Lead and Assistant Teacher Survey

Instrument 3: Follow-up Home-Based Provider and Assistant Survey

Instrument 4: One-on-One Center Director Interview

Instrument 5: One-on-One Lead and Assistant Teacher Interview

Instrument 6: One-on-One Home-Based Provider Interview

Instrument 7: One-on-One Key Informant Interview

Instrument 8: Center-based Setting Costs Workbook

1 Bassok, Doromal, Michie, and Wong (2021).

2 Schaack et al. (2020).

3 Coopersmith, J., Klein Vogel, L., Bruursema, T., & Feeney, K. (2016). Effects of Incentive Amount and Type on Web Survey Response Rates. Survey Practice; De Santis, J., Callahan, R., Marsh, S., & Perez-Johnson, I. (2016, May). Early-bird Incentives: Results from an Experiment to Determine Response Rates and Cost Effects. Paper presented at the 71st annual meeting of the American Association for Public Opinion Research, Austin, TX; and Ward, C., Stern, M., Vanicek, J., Black, C., Knighton, C., & Wilkinson, L. (2014). Evaluating the Effectiveness of Early Bird Incentives in a Web Survey [PowerPoint]. Retrieved from https://www.census.gov/fedcasic/fc2014/ppt/02_ward.pdf.

4 CDEC conducted random assignment of center-based settings by blocking the sample of eligible centers into groups defined by county and by center size (the number of lead and assistant teacher positions, filled and open), and then conducting random assignment within each of these blocks.
