
OMB: 0970-0499



Assessing the Implementation and Cost of High Quality Early Care and Education: Comparative Multi-Case Study, Phase 2



OMB Information Collection Request

New




Supporting Statement

Part B

May 2017


Submitted by:

Office of Planning, Research and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services


330 C Street, SW

Fourth Floor

Washington, DC 20024


Project officers:

Ivelisse Martinez-Beck, Senior Social Science Research Analyst and

Child Care Research Team Leader

Meryl Barofsky, Senior Social Science Research Analyst



ATTACHMENTS

ATTACHMENT A: RECRUITMENT AND ENGAGEMENT CALL SCRIPTS FOR CENTER DIRECTORS AND PROGRAM DIRECTORS

ATTACHMENT B: IMPLEMENTATION INTERVIEW

ATTACHMENT C: COST WORKBOOK

ATTACHMENT D: TIME-USE SURVEY

ATTACHMENT E: FEDERAL REGISTER NOTICE

ATTACHMENT F: PUBLIC COMMENT

ATTACHMENT G: RESPONSE TO PUBLIC COMMENT

ATTACHMENT H: INITIAL EMAIL TO CENTER DIRECTORS

ATTACHMENT I: TIME USE SURVEY ROSTER

ATTACHMENT J: ADVANCE LETTER AND FOLLOW-UP LETTER

ATTACHMENT K: ADDITIONAL RECRUITMENT MATERIALS



TABLES

B.1 Targeted number of centers for Phase 2 of the multi-case study

B.2 OPRE project officers, study team leadership, and TEP for the ECE-ICHQ





B1. Respondent universe and sampling methods

The target population for this information collection is center-based early care and education (ECE) providers that serve children from birth to age 5. The sampling plan prioritizes the inclusion of different types of ECE centers from three states that represent different geographical regions and types of investments in early care and education to understand the costs of implementing quality care in a variety of contexts.

Phase 1. During Phase 1 of the study (conducted under ACF’s generic clearance 0970-0355), the study team selected three states from which to sample centers: Arkansas, Pennsylvania, and Wisconsin. In selecting states, the study team considered the following:

  • States with quality rating and improvement systems (QRIS) that clearly link to classroom observational scores (such as the Environment Rating Scale or the Classroom Assessment Scoring System) so that these scores can serve as a proxy for identifying centers at different levels of quality

  • States with high numbers of centers participating in their QRIS and, specifically, with centers that achieve the highest rating level to ensure adequate sample sizes of centers from which to recruit

  • States that vary in their child care licensing regulations (which set a floor for quality), selected from among the states that meet the first two priorities

  • States from different geographic regions

The approach for Phase 1 sampling started with obtaining lists of centers from state administrators in Arkansas, Pennsylvania, and Wisconsin, grouped into selection cells based on their characteristics (for example, QRIS rating, funding source, and presence of infants and toddlers [all centers served preschool-age children]) and size. The study team sent study materials via email to all targeted centers. If a center emailed or called back, the team attempted to recruit it; if a center did not respond and its selection cell had not been filled, the team called the center to follow up. Phase 1 collected data in five centers in each state, for a total of 15 centers.

Phase 2. In this phase, the study team will target 50 new centers (Table B.1) from the same three states identified in Phase 1.[1] In each state, the team will target 12 to 13 community-based centers (38 in total). Up to 10 of the community-based centers in each state (29 in total) will have a medium/high QRIS rating (a rating of three or higher on the typical four- or five-level scale). Among the medium/high-rated centers, the team will target two or three centers per state that receive limited or no public funding, three high-subsidy centers per state, and four centers per state with mixed funding (that is, centers that combine private tuitions with one or more public funding sources or that draw from multiple public funding sources). For the remaining three community-based centers in each state, the team will target centers with a low QRIS rating (a rating of one or two): two centers with mixed funding and one high-subsidy center. In addition to community-based centers, the team will also target four Head Start centers in each state (12 in total); ideally, half of these will be combined Early Head Start/Head Start centers. The study team expects that targeting centers based on funding mix will yield a sample that varies in the ages of children served, size, and whether a center is embedded in a larger organization; however, for efficiency of selection, the team has not made these characteristics explicit criteria. The team will monitor variation in these characteristics during recruitment and will adjust the recruiting pool later in the phase if necessary.

Table B.1. Targeted number of centers for Phase 2 of the multi-case study

                                                          Arkansas   Pennsylvania   Wisconsin   Total
Community-based centers with medium/high QRIS rating (a)                                          29
  Limited or no public funding                                3            3             2         8
  High subsidy funding (a)                                    3            3             3         9
  Mixed funding (b)                                           4            4             4        12
Community-based centers with low QRIS rating (a)                                                   9
  High subsidy funding (a)                                    1            1             1         3
  Mixed funding (b)                                           2            2             2         6
Head Start/Early Head Start centers (c)                                                           12
  Head Start only                                             2            2             2         6
  Head Start/Early Head Start                                 2            2             2         6
TOTAL                                                        17           17            16        50

Note: The numbers in the category rows (29, 9, and 12) are subtotals and are not added again into the overall total.

(a) High subsidy centers are those that serve a high proportion (50 percent or more) of children receiving child care subsidies through the Child Care and Development Fund or Temporary Assistance for Needy Families.

(b) Mixed funding centers are those that draw from tuitions and one or more public funding sources, or that draw from multiple public funding sources.

(c) Centers that are funded in full with Head Start funding, or that receive most of their funding from Head Start mixed with other public funding.
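As a quick check on the table's arithmetic, the sketch below (illustrative only, not part of the study's procedures) encodes the recruitment targets as a simple data structure and verifies that the cell counts reproduce the subtotals and state totals shown above. The state abbreviations and label strings are shorthand for the table's row and column headings.

```python
# Recruitment targets from Table B.1, keyed by (center category, funding type).
targets = {
    ("medium/high QRIS", "limited or no public funding"): {"AR": 3, "PA": 3, "WI": 2},
    ("medium/high QRIS", "high subsidy"):                 {"AR": 3, "PA": 3, "WI": 3},
    ("medium/high QRIS", "mixed funding"):                {"AR": 4, "PA": 4, "WI": 4},
    ("low QRIS",         "high subsidy"):                 {"AR": 1, "PA": 1, "WI": 1},
    ("low QRIS",         "mixed funding"):                {"AR": 2, "PA": 2, "WI": 2},
    ("Head Start",       "Head Start only"):              {"AR": 2, "PA": 2, "WI": 2},
    ("Head Start",       "Head Start/Early Head Start"):  {"AR": 2, "PA": 2, "WI": 2},
}

# Column totals by state should be 17, 17, and 16, for 50 centers overall.
state_totals = {s: sum(cell[s] for cell in targets.values()) for s in ("AR", "PA", "WI")}
assert state_totals == {"AR": 17, "PA": 17, "WI": 16}
assert sum(state_totals.values()) == 50

# Category subtotals should match the table: 29, 9, and 12.
for category, expected in [("medium/high QRIS", 29), ("low QRIS", 9), ("Head Start", 12)]:
    subtotal = sum(sum(cell.values()) for (cat, _), cell in targets.items() if cat == category)
    assert subtotal == expected
```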



B2. Procedures for collection of information

The study team will assemble initial contact lists for centers in the three states from state websites. The team will gather public information from websites that provide detailed contact information and information about funding sources (such as whether a center accepts children who receive Child Care and Development Fund subsidies or offers the state pre-kindergarten program). The team prefers to use state websites that also list Head Start programs; if necessary, it can use Head Start Program Information Report (PIR) data to build or supplement the list of Head Start programs. The team will use the contact information to send advance materials to centers and follow up by phone.
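To make the list-building step concrete, the sketch below shows one way such a merge could be organized; the record layouts, sample entries, and the name-plus-ZIP deduplication key are hypothetical illustrations, not details specified by the study.

```python
# Hypothetical records as they might be drawn from a state website and from PIR data.
state_list = [
    {"name": "Sunny Days Center", "zip": "72201", "source": "state website"},
    {"name": "Little Steps CDC",  "zip": "19104", "source": "state website"},
]
pir_list = [
    {"name": "Little Steps CDC",  "zip": "19104", "source": "Head Start PIR"},
    {"name": "Bright Futures HS", "zip": "53703", "source": "Head Start PIR"},
]

def merge_center_lists(*lists):
    """Combine center listings, keeping the first record seen for each
    normalized (name, ZIP) pair so PIR data supplements without duplicating."""
    seen, merged = set(), []
    for records in lists:
        for rec in records:
            key = (rec["name"].strip().lower(), rec["zip"])
            if key not in seen:
                seen.add(key)
                merged.append(rec)
    return merged

contacts = merge_center_lists(state_list, pir_list)
assert len(contacts) == 3  # the duplicate PIR record is dropped
```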

After an initial hard copy letter and email requesting centers’ participation and explaining the study (Attachment H), members of the study team will recruit selected centers over the phone (Attachment A; other informational recruitment materials are included in Attachment J). The study team will send field staff to centers that require extra encouragement so that staff can explain the study and answer center directors’ questions in person.

After a center agrees to participate, the following activities will occur:

  • Conduct the implementation interview over the telephone with the center director or other staff who are most knowledgeable about center operations and the educational program provided.

  • Send the cost workbook to the center director, or his or her designee, and conduct individualized telephone and email follow-up as necessary to assist in its completion.

  • Collect email addresses of select administrators and teaching staff (expected to average 14 staff per center) in person, using trained local field staff, who will also distribute a letter inviting the staff to complete the time-use survey. The field staff person will be available to collect completed paper surveys (when applicable), answer staff questions, and offer the use of a laptop for completing the web-based survey.

B3. Methods to maximize response rates and deal with nonresponse

Expected response rates

During Phase 2 of the data collection, the study team will assemble contact lists for centers in the three states through state websites and, if necessary, Head Start PIR data. The team will use the information from state websites to build a comprehensive list of centers that meet the selection criteria, with enough centers in reserve to replace those that are unable or unwilling to participate. Based on past studies, the study team expects to send hard copy letters to 800 centers initially and to follow up with individual emails to 400 centers to secure the participation of the 50 centers required for this study (see Attachment H). All 400 centers will participate in recruiting discussions; the 50 centers that agree to participate will complete the full study engagement call (a response rate of 100 percent). The team will then complete all of the data collection materials with those 50 centers (a response rate of 100 percent). The team expects to invite 700 center staff to complete the time-use survey and to obtain an 80 percent response rate, for 560 time-use survey completes (60 percent by web and 40 percent by hard copy).
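The expected yields above follow from simple arithmetic; the short sketch below (illustrative only) writes the survey calculations out explicitly so they are easy to verify.

```python
# Recruitment funnel described above.
letters_sent   = 800   # initial hard copy letters
emails_sent    = 400   # individual email follow-ups
centers_needed = 50    # centers completing all data collection

# Time-use survey expectations: about 14 staff in each of the 50 centers.
time_use_invited   = 700
expected_completes = time_use_invited * 80 // 100      # 80 percent response rate -> 560
expected_web       = expected_completes * 60 // 100    # 60 percent by web        -> 336
expected_paper     = expected_completes - expected_web # remaining 40 percent     -> 224

assert (expected_completes, expected_web, expected_paper) == (560, 336, 224)
```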

These response rates are based on the Phase 1 experience and the expected effects of offering the $350 center-level incentive and the $10 token of appreciation to respondents completing the time-use survey. As discussed in Supporting Statement A, recruitment was difficult when lower incentives were offered; respondents in the pilot commented on the disconnect between the low incentives and the time center staff were asked to commit to data collection. After the incentive level increased in Phase 1, response rates were much higher, and the team is confident that this level of appreciation will support the high response rate expected for Phase 2 (see Questions and Answers When Designing Surveys for Information Collections, https://obamawhitehouse.archives.gov/sites/default/files/omb/assets/omb/inforeg/pmc_survey_guidance_2006.pdf).

Dealing with nonresponse

The potential for challenges with nonresponse exists mainly for the time-use survey, for which the study team intends to collect data from key administrators and teaching staff at each center. The team will work closely with each center to maximize participation in the time-use survey and will offer each respondent $10 for participating. The team intends to collect email addresses from key administrative staff and all core teaching staff (such as lead and assistant teachers), will invite them to complete the survey via the web, and will provide them with a secure login ID and password to access the web instrument. The team will inform them of the option to complete a paper survey if they prefer, and field staff will offer the use of a laptop computer for completing the survey during their visit. The study team will monitor sample lists and survey completion using a sample management and reporting system that Mathematica has used on similar projects. The team will follow up with nonresponders by email and regular mail to encourage survey completion (included in Attachment I).

The study will attempt to collect data from all teaching staff at each center in Phase 2 to understand the extent of variation within centers and among staff with similar roles. The team will create time-use measures by job category using all available data from staff in a particular position, and might exclude centers with high nonresponse from the analysis of variation in staff time use. The team is not collecting information on the characteristics of individual staff members that would be needed to compare respondents with nonrespondents; however, it will compare the characteristics of centers with high nonresponse to those of other centers in the study sample.
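The aggregation and screening described above can be illustrated with a short sketch; the column names, sample records, and the 50 percent response cutoff are hypothetical choices for illustration, not decisions documented by the study.

```python
import pandas as pd

# Hypothetical time-use survey responses from two centers.
surveys = pd.DataFrame({
    "center_id":    [1, 1, 1, 2, 2],
    "job_category": ["lead teacher", "lead teacher", "assistant",
                     "lead teacher", "assistant"],
    "pct_time_instruction": [60, 70, 40, 55, 35],
})

# Mean time-use measure by center and job category, using all respondents
# in a given position.
measures = (surveys
            .groupby(["center_id", "job_category"])["pct_time_instruction"]
            .mean())

# Flag centers with high nonresponse for possible exclusion from the
# within-center variation analysis (invited counts and the 50 percent
# threshold are illustrative).
invited = {1: 6, 2: 5}
responded = surveys.groupby("center_id").size()
high_nonresponse = [c for c, n in invited.items() if responded.get(c, 0) / n < 0.5]
```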

Maximizing response rates

Mathematica has extensive experience collecting implementation information and cost data with high response rates from staff in education, social services, and health programs. To maximize response rates in this purposive sample, the study team will assemble contact lists for centers in the three states through state websites, supplementing them with Head Start PIR data if necessary. These websites provide searchable lists of centers across each state and will enable the team to build a sufficient list of centers that meet the selection criteria in each of the three states. The study team will send a mass hard copy and email mailing to all centers on the lists to introduce the study to a large number of centers early in the recruitment effort. The letter will describe the importance of the study, outline the study goals, encourage participation, and offer $350 to participating centers. The team will train experienced recruiters, and their training will include refusal aversion techniques. In addition to the $350 incentive, the study team will provide each participating center with a summary of the information collected, which centers can use to assess the activities they pursue under each of the six key functions and how they allocate staff time and center resources to support each function. Providing information structured around the key functions can help center staff think about how they may be supporting quality within their center. If centers are reluctant to participate, the team will send trained field staff to selected centers needed to fill the targeted sample to present study information in person and encourage participation.

The study team will update the time-use survey component with improved technology in the form of a web-based application. The team will use multimode approaches (email as well as hard copy reminders) and incentives. Field staff will collect contact information for select administrators and teaching staff on site and distribute an invitation letter and instructions for participating in the time-use survey; they will be available to answer questions and encourage survey completion. Web administration of the time-use survey will enable respondents to complete it at their convenience. A paper-and-pencil option will be available for center staff who have no computer or Internet access, and surveys can be completed on computers available at the center or on a laptop that each field staff member will carry. All of these strategies are based on lessons learned from Phase 1 as well as experience in other studies, and we are confident that this approach will lead to high response rates.

B4. Tests of procedures or methods to undertake

The methods used to understand the key functions and their costs (the implementation interviews, cost workbook, and time-use surveys) have not previously been used together to examine the costs of implementing high quality child care. The purpose of this data collection is to test the procedures and methods (refined based on Phase 1) in combination and across a variety of ECE settings, to produce measures that are feasible, applicable, and meaningful for informing decision making and resource use in ECE center-based settings. The study team will pretest the tools and procedures for Phase 2 in up to eight centers, determining the number of pilot centers based on the targeted respondents for the implementation interviews and cost workbook so that each distinct data collection effort involves no more than nine respondents. The pilot will not include collection of the time-use surveys. The study team plans to include the pilot centers in the total center sample and will therefore return to each pilot center to obtain the time-use surveys from staff after clearance is received.

Any resulting updates will be submitted to OMB as a nonsubstantive change request. If substantive changes result from pretesting, we will publish a 30-day Federal Register Notice allowing for public comment and submit the revised instruments to OMB for review and approval.

B5. Individuals consulted on statistical aspects and individuals collecting and analyzing data

Mathematica and consultant Dr. Elizabeth Davis of the University of Minnesota are conducting this project under contract number HHSP23320095642WC. Mathematica developed plans for this data collection and analysis and consulted with a technical expert panel (TEP). Leaders of the study team from the Office of Planning, Research and Evaluation (OPRE) and from Mathematica are listed in Table B.2, along with members of the TEP.

Table B.2. OPRE project officers, study team leadership, and TEP for the ECE-ICHQ

Name                     Affiliation
Ivelisse Martinez-Beck   Office of Planning, Research and Evaluation
Meryl Barofsky           Office of Planning, Research and Evaluation
Gretchen Kirby           Mathematica Policy Research
Kimberly Boller          Mathematica Policy Research
Pia Caronongan           Mathematica Policy Research
Andrew Burwick           Mathematica Policy Research
Annalee Kelly            Mathematica Policy Research
Louisa Tarullo           Mathematica Policy Research
Melanie Brizzi           Currently director of Child Care Services for Child Care Aware of America; she joined the TEP when she was director of the Office of Early Childhood and Out-of-School Learning, Indiana Family and Social Services Administration
Rena Hallam              Delaware Institute for Excellence in Early Childhood, University of Delaware
Lynn Karoly              RAND Corporation
Mark Kehoe               Brightside Academy
Henry Levin              Teachers College, Columbia University
Katherine Magnuson       School of Social Work, University of Wisconsin–Madison
Tammy Mann               The Campagna Center
Nancy Marshall           Wellesley Centers for Women, Wellesley College
Allison Metz             National Implementation Research Network, Frank Porter Graham Child Development Institute, University of North Carolina at Chapel Hill
Louise Stoney            Alliance for Early Childhood Finance


[1] If the study team finds that one or more of the states must be replaced (for example, because of limitations in the number of potential centers available to recruit), they will select another state with similar characteristics. In particular, they will select a state or states with a similar QRIS and similar stringency of licensing requirements. The team will also seek to maintain geographic variation across the three states.
