Assessing the Implementation and Cost of High Quality Early Care and Education

OMB: 0970-0499

Alternative Supporting Statement for Information Collections Designed for Research, Public Health Surveillance, and Program Evaluation Purposes



Assessing the Implementation and Cost of High Quality Early Care and Education: Field Test



OMB Information Collection Request

0970-0499





Supporting Statement

Part B

August 2019


Submitted By:

Office of Planning, Research, and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services


4th Floor, Mary E. Switzer Building

330 C Street, SW

Washington, D.C. 20201


Project Officers:

Ivelisse Martinez-Beck, Senior Social Science Research Analyst and

Child Care Research Team Leader

Meryl Barofsky, Senior Social Science Research Analyst



Part B


B1. Objectives

Study Objectives

The purpose of information collection under the current request is to field test instruments using measures developed in previous phases of the study (Phase 1 was completed under ACF’s generic clearance 0970-0355 and Phase 2 was completed under 0970-0499). The goals are to (1) refine the implementation measures to further test and improve their psychometric properties; (2) test the usability of the revised instruments; and (3) test preliminary associations between implementation, cost, and quality measures. The information collected will contribute evidence to the field by validating practical tools to measure how centers use resources to support high-quality early care and education and by examining preliminary evidence of associations between cost and quality. The data will be archived at the Child and Family Data Archive at the University of Michigan for future research and analyses by qualified researchers.

Generalizability of Results

This is a measurement development study intended to refine and validate instruments, in addition to examining preliminary evidence of associations between cost and quality. Data are not intended to support statistical generalization.


Appropriateness of Study Design and Methods for Planned Uses

Sites will be selected for geographic diversity and variation in investments in ECE, which is appropriate for further refining and validating the measures created in earlier phases of the study. For sites in this field test, adding an observational measure of ECE quality and accessing quality rating and improvement systems (QRIS) data from administrative records will support the triangulation of data to assess the measures’ validity.


The diversity of participating sites will support assessment of preliminary associations between site characteristics, implementation factors, quality, and cost structures of center-based ECE. This analysis is intended to assess the practicality of combining these data types, and will not be used to generate nationally representative estimates of the prevalence of program characteristics, practices, or costs. As noted in Supporting Statement A, this information is not intended to be used as the principal basis for public policy decisions and is not expected to meet the threshold of influential or highly influential scientific information.


The data collection mode, target population, and other study design features align with earlier data collection.

B2. Methods and Design

Target Population

The target population for this information collection is center-based early care and education (ECE) providers that serve children from birth to age 5. The sampling plan prioritizes the inclusion of different types of ECE centers. To answer questions about the reliability and validity of the measures across a variety of contexts, we first plan to conduct a field test with ten centers from Phase 2 of the previous data collection effort. This is to ensure that our measures appropriately capture the effects of the COVID-19 pandemic on the centers we are visiting. We next plan to recruit fourteen centers each from five new states (not including any states from earlier phases of data collection) that represent different geographical regions and types of investments in early care and education. This will provide us with a sample of 80 centers.

Sampling and Site Selection

The study team will consider the following characteristics in selecting the five focal states and plans to target similar proportions of different types of centers in each state (see Table B.1):

  • Quality rating and improvement systems (QRIS). Selecting states with a QRIS will help ensure some variation in quality based on QRIS ratings. We will also include some centers that do not participate in QRIS. The study team will aim to select at least some focal states that (1) administer the Program Administration Scale (PAS; Talan and Bloom, 2004) as part of their QRIS rating process; and (2) may be able to provide QRIS component-level data for analysis, as these data may allow for additional validation analysis.

  • Child care licensing regulations. We will include states that have variation in child care licensing requirements because these requirements set the floor for quality.

  • Geographic regions. The states included in the field test should be located in different Census-defined regions of the country to capture variation in state and regional contexts and conditions.

Table B.1. Targeted number of centers for the field test


Centers from Phase 2 Data Collection: 10 total
Community-based centers with medium/high QRIS rating^a
  Mixed funding^a: 4 per state; 20 total
  Limited or no public funding: 2 per state; 10 total
Community-based centers with low QRIS rating^a
  Mixed funding^a: 2 per state; 10 total
  Limited or no public funding: 1 per state; 5 total
Community-based centers with any QRIS rating or not participating in QRIS^a
  Mixed funding^a: 2 per state; 10 total
  Limited or no public funding: 1 per state; 5 total
Head Start/Early Head Start centers^b
  Head Start only: 1 per state; 5 total
  Head Start and Early Head Start: 1 per state; 5 total
TOTAL: 14 per state; 80 total

Note: Numbers in italics are subtotals and are not included in the overall total.

a Mixed funding centers are those that draw from tuition and one or more public funding sources, or centers that draw from multiple public funding sources.

b Centers that are funded in full with Head Start funding, or receive the majority of their funding from Head Start mixed with other public funding.


The study team will contact the ten centers for the feasibility test using contact information from Phase 2 data collection.


For the remaining centers, the study team will assemble contact lists for centers in the five states through state websites and, if necessary, Head Start Program Information Report (PIR) or Early Childhood Learning and Knowledge Center (ECLKC) data. The team will use this information to build a comprehensive list of centers that meet the selection criteria, with enough centers in reserve to replace those that are unable or unwilling to participate. We will build sampling lists based on public information on (1) QRIS rating level and (2) funding sources. Once we successfully recruit a center into the field test, we will conduct the engagement call to collect detailed information about the center’s characteristics. We will use this information to determine whether the center fits our recruitment goals based on the characteristics of interest. If a center has the characteristics needed, we will enroll it in the field test and begin data collection. Based on the prior phases of this work, the study team expects to initially send hard copy letters to 2,400 centers and follow up with individual emails to 800 centers to secure the participation of the 70 centers required for this study (see Attachment B for the advance letter and email). To identify 70 willing sites, we estimate that 800 centers will be contacted for recruitment and 100 centers will participate in the full study engagement call.


On-site field staff will use the time-use survey roster (Instrument 5) to collect information about the staff in each center and will distribute a survey (Instrument 6) to all eligible staff. They will also use the classroom roster (Instrument 7) to collect information about the classrooms in each center, including the ages of the children enrolled in each. We will select up to three classrooms per center, depending on center size and ages of children served.


B3. Design of Data Collection Instruments

Development of Data Collection Instrument(s)

Since the fall of 2014, the ECE-ICHQ study team has developed a conceptual framework (Attachment A); conducted a review of the literature (Caronongan et al. 2016); consulted with a technical expert panel; and collected and summarized findings from Phase 1 of the study (completed under ACF’s generic clearance 0970-0355) and Phase 2 of the study (completed under 0970-0499). Phase 1 included thoroughly testing data collection tools and methods, conducting cognitive interviews to obtain feedback from respondents about the tools, and refining and reducing the tools for the next phase. Phase 2 further refined the data collection tools and procedures through additional quantitative study of the implementation of key functions of center-based ECE providers and an analysis of costs.

This information collection request is to field test instruments based on the measures developed in previous phases of the study, reduced to include only items deemed necessary to accurately measure cost and implementation. The instruments were also updated to capture information about the COVID-19 pandemic. Table B.2 below outlines the final measures for the field test, including the time to complete each instrument in Phases 1 and 2 of the study.

Table B.2. Data collection activity for the ECE-ICHQ field test, by respondent and time to complete

Center recruitment call (Instrument 1)
  Respondents: Site administrator or center director; umbrella organization administrator (as applicable)
  Time to complete: Phase 1, 20 minutes (n/a for umbrella organization administrator); Phase 2, 20 minutes; field test, 20 minutes

Center engagement call (Instrument 2)
  Respondents: Site administrator or center director
  Time to complete: Phase 1, 25 minutes; Phase 2, 25 minutes; field test, 30 minutes

Implementation interview (Instrument 3)
  Respondents: Site administrator or center director; education specialist; umbrella organization administrator (as applicable)
  Time to complete: Phase 1, 5.5 hours^a; Phase 2, 3.5 hours; field test, 3 hours

Cost workbook (Instrument 4)
  Respondents: Financial manager at site; financial manager of umbrella organization (as applicable)
  Time to complete: Phase 1, 8 hours; Phase 2, 7.5 hours; field test, 8 hours

Staff rosters for time-use survey (Instrument 5)
  Respondents: Site administrator or center director
  Time to complete: Phase 1, n/a; Phase 2, 15 minutes; field test, 15 minutes

Time-use survey (Instrument 6)
  Respondents: Site administrator or center director; education specialist; lead and assistant teachers
  Time to complete: Phase 1, 30 minutes; Phase 2, 15 minutes; field test, 15 minutes

Classroom rosters for observations (Instrument 7)
  Respondents: Site administrator or center director
  Time to complete: Phase 1, n/a; Phase 2, n/a; field test, 30 minutes

a In Phase 1, part of the Implementation interview was administered as a self-administered questionnaire.

n/a = not applicable

B4. Collection of Data and Quality Control

The contractor team (Mathematica) will collect data for this study. Using information from publicly available websites, we will send advance materials to 800 centers in five states (Attachment B). We will then identify centers on the initial contact lists that fit specific selection criteria and send them a targeted email and letter (Attachment C). Project staff will call the director of each selected center to discuss the study and recruit the director to participate. The center recruitment and engagement call scripts (Instruments 1 and 2) will also collect information about the characteristics of the center if the director agrees to participate. If the center is part of a larger organization that requires the organization’s agreement, the recruiter will contact the appropriate person to obtain that agreement before recruiting the center (Instrument 1). Finally, the recruiter will schedule the data collection activities. All data collection activities will be remote except for the time-use roster (Instrument 5) and survey (Instrument 6) and the classroom roster (Instrument 7) and observations.

Implementation interview. The recruiter will send an email (Attachment D) to the center director to confirm the schedule and topics for the implementation interview. Interviewers will use the implementation interview protocol (Instrument 3) to conduct the interview by phone.

Cost workbook. The data collection team will send an email (Attachment E) to the center director or a staff member designated by the director who is familiar with the center’s finances to schedule a phone call providing an overview of the cost workbook. The financial manager at each center or umbrella organization will be the primary person to complete the cost workbook (Instrument 4), with support from the data collection team as necessary.

Time-use roster and survey. Field staff will visit centers and identify survey respondents with assistance from a center administrator. Each potential respondent will be listed on the time-use survey roster (Instrument 5). Field staff will distribute an advance letter inviting potential respondents to fill out the survey and a document with frequently asked questions about the survey (Attachment F). The advance letter will provide a link to the web-based survey (Instrument 6), but field staff will bring paper copies of the survey so that respondents will have the option to complete a paper copy if they prefer. Potential respondents will also receive an email invitation to complete the survey (Attachment F). A follow-up email (Attachment F) or letter (Attachment F) will be sent if the survey has not been completed within the requested time frame.

Classroom rosters for observations. When field staff visit centers to identify potential respondents for the time-use survey, they will also collect the information required to select classrooms for observation using the classroom roster form (Instrument 7). The center director may provide this information in various formats, such as printouts from an administrative record system or photocopies of hard copy lists or records.


Quality assurance (QA) will be built into every stage of data collection to ensure that data will be gathered and processed in a valid, standardized, and professional manner. QA includes field staff certification at the end of training, periodic checks to observe and evaluate field staff performance in the field, and ongoing monitoring of data collectors. Together, the data collector and QA reviewer will identify essential questions and items for follow-up. Data collectors will follow up with respondents as necessary, by phone or email. Once all essential follow-up items have been addressed and documented, the QA reviewer will conduct a final review to determine if data collection is complete.


Supervisors will oversee the work of the field staff by requiring each staff member to check in via telephone or email at the end of each day of their site visits, in order to monitor each day’s data collection progress. The use of electronic instruments will permit real-time monitoring of completed surveys.


Training team members who are certified as reliable on the observation measure (referred to as gold-standard observers) will conduct a joint classroom observation with field staff observers during the data collection period. The same certification criterion and procedures used at the conclusion of the observation training will be applied during the field period. If this process identifies an observer with unreliable ratings, the observer and their supervisor will be alerted.


B5. Response Rates and Potential Nonresponse Bias

Response Rates

The team plans to complete all of the cost and implementation data collections with all 80 centers that agree to participate in the study, following the selection protocol described in B2. However, if any centers withdraw from the study after agreeing to participate, a sample of 70 centers would still provide sufficient statistical power to achieve the analytic goals of the field test. As a reminder, the analytic goal of the field test is to assess the validity and reliability of measures and not to determine representative statistical estimates of the items.

Within the 80 selected sites, the team expects to invite 1,280 center staff to complete the time-use survey. The team expects to obtain an 87.5 percent response rate, for 1,120 time-use survey completes.

Maximizing response rates

The analysis plan requires obtaining complete data collection for costs and implementation from each participating center. To build center buy-in, initial communication materials will describe the importance of the study, outline the study goals, encourage center participation, and describe the offer of a $500 honorarium to participating centers. Mathematica has extensive experience in collecting implementation information and cost data with high response rates from staff in education, social services, and health programs. The team has further refined the cost and implementation data collection tools based on their use in Phase 2; these revisions are expected to support full completion.

Study protocols are designed to minimize the organizational burden of complete data collection. Following site selection, the study team will provide each participating center with a summary of the information collected, which center staff can use to assess the activities they pursue under each of the six key functions and how they allocate staff time and center resources to support each function. Providing information structured around the key functions can help center staff think about how they may be supporting quality within their center.

For the time-use survey, on-site field staff will collect contact information for selected administrators and teaching staff and distribute an invitation letter and instructions. Staff will be able to complete the survey using computers available at the center. The study team will provide them with a secure login ID and password to access the web instrument. The team will follow up by email.

The team’s strategies to maximize response rate are based on lessons learned from Phases 1 and 2 as well as experience in other studies. In Phase 2, the study team found that when field staff explained and distributed the time-use survey on site, remained to answer questions about the survey, and offered a $10 token of appreciation for completion, response rates were over 90 percent.

Nonresponse

Based on experience in earlier phases of the project, we do not expect substantial nonresponse in center-level data collection (implementation and cost). As part of study reporting, we plan to present information on the characteristics listed in Table B.1 for both the participating sites and the full universe of eligible sites.

The potential for challenges with survey non-response exists mainly for the time-use survey, to be completed by key administrators and teaching staff. The study team will work closely with each center to maximize completion of the time-use survey. See details on maximizing response rates in the section above. The team will follow up with non-responders by email and regular mail (Attachment F) to encourage survey completion.

The study will attempt to collect data from all teaching staff at each center in the field test to understand the extent of variation within centers and among staff with similar roles. The team will create time-use measures by job category using all available data from staff in a particular position. If there are no responses in a center from staff corresponding to a specific teaching position (for example, an assistant teacher), the team will explore several options for creating time-use measures for that position. One option is to develop time-use measures based on the average responses among all other respondents in the center who are in teaching positions. A second option is to impute time-use measures based on the responses from teaching staff in similar positions in a group of centers with similar characteristics. A third option is to create time-use measures using assumptions about time allocation based on information gathered about that staff member’s responsibilities in the center. The team will conduct sensitivity tests to assess whether and how different approaches to estimating measures for teaching positions with missing data affect measures at the center level.
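
To make the first two options concrete, the sketch below shows one way they might be implemented. This is a minimal illustration, not the study’s analysis code: the table layout, the column names (center_id, center_type, role, pct_time_instruction), and all values are hypothetical.

# A minimal sketch, assuming a long-format table of time-use responses, of the
# first two options described above for a teaching position with no respondents
# in a center. Column names, the grouping variable, and values are hypothetical.
import pandas as pd

def impute_center_average(df: pd.DataFrame, center: str, position: str) -> float:
    """Option 1: average the responses of other teaching staff in the same center."""
    pool = df[(df["center_id"] == center) & (df["role"] != position)]
    return pool["pct_time_instruction"].mean()

def impute_similar_centers(df: pd.DataFrame, center: str, position: str,
                           group_col: str = "center_type") -> float:
    """Option 2: average the responses of staff in the same position in similar centers."""
    group = df.loc[df["center_id"] == center, group_col].iloc[0]
    pool = df[(df[group_col] == group) & (df["role"] == position) & (df["center_id"] != center)]
    return pool["pct_time_instruction"].mean()

# Hypothetical data: center C2 has no assistant teacher respondents.
responses = pd.DataFrame({
    "center_id": ["C1", "C1", "C1", "C2", "C2", "C3", "C3"],
    "center_type": ["mixed", "mixed", "mixed", "mixed", "mixed", "mixed", "mixed"],
    "role": ["lead", "assistant", "assistant", "lead", "lead", "lead", "assistant"],
    "pct_time_instruction": [60, 70, 75, 55, 58, 62, 72],
})
print(impute_center_average(responses, "C2", "assistant"))   # 56.5 (leads in C2)
print(impute_similar_centers(responses, "C2", "assistant"))  # ~72.3 (assistants in similar centers)

The sensitivity tests described above would compare center-level measures produced under each of these approaches.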

The team will not collect information on the demographic characteristics of individual staff members that would be necessary to compare respondents with non-respondents; however, we will analyze characteristics of centers with high and low non-response in the study sample.


B6. Production of Estimates and Projections

To support evidence-informed program management and improvement, ACF will use the data from this ICR to assess the feasibility, validity, reliability, and usefulness of a field protocol to measure implementation, costs, and quality of ECE. The data will not be used to generate population estimates, either for internal use or dissemination.


B7. Data Handling and Analysis

Data Handling

Procedures for editing to mitigate or correct detectable errors, including checks built into computerized instruments.

Data from the instruments will be monitored for potential respondent errors as reflected in high levels of item nonresponse (“don’t know” and “refused” responses). ECE-ICHQ will rely on some paper instruments, as some respondents may choose to complete their time-use surveys on paper. All paper instruments will be reviewed by specially trained data quality clerks who will check for completeness, clarity, and adherence to routing and range rules. In addition, senior project staff members will review data collected electronically to determine the need for corrections to instruments.

Programs developed for the computer-assisted data entry (CADE) of classroom observation scores and the web-based surveys will contain built-in range checks, logic checks, and routing instructions to effectively eliminate most of the errors inherent in paper instruments. All data will undergo a series of data editing steps beginning with the field enrollment specialists’ review of all roster information entered into a web-based sampling program. Senior staff will then review the roster information and note any errors or inconsistencies for correction.

Procedures to minimize errors due to data entry, coding, and data processing.

Data collectors and a dedicated QA reviewer will review cost and implementation data to ensure that the data are complete and error free. Field observers will enter classroom observation data into laptop computers using a web-based CADE program. Data entry staff will enter the data from any paper time-use surveys into the web-based instruments. Because data from hard copy instruments are entered into the same web-based instrument, they will undergo the same range, logic, and consistency checks that are built into the web-based instruments. Entering the data from paper instruments into the web-based instruments also allows frequency review to be performed across all cases regardless of administration mode. Several questions in the time-use survey are open-ended and will require respondents to enter text directly. In addition, some responses to questions may not fit into any of the provided response categories; respondents will have the option to choose “other” and then specify a response. Probes and help screens will be built into the survey and available to respondents.
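
As an illustration of the range, logic, and routing checks described above, the sketch below shows how a keyed record could be flagged for follow-up. The field names, ranges, and rules are hypothetical and do not correspond to the actual instruments.

# Hypothetical range, logic, and routing checks for a keyed time-use record.
def check_record(record: dict) -> list[str]:
    errors = []
    # Range check: hours reported for a category must fall within a plausible range.
    hours = record.get("weekly_hours_instruction")
    if hours is None or not (0 <= hours <= 80):
        errors.append("weekly_hours_instruction out of range (0-80)")
    # Logic check: hours across categories should not exceed total hours worked.
    category_sum = sum(record.get(k, 0) for k in
                       ("weekly_hours_instruction", "weekly_hours_admin", "weekly_hours_other"))
    if category_sum > record.get("weekly_hours_total", 0):
        errors.append("category hours exceed total hours worked")
    # Routing check: a follow-up item should be answered only if the filter item applies.
    if record.get("works_with_infants") == "no" and record.get("infant_hours", 0) > 0:
        errors.append("infant_hours reported but respondent does not work with infants")
    return errors

# A record with an implausible value is flagged so data collectors can follow up.
print(check_record({"weekly_hours_instruction": 95, "weekly_hours_total": 40,
                    "weekly_hours_admin": 5, "weekly_hours_other": 2}))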

Data Analysis

The study team will build the measures in a series of incremental steps: first, analyzing the data at the item level; next, creating reliable summary variables for analysis by key function; and finally, analyzing summary variables or scales to examine associations among implementation, cost, and center characteristics (including quality).

Cost measures. The cost analysis team will use data from the center’s most recently completed fiscal year to estimate the total annual operating cost for a 12-month period. We will estimate total program cost by aggregating the cost of several categories: (1) salaries and fringe; (2) staff training and education; (3) contracted services; (4) facilities; (5) supplies and materials; (6) equipment; (7) other/miscellaneous costs; and (8) payments/overhead costs for operating as part of a larger organization/entity. From total program cost, we will calculate other key measures: for example, cost per child care hour and proportion of total costs allocated to each key function.
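
The sketch below illustrates the aggregation just described: the eight cost categories sum to a total annual operating cost, which is then divided by annual child care hours to produce cost per child care hour. The category amounts, enrollment, and hours are hypothetical examples, not study data.

# Hypothetical cost aggregation for one center's most recently completed fiscal year.
cost_categories = {
    "salaries_and_fringe": 620_000,
    "staff_training_and_education": 15_000,
    "contracted_services": 30_000,
    "facilities": 90_000,
    "supplies_and_materials": 25_000,
    "equipment": 10_000,
    "other_miscellaneous": 5_000,
    "umbrella_overhead_payments": 40_000,
}
total_annual_cost = sum(cost_categories.values())

# Cost per child care hour: total cost divided by annual hours of care provided.
children_enrolled = 60
hours_per_child_per_year = 2_000  # e.g., 50 weeks x 40 hours per child (hypothetical)
cost_per_child_care_hour = total_annual_cost / (children_enrolled * hours_per_child_per_year)

print(f"Total annual cost: ${total_annual_cost:,.0f}")               # $835,000
print(f"Cost per child care hour: ${cost_per_child_care_hour:.2f}")  # $6.96

Allocating total cost to each key function would combine figures like these with the staff time-use allocations collected through the time-use survey.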


Implementation measures. When data are complete and clean, the study team will develop implementation measures that represent a descriptor of each key function. To assess the validity and reliability of draft scales for each key function, the study team will first examine item-total correlations, which represent the degree to which differences among centers’ responses to each individual item are consistent with their responses to all other items in the scale as a whole. A high item-total correlation indicates that the item is consistent with the scale as a whole, which is a desirable characteristic for reliability. Next, we will identify the items with adequate item-total correlations (at least 0.2) and examine the face validity of the resulting set of items; in other words, we will examine whether the set of items reflects the content we would expect from a theoretical perspective. Finally, we will conduct categorical confirmatory factor analysis to identify key implementation factors and how they work together within each of the key functions.
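
A minimal sketch of the item-total correlation screen is shown below, assuming item responses are coded numerically. The item names, response values, and data structure are hypothetical; only the 0.2 retention threshold comes from the text above.

# Correlate each item with the total of all other items in a draft scale.
import pandas as pd

def corrected_item_total_correlations(items: pd.DataFrame) -> pd.Series:
    """For each item, compute its correlation with the sum of the remaining items."""
    return pd.Series({
        col: items[col].corr(items.drop(columns=col).sum(axis=1))
        for col in items.columns
    })

# Hypothetical responses to five items for one key function across six centers.
example = pd.DataFrame({
    "item_1": [1, 2, 3, 4, 3, 2],
    "item_2": [2, 2, 3, 4, 4, 1],
    "item_3": [1, 1, 2, 3, 3, 2],
    "item_4": [4, 1, 2, 1, 3, 2],
    "item_5": [1, 2, 3, 3, 4, 2],
})
itc = corrected_item_total_correlations(example)
retained_items = itc[itc >= 0.2].index.tolist()  # carried forward to the face-validity review
print(itc.round(2))
print("Retained:", retained_items)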


Analysis. The study team will use constructed cost and implementation measures to focus on:

  • Variation in implementation and cost measures: the team will inspect descriptive statistics for implementation and cost measures, by key function, across all centers and by a range of center characteristics (such as funding mix, whether the center serves infants or toddlers, or center size).

  • Associations between implementation and cost measures: the team will examine correlations between implementation and cost measures. According to our calculations, a sample of 80 centers would be sufficient to detect correlations of 0.31 or higher (see the illustrative calculation after this list).

  • Variation in the relationships among implementation, cost, and/or quality by selected center characteristics: the team will conduct multivariate analysis to examine the relationship between cost and implementation, controlling for selected center characteristics (including quality). The team will also explore whether the relationship between cost, implementation, and/or quality varies by other selected center characteristics. Quality measures will primarily be based on publicly available QRIS ratings and on classroom observations conducted by the study team. The study team will also explore the possibility of conducting additional analysis using center-level state administrative data (for example, additional quality measures collected through the state QRIS).
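
The calculation below reproduces the 0.31 figure cited in the second bullet, under standard assumptions that are not stated in the text: a two-sided test at the 0.05 level, 80 percent power, and the Fisher z approximation for the sampling distribution of a correlation.

# Minimum detectable correlation for the field test sample, under assumed test parameters.
import math
from scipy.stats import norm

n = 80          # centers in the field test sample
alpha = 0.05    # assumed two-sided significance level
power = 0.80    # assumed statistical power

z_alpha = norm.ppf(1 - alpha / 2)  # ~1.96
z_beta = norm.ppf(power)           # ~0.84

# Invert the Fisher z transformation to recover the minimum detectable correlation.
r_min = math.tanh((z_alpha + z_beta) / math.sqrt(n - 3))
print(round(r_min, 2))  # ~0.31

Under the same assumptions, a sample of 70 centers would detect correlations of roughly 0.33 or higher, consistent with the statement in section B5 that 70 centers would still provide sufficient power for the field test’s analytic goals.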

Data Use. After the field test, when measures have been finalized, the team will develop a user’s manual describing how to collect and analyze data to produce and interpret the measures, so that other researchers can use the instruments and measures to generate information to guide program, policy, and practice. If ACF opts to archive the data from this field test for secondary use, documentation will include information necessary to contextualize and assist in interpreting the data, such as descriptive tables comparing the characteristics of participating centers to national averages.


B8. Contact Person(s)

Meryl Barofsky, Office of Planning, Research, and Evaluation, [email protected]

Ivelisse Martinez-Beck, Office of Planning, Research, and Evaluation, [email protected]

Tracy Carter Clopet, Office of Planning, Research, and Evaluation, [email protected]

Gretchen Kirby, Mathematica Policy Research, [email protected]

Pia Caronongan, Mathematica Policy Research, [email protected]

Andrew Burwick, Mathematica Policy Research, [email protected]


Attachments



ATTACHMENT A: ECE-ICHQ CONCEPTUAL FRAMEWORK

ATTACHMENT B: ADVANCE MATERIALS

ATTACHMENT C: EMAIL AND LETTER TO SELECTED CENTERS

ATTACHMENT D: IMPLEMENTATION INTERVIEW EMAIL

ATTACHMENT E: COST WORKBOOK EMAIL

ATTACHMENT F: TIME-USE SURVEY OUTREACH

ATTACHMENT G: FEDERAL REGISTER NOTICE

INSTRUMENT 1: CENTER RECRUITMENT CALL SCRIPTS

INSTRUMENT 2: CENTER ENGAGEMENT CALL SCRIPT

INSTRUMENT 3: IMPLEMENTATION INTERVIEW PROTOCOL

INSTRUMENT 4: COST WORKBOOK

INSTRUMENT 5: TIME-USE SURVEY ROSTER

INSTRUMENT 6: TIME-USE SURVEY

INSTRUMENT 7: CLASSROOM ROSTERS FOR OBSERVATIONS




