
Alternative Supporting Statement for Information Collections Designed for

Research, Public Health Surveillance, and Program Evaluation Purposes



Culture of Continuous Learning Project: Landscape Survey




Formative Data Collections for ACF Research


0970 - 0356





Supporting Statement

Part B

April 2023


Submitted By:

Office of Planning, Research, and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services


4th Floor, Mary E. Switzer Building

330 C Street, SW

Washington, D.C. 20201


Project Officer:

Nina Philipsen


Part B

B1. Objectives

Study Objectives

The proposed information collection has one primary purpose: to gather information regarding early care and education professionals’ experience with their state, territory, or regional quality improvement (QI) delivery systems. This data collection will also ask about perceptions of collaboration and integration of these systems with states, territories, and among Head Start regions. The findings from this study will directly inform the Culture of Continuous Learning Project (CCL) in two ways: 1) the data collected will help contextualize the CCL case studies (which will be conducted after Office of Management and Budget approval of a full information collection request1) by elucidating the current landscape of QI delivery systems in states, territories, and Head Start regions in which the CCL case studies will be located; and 2) the entire study, including this data collection, will inform the development of study design options for future evaluations of the Breakthrough Series Collaborative (BSC).

Generalizability of Results

This study is intended to present an internally valid description of survey respondents' understanding of the infrastructure and policies that are the focus of each survey, not to promote statistical generalization to other sites or service populations.


Appropriateness of Study Design and Methods for Planned Uses

We will use web-based surveys to collect information directly from respondents. The surveys are designed to gather details about the infrastructure, policies, and perceptions of respondents regarding state, territory, and Head Start regional QI delivery systems. A survey method with structured close-ended response options and open-ended fields is an appropriate design for the study's purposes, which include comparing QI delivery infrastructures across states, territories, and Head Start regions. Providing structured close-ended response options and open-ended fields will minimize survey burden while also supplying sufficient detail to answer the study's guiding questions (see Supporting Statement A2 for the study's specific guiding questions). Respondents will be given a survey that is tailored to the organizational conditions and language unique to their respondent category (e.g., a Head Start education manager will not be asked questions that are only appropriate for a CCDF administrator). The CCL team will not use the information collected from this study to make statistical inferences or generalize findings beyond the study sample.


As noted in Supporting Statement A, this information is not intended to be used as the principal basis for public policy decisions and is not expected to meet the threshold of influential or highly influential scientific information. ACF will, however, use the information to provide context for the case study implementation and to inform future evaluation designs of the BSC model in the child care and early education field.  


B2. Methods and Design

Target Population

The research team will recruit administrators, directors, and managers of various early child care and Head Start systems to ask them about aspects of their policies and the way those policies are implemented. To create a complete picture of the professional development and quality improvement infrastructure landscape in states, territories, and Head Start regions, we will attempt to recruit respondents with different roles in administering or overseeing QI activities in ECE, as well as respondents who are recipients of ECE QI activities. In each state, the District of Columbia, and the five U.S. territories, we will recruit four respondent types: a) the state or territory Child Care and Development Fund (CCDF) administrator (N=56), b) the state Quality Rating and Improvement System (QRIS) administrator (N=56), c) the Pyramid Model State Lead2 (N=32), and d) the lead of a quality improvement delivery agency (such as a Child Care Resource and Referral agency) (N=56). Within each Head Start region, we will recruit: a) state Head Start Collaboration Office (HSCO) directors (N=54), b) Head Start Regional Program Managers or Regional ECE Specialists (N=12), and c) a sample of Head Start Education Managers at local programs (N=280).


Sampling

We plan a census data collection, gathering information from as many states/territories and Head Start regions as possible. Because professional development and quality improvement systems vary across states and territories, understanding each state/territory's needs is essential in order to contextualize findings from the CCL case studies and to identify future research for ACF. With the goal of describing the variety of QI features in states, territories, and Head Start regions, and of informing future evaluation designs, we propose including all 50 states, the District of Columbia, the five U.S. territories, and the Head Start regions in this study.


For five respondent types (CCDF administrator, HSCO director, QRIS administrator, Head Start Regional Program Manager, and the Pyramid Model State Lead), we will recruit the universe of respondents, as there is typically only one per state, territory, or region. For Head Start Education Managers, we will request contact information from the Office of Head Start (OHS) for up to approximately 15 education managers per state, assuming a response rate of about 30 percent. This will ensure we have survey information from five Head Start education managers per state or territory. For this study, it is important to collect information from as many states, territories, and regions as possible to thoroughly understand variation. This census approach will help identify future research designs for professional development and quality improvement studies, including a future, rigorous study of the Breakthrough Series Collaborative (BSC) approach being explored in the CCL case studies. Selecting a sample of states/territories and regions would not provide a complete picture of the professional development and quality improvement systems, so requesting participation from all states, territories, and Head Start regions is necessary.
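As an illustrative sketch only (not part of the study protocol), the invitation arithmetic above can be expressed as follows; the helper function is an assumption, and only the five-completes target and the 30 percent response rate come from the text:

```python
# Illustrative sketch: back out the number of invitations needed per state to
# reach a target number of completed surveys, given an assumed response rate.
import math

def invitations_needed(target_completes: int, response_rate: float) -> int:
    """Smallest number of invitations expected to yield the target completes."""
    return math.ceil(target_completes / response_rate)

# With the study's assumptions: 5 completes per state at a ~30% response rate.
# Strictly, 5 / 0.30 rounds up to 17; the study requests "up to approximately
# 15" per state, since 15 invitations at 30% yields roughly 5 completes.
per_state = invitations_needed(target_completes=5, response_rate=0.30)
print(per_state)       # invitations per state under the strict calculation
print(15 * 0.30)       # expected completes from the study's ~15 requests
```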

Respondents will be identified by compiling names and contact information through publicly available agency directories.


For leads of quality improvement delivery agencies (i.e., CCR&R or QI delivery contractors), we will create a list of agencies in each state and territory as described in their CCDF plan. We will randomly select one contractor per state/territory to invite to participate in the survey. For Head Start Education Managers, we will randomly select grantees from the Head Start Program Information Report (PIR). Table B2 summarizes the sample size and identification strategy for each group.
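The per-state random selection described above can be sketched as follows; the agency names, state labels, and seed are hypothetical, used only to show the one-draw-per-jurisdiction rule:

```python
# Illustrative sketch: randomly select one QI delivery agency per state or
# territory from the lists compiled from CCDF plans (names are hypothetical).
import random

agencies_by_state = {
    "State A": ["Agency 1", "Agency 2", "Agency 3"],
    "State B": ["Agency 4"],
    "State C": ["Agency 5", "Agency 6"],
}

rng = random.Random(2023)  # fixed seed so the draw is reproducible/auditable
selected = {state: rng.choice(agencies)
            for state, agencies in agencies_by_state.items()}
print(selected)  # exactly one invited agency per state/territory
```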


Table B2. Landscape survey respondent types, sample sizes, and identification strategies

Respondent type | Sample size | Identification strategy
CCDF administrator | 56 (50 states, 5 territories, & DC) | Public directory
CCR&R or QI delivery agency/contractor | 56 (50 states, 5 territories, & DC) | All QI delivery contracted organizations identified in CCDF Plans FY22-24; randomly select one contracted agency per state or territory
Head Start Collaboration Office director | 54 (50 states, 1 office for American Indian and Alaska Native (AIAN) Head Start, 1 office for Migrant and Seasonal Head Start (MSHS), Puerto Rico, & DC) | Public directory
Head Start Education Manager | 280 | Randomly select grantees from the Head Start Program Information Report (PIR)3; sample goal is up to 5 respondents per state and territory
Head Start Regional Program Manager | 12 (10 ACF regions, 1 region for all AIAN grantees, & 1 region for all MSHS grantees) | Directory requested from Office of Head Start
Pyramid Model State Lead | 32 (31 states & Guam) | Public directory
QRIS director or state professional development director* | 56 (50 states, 5 territories, & DC) | Public directories (e.g., Quality Compendium)

*In states without an active QRIS, state professional development directors will be invited as alternative respondents.


B3. Design of Data Collection Instruments

Development of Data Collection Instruments

The surveys for this data collection were developed by a team of researchers based on project team expertise, a scan of secondary data, a review of similar surveys, and expert engagement interviews with staff from the Office of Head Start and the Pyramid Model National Consortium; each instrument was reviewed by the project officers.


Each survey asks the respondent about features of their state's, territory's, or Head Start region's quality improvement system. Some items are asked across respondent types, while other items are tailored to the individual respondent type, based on their role in the QI system. Each respondent will respond to only one survey.


B4. Collection of Data and Quality Control

Who will be collecting the data (e.g., agency, contractor, local health departments)?

All data will be collected by the contractor (Child Trends). Each instrument will be programmed, quality tested, and managed by the research team.


What is the recruitment protocol?

Once respondents have been identified by the methods described in Table B2, they will be invited via email to participate in the study. The email will describe the study and ask that they complete a survey. See Appendices A and B for examples of initial email and follow-up email language.


What is the mode of data collection?

Data collection will be conducted via a web-based survey hosted on REDCap, a secure online data collection platform that allows easy access for survey respondents, supports customizable reminders, and restricts survey data access to members of the study team.


What data evaluation activities are planned as part of monitoring for quality and consistency in this collection, such as re-interviews?

The surveys will be programmed and quality checked by research staff prior to being sent to respondents. While respondents are taking a survey, REDCap’s built-in validation functions will ensure responses are within expected ranges. If the response does not pass validation, the participant will be prompted to correct the response. REDCap has response rate tracking functionality. If participants start the survey but do not complete it, reminder emails will be sent as part of the outreach efforts. The CCL team will regularly monitor survey responses and conduct quality assurance checks.




B5. Response Rates and Potential Nonresponse Bias

Response Rates

Our goal is to include as many states, territories, and regions as possible, plus the District of Columbia. In a previous study (“Integration of Head Start and State Early Care and Education Systems,” OMB #0970-0356) that included surveys with HSCO directors, CCDF administrators, and QRIS administrators, response rates ranged from about 35 percent to about 70 percent. We expect around 30 percent of Head Start education managers to respond to the survey, and higher than 30 percent response rates from other respondents. The surveys are not designed to produce statistically generalizable findings and participation is wholly at the respondent’s discretion.

Non-Response

As the study is not intended to produce statistically generalizable findings, non-response bias will not be calculated. Respondent demographics will be documented and reported in the internal written materials associated with the data collection. The study team will qualitatively assess non-responses to monitor for any gaps in ECE respondent types (e.g., Head Start Education Managers, CCDF Administrators) and localities (i.e., state, territory, and Head Start region) represented in the data, and will tailor ongoing recruitment efforts to improve data collection as needed.


B6. Production of Estimates and Projections

The data will not be used to generate population estimates, either for internal use or for dissemination.


B7. Data Handling and Analysis

Data Handling

The CCL team will build validation checks into each REDCap survey to ensure responses are within expected ranges. Skip logic will allow respondents to answer only the questions that are relevant to them. The survey team will also conduct ongoing reviews of the data, including frequencies and cross-tabulations, to ensure each survey is running as expected. The data will be stored on REDCap's secure server. Only research team members who have completed human subjects research and data security training will have access to data collected through REDCap.


Data Analysis

The research team will summarize survey responses using descriptive statistics (e.g., frequencies, means, standard deviations). Findings will be summarized for each respondent type, and where items are identical across instruments, the research team will use basic comparative statistics such as t-tests, chi-square tests, and ANOVAs where appropriate. The research team will also use maps to display the variety of quality improvement features across states, territories, and Head Start regions. Finally, the research team will use descriptive statistics to create visual displays and exhibits, such as graphs, figures, and infographics.
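A minimal sketch of the planned descriptive summaries, using hypothetical Likert-style responses on a shared item; the group names and values are illustrative, and the comparative tests mentioned above (t-tests, chi-square, ANOVAs) would be layered on where items overlap across instruments:

```python
# Illustrative sketch: frequencies, means, and standard deviations by
# respondent type for one shared survey item (all data are hypothetical).
from collections import Counter
from statistics import mean, stdev

responses = {
    "CCDF administrators": [4, 5, 3, 4, 4, 2, 5],
    "QRIS administrators": [3, 3, 4, 2, 3, 4, 3],
}

for group, values in responses.items():
    freq = Counter(values)  # frequency table for the item
    print(group,
          "n =", len(values),
          "mean =", round(mean(values), 2),
          "sd =", round(stdev(values), 2),
          "frequencies =", dict(sorted(freq.items())))
```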


For open-ended survey questions, the research team will use a thematic coding strategy. One trained researcher will review all responses to a question to create a preliminary coding scheme. The researcher will then code the open-ended responses, returning to previously coded responses to re-code them as the coding scheme evolves. A second trained researcher will double-code all open-ended responses. Where there is disagreement between the two researchers, the study lead will confer with the coders to hear each researcher's rationale and come to a final consensus on the appropriate code.
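As one hedged illustration of the double-coding step, simple percent agreement between the two coders can be tracked so that disagreements are flagged for the consensus discussion; the codes and responses below are hypothetical:

```python
# Illustrative sketch: percent agreement between two coders' code assignments
# for the same open-ended responses (codes are hypothetical). Indices with
# disagreement are flagged for consensus review.
coder_1 = ["funding", "staffing", "funding", "training", "other"]
coder_2 = ["funding", "staffing", "training", "training", "other"]

disagreements = [i for i, (a, b) in enumerate(zip(coder_1, coder_2)) if a != b]
agreement = 1 - len(disagreements) / len(coder_1)
print(f"agreement = {agreement:.0%}, responses needing consensus: {disagreements}")
```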


Data Use

The research team will produce an internal report to ACF with recommendations on how to weigh the information from the surveys (along with secondary data also collected to inform the survey instruments), how to interpret the findings, and how the data could be used to inform the CCL case studies and future research on BSC implementation.


Although the entire report will not be made public, it may be used to inform other future efforts, such as CCL or ACF research design documents, and to contextualize research findings from follow-up data collections that have full PRA approval. In sharing findings in these other contexts, we will describe the study methods and their limitations regarding generalizability and use as a basis for policy.

B8. Contact Persons

Kathryn Tout 

Vice President of Early Childhood Research and Partnerships 

708 North First Street, Suite 333 | Minneapolis, MN 55401 

[email protected] 

(612) 250-1592 

 

Anne Douglass 

Executive Director, Institute for Early Education Leadership & Innovation 

Professor and Program Director, College of Education & Human Development 

University of Massachusetts Boston 

[email protected] 

 

Nina Philipsen 

Senior Social Science Research Analyst, Division of Child and Family Development 

Office of Planning, Research, and Evaluation 

Administration for Children and Families 

US Department of Health and Human Services 

330 C Street, SW | Washington, DC 20201 

[email protected]  


Attachments

Instrument 1: CCDF Administrator Survey

Instrument 2: CCR&R or QI Delivery Contractor Survey

Instrument 3: Head Start Collaboration Office Director Survey

Instrument 4: Head Start Education Manager Survey

Instrument 5: Head Start Regional Program Manager or Head Start Regional ECE Specialist Survey

Instrument 6: Pyramid Model State Lead Survey

Instrument 7: QRIS Administrator or PD Director Survey

Appendix A: Recruitment materials/outreach

Appendix B: Recruitment materials/outreach for Head Start Education Managers and CCR&R Representatives


1 The referenced CCL case studies will be submitted to OMB as a full ICR in the near future. The 30-day public comment period for that request began on December 16, 2022 (ICR Reference No: 202212-0970-007).

2 The Pyramid Model is a framework for how to support children’s social emotional development in early childhood classrooms and a focus of the larger CCL project. Many states have adopted the use of Pyramid Model as their primary social emotional development strategy. For these states, there is a “state lead” who is typically a state agency staff member in either a human service, education, or health agency.

3 OMB# 0970-0427


