

Survey of Head Start Grantees on

Training and Technical Assistance





OMB Information Collection Request

New Collection

OMB #: 0970-0532





Supporting Statement

Part B

MAY 2019





Submitted By:

Office of Planning, Research, and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services



4th Floor, Mary E. Switzer Building

330 C Street, SW

Washington, D.C. 20201



Project Officer:

Ann Rivera, Ph.D.





B. Collections of Information Employing Statistical Methods

B.1. Respondent Universe and Sampling Methods

This section provides an overview of the respondent universe and study population for each data collection component of the Survey of Head Start Grantees on Training and Technical Assistance, and describes the procedures for identifying the study population and collecting the data. There are no unusual problems requiring specialized sampling procedures.

B.1.1. Overview of Respondent Universe, Study Population, and Expected Response Rates

Analyses of 2017 Head Start Program Information Report (PIR) data and Head Start Enterprise System (HSES) administrative data indicate that Head Start grantees vary by agency type, organizational structure, and number of children served. Head Start grantees include community action agencies, government agencies, private and public non-profit and for-profit agencies, and tribal governments. Grantees also range in the number of centers they operate and the number of children they serve. Thirteen percent of grantee organizations provide services at one center, 39 percent provide services at 2-8 centers (an average of 5 centers), and 38 percent provide services at 9 or more centers (an average of 15 centers). Together these organizations account for approximately 680,000 Head Start slots. Three percent of Head Start grantees provide direct services at program centers but also delegate services to intermediary organizations; such grantees account for about 91,000 Head Start slots. Another three percent of grantees delegate services and do not provide direct services, also accounting for about 91,000 slots.

This study will collect information through two web surveys. We will invite all grantee organizations to respond to the first survey (Wave 1 - Head Start Director survey). The second online survey (Wave 2 - Head Start Manager/Coordinator survey) will collect further domain-specific information from grantee organizations that complete the Wave 1 survey. Four domains of Head Start practice are of interest: 1) fiscal operations; 2) early childhood development and education; 3) health, mental health, and safety; and 4) family and community services. Organizations will be assigned to the domain-specific Wave 2 surveys using representative sampling procedures.

The unit of observation for this study, for both the Wave 1 and all Wave 2 surveys, is the grantee organization. Wave 1 respondents will be individuals with organization-wide perspective and knowledge, most commonly the grantee director. For the Wave 1 sample, the universe of Head Start grantee organizations will be identified from the 2019 HSES data. The HSES database contains contact information for the directors of the approximately 1,600 organizations. Exhibit B1.1 below shows expected counts based on 2017 HSES data. The diversity of agency types, organizational structures, and enrollment sizes among Head Start grantees necessitates that we approach all 1,600 grantees to request their participation in the survey. Approaching all grantees for the Wave 1 survey maximizes this data collection's potential to reliably represent the wide range of organizations comprising Head Start grantees and to achieve the number of Wave 2 responses needed to understand the variety of T/TA processes, needs, and experiences across grantees and key practice areas.

Wave 2 respondents will be individuals with detailed knowledge of one of the four domains listed in the previous paragraph. These will be managers/coordinators identified by the Wave 1 respondent (i.e., the grantee director) as having responsibility for one of the four domains noted above. Each Wave 2 survey will be administered to approximately one quarter of the grantee organizations that responded to the Wave 1 survey. Given the expected number of grantee organizations and anticipated response rates, we expect approximately 215 completes per Wave 2 survey. This quantity of completed cases will produce adequate estimates from the Wave 2 surveys; fewer completes could yield estimates with less than the desired precision. Thus, each grantee organization surveyed in Wave 1 will have one manager surveyed to represent one of the four domains in Wave 2, such that the information collected about the four practice areas will be distributed across all grantees that respond to the Wave 1 survey.

A primary objective of the study is the comparison of attributes between Wave 2 domains. To collect data that allows these comparisons while minimizing respondent burden on each grantee, only one Wave 2 domain survey will be administered to each grantee organization that responds in Wave 1. The limited number of grantee organizations, combined with the need for enough cases to detect differences between domains, requires that all 1,600 grantee organizations be surveyed in Wave 1. Administering one randomly selected Wave 2 survey per organization results in 25% of all responding grantee organizations being surveyed for each of the Wave 2 domains. Exhibit B1.1 shows that, with expected response rates, this will yield about 215 respondents per domain. This quantity provides acceptable power to detect differences between groups, as shown in Exhibit B1.2; fewer observations within each domain would permit detection of only large differences between domains. Maximizing the ability to detect differences between Wave 2 domains while minimizing burden on each grantee organization therefore requires sampling all grantee organizations in Wave 1.

Expected Response Rates. Based on the 2012 National Survey of Early Care and Education survey of center-based providers1 and the 2012-2013 Head Start Health Matters survey2, we expect 75% of Head Start grantee organization directors to respond to and complete the Wave 1 survey. Likewise, based on the high percentage of Head Start directors who supplied information about health coordinators in the Health Matters survey, we expect 95% of directors completing the Wave 1 survey to provide adequate information about domain managers/coordinators to allow for sampling and contacting selected individuals for the Wave 2 domain-specific surveys. This corresponds to approximately 71% (95% x 75%) of Wave 1 respondents completing a Wave 2 survey.
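The expected counts in Exhibit B1.1 follow directly from these rates. The short calculation below is a worked illustration only, not part of the study's procedures; the exhibit's figures are rounded.

```python
# Worked arithmetic behind Exhibit B1.1; exhibit figures are rounded.
universe = 1600                                    # all grantee organizations
wave1_completes = universe * 0.75                  # 1,200 Wave 1 completes
wave2_sampled = wave1_completes * 0.95             # 1,140 sampled for Wave 2
per_domain_sampled = wave2_sampled / 4             # 285 per domain
per_domain_completes = per_domain_sampled * 0.75   # ~214, shown as 215
wave2_completes = wave2_sampled * 0.75             # 855, shown as 860
```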

Exhibit B1.1: Expected Respondent Universe and Response Rates

Survey Component and                           Sampled      Expected        Expected
Respondent Universe Description                Unit Count   Response Rate   Survey Completes

Wave 1
  Head Start Director Survey                   1,600        75%             1,200

Wave 2 (assumes that 95% of Wave 1 completes result in a sampled Wave 2 case)
  Head Start Manager/Coordinator Surveys:
    Fiscal Operations                            285        75%               215
    Early Childhood Development and Education    285        75%               215
    Family and Community Services                285        75%               215
    Health, Mental Health, and Safety            285        75%               215
  Total for Wave 2                             1,140        75%               860



B.1.2. Statistical Methodology for Stratification and Sample Selection and Degree of Accuracy Needed

Response rates of 75% are expected for both the Wave 1 and Wave 2 surveys (see Exhibit B1.1). Each survey will be weighted to account for non-response. In addition, any bias observed in non-response patterns will be corrected via post-stratification.
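As one illustration of a standard post-stratification adjustment (the study commits only to applying standard procedures if needed, so the stratification variable, weights, and function below are hypothetical), respondent base weights within each stratum are scaled so that the weighted respondent total matches the known frame total for that stratum:

```python
# Hypothetical sketch of a post-stratification adjustment: scale each
# responding case's base weight so weighted respondents match the known
# frame total within each stratum (e.g., agency type).
from collections import defaultdict

def poststratify(cases, frame_totals):
    """cases: iterable of (stratum, responded, base_weight) tuples.
    frame_totals: dict mapping stratum -> known population count.
    Returns a list of (stratum, adjusted_weight) for respondents."""
    weighted_respondents = defaultdict(float)
    for stratum, responded, weight in cases:
        if responded:
            weighted_respondents[stratum] += weight
    adjusted = []
    for stratum, responded, weight in cases:
        if responded:
            factor = frame_totals[stratum] / weighted_respondents[stratum]
            adjusted.append((stratum, weight * factor))
    return adjusted
```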

Much of the analysis of both the Wave 1 and Wave 2 surveys will consist of the proportions of subgroups of interest that possess a particular characteristic, such as the proportion of grantees receiving T/TA for early childhood development and education. The proportion of grantees receiving such T/TA could be examined between subgroups that are evenly split across the entire population of grantees, or between subgroups that are disproportionately spread across the population. In addition, the expected proportion having the characteristic of interest can vary greatly. Exhibit B1.2 below shows the expected observable differences in proportions between groups for various configurations of group split and expected proportion, for both Wave 1 and Wave 2, given the expected population sizes and response rates from Exhibit B1.1 above.

Exhibit B1.2 shows that if we are interested in comparing a proportion with an expected value near 0.2 between two equally sized subgroups covering all Wave 1 grantees, our proposed number of completed interviews would allow us to say with 95% confidence that an observed difference of 0.068 indicates a true difference in the proportion between those two groups. Similarly, for comparisons of the same proportion between two subject matter domains from Wave 2, such as early childhood development and education versus family and community services, we could say with 95% confidence that the proportions differ between the two domains if the observed difference were 0.118.

The Wave 2 row shows that even for expected base proportions near 0.5, differences of 0.133 must be observed to be at least 95% confident that the proportion differs between two Wave 2 subject matter domains. While this resolution is acceptable, a larger sample would provide better resolution. However, we are limited by the total number of grantees, so these figures represent the best possible resolution for comparisons between Wave 2 domains. Achieving this while administering a single Wave 2 survey to each grantee requires including all grantees in Wave 1.
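Figures of this kind can be approximated with a standard two-sample proportion calculation. The sketch below is illustrative only, assuming a two-sided test at the 0.05 level with 80% power; the study's exact assumptions (e.g., design effects or finite population corrections) may differ, so its results are close to, but not identical to, the Exhibit B1.2 values quoted above.

```python
# Illustrative approximation of the minimum detectable difference between
# two independent proportions (two-sided alpha = 0.05, 80% power).
# Assumed parameters; the study's exact computation may differ.
from scipy.stats import norm

def detectable_difference(n1, n2, p, alpha=0.05, power=0.80):
    """Smallest difference in proportions detectable between two groups
    of sizes n1 and n2, at a common base proportion p."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    se = (p * (1 - p) * (1 / n1 + 1 / n2)) ** 0.5
    return z * se

# Wave 1: 1,200 completes split into two equal subgroups of 600
print(round(detectable_difference(600, 600, p=0.2), 3))  # ~0.065
# Wave 2: two domains of ~215 completes each
print(round(detectable_difference(215, 215, p=0.2), 3))  # ~0.108
print(round(detectable_difference(215, 215, p=0.5), 3))  # ~0.135
```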





Exhibit B1.2: Detectable Difference in Proportions Given Expected Response Rates



B.2. Procedures for Collection of Information

In this section, we describe the data collection procedures for the online surveys, as well as the relevant estimation procedures. Because this is a one-time data collection, periodic data collection cycles are not applicable.

B.2.1 Data Collection Procedures

Data collection will begin upon OMB approval and is expected to take place over an 8-month period in 2019-2020.

As described in B.1.1 Overview of Respondent Universe, Study Population, and Expected Response Rates, the entire universe of grantee organizations and their directors will be selected into the sample for the Wave 1 survey of grantee organization directors. One Wave 2 respondent will then be selected from each Wave 1 interview that includes adequate information for Wave 2 sampling. The manager/coordinator sampled from each eligible Wave 1 respondent will be randomly selected such that each grantee is represented by exactly one of the four Wave 2 survey domains and each domain covers approximately 25% of all grantees who participated in Wave 1, as sketched below.
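The following sketch shows one way such an assignment could be implemented. It is hypothetical (including the grantee IDs and seed) and is not the study's production sampling code.

```python
# Hypothetical sketch: assign each eligible Wave 1 respondent to exactly
# one of the four Wave 2 domains, yielding near-equal ~25% allocations.
import random

DOMAINS = [
    "fiscal_operations",
    "early_childhood_development_and_education",
    "health_mental_health_and_safety",
    "family_and_community_services",
]

def assign_wave2_domains(grantee_ids, seed=2019):
    """Shuffle eligible grantees, then deal them across the four domains
    in turn, so each domain receives about one quarter of the cases."""
    rng = random.Random(seed)
    shuffled = list(grantee_ids)
    rng.shuffle(shuffled)
    return {gid: DOMAINS[i % len(DOMAINS)] for i, gid in enumerate(shuffled)}

# Example with ~1,140 eligible cases, per Exhibit B1.1
assignments = assign_wave2_domains(range(1140))
```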

NORC will use multi-mode data collection for both survey waves, beginning with web-based data collection and transitioning to telephone collection as needed (as described in A.3 Improved Information Technology to Reduce Burden). NORC will begin data collection for each wave with an advance letter sent by email to all sample members explaining the study and encouraging their participation (see Appendix E.1 for the Head Start Director letter and Appendix F.1 for the Head Start Manager/Coordinator letter). Wave 1 mailings to Head Start directors will begin upon OMB approval, estimated in October 2019. Wave 2 will begin 4 months later, estimated in February 2020. Respondents will be provided with a URL, PIN, and password to complete the survey online. Approximately two weeks after the initial mailing, NORC will make additional contacts, using the strategies outlined in section B.3, to prompt non-respondents. These contacts are intended to reduce non-response and will continue for approximately ten weeks after the initial contact. Recruitment materials for Head Start directors and managers/coordinators include a reminder email, a reminder phone call from a field interviewer, a postcard, and a letter sent by USPS. These materials are provided in Appendices E and F.

B.2.2 Estimation Procedure

Given the high sampling rates and the simplicity of the sampling design, estimates will be generated using standard quantitative analysis methods. Non-response in both Wave 1 and Wave 2 will be examined, and standard statistical procedures to correct for potential non-response bias will be applied if deemed necessary.

B.3. Methods to Maximize Response Rates and Deal with Nonresponse

NORC expects to obtain a high response rate for both the Wave 1 and Wave 2 surveys, as discussed in Section B.1. We will execute a number of strategies to achieve the best response rates possible. A letter from the Office of Head Start will be sent to encourage participation in the surveys (see Appendix D). The Head Start directors and managers/coordinators will receive an email invitation to participate in the study that describes the motivation for the study and highlights the importance of participation (see Appendices E.1 and F.1, respectively). We will send email and telephone reminders to encourage completion of the surveys by directors and managers/coordinators (see Appendices E.3 and E.4 for the Head Start Director and Appendices F.2 and F.3 for the Manager/Coordinator).

Despite encouraging participation through clear and attractive materials, we anticipate some nonresponse to our initial requests to participate in the study. Head Start directors and managers/coordinators invited to participate in the online survey will be assigned a unique ID used to track selected cases throughout the survey fielding process. This allows us to monitor whether the directors and managers/coordinators complete the questionnaire in a timely manner. For those who do not initially respond, we will use several strategies to encourage participation. For example, NORC will follow up the initial mailing with a postcard and a letter (see Appendices E.5 and E.6 for the Head Start Director and Appendices F.4 and F.5 for the Manager/Coordinator), again encouraging participation via the web. Once the mail prompting phase has been completed, NORC will enlist field interviewers to prompt non-respondents by telephone to complete the survey (see Appendix E.4 for the Head Start Director and Appendix F.3 for the Manager/Coordinator). Field interviewers can either encourage non-respondents to complete the survey over the web or complete the survey with them over the phone. Additionally, NORC anticipates having a concentration of sample members in larger urban areas; in these areas, NORC may enlist field interviewers to make in-person outreach to complete the survey.

Although we will make our best efforts to avoid nonresponse, we will also have procedures in place to convert nonresponse cases and maximize completion rates. Non-respondent follow-up contact will include postcards, prompting letters, and phone follow-up, using the recruitment materials and procedures described above. In reporting our results, we will calculate nonresponse rates according to the standards promulgated by the American Association for Public Opinion Research. Under this standard, the response rate will be calculated as the ratio of the number of eligible completed cases to the number of eligible cases.
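For illustration only (the figures come from the Exhibit B1.1 expectations, not from actual fielding results), this definition reduces to a simple ratio:

```python
# Response rate as defined above: eligible completes / eligible cases.
# Figures are the Exhibit B1.1 expectations, not actual results.
def response_rate(eligible_completes, eligible_cases):
    return eligible_completes / eligible_cases

print(f"{response_rate(1200, 1600):.0%}")  # 75%, the expected Wave 1 rate
```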

B.4. Tests of Procedures or Methods to be Undertaken

As described in A.2 Purpose of Survey and Data Collection Procedures, some items included in the Head Start Director and Head Start Manager/Coordinator surveys were adapted from existing Head Start surveys (see Appendices A.1 and A.2). NORC developed new items to measure constructs for which existing measures are not currently available. These items draw upon phrasing and language from prior research on Head Start and other studies of training and technical assistance. ACF federal project officers (FPOs) and specialists who oversee Head Start operations and training and technical assistance efforts reviewed all of the questions and response items for both survey instruments. In addition, the study team consulted with the ACF FPOs to develop the domain-specific constructs and questions for the Head Start Manager/Coordinator survey. During an earlier phase of the project (under the Generic Clearance for Formative Data Collections, OMB #0970-0356, issued in April 2016), the study team consulted extensively with ACF regional stakeholders (i.e., regional program managers and specialists) about terminology related to training and technical assistance. This information also informed survey development.

NORC conducted cognitive interviews on the two surveys in February-March 2019. The purposes of the cognitive interviews were to: 1) ensure that the questions are understandable, use language familiar to respondents, and are consistent with the concepts they aim to measure; 2) identify typical instrumentation problems such as problematic question wording and incomplete or inappropriate response categories; and 3) measure the response burden.

We conducted cognitive interviews with 5 potential respondents to test the Head Start Director survey and with 7 potential respondents to test the Head Start Manager/Coordinator survey, which included the domain-specific items. Across the cognitive interviews, no question was asked of more than 9 people. Instruments were revised as needed after the pretests, and respondent burden was estimated based on these tests.

B.5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

The data for this study are being collected by NORC at the University of Chicago on behalf of the Administration for Children and Families, Office of Planning, Research, and Evaluation. With ACF oversight, NORC at the University of Chicago is responsible for the study design, data collection, analysis, and report preparation. The following individuals at NORC and ACF lead the study team and have contributed key input on the statistical aspects of the study design:

NORC at the University of Chicago

  • Joshua Borton, Statistician

  • Rupa Datta, Senior Fellow

  • Claudia Gentile, Senior Fellow

  • Eileen Graf, Research Scientist

  • Carol Hafford, Principal Research Scientist

  • Marc Hernandez, Principal Research Scientist

  • Lekha Venkataraman, Senior Research Director

Administration for Children and Families

  • Kiersten Beigel, ACF, Office of Head Start

  • Nina Hetzner, ACF, Office of Planning, Research, and Evaluation

  • Ann Rivera, ACF, Office of Planning, Research, and Evaluation (Contracting Officer’s Representative)

1 National Survey of Early Care and Education Project Team. (2013). National Survey of Early Care and Education: Summary Data Collection and Sampling Methodology. OPRE Report #2013-46. Washington, DC: Office of Planning, Research and Evaluation, Administration for Children and Families, U.S. Department of Health and Human Services.

2 Karoly, L.A., Martin, L.T., Chandra, A., & Setodji, C.M. (2016). Head Start Health Matters: Findings from the 2012–2013 Head Start Health Manager Descriptive Study for Regions I–XII. OPRE Report #2016-44. Washington, DC: Office of Planning, Research and Evaluation, Administration for Children and Families, U.S. Department of Health and Human Services.

