
Head Start Family and Child Experiences Survey (FACES 2014-2018)

OMB: 0970-0151



Head Start Family and Child Experiences Survey (FACES 2014–2018) OMB Supporting Statement for Data Collection

Part B: Collection of Information Involving Statistical Methods

May 7, 2014

Updated December 2016

Updated March 2017



CONTENTS

B. STATISTICAL METHODS (USED FOR COLLECTION OF INFORMATION EMPLOYING STATISTICAL METHODS)

B.1. Respondent Universe and Sampling Methods

B.2. Procedures for Collecting Information

1. Sampling and Estimation Procedures

2. Data Collection Procedures

B.3. Methods to Maximize Response Rates and Data Reliability

B.4. Test of Procedures or Methods

B.5. Individuals Consulted on Statistical Methods



APPENDICES

APPENDIX C: STUDY INTRODUCTION MATERIALS

APPENDIX H: ADVANCE MATERIALS

APPENDIX J: SPRING 2015 ADVANCE MATERIALS

APPENDIX K: AI/AN FACES FALL 2015 ADVANCE MATERIALS

APPENDIX M: FACES 2014 PARENT EXPERIMENT RESULTS MEMO

APPENDIX N: AI/AN FACES SPRING 2016 ADVANCE MATERIALS

APPENDIX P: SPRING 2017 ADVANCE MATERIALS

APPENDIX Q: SPRING 2017 PROGRAM INFORMATION PACKAGE



TABLES

B.1 FACES 2014–2018 Minimum Detectable Differences

B.2 AI/AN FACES Minimum Detectable Differences for Child-Level Estimates

B.3 Final Response Rates for Fall 2014 Approved Information Requests

B.4 Final Response Rates for Spring 2015 Approved Information Requests

B.5 Final Response Rates for Fall 2015 AI/AN FACES Approved Information Requests

B.6 Final Response Rates for Spring 2016 AI/AN FACES Approved Information Requests

FIGURES

B.1 Flow of Sample Selection Procedures for Core FACES

B.2 Flow of Sample Selection Procedures for AI/AN FACES




ATTACHMENTS

ATTACHMENT 1: CLASSROOM SAMPLING FORM FROM HEAD START STAFF

ATTACHMENT 2: CHILD ROSTER FORM FROM HEAD START STAFF

ATTACHMENT 3: HEAD START CORE CHILD ASSESSMENT

ATTACHMENT 4: HEAD START CORE PARENT SURVEY

ATTACHMENT 5: HEAD START FALL SUPPLEMENTAL PARENT SURVEY

ATTACHMENT 6: HEAD START CORE TEACHER CHILD REPORT

ATTACHMENT 7: HEAD START SPRING SUPPLEMENT PARENT SURVEY

ATTACHMENT 8: HEAD START CORE TEACHER SURVEY (REVISED SPRING 2017)

ATTACHMENT 9: HEAD START CORE PROGRAM DIRECTOR SURVEY (REVISED SPRING 2017)

ATTACHMENT 10: HEAD START CORE CENTER DIRECTOR SURVEY (REVISED SPRING 2017)

ATTACHMENT 11: HEAD START PARENT QUALITATIVE INTERVIEW (FAMILY ENGAGEMENT)

ATTACHMENT 12: HEAD START STAFF QUALITATIVE INTERVIEW (FSS ENGAGEMENT)

ATTACHMENT 13: HEAD START STAFF (FSS) ROSTER FORM

ATTACHMENT 14: EARLY CARE AND EDUCATION PROVIDERS SURVEY FOR PLUS STUDY (5E-EARLY ED PILOT)

ATTACHMENT 15: EARLY CARE AND EDUCATION PROVIDERS SURVEY FOR PLUS STUDY (FPTRQ)

ATTACHMENT 16: HEAD START CHILD ASSESSMENT FOR PLUS STUDY (AI/AN FACES)

ATTACHMENT 17: HEAD START PARENT SURVEY FOR PLUS STUDY (AI/AN FACES)

ATTACHMENT 18: HEAD START TEACHER CHILD REPORT FOR PLUS STUDY (AI/AN FACES)

ATTACHMENT 19: HEAD START CORE PARENT SURVEY FOR PLUS STUDY (AI/AN FACES SPRING 2016)

ATTACHMENT 20: HEAD START CORE TEACHER SURVEY FOR PLUS STUDY (AI/AN FACES)

ATTACHMENT 21: HEAD START PROGRAM DIRECTOR CORE SURVEY FOR PLUS STUDY (AI/AN FACES)

ATTACHMENT 22: HEAD START CENTER DIRECTOR CORE SURVEY FOR PLUS STUDY (AI/AN FACES)

ATTACHMENT 23: EARLY CARE AND EDUCATION ADMINISTRATOR SURVEY FOR PLUS STUDY (HEAD START PROGRAM PERFORMANCE STANDARDS)

ATTACHMENT 24: EARLY CARE AND EDUCATION PROVIDERS SURVEY FOR PLUS STUDY (5E-EARLY ED)

ATTACHMENT 25: TELEPHONE SCRIPT FOR PROGRAM DIRECTORS (REVISED SPRING 2017)

ATTACHMENT 26: TELEPHONE SCRIPT FOR ON-SITE COORDINATORS (REVISED SPRING 2017)

B. STATISTICAL METHODS (USED FOR COLLECTION OF INFORMATION EMPLOYING STATISTICAL METHODS)

The Office of Planning, Research and Evaluation (OPRE), Administration for Children and Families (ACF), U.S. Department of Health and Human Services (HHS), is collecting data for the Head Start Family and Child Experiences Survey (FACES). FACES 2014–2018 features a new “Core Plus” study design that consists of two Core studies (the Classroom + Child Outcomes Core and the Classroom Core) and Plus studies, which include additional survey content of policy or programmatic interest. The Classroom + Child Outcomes Core, conducted during the 2014–2015 program year, collected child-level data, along with program and classroom data, from a subset of programs, while the remaining programs had data collected only on program and classroom information (see Part A for details). In spring 2017, we will conduct the Classroom Core, focusing on program and classroom data collection for all programs.

The proposed FACES design includes multiple components as noted above and therefore involves multiple information collection requests. The current information collection request is for spring 2017 Classroom Core and Plus study data collection, including surveys with teachers, program directors, and center directors.

Previously approved information collection requests for FACES 2014–2018 include the following:

  • Sampling plans for Head Start programs, centers, classrooms, and children, as well as the procedures for recruiting programs and selecting centers (approved April 7, 2014).

  • Fall 2014 data collection activities, including selecting classrooms and children for the study, conducting child assessments and parent interviews, and obtaining Head Start teacher reports on children’s development (approved July 7, 2014).1

  • Spring 2015 core data collection activities that included selecting classrooms in additional Head Start programs; conducting classroom observations; surveying teachers, center directors, and program directors; and interviewing parents and staff for FACES Plus studies (approved February 20, 2015).

  • Fall 2015 American Indian and Alaska Native (AI/AN) FACES Plus Study data collection activities that included selecting Head Start classrooms and children for the study,2 conducting child assessments and parent surveys, and obtaining Head Start teacher reports on children’s development (approved August 7, 2015).3

  • Spring 2016 AI/AN FACES Plus Study data collection activities that included conducting classroom observations; surveying teachers, center directors, and program directors; and surveying parents (Approved March 2, 2016).

B.1. Respondent Universe and Sampling Methods

The target population for FACES 2014–2018 is all Head Start programs in the United States, their classrooms, and the children and families they serve. The sample design is similar to the one used for FACES 2009 in some respects, but with some key differences noted below. FACES 2014–2018 will use a stratified multistage sample design with four stages of sample selection: (1) Head Start programs, with programs defined as grantees or delegate agencies providing direct services; (2) centers within programs; (3) classes within centers; and (4) for a random subsample of programs, children within classes. To minimize the burden on parents/guardians who have more than one child selected for the sample, we will also randomly subsample one selected child per parent/guardian, a step that was introduced in FACES 2009.

The frame that will be used to sample programs is the 2012–2013 Head Start Program Information Report (PIR), which is an updated version of the frame used for previous rounds of FACES. We will exclude from the sampling frame: Early Head Start programs, programs in Puerto Rico and other U.S. territories, migrant and seasonal worker programs, programs that do not directly provide services to children in the target age group, programs under transitional management, and programs that are (or will soon be) defunded.4 While the Core FACES study samples programs in Head Start Regions I through X, the AI/AN FACES Plus study will involve sampling programs in Region XI. For AI/AN FACES, we will combine the PIR with supplemental information from the Office of Head Start on the number of centers per program to form the sampling strata for program selection. We will develop the sampling frame for centers through contacts with the sampled programs. Similarly, the study team will construct the classroom and child frames after centers and classroom samples are drawn. All centers, classrooms, and children in study-eligible, sampled programs will be included in the center, classroom, and child frames, respectively, with two exceptions. Classrooms that receive no Head Start funding (such as prekindergarten classrooms in a public school setting that also has Head Start-funded classrooms) are ineligible. Also, sampled children who leave Head Start between fall and spring of the program year become ineligible for the study.

The sample design for the new round of FACES is based on the one used for FACES 2009, which was based on the designs of the four previous rounds. But, unlike the earlier rounds of FACES, the sample design for the Core FACES 2014–2018 will involve sampling for two newly designed study components: the Classroom + Child Outcomes Core and the Classroom Core. The Classroom + Child Outcomes Core study will involve sampling at all four stages (programs, centers, classrooms, and children), and the Classroom Core study will involve sampling at the first three stages only (excluding sampling of children within classes). Under this design, the collective sample size across the two studies will be larger than in prior rounds of FACES at the program, center, and classroom levels, allowing for more powerful analyses of program quality, especially at the classroom level. The sample design for the AI/AN FACES will include the Classroom + Child Outcomes Core but not the Classroom Core. Also new to the FACES 2014–2018 design, the child-level sample will represent children enrolled in Head Start for the first time and those who are attending a second year of Head Start. This will allow for a direct comparison of first- and second-year program participants and analysis of child gains during the second year. Previously, FACES followed newly enrolled children through one or two years of Head Start and then through spring of kindergarten. FACES 2014–2018 will follow the children only through the fall and spring of one program year.

To minimize the effects of unequal weighting on the variance of estimates, we proposed sampling with probability proportional to size (PPS) in the first two stages. At the third stage, we selected an equal probability sample of classrooms within each sampled center and, in centers where children are to be sampled, an equal probability sample of children within each sampled classroom. The measure of size for PPS sampling in each of the first two stages was the number of classrooms. This sampling approach maximized the precision of classroom-level estimates and allowed for easier in-field sampling of classrooms and children within classrooms. For the Core FACES 2014–2018, we selected a total of 180 programs across both Core study components. Sixty of the 180 programs sampled for the Core study were randomly subsampled with equal probability within strata to be included in the Classroom + Child Outcomes study. Within these 60 programs, we selected, if possible, two centers per program, two classes per center, and a sufficient number of children to yield 10 consented children per class, for a total of about 2,400 children at baseline. For the AI/AN FACES Plus study, we selected a total of 22 programs. Within these 22 programs, we are selecting, when possible, two centers per program, two classes per center, and a sufficient number of children to yield 10 consented children per class, for a total of about 800 children at baseline. However, due to the large proportion of Region XI programs with only one center (about half), we are selecting four classrooms in single-center programs whenever possible. For one-center programs with fewer than four classrooms, we are selecting only two classrooms in that center, but selecting twice as many such programs as we ordinarily would given their sample allocation based on the number of classrooms. For some sampled centers in multi-center programs, we may sample more than two classrooms if the other sampled center has only one classroom to sample.

Based on our experience with earlier rounds of FACES, we estimate that 70 percent of the 2,400 baseline children in Core FACES (about 1,680) will be new to Head Start, as will about 560 of the 800 baseline children in AI/AN FACES. We expect a program and study retention rate of 90 percent from fall to spring, for a Core FACES sample of 2,160 study children in both fall 2014 and spring 2015, of which about 1,512 (70 percent) are estimated to have completed their first Head Start year, and an AI/AN FACES sample of 720 study children in both fall 2015 and spring 2016, of which about 504 are estimated to have completed their first year.
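For reference, the expected counts above are simple products of the stated rates and the baseline sample sizes:

\[
\begin{aligned}
\text{Core FACES: } & 0.70 \times 2{,}400 = 1{,}680 \text{ newly enrolled}; \quad 0.90 \times 2{,}400 = 2{,}160 \text{ retained}; \quad 0.70 \times 2{,}160 = 1{,}512 \text{ first-year completers}, \\
\text{AI/AN FACES: } & 0.70 \times 800 = 560 \text{ newly enrolled}; \quad 0.90 \times 800 = 720 \text{ retained}; \quad 0.70 \times 720 = 504 \text{ first-year completers}.
\end{aligned}
\]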

For Core FACES, the Classroom Core study component will include the 60 programs where children are sampled plus the remaining 120 programs from the sample of 180. From the additional 120 programs, we will select two centers per program and two classrooms per center. Across both study components, we will collect data from a total of 360 centers and 720 classrooms in spring 2015. For follow-up data collection in spring 2017, we will select a refresher sample5 of programs and their centers so that the new sample will be representative of all programs and centers at the time of follow-up data collection, and we will select a new sample of classrooms in all centers. Figure B.1 is a diagram of the sample selection and data collection procedures for Core FACES. At each sampling stage, we will use a sequential sampling technique based on a procedure developed by Chromy.6
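To illustrate the selection mechanics, the sketch below shows a simplified systematic PPS selection with the number of classrooms as the measure of size. It is not an implementation of the Chromy sequential procedure itself, and the frame, field names, and counts are hypothetical.

```python
import random

def pps_systematic_sample(frame, size_key, n):
    """Select n units with probability proportional to size using systematic
    PPS selection from an ordered frame (a simplified stand-in for the Chromy
    sequential procedure, not an implementation of it)."""
    units = list(frame)
    total = sum(unit[size_key] for unit in units)
    interval = total / n                         # skip interval in measure-of-size units
    start = random.uniform(0, interval)          # random start
    hit_points = [start + i * interval for i in range(n)]
    selections, cumulative, i = [], 0.0, 0
    for unit in units:
        cumulative += unit[size_key]
        while i < len(hit_points) and hit_points[i] <= cumulative:
            selections.append(unit)              # units larger than the interval can be hit more than once (certainty selections)
            i += 1
    return selections

# Hypothetical program frame; in practice the frame is sorted by the implicit
# stratification variables within explicit strata before selection.
programs = [{"program_id": p, "classrooms": random.randint(2, 40)} for p in range(1200)]
sampled_programs = pps_systematic_sample(programs, "classrooms", n=180)
```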

Figure B.1. Flow of Sample Selection Procedures for Core FACES


For the AI/AN FACES Plus study, we will collect data from a total of 37 centers and 80 classrooms in fall 2015 and spring 2016. Figure B.2 is a diagram of the sample selection and data collection procedures for this study component.

Figure B.2. Flow of Sample Selection Procedures for AI/AN FACES


For the AI/AN Plus study, we initially selected double the number of desired programs, and paired adjacent selected programs within strata. (These paired programs were similar to one another with respect to the implicit stratification variables.) We also selected extra pairs of programs to use if both members of a pair did not end up participating. We then randomly selected one from each pair to be released as part of the Core sample of programs. After the initial programs from each pair were selected, we asked the Office of Head Start (OHS) to confirm that the selected programs were in good standing. Once confirmed, we contacted each program and recruited them to participate in the study. If the program was not in good standing or refused to participate, we released the other member of the program’s pair into the sample and went through the same process of confirmation and recruitment with that program. We will count all released programs as part of the sample for purposes of calculating response rates and weighting adjustments. We selected extra centers within each program in the event that either of the two main selections refuses to participate. At the subsequent stage of sampling, we are releasing all sampled classrooms, expecting full participation among the selected classes. At the child level for AI/AN FACES, we are selecting all children in each class, expecting up to 10 eligible children with parental consent, which is our target. We expect to lose, on average, seven children per class, either because they are no longer enrolled, because parental consent was not granted, or because siblings were subsampled. For AI/AN FACES, we expect lower parental consent rates for a number of reasons, such as access to technology and general distrust of research.
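A minimal sketch of the paired-release logic described above; the good-standing check and the recruitment outcome are hypothetical callbacks standing in for the OHS confirmation and recruiting steps.

```python
import random

def release_paired_sample(program_pairs, in_good_standing, agrees_to_participate):
    """Release one randomly chosen program from each pair; if it is not in good
    standing or declines, release its pair-mate. All released programs are
    returned as well, because every released program counts toward response
    rates and weighting adjustments."""
    participating, released = [], []
    for pair in program_pairs:
        first, backup = random.sample(list(pair), 2)   # random order within the pair
        for program in (first, backup):
            released.append(program)
            if in_good_standing(program) and agrees_to_participate(program):
                participating.append(program)
                break
    return participating, released

# Illustrative use with hypothetical program IDs and outcomes.
pairs = [("P01", "P02"), ("P03", "P04"), ("P05", "P06")]
participating, released = release_paired_sample(
    pairs,
    in_good_standing=lambda p: True,
    agrees_to_participate=lambda p: p != "P03",
)
```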

We will select centers PPS within each sampled program using the number of classrooms as the measure of size, again using the Chromy procedure. For the Classroom + Child Outcomes Core, we will randomly select classrooms within centers with equal probability. Classrooms with very few children will be grouped with other classrooms in the same center for sampling purposes to ensure a sufficient sample yield. Once classrooms are selected, we will select an equal probability sample of 12 children per class, with the expectation that 10 will be eligible and will receive parental consent. For spring 2015 (Core FACES), we added one of two five-minute modules to the parent interview (referred to in this document as the Head Start parent spring supplement survey). Each of these two modules was randomly assigned to half the parents in each program.

In spring 2015, FACES included a Plus topical module focused on family engagement. This Plus feature was conducted within the 60 programs participating in child-level data collection in the Classroom + Child Outcomes Core study. Within each of these 60 programs, we randomly selected three family services staff (FSS) from among those working in the two sampled centers.7 Due to the length of the FSS interview, we randomly assigned half the sampled FSS one set of questions and the other half another set of questions. We also selected a subsample of six parents per program from the list of all parents associated with sampled, eligible, and consented children from the fall data collection, implicitly stratifying by center. For both samples, we released backup sample members to replace cases of nonresponse. For both respondent types, we selected a probability sample within each program to help ensure that the selected FSS and parents were representative. In total, we selected 180 FSS and 360 parents. Of those selected we conducted interviews with 135 FSS and 305 parents.

Additionally, in spring 2015, FACES piloted a new measure of program functioning. This Plus feature was conducted within the 120 programs participating in classroom-level-only data collection. Within each of these 120 programs, all teachers were invited to complete the survey and were randomly assigned to receive one of two versions of the survey.

In spring 2017, FACES will include a Plus topical module focused on programs’ planning for the new Head Start program performance standards. This Plus module will be conducted in all 180 programs participating in the Classroom Core study. Additionally, in spring 2017, FACES plans to continue its exploration of program functioning by asking all teachers in the Classroom Core to complete measures from the Five Essentials Measurement System for Early Education (5E-Early Ed) educator survey. They will be randomly assigned to receive a subset of measures.

B.2. Procedures for Collecting Information

1. Sampling and Estimation Procedures

Statistical methodology for stratification and sample selection. The sampling methodology is described under item B1 above. When sampling programs for Core FACES, we formed explicit strata using census region, metro/nonmetro status, and percentage of racial/ethnic minority enrollment. Sample allocation was proportional to the estimated fraction of eligible classrooms represented by the programs in each stratum.8 We will implicitly stratify (sort) the sample frame by the percentage of dual language learner (DLL) children, whether the program is a public school district grantee, ACF region, and the percentage of children with disabilities. For AI/AN FACES, we formed seven explicit strata using program structure (number of centers and classrooms), with three categories, and geographic region within one of the three structure categories. The AI/AN FACES Workgroup provided guidance on how to combine into five groups the states in which Region XI Head Start programs exist. We implicitly stratified the frame by the percentage of children in the program who are AI/AN. When selecting new programs for 2017 as part of the sample freshening, these new programs will form their own stratum, and we will implicitly stratify them by census region, metro/nonmetro status, and percentage of racial/ethnic minority enrollment.

No explicit stratification was used for selecting centers within programs, classes within centers, or children within classes, although implicit stratification based on the percentage of children who are dual language learners was used for center selection. For the Plus topical module on family engagement, we randomly subsampled FSS within programs (within the sampled centers if possible) and randomly subsampled, within each program, parents associated with the sampled children (implicitly stratifying by center).

Estimation procedure. We will create analysis weights to account for variations in the probabilities of selection and variations in the eligibility and cooperation rates among those selected. For each stage of sampling (program, center, class, and child) and within each explicit sampling stratum, we will calculate the probability of selection. The inverse of the probability of selection within stratum at each stage is the sampling or base weight. The sampling weight takes into account the PPS sampling approach, the presence of any certainty selections, and the actual number of cases released. We treat the eligibility status of each sampled unit as known at each stage. Then, at each stage, we will multiply the sampling weight by the inverse of the weighted response rate within weighting cells (defined by sampling stratum) to obtain the analysis weight, so that the respondents’ analysis weights account for both the respondents and nonrespondents.

Thus, the program-level weight adjusts for the probability of selection of the program and response at the program level; the center-level weight adjusts for the probability of center selection and center-level response; and the class-level weight adjusts for the probability of selection of the class and class-level response. The child-level weights adjust for the subsampling probability of programs for the Classroom + Child Outcomes Core; the probability of selection of the child within classroom, whether parental consent was obtained, and whether various child-level instruments (for example, direct child assessments and parent surveys) were obtained. The formulas below represent in simplified form the various weighting steps for the cumulative weights through prior stages of selection, where P represents the probability of selection and RR the response rate at that stage of selection. Because FACES 2014–2018 includes all children (not just those newly enrolled), we will post-stratify to known totals at each weighting stage when possible.
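A sketch of these cumulative weighting steps in the P/RR notation, omitting post-stratification and any instrument-specific adjustments:

\[
\begin{aligned}
W_{\text{program}} &= \frac{1}{P_{\text{program}}} \times \frac{1}{RR_{\text{program}}}, \\
W_{\text{center}} &= W_{\text{program}} \times \frac{1}{P_{\text{center}}} \times \frac{1}{RR_{\text{center}}}, \\
W_{\text{class}} &= W_{\text{center}} \times \frac{1}{P_{\text{class}}} \times \frac{1}{RR_{\text{class}}}, \\
W_{\text{child}} &= W_{\text{class}} \times \frac{1}{P_{\text{subsample}}} \times \frac{1}{P_{\text{child}}} \times \frac{1}{RR_{\text{consent}}} \times \frac{1}{RR_{\text{instrument}}}.
\end{aligned}
\]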

For the Plus topical module on family engagement, we will create weights for the FSS instrument and for the parent instrument. For the FSS instrument, the within-program weight would be

\[
W_{\text{FSS}} = \frac{N}{n},
\]

where N is the total number of FSS in the program from which the sample was selected, and n is the number selected and released for interviewing. For the parent engagement survey, the within-program weight would be

\[
W_{\text{parent}} = \frac{M}{m},
\]

where M is the total number of parents in the program from which the sample was selected, and m is the number of parents selected and released for interviewing.

Degree of accuracy needed for the purpose described in the justification. The complex sampling plan, which includes several stages, stratification, clustering, and unequal probabilities of selection, requires using specialized procedures to calculate the variance of estimates. Standard statistical software assumes independent and identically distributed observations, which would be the case with a simple random sample. A complex sample, however, generally has larger variances than standard software would calculate. Two approaches for estimating variances under complex sampling, Taylor series linearization and replication methods, are available in SUDAAN and in special procedures in SAS, Stata, and other packages. Most of the analyses will be at the child and classroom levels. Given various assumptions about the sample design and its impact on estimates, the sample size should be sufficiently large to detect meaningful differences. In Table B.1 (Core FACES), we show the minimum detectable differences with 80 percent power (and α = 0.05) and various sample and subgroup sizes, assuming different intraclass correlation coefficients for classroom- and child-level estimates at the various stages of clustering (see table note).
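As a sketch of the calculation underlying Tables B.1 and B.2, with α = 0.05 (two-sided) and 80 percent power, the minimum detectable difference between two subgroup means is approximately

\[
\text{MDD} \approx (z_{1-\alpha/2} + z_{1-\beta}) \sqrt{\text{DEFF}\left(\frac{\sigma^2}{n_1} + \frac{\sigma^2}{n_2}\right)}
\approx 2.8 \sqrt{\text{DEFF}\left(\frac{\sigma^2}{n_1} + \frac{\sigma^2}{n_2}\right)},
\]

where DEFF is the design effect implied by the clustering assumptions in the table notes (and by unequal weighting), and σ² equals p(1 − p) for a proportion or 15² for the normalized score.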

For point-in-time estimates, we are making the conservative assumption that there is no covariance between estimates for two subgroups, even though the observations may be in the same classes, centers, and/or programs. By conservative, we mean that smaller differences than those shown will likely be detectable. For pre-post estimates, we do assume covariance between the estimates at two points in time. Evidence from another survey shows expected correlations between fall and spring estimates of about 0.5. Using this information, we applied another design effect component to the variance of estimates of pre-post differences to reflect the fact that it is efficient to have many of the same children or classes at both time points.
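To make the covariance adjustment concrete, a simplified form (assuming complete overlap of the fall and spring samples of size n) is

\[
\operatorname{Var}(\bar{x}_{\text{spring}} - \bar{x}_{\text{fall}}) \approx \text{DEFF} \times \frac{2\sigma^{2}(1 - \rho)}{n}, \qquad \rho \approx 0.5,
\]

so a fall–spring correlation of about 0.5 roughly halves the variance relative to two independent samples; the tables apply Kish’s partial-overlap version of this adjustment to allow for fall-only and spring-only cases.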

The top section of Table B.1 (labeled “Point in Time Subgroup Comparisons”) shows the minimum differences that would be detectable for point-in-time (cross-sectional) estimates at the class and child levels. We have incorporated the design effect attributable to clustering. The bottom section (labeled “Estimates of Program Year Gains”) shows detectable pre-post difference estimates at the child level. Examples are given below.

The columns farthest to the left (“Subgroups” and “Time Points”) show several sample subgroup proportions (for example, a comparison of male children to female children would be represented by “50, 50”). The child-level estimates represent two scenarios: (1) all consented children in fall 2014 (n = 2,400) and (2) all children in spring 2015 who remained in Head Start (n = 2,160). For example, the n = 2,400 row within the “33, 67” section represents a subgroup comparison involving children at the beginning of data collection for two subgroups, one representing one-third of that sample (for example, children in bilingual homes), the other representing the remaining two-thirds (for example, children from English-only homes).

The last few columns (“MDD”) show various types of variables from which an estimate might be made; the first two are estimates in the form of proportions, the next is an estimate for a normalized variable (such as an assessment score) with a mean of 100 and standard deviation of 15 (for child-level estimates only), and the last shows the minimum detectable effect size—the MDD in standard deviation-sized units. The numbers for a given row and column show the minimum underlying differences between the two subgroups that would be detectable for a given type of variable with the given sample size and design assumptions.

If we were to compare two equal-sized subgroups of the 720 classrooms in spring 2015, our design would allow us to detect a minimum difference of .280 standard deviations with 80 percent power. At the child level, if we were to compare normalized assessment scores with a sample size of 2,400 children in fall 2014, and two approximately equal-sized subgroups (such as boys and girls), our design would allow us to detect a minimum difference of 3.578 points with 80 percent power. If we were to compare these two subgroups again in the spring of 2015, our design would allow us to detect a minimum difference of 3.617 points.

If we were to perform a pre-post comparison (fall 2014 to spring 2015) for the same normalized assessment measure, we would be able to detect a minimum difference of 1.887 points. If we were to perform the same pre-post comparison for a subgroup representing 40 percent of the entire sample (n = 960 in fall 2014; n = 864 in spring 2015), we would be able to detect a minimum difference of 2.98 points.
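The normalized-score columns in Table B.1 are simply the effect-size columns scaled by the standard deviation of 15, which reproduces the examples above up to rounding:

\[
0.239 \times 15 \approx 3.58, \qquad 0.126 \times 15 \approx 1.89, \qquad 0.199 \times 15 \approx 2.98.
\]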

The primary goal for the AI/AN FACES Plus Study is to provide a descriptive picture of Region XI Head Start children and families and their classroom and program experiences. For a percentage outcome of around 50 percent, the confidence interval around such an estimate would be plus or minus 9.3 percentage points; for a percentage outcome closer to 10 or 90 percent, the confidence interval would be plus or minus 5.6 percentage points. A secondary goal is to consider group differences. Comparisons between subgroups are also possible, but with the relatively small sample sizes, the underlying differences would have to be quite large to be detectable as statistically significant. Therefore, the study design aims to have a sample of sufficient size for exploratory and hypothesis-generating purposes. The current sample size and design support the exploratory work because they are sufficient to reflect the perspectives of AI/AN families with varying backgrounds and experiences with Head Start. Table B.2 shows MDDs for the AI/AN FACES Plus Study. The columns farthest to the left (“Time Point,” “Subgroup Defined by,” and the subgroup percentages) show several sample subgroup proportions (for example, a comparison of male children to female children, subgroups defined by a child characteristic, would be represented by the “Child Characteristic” rows with “50, 50”). The child-level estimates represent two scenarios: (1) all consented children in fall 2015 (n = 820) and (2) all children in spring 2016 who remained in Head Start (n = 738). For example, within the “Program Characteristic” rows, the n = 820 row with subgroups of “33, 67” represents a comparison involving children at the beginning of data collection for two subgroups, one representing one-third of that sample (for example, children in programs in the southwest of the U.S.), the other representing the remaining two-thirds (for example, children in programs in the rest of the U.S.).
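As a sketch, these half-widths follow the usual design-adjusted confidence interval for a proportion,

\[
\text{half-width} \approx 1.96 \sqrt{\text{DEFF} \times \frac{p(1 - p)}{n}},
\]

where n is the number of children and DEFF reflects clustering and unequal weighting; the ratio of the two reported half-widths, 9.3/5.6 ≈ 1.7, matches the ratio implied by the proportions, \(\sqrt{0.5 \times 0.5}/\sqrt{0.1 \times 0.9} \approx 1.7\).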

The last few columns (“MDD”) show various types of variables from which an estimate might be made; the first two are estimates in the form of proportions, the next is an estimate for a normalized variable (such as an assessment score) with a mean of 100 and standard deviation of 15 (for child-level estimates only), and the last shows the minimum detectable effect size—the MDD in standard deviation-sized units. The numbers for a given row and column show the minimum underlying differences between the two subgroups that would be detectable for a given type of variable with the given sample size and design assumptions.

If we were to compare normalized assessment scores with a sample size of 820 children in fall 2015, and two approximately equal-sized subgroups (such as boys and girls), our design would allow us to detect a minimum difference of 6.029 points with 80 percent power. If we were to compare these two subgroups again in the spring of 2016, our design would allow us to detect a minimum difference of 6.133 points.

If we were to perform a pre-post comparison (fall 2015 to spring 2016) for the same normalized assessment measure, we would be able to detect a minimum difference of 4.129 points. As noted in Part A, the Plus topical module on family engagement (not included in Table B.1) will explore several research questions. A primary goal of the study is to highlight themes and patterns overall and for key subgroups—for exploratory and hypothesis-generating purposes. Although the analyses will be primarily exploratory in nature, we want sufficient sample sizes so as to reflect the perspectives of families (and staff) with varying backgrounds and experiences with Head Start.

Unusual problems requiring specialized sampling procedures. We do not anticipate any unusual problems that require specialized sampling procedures.

Any use of periodic (less frequent than annual) data collection cycles to reduce burden. We do not plan to reduce burden by collecting data less frequently than once per year.

2. Data Collection Procedures

As in previous rounds of FACES, we propose to collect data from several sources: Head Start children, their parents, and Head Start staff (program directors, center directors, and teachers). Although FACES 2014–2018 follows a new Core Plus study design, many data collection features are the same or build on procedures that proved successful for FACES 2009 while adding enhancements to increase efficiency and lower costs. Table A.1 (in Part A) shows the instrument components, sample size, type of administration, and periodicity.

The period of field data collection for the Classroom + Child Outcomes Core was ten weeks long, beginning in September for the fall 2014 wave and in March for the spring 2015 wave. A member of the study team (led by Mathematica Policy Research), in conjunction with the Head Start program’s on-site coordinator (a designated Head Start program staff member who works with the study team to recruit teachers and families and help schedule site visits), scheduled the data collection week based on the program’s availability. The study team scheduled a maximum of ten sites for visits each week. Approximately two weeks before the program’s data collection visit, the study team sent parents email invitations for the parent survey. For consents received during the data collection visit, the study team sent out parent emails on a rolling basis.9 Data collection for the AI/AN FACES Plus Study took place in the fall of 2015 and the spring of 2016. The recruitment and data collection procedures paralleled those in the Core FACES Study, but the training for the Mathematica study team included a greater emphasis on cross-cultural understanding and working with AI/AN children and families.

Table B.1. FACES 2014–2018 Minimum Detectable Differences

POINT IN TIME SUBGROUP COMPARISONS

Classroom-level estimates

Time Point | Percentage in Group 1 | Percentage in Group 2 | Classes in Group 1 | Classes in Group 2 | MDD, Proportion of 0.1 or 0.9 | MDD, Proportion of 0.5 | Minimum Detectable Effect Size
Spring 2015 | 50 | 50 | 360 | 360 | .084 | .140 | .280
Spring 2015 | 33 | 67 | 238 | 482 | .090 | .149 | .298
Spring 2015 | 15 | 85 | 108 | 612 | .119 | .198 | .392

Child-level estimates

Time Point | Percentage in Group 1 | Percentage in Group 2 | Children in Group 1 | Children in Group 2 | MDD, Proportion of 0.1 or 0.9 | MDD, Proportion of 0.5 | MDD, Normalized Variable (Mean = 100, s.d. = 15) | Minimum Detectable Effect Size
Fall 2014 | 50 | 50 | 1,200 | 1,200 | .072 | .119 | 3.578 | .239
Fall 2014 | 33 | 67 | 792 | 1,608 | .076 | .127 | 3.805 | .254
Fall 2014 | 40 | 30 | 960 | 720 | .087 | .144 | 4.321 | .288
Spring 2015 | 50 | 50 | 1,080 | 1,080 | .072 | .121 | 3.617 | .241

ESTIMATES OF PROGRAM YEAR GAINS

Time 1 | Time 2 | Percent Subgroup at Both Times | Children at Time 1 | Children at Time 2 | MDD, Proportion of 0.1 or 0.9 | MDD, Proportion of 0.5 | MDD, Normalized Variable (Mean = 100, s.d. = 15) | Minimum Detectable Effect Size
Fall 2014 | Spring 2015 | 100 | 2,400 | 2,160 | .038 | .063 | 1.887 | .126
Fall 2014 | Spring 2015 | 70 | 1,680 | 1,512 | .045 | .075 | 2.255 | .150
Fall 2014 | Spring 2015 | 40 | 960 | 864 | .060 | .100 | 2.983 | .199

Note: Conservative assumption of no covariance for point-in-time subgroup comparisons. Covariance adjustment made for pre-post difference (Kish, p. 462, Table 12.4.II, Difference with Partial Overlap). Assumes α = .05 (two-sided) and .80 power. For classroom-level estimates, assumes 180 programs, 360 centers, between-program ICC = .2, between-center ICC = .2. For child-level estimates, assumes 60 programs, 120 centers, between-program ICC = .05, between-center ICC = .05, between-classroom ICC = .05.

s.d. = standard deviation

The minimum detectable effect size is the minimum detectable difference in standard-deviation-sized units.


Table B.2. AI/AN FACES Minimum Detectable Differences for Child-Level Estimates

POINT IN TIME SUBGROUP COMPARISONS

Time Point | Subgroup Defined by | Percentage in Group 1 | Percentage in Group 2 | Children in Group 1 | Children in Group 2 | MDD, Proportion of 0.1 or 0.9 | MDD, Proportion of 0.5 | MDD, Normalized Variable (Mean = 100, s.d. = 15) | Minimum Detectable Effect Size
Fall 2015 | Program Characteristic | 50 | 50 | 410 | 410 | .157 | .261 | 7.833 | .522
Fall 2015 | Program Characteristic | 33 | 67 | 271 | 549 | .167 | .278 | 8.330 | .555
Fall 2015 | Child Characteristic | 50 | 50 | 410 | 410 | .121 | .201 | 6.029 | .402
Fall 2015 | Child Characteristic | 33 | 67 | 271 | 549 | .123 | .205 | 6.151 | .410
Spring 2016 | Program Characteristic | 50 | 50 | 369 | 369 | .158 | .264 | 7.913 | .528
Spring 2016 | Program Characteristic | 33 | 67 | 293 | 494 | .168 | .280 | 8.415 | .561
Spring 2016 | Child Characteristic | 50 | 50 | 369 | 369 | .123 | .204 | 6.133 | .409
Spring 2016 | Child Characteristic | 33 | 67 | 243 | 494 | .125 | .209 | 6.266 | .418

ESTIMATES OF PROGRAM YEAR GAINS

Time 1 | Time 2 | Children at Time 1 | Children at Time 2 | MDD, Proportion of 0.1 or 0.9 | MDD, Proportion of 0.5 | MDD, Normalized Variable (Mean = 100, s.d. = 15) | Minimum Detectable Effect Size
Fall 2015 | Spring 2016 | 820 | 738 | .083 | .138 | 4.129 | .275

Note: Assumes α = .05 (two-sided) and .80 power, using a t distribution for critical values. Assumes 21 programs, 34 centers, and 69 classrooms; between-program ICC = .05, between-center ICC = .05, and between-classroom ICC = .05.

Covariance adjustment made for pre-post difference (Kish, p. 462, Table 12.4.II, Difference with Partial Overlap), assuming 10 percent attrition from fall to spring and .50 pre-post correlation.

s.d. = standard deviation

The minimum detectable effect size is the minimum detectable difference in standard-deviation-sized units.

Below we outline the procedures for each of the Core and Plus study data collection instruments (and anticipated marginal response rates). The instruments used in FACES 2014–2018 and AI/AN FACES are streamlined versions of those used in FACES 2009. The advance materials are similar to those used in previous rounds but have been modified to reflect changes to the study design. For AI/AN FACES in particular, the AI/AN FACES Workgroup has collaborated on the development of these materials to ensure cultural appropriateness. Below is a list of the instruments that have been previously approved, are currently being submitted, and will be submitted under future requests. The instruments in bullets one through nineteen were administered as part of the FACES Core or Plus studies in fall 2014, spring and fall 2015, and spring 2016 and were previously approved as noted below. The current information collection request covers the spring 2017 instruments presented in bullets twenty through twenty-four. Any Plus activities using Core instruments will follow the same procedures as the Core data collection. Potential data collection activities for Plus studies might differ from the Core activities, depending on the nature of the study.10

Previously approved instruments

  1. Head Start classroom sampling form (Attachment 1; approval granted in previous package for child and family data, OMB Approval Number 0970-0151, approved on July 7, 2014). Upon arrival at a selected center, a Field Enrollment Specialist (FES) requested a list of all Head Start-funded classrooms from Head Start staff (typically the On-Site Coordinator). Head Start staff may provide this information in various formats, such as printouts from an administrative record system or photocopies of hard copy lists or records. The FES entered the information into a tablet computer. For each classroom, the FES entered the teacher’s first and last names, the session type (morning, afternoon, full day, or home visitor), and the number of Head Start children enrolled into a web-based sampling program via the tablet computer. The sampling program selected about two classrooms for participation in the study. In fall 2014 and spring 2015, no On-Site Coordinators refused to provide this information. The Head Start classroom sampling form was used for the additional 37 centers sampled from the 22 programs participating in the AI/AN FACES Plus study. In spring 2017, we plan to use this classroom sampling form with the 360 centers in 180 programs participating in the spring 2017 data collection.

  2. Head Start child roster form (Attachment 2; approval granted in previous package for child and family data, OMB Approval Number 0970-0151, approved on July 7, 2014). For each selected classroom, the FES requested from Head Start staff (typically the On-Site Coordinator) the names and dates of birth of each child enrolled in the classroom. Head Start staff may have provided this information in various formats, such as printouts from an administrative record system or photocopies of hard copy lists or records. The FES used a tablet computer to enter this information into a web-based sampling program. The program selected up to 12 children for participation in the study. For these selected children only, the FES then entered each child’s gender, home language, and parent’s name into the sampling program. Finally, the FES asked Head Start staff (typically the On-Site Coordinator) to identify any siblings among the 24 selected children. The FES identified the sibling groups in the sampling program, and the program then dropped all but one member of each sibling group, leaving one child per family.

  3. Head Start core child assessments (Attachment 3; approval granted in previous package for child and family data, OMB Approval Number 0970-0151, approved on July 7, 2014). The study team conducted direct child assessments in fall 2014 and spring 2015 during the scheduled data collection week. The on-site coordinator scheduled child assessments at the Head Start center. Parents were reminded of the child assessments the week before the field visit via reminder notices sent home with their child (Appendix H-1). On average, child assessments took approximately 45 minutes. A trained assessor used computer-assisted personal interviewing with a tablet computer to conduct the child assessments one-on-one, asking questions and recording the child’s responses. In fall 2014 and spring 2015, we completed assessments for 95 percent of the sampled children.

  4. Head Start Core parent surveys (Attachment 4; approval granted in previous package for child and family data, OMB Approval Number 0970-0151, approved on July 7, 2014). On average, each parent survey is approximately 20 minutes long. With the introduction of web-based surveys with a low-income population, we conducted an experiment in fall 2014 to understand how response rates and costs are affected by this new option. In particular, we were interested in whether it is cost-effective to use a web survey as compared to a telephone-administered survey with a low-income population and whether parents’ choice of a web survey is a function of how this option is introduced to them. Each program’s parents were randomly assigned to one of two groups to complete the parent survey: (1) a web-first group or (2) a choice group. The web-first group received a web-based survey initially with computer-assisted telephone interviewing (CATI) follow-up after three weeks. The choice group received the option of either web-based or CATI administration starting at the beginning of data collection. If parents in the web-first group did not complete the survey within the first three weeks of receiving the invitation, we actively called them to attempt to complete the survey and sent follow-up reminder materials indicating that they could now call in to complete their survey over the phone. Parents in the choice group had the option to complete the survey on the web or phone. In the first three weeks after parents received the invitation, we used a passive telephone effort in which we completed surveys only with parents who called in to Mathematica’s phone center. This allowed us to determine the parents’ choice of mode. After three weeks, we actively began efforts to reach parents by phone to complete the survey. We anticipated a response rate of 86 percent in the fall and 75 percent in the spring among sampled families, with approximately 40 percent of the parent surveys completed online and the remainder by telephone. The fall experience demonstrated a response rate of 77 percent (see Section A.12 for more information about the fall response rates). The spring response rate was 73 percent.

In fall 2014, we sent parents an email or hard copy invitation (parents who provided an email address on their consent form received the email) approximately two weeks before the start of data collection to invite them to complete the survey. The invitations for the parents in the web-first group contained an Internet web address, login id, and password for completing the survey online (Appendix H-2 [email], H-3 [hard copy]). The invitations for the parents in the choice group also contained an Internet web address, login id, and password for completing the survey online as well as a toll-free telephone number should they choose to complete the survey by phone (Appendix H-4 [email], H-5 [hard copy]). When needed, we sent parents an email or hard copy letter approximately three weeks after the start of data collection to remind them to complete the survey. The reminders for parents in the web-first group contained the same information provided in their invitation as well as the toll-free telephone number offering them the option to complete the survey by phone (Appendix H-6 [email], H-7 [hard copy]). The reminders for parents in the choice group contained the same information as their invitation (Appendix H-8 [email], H-9 [hard copy]). Telephone interviewing was conducted as needed, either beginning with any call-ins by parents after receipt of these letters or approximately three weeks after the field visit week as part of follow-up.

Before the field visit, we discussed center and family access to computers and the internet with the on-site coordinator. We also determined the feasibility of setting up a computer station for parents to complete the survey during the field visit.

Based on the fall 2014 results, in spring 2015 we (1) gave all parents the choice between telephone and web (Appendix H-4), (2) reduced the delay in active calling from three weeks to two weeks, and (3) continued to offer a $5 bonus for responding online and another $5 for responding within two weeks (see Appendix M).

  5. Head Start core parent fall supplemental survey (Attachment 5; approval granted in previous package for parent fall supplement survey, OMB Approval Number 0970-0151, approved on July 7, 2014). Head Start parents also completed supplemental survey questions within the core parent surveys to gather background information or additional content. These supplemental questions, requiring about 5 minutes, followed the same procedures as described above for the core parent surveys.

  6. Head Start core teacher child report (TCR) (Attachment 6; approval granted in previous package for child and family data, OMB Approval Number 0970-0151, approved on July 7, 2014). Head Start teachers were asked to complete a TCR for each consented FACES child in their classroom. The study team sent teachers a letter containing an Internet web address, login ID, and password for completing the TCRs online (Appendix H-10). During the onsite field visit, field interviewers had hard copies of the TCR forms for teachers who preferred to complete the forms with paper and pencil. Each TCR takes approximately 10 minutes to complete. Teachers had approximately 10 FACES children in each classroom. In fall 2014, we achieved a response rate of 98 percent for TCR forms; in spring 2015 we achieved a response rate of 95 percent.

  7. Head Start core parent spring supplemental survey (Attachment 7; approval granted in previous package for spring 2015 data collection, OMB Approval Number 0970-0151, approved on February 20, 2015). Head Start parents also completed a different set of supplemental survey questions for the spring within the core parent surveys to gather background information or additional content. These supplemental questions, requiring about 5 minutes, followed the same procedures as described above for the core parent surveys.

  8. Head Start core teacher survey (Attachment 8; approval granted in previous package for spring 2015 data collection, OMB Approval Number 0970-0151, approved on February 20, 2015). On average, each teacher survey took approximately 30 minutes to complete. It was a self-administered web instrument with a paper-and-pencil option. These cases were released during the center’s spring data collection. The study team sent teachers a letter containing an Internet web address, login ID, and password for completing the teacher survey (Appendix J-1 and Appendix J-2). During the onsite field visit, field interviewers had hard copies of the surveys for teachers who preferred to complete the survey with paper and pencil. In spring 2015, we achieved a response rate of 93 percent.

  9. Head Start core program director survey (Attachment 9; approval granted in previous package for spring 2015 data collection, OMB Approval Number 0970-0151, approved on February 20, 2015). On average, each program director survey was approximately 30 minutes in length. It was a self-administered web instrument with a paper-and-pencil option. These cases were released at the beginning of the spring data collection period. The study team sent program directors a letter containing an Internet web address, login ID, and password for completing the program director survey (Appendix J-3). FACES liaisons followed up with directors who needed paper forms. We achieved a response rate of 97 percent in spring 2015.

  10. Head Start core center director survey (Attachment 10; approval granted in previous package for spring 2015 data collection, OMB Approval Number 0970-0151, approved on February 20, 2015). On average, each center director survey was approximately 25 minutes in length. It was a self-administered web instrument with a paper-and-pencil option. These cases were released during the center’s spring data collection visit week. The study team sent center directors a letter containing an Internet web address, login ID, and password for completing the center director survey (Appendix J-4). During the onsite field visit, field interviewers had hard copies of the surveys for directors who preferred to complete the survey with paper and pencil. We achieved a response rate of 93 percent.

  11. Head Start Plus study qualitative interviews. Head Start staff or parents may be selected for Plus topical modules or special studies that would involve qualitative interviews. These interviews would last approximately one hour and would follow a semi-structured protocol. Interviews are conducted over the phone by either a FACES liaison or Mathematica’s Survey Operation Center. In spring 2015, two such interviews were conducted around the topic of family engagement, as described below.

    a. Head Start family engagement Plus study parent interviews (Attachment 11; approval granted in previous package for spring 2015 data collection, OMB Approval Number 0970-0151, approved on February 20, 2015). These interviews lasted approximately one hour and included open- and closed-ended questions on what was happening in programs around family engagement and service provision and how practices and experiences may differ across families. Interviews were conducted over the phone by Mathematica’s Survey Operation Center. Parents were contacted by phone at the phone number provided on their consent form. If needed, we sent parents an email or hard copy letter approximately one to three weeks after the start of interviewing to remind them to complete the interview (Appendix J-6). We achieved a response rate of 81 percent in spring 2015.

    b. Head Start family engagement Plus study staff interviews (Attachment 12; approval granted in previous package for spring 2015 data collection, OMB Approval Number 0970-0151, approved on February 20, 2015). These interviews lasted approximately one hour and included open- and closed-ended questions on what was happening in programs around family engagement and service provision, how practices and experiences may differ across staff, the background characteristics of family support staff, and the alignment (or lack thereof) of practices with performance standards or other key resources. Interviews were conducted over the phone by a FACES liaison. Staff were contacted by phone at a time scheduled through the On-Site Coordinator. When needed, we sent staff an email or hard copy letter approximately one to three weeks after the start of interviewing to remind them to complete the interview (Appendix J-8). We achieved a response rate of 85 percent in spring 2015.

    c. Head Start staff (FSS) sampling form from Head Start staff (Attachment 13; approval granted in previous package for spring 2015 data collection, OMB Approval Number 0970-0151, approved on February 20, 2015). For each selected program, the FACES liaison requested the names of all FSS from Head Start staff (typically the On-Site Coordinator). Additional information was requested on their title (e.g., family service worker, family service manager) and centers served. Head Start staff may have provided this information in various formats, such as printouts from an administrative record system or photocopies of hard copy lists or records.

  12. Early care and education administrators and providers surveys for Plus study. Additional early care and education administrators and providers (such as education coordinators or family services staff) may be sampled for Plus studies. These surveys would last approximately 30 minutes and gather background information or additional content on a particular topic. In spring 2015, a pilot educator survey and a family provider teacher relationship questionnaire were conducted, as described below.

    a. 5 Essentials Early Education Educator Pilot Survey (Attachment 14; approval granted in previous package for spring 2015 data collection, OMB Approval Number 0970-0151, approved on February 20, 2015). On average, the pilot survey was approximately 20 minutes long. It was a self-administered web instrument. Teachers were assigned to receive one of two versions. These cases were released during the center’s spring data collection. The study team sent teachers a letter containing an Internet web address, login ID, and password for completing the teacher survey (Appendix J-2). We achieved a response rate of 91 percent in spring 2015.

    b. Family Provider Teacher Relationship Questionnaire (FPTRQ; Attachment 15; approval granted in previous package for spring 2015 data collection, OMB Approval Number 0970-0151, approved on February 20, 2015). On average, the FPTRQ survey took approximately 5 minutes. It was a self-administered web instrument with a paper-and-pencil option. Items were integrated into the Head Start Core Teacher Survey but only asked of the 240 teachers in the 60 programs participating in child-level data collection. Therefore, the procedures and achieved response rate were the same as for the core teacher survey (bullet 8 above).

  1. Head Start child assessment for Plus study: AI/AN FACES (Attachment 16; approval granted in previous package for fall 2015 AI/AN FACES data collection, OMB Approval Number 0970-0151, approved on August 7, 2015). The study team will conduct direct child assessments in fall 2015 and spring 2016 during the scheduled data collection week. The same procedures for the Core child assessments will be followed (bullet 3 above). In particular, parents will be reminded of the child assessments the week before the field visit via reminder notices sent home with their child (Appendix K.10). We achieved a response rate of 95 percent in fall 2015 and 96 percent in Spring 2016.

  14. Head Start parent survey for Plus study: AI/AN FACES Fall 2015 (Attachment 17; approval granted in previous package for fall 2015 AI/AN FACES data collection, OMB Approval Number 0970-0151, approved on August 7, 2015). On average, each parent survey is approximately 30 minutes long. Similar to the Core spring 2015 data collection (see bullet 4 above), we sent parents an email or hard copy invitation after receiving their consent form (parents who provided an email address on their consent form received the email) approximately two weeks before the start of data collection to invite them to complete the survey. If needed, we sent parents an email or hard copy letter approximately two weeks after the start of data collection to remind them to complete the survey. Telephone interviewing began immediately after parents received the advance letter asking them to complete the parent survey. We worked with the Head Start programs to host a “parent night” with several laptop stations that parents could use to complete the survey online while the data collection team was on site, and we also offered in-person interviewing in conjunction with the on-site visit. We achieved a response rate of 83 percent in fall 2015, with 34 percent completed on the web and 64 percent by CATI.

  3. Head Start teacher child report for Plus study (Attachment 18; approval granted in previous package for fall 2015 AI/AN FACES data collection, OMB Approval Number 0970-0151, approved on August 7, 2015). Head Start teachers will be asked to complete a TCR for each consented AI/AN FACES child in their classroom following the same procedures used in the Core study (bullet 6 above). In particular, the study team will send teachers a letter containing an Internet web address, login ID, and password for completing the TCRs online (Appendix K.9). We achieved a response rate of 97 percent in fall 2015; 41 percent of the TCR forms were completed by web. In spring 2016, we achieved a response rate of 97 percent; 49 percent of the TCR forms were completed by web.

  4. Head Start core parent survey for Plus study: AI/AN FACES Spring 2016 (Attachment 19; approval granted in previous package for spring 2016 AI/AN FACES data collection, OMB Approval Number 0970-0151, approved on March 2, 2016). The spring 2016 parent survey had a similar design to the fall 2015 parent survey (see bullet 14 above) but contained some new content. The procedures were the same as in the fall, but an updated letter was sent to parents (Appendix N.6).

We achieved a response rate of 82 percent in spring 2016, with 34 percent completed on the web and 66 percent by CATI.

  1. Head Start core teacher survey for Plus study: AI/AN FACES (Attachment 20; approval granted in previous package for spring 2016 AI/AN FACES data collection, OMB Approval Number 0970-0151, approved on March 2, 2016). On average, each teacher survey was approximately 35 minutes long. It was a self-administered web instrument with a paper-and-pencil option. These cases were released during the center’s spring 2016 data collection week. The study team sent teachers a letter containing an Internet web address, login ID, and password for completing the teacher survey (Appendix N-1). During the onsite field visit, field interviewers had hard copies of the surveys for teachers who preferred to complete the survey with paper and pencil.

We achieved a response rate of 96 percent in spring 2016, with 58 percent completed by web.

  1. Head Start program director survey for Plus study: AI/AN FACES (Attachment 21; approval granted in previous package for spring 2016 AI/AN FACES data collection, OMB Approval Number 0970-0151, approved on March 2, 2016). On average, each program director survey was approximately 20 minutes in length. It was a self-administered web instrument with a paper-and-pencil option. These cases were released at the beginning of the spring data collection period. The study team sent program directors a letter containing an Internet web address, login ID, and password for completing the program director survey (Appendix N-2; Appendix N-4 for one-center program directors). FACES liaisons followed up with directors who needed paper forms. We achieved a response rate of 100 percent in spring 2016, with 76 percent completed by web.

  2. Head Start center director survey for Plus study: AI/AN FACES (Attachment 22; approval granted in previous package for spring 2016 AI/AN FACES data collection, OMB Approval Number 0970-0151, approved on March 2, 2016). On average, each center director survey was approximately 20 minutes in length. It was a self-administered web instrument with a paper-and-pencil option. These cases were released during the center’s spring data collection visit week. The study team sent center directors a letter containing an Internet web address, login ID, and password for completing the center director survey (Appendix N-3; Appendix N-5 for multi-center directors). During the onsite field visit, field interviewers had hard copies of the surveys for directors who preferred to complete the survey with paper and pencil. We achieved a response rate of 97 percent in spring 2016, with 54 percent completed by web.

Current information collection request

  1. Head Start core teacher survey (Attachment 8, revised spring 2017; previous version approved for spring 2015 data collection on February 20, 2015, under OMB #0970-0151). On average, each teacher survey will take approximately 30 minutes to complete. It will be a self-administered web instrument with a paper-and-pencil option. The study team will send teachers a letter containing an Internet web address, login ID, and password for completing the teacher survey (Appendix P-1). During the onsite field visit, field interviewers will have hard copies of the surveys for teachers who prefer to complete the survey with paper and pencil. In spring 2017, we expect a response rate of 90 percent. Revisions to this instrument consist of minor updates to improve the clarity of questions and responses based on updates made for the AI/AN FACES instrument or the spring 2015 experience.

  2. Head Start core program director survey (Attachment 9, revised spring 2017; previous version approved for spring 2015 data collection on February 20, 2015, under OMB #0970-0151). On average, each program director survey will be approximately 30 minutes in length. It will be a self-administered web instrument with a paper-and-pencil option. The study team will send program directors a letter containing an Internet web address, login ID, and password for completing the program director survey (Appendix P-2). FACES liaisons will follow up with directors who need paper forms. In spring 2017, we expect a response rate of 95 percent. Revisions to this instrument consist of minor updates to improve the clarity of questions and responses based on updates made for the AI/AN FACES instrument or the spring 2015 experience. Additionally, some questions have been dropped because of a lack of item variability or the anticipated frequency of change.

  3. Head Start core center director survey (Attachment 10, revised spring 2017; previous version approved for spring 2015 data collection, OMB Approval Number 0970-0151, approved on February 20, 2015). On average, each center director survey will be approximately 25 minutes in length. It will be a self-administered web instrument with a paper-and-pencil option. The study team will send center directors a letter containing an Internet web address, login ID, and password for completing the center director survey (Appendix P-3). During the onsite field visit, field interviewers will have hard copies of the surveys for directors who prefer to complete the survey with paper and pencil. In spring 2017, we expect a response rate of 95 percent. Revisions to this instrument consist of minor updates to improve the clarity of questions and responses based on updates made for the AI/AN FACES instrument or the spring 2015 experience. Additionally, some questions have been dropped because of a lack of item variability or the anticipated frequency of change.

  4. Early care and education administrators surveys for Plus study. Additional early care and education administrators (such as directors) will be asked additional questions for Plus studies to gather background information or additional content on a particular topic. In spring 2017, we plan to add a few items on program perspectives as part of the Plus topical module, as described below.

  1. Program’s perspectives on new Head Start program performance standards (Attachment 23). Directors will complete a set of supplemental survey questions within the Core instruments. These supplemental questions will require 5 minutes and will follow the same procedures as described above for the core program and center director surveys.

  1. Early care and education providers surveys for Plus study. Additional early care and education providers (such as lead teachers) will be asked additional questions for Plus studies to gather additional content on a particular topic. In spring 2017, the Five Essentials-Early Education Educator survey, piloted in spring 2015, will be conducted with all sampled teachers, as described below.

  1. 5 Essentials Early Education Educator Survey (Attachment 24). Teachers will complete a set of supplemental survey questions within the Core instrument. These supplemental questions will require 10 minutes, and will follow the same procedures as described above for the core teacher survey. Teachers will be assigned to receive a subset of measures from the 5E-Early Ed survey.



Future requests

  1. Head Start child assessment, parent survey, parent supplemental survey, and teacher child report for plus study. Additional Head Start children, parents, and teachers may be selected for Plus topical modules or special studies. Child assessments (about 45 minutes), parent surveys (about 20 minutes), parent supplemental surveys (about 5 minutes), and teacher child reports (about 10 minutes) would follow the same procedures as described above for the Core child assessments, parent surveys, and teacher child reports.

  2. Head Start staff surveys for plus study. Additional Head Start teachers, program directors, and center directors may be selected for Plus topical modules or special studies. Teacher surveys (about 30 minutes), program director surveys (about 30 minutes), and center director surveys (about 25 minutes) would follow the same procedures as described above for the Head Start staff surveys.

  3. Early care and education administrators and providers surveys for plus study. Additional early care and education administrators or providers (such as education coordinators or family service staff) may be sampled for Plus studies to gather background information or additional content on a particular topic. These surveys would last approximately 30 minutes.



B.3. Methods to Maximize Response Rates and Data Reliability

There is an established, successful record of gaining program cooperation and obtaining high response rates with center staff, children, and families in research studies of Head Start, Early Head Start, and other preschool programs. To achieve high response rates, we will continue to use the procedures that have worked well on FACES, such as multi-mode approaches, e-mail as well as hard copy reminders, and tokens of appreciation. Because multiple attempts to locate parents and obtain responses increase costs the longer data collection continues, in fall 2014 and spring 2015 we offered a $5 bonus to parents who completed their survey within the first three weeks of being asked to do so. However, we did not do this in fall 2015 and instead simplified the payment structure to a $25 gift card for all parents (see A9). We also updated some of the components with improved technology, such as tablet computers and web-based applications. Marginal response rates for FACES 2009 ranged from 93 percent to 100 percent across instruments.

As outlined in a previous OMB clearance package for program recruitment, ACF sent a letter to selected programs, signed by Maria Woolverton (the federal project officer) and a member of the senior staff at OHS, describing the importance of the study, outlining the study goals, and encouraging their participation. Head Start program staff and families were motivated to participate because they are vested in the success of the program. For AI/AN FACES, experienced Mathematica site liaisons received FACES training that included additional sections on cultural awareness involving three consultants: Michelle Sarche, Miker Richardson, and Jessica Barnes-Najor. Each liaison partnered with AI/AN workgroup members who served as ongoing cultural mentors. Workgroup members also advised on the approaches for reaching out to parents and other sample members. If programs or centers were reluctant to participate in the study, Mathematica senior staff contacted them to encourage their participation.

Additionally, the study team will send correspondence to remind Head Start staff and parents about upcoming surveys (Appendices H, J, and P; Appendix K for AI/AN FACES) and child assessments (Appendix C-4; Appendix K.3 for AI/AN FACES). The web administration of Head Start staff and parent surveys will allow respondents to complete the surveys at their convenience. The study team will ensure that the language of the text in study forms and instruments is at a comfortable reading level for respondents. Paper-and-pencil survey options will be available for Head Start staff who have no computer or Internet access, and parent surveys can be completed via computers available at the center during the data collection visit or by telephone. CATI and field staff will be trained on refusal conversion techniques and will also receive FACES training with additional sections on cultural awareness.

These approaches, most of which have been used in prior rounds of FACES, will help ensure a high level of participation. Attaining the high response rates we expect reduces the likelihood of nonresponse bias, which in turn makes our conclusions more generalizable to the Head Start population. We will calculate both unweighted and weighted, marginal and cumulative, response rates at each stage of sampling and data collection. Following the American Association for Public Opinion Research (AAPOR) industry standard for calculating response rates, the numerator of each response rate will include the number of eligible completed cases, and the denominator will include the number of eligible selected cases. We define a completed instrument as one in which all critical items for inclusion in the analysis are complete and within valid ranges.
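
To illustrate the calculation described above, the following minimal sketch shows how unweighted and weighted response rates could be computed from a case-level file. It is provided for illustration only; the field names (eligible, completed, base_weight) are hypothetical and are not the study's actual data elements.

```python
# Minimal sketch of AAPOR-style response rate calculations from a case-level file.
# The field names (eligible, completed, base_weight) are hypothetical illustrations,
# not the study's actual variable names.

def response_rates(cases):
    """Return (unweighted, weighted) response rates.

    cases: list of dicts with keys 'eligible' (bool), 'completed' (bool),
    and 'base_weight' (float). A completed case is one with all critical
    items answered within valid ranges.
    """
    eligible = [c for c in cases if c["eligible"]]
    completes = [c for c in eligible if c["completed"]]

    unweighted = len(completes) / len(eligible)
    weighted = (sum(c["base_weight"] for c in completes)
                / sum(c["base_weight"] for c in eligible))
    return unweighted, weighted

# Example with made-up cases: two of three eligible cases completed.
sample = [
    {"eligible": True, "completed": True, "base_weight": 1.2},
    {"eligible": True, "completed": False, "base_weight": 0.8},
    {"eligible": False, "completed": False, "base_weight": 1.0},  # ineligible, excluded
    {"eligible": True, "completed": True, "base_weight": 1.5},
]
print(response_rates(sample))  # approximately (0.667, 0.771)
```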

Final response rates for fall 2014 are provided in Table B.3 (also presented in Part A). The parent response rate of 77 percent falls below our expected target of 86 percent. The parent survey experiment (described in Section A.3) included a three-week delay before study staff began actively contacting parents to complete the survey by phone. This delay could have adversely affected the response rate, especially in the later weeks of the data collection period. All consented parents were contacted in the spring, even if they did not complete the fall survey. In an effort to remediate the fall response rate issues for the spring data collection, we released fall nonrespondent cases first to allow more time to contact them and complete data collection, and we shortened the interval between when a parent is invited to complete the survey and when active calling begins from three weeks to two. Spring 2015 data collection included recruiting an additional 120 programs, continuing fall activities in the 60 programs (child assessments, parent surveys, and teacher child reports), conducting Plus interviews in those programs, and administering staff surveys in all 180 programs.

Table B.4 (also presented in Part A) reports the final response rates for spring 2015 data collection. As in fall 2014, the final spring 2015 parent survey response rate of 73 percent is lower than we expected based on our experience surveying parents in FACES 2006 and 2009.

Given the Core parent survey response rate was less than 80 percent at both the fall 2014 and spring 2015 data collection waves, we conducted an analysis to assess nonresponse bias.11 In this analysis, we compared estimates of child outcomes for parent survey respondents and nonrespondents and looked for significant differences between the two groups. We then examined whether the child-level nonresponse-adjusted weights mitigated the bias. We did this analysis separately for the parent survey in the fall and spring. To examine differences between respondents and nonrespondents for program-level characteristics and parent survey contact options (for example, whether they had the ability to send or receive text messages) obtained from the consent form, we focused on all sampled and consented children. To examine differences between respondents and nonrespondents for child outcomes, we focused on those sampled and consented children with completed child assessments.
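
One way to operationalize the comparison described above (and in footnote 11) is to compare a weighted child outcome estimate based on parent-survey respondents only with the estimate based on all consented children with assessments, using the base weights and then the nonresponse-adjusted weights. The sketch below is illustrative only; all column names are hypothetical and are not the study's actual variable names or production code.

```python
# Minimal sketch of the nonresponse bias check: compare a weighted child outcome
# estimate for parent-survey respondents with the full-sample benchmark, before
# and after nonresponse adjustment. Column names are hypothetical.
import numpy as np
import pandas as pd

def bias_check(df: pd.DataFrame, outcome: str) -> tuple:
    """df columns (hypothetical):
       outcome      - assessment score available for (nearly) all sampled children
       parent_resp  - 1 if the parent survey was completed, 0 otherwise
       base_wt      - child base weight
       nr_adj_wt    - child weight after a nonresponse adjustment
    Returns (bias_before, bias_after): difference between the respondent-only
    estimate and the full-sample benchmark, using base and adjusted weights.
    """
    benchmark = np.average(df[outcome], weights=df["base_wt"])
    resp = df[df["parent_resp"] == 1]
    bias_before = np.average(resp[outcome], weights=resp["base_wt"]) - benchmark
    bias_after = np.average(resp[outcome], weights=resp["nr_adj_wt"]) - benchmark
    return bias_before, bias_after
```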

More than three-quarters of the variables we examined did not have significantly different distributions between respondents and nonrespondents, even before nonresponse adjustments to the weights. Among those that did have different distributions, nonresponse adjustments to the weights generally either resolved or lessened those differences (to 2 percentage points or less).12 It is important to note that among the parents of the 2,462 children who were in the study in the fall, 2,105 (85.5 percent) completed at least one of the two surveys (fall 2014 or spring 2015). Among the parents of the 2,206 children who were in the study in both fall 2014 and spring 2015, 1,951 (88.4 percent) completed at least one of the two surveys. This matters because parents who completed the spring survey but not the fall survey were asked key demographic questions from the fall instrument in the spring. Therefore, most spring or program-year weights require that either the fall or spring parent interview be completed, but not necessarily both. Because of this, we believe researchers can feel comfortable making child-level estimates from the FACES 2014 Classroom + Child Outcomes Core study using the appropriate weights. (For more information, see the memorandum entitled “Nonresponse Bias Analysis for the FACES Core Study Parent Survey in Fall 2014 and Spring 2015,” submitted November 22, 2016.)

In light of the difficulties we experienced completing parent surveys in FACES this past year, we made several changes to the approach for AI/AN FACES. We simplified the incentive structure to a single amount (described in A.9), removed the delay in active calling, and offered additional on-site access for parents to complete the survey. Table B.5 presents final response rates for fall 2015 AI/AN FACES data collection. The Head Start program response rate of 68 percent fell below our expected target of 80 percent, which was based on our experience recruiting programs in FACES 2006 and 2009 in Regions I–X. In addition to the expected requirements, many Region XI programs selected for AI/AN FACES also required the approval of a tribal council or other representative body in order to participate in the study. This contributed to the lower response rate when the tribal body declined to participate or when the time allotted for recruitment expired.

Given that the program participation rate was less than 80 percent for Region XI Head Start programs sampled for AI/AN FACES, we conducted an analysis of the potential for nonresponse bias in estimates from programs participating in the AI/AN FACES study. We examined whether the distributions of a set of program-level variables from the Head Start Program Information Report differed between participating and nonparticipating programs.13 None of the variables we examined had statistically significantly different distributions between participating and nonparticipating programs before nonresponse adjustments were made to the sampling weights. That is, we were unable to reject the null hypotheses that participating programs did not differ from nonparticipating programs, although, given the small effective sample size, we likely did not have sufficient power to reject any of the null hypotheses. However, some estimated percentages did appear to differ between participating and nonparticipating programs before weights were applied. Nonresponse adjustments to the weights mostly improved these distributions, with most differences becoming less than 3 percentage points, although for one variable (percentage of children with disabilities) they resulted in greater deviations than initially observed. Because of the small sample size used for this nonresponse bias analysis, researchers should be cautious in interpreting its findings.

For program size, urbanicity, and the percentage of children who are AI/AN, we saw small differences before nonresponse adjustments and even smaller differences after those adjustments. This likely means that program-level nonparticipation will have minimal impact on the child-level estimates that will result from these participating programs, because child-level weights are built upon the final adjusted program weights. Furthermore, the study was designed to produce child-level, not program-level, estimates, with a primary focus on point estimates rather than comparisons between child subgroups. Our child sample size exceeded the targets laid out in the study design. Therefore, we believe researchers should feel comfortable using the AI/AN FACES child-level data, along with the appropriate weights. (For more information, see the memorandum entitled “Nonresponse Bias Analysis for AI/AN FACES Program Participation,” submitted November 22, 2016.)
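
Nonresponse adjustments of the kind referenced above typically redistribute the weight of nonparticipating programs to participating programs within weighting classes. The following is a rough, hypothetical sketch of such a weighting-class adjustment; the column names and class definitions are illustrative rather than the study's actual specifications, and in practice classes with few or no participating programs would be collapsed before adjusting.

```python
# Rough sketch of a weighting-class nonresponse adjustment: within each class,
# the base weights of nonparticipating programs are redistributed to participating
# programs. Column names and class definitions are hypothetical.
import pandas as pd

def nonresponse_adjust(programs: pd.DataFrame) -> pd.DataFrame:
    """programs columns (hypothetical):
       base_wt      - program base (sampling) weight
       participated - 1 if the program participated, 0 otherwise
       nr_class     - weighting class (e.g., region crossed with program size)
    Adds nr_adj_wt: the adjusted weight (zero for nonparticipating programs)."""
    def adjust(group: pd.DataFrame) -> pd.DataFrame:
        # Ratio of total base weight to participants' base weight within the class.
        factor = group["base_wt"].sum() / group.loc[group["participated"] == 1, "base_wt"].sum()
        out = group.copy()
        out["nr_adj_wt"] = out["base_wt"] * factor * out["participated"]
        return out
    return programs.groupby("nr_class", group_keys=False).apply(adjust)
```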

Table B.6 presents final response rates for spring 2016 AI/AN FACES data collection; the rates met or exceeded expectations for each instrument.

Table B.3. Final Response Rates for Fall 2014 Approved Information Requests

Data Collection                                 Expected Response Rate    Final Response Rate
Head Start program                              100%                      90%
Head Start center^a                             100%                      100%
Head Start core parent consent form^b           90%                       95%
Head Start core child assessment^c              92%                       95%
Head Start core parent survey^c                 86%                       77%
Head Start fall parent supplement survey^c      86%                       77%
Head Start core teacher child report^c          93%                       98%

a Among participating programs
b Among eligible children
c Among eligible, consented children

Table B.4. Final Response Rates for Spring 2015 Approved Information Requests

Data Collection                                                                  Expected Response Rate    Final Response Rate
Head Start program^a                                                             100%                      92%
Head Start center^b                                                              100%                      99%
Head Start core child assessment^c                                               92%                       95%
Head Start core parent survey^c                                                  75%                       73%
Head Start spring parent supplement survey^c                                     75%                       73%
Head Start core teacher child report^c                                           93%                       95%
Head Start core teacher survey                                                   83%                       93%
Head Start core program director survey                                          100%                      97%
Head Start core center director survey                                           100%                      93%
Head Start parent engagement interview consent form                              n.a.^d                    59%
Head Start parent qualitative interview (Family Engagement)                      85%                       83%
Head Start staff engagement interview consent form                               n.a.^d                    90%
Head Start staff qualitative interview (FSS Engagement)                          90%                       89%
Early care and education providers survey for Plus study (5E-Early Ed Pilot)     80%                       91%
Early care and education providers survey for Plus study (FPTRQ)                 83%                       95%

a Among the new programs sampled for spring 2015 Classroom Core
b Among participating new spring 2015 programs
c Among eligible, consented children
d The Family Engagement study had a target of 360 parent and 180 Head Start staff completed interviews.

Table B.5. Final Response Rates for Fall 2015 AI/AN FACES Approved Information Requests

Data Collection                             Expected Response Rate    Fall 2015 Sample Size    Final Response Rate
Head Start program                          80%                       31                       68%
Head Start center^a                         100%                      35                       97%
Head Start core parent consent form         90%                       1,034                    95%
Head Start core child assessment^b          83%                       984                      95%
Head Start core parent survey^b             83%                       984                      83%
Head Start core teacher child report^b      83%                       984                      97%

a Among participating programs
b Among eligible, consented children

Table B.6. Final Response Rates for Spring 2016 AI/AN FACES Approved Information Requests

Data Collection                             Expected Response Rate    Spring 2016 Sample Size    Final Response Rate
Head Start core child assessment^b          95%                       980                        96%
Head Start core parent survey^b             80%                       980                        82%
Head Start core teacher child report^b      95%                       980                        97%
Head Start core teacher survey              90%                       74                         96%
Head Start core program director survey     90%                       21                         100%
Head Start core center director survey      90%                       36                         97%

a Among participating programs
b Among eligible, consented children

B.4. Test of Procedures or Methods

Most of the scales and items in the proposed parent survey, child assessment, and teacher child reports have been successfully administered in FACES 2009 and in the fall 2014 wave of FACES 2014. For the AI/AN FACES Plus study, all assessment and survey instruments and study procedures and methods have been reviewed by the members of the AI/AN FACES Workgroup and determined to be appropriate for AI/AN children and families. We have conducted usability pretests with fewer than 10 respondents to test new devices (such as tablet computers) and new modes, and to assess the timing of the updated, streamlined instruments.

B.5. Individuals Consulted on Statistical Methods

The team is led by Maria Woolverton and Mary Muggenborg, the federal contracting officer’s representatives (CORs); Dr. Lizabeth Malone, project director; Dr. Louisa Tarullo and Dr. Nikki Aikens, co-principal investigators; and Annalee Kelly, survey director. Additional staff consulted on statistical issues include Barbara Carlson, a senior statistician at Mathematica, and Dr. Margaret Burchinal, a consultant to Mathematica on statistical and analytic issues.

1 The fall 2014 approval included spring 2015 data collection for child assessments and teacher child reports (TCRs).

2 One program’s visit for selecting classrooms and children was delayed until spring 2016 in order to complete local tribal approval processes.

3 The fall 2015 approval included spring 2016 data collection for child assessments and teacher child reports (TCRs).

4 We will work with the Office of Head Start (OHS) to update the list of programs before finalizing the sampling frame. Grantees and programs that were known by OHS to have lost their funding or otherwise closed between summer 2013 and winter 2014 will be removed from the frame, and programs associated with new grants awarded since then will be added to the frame.

5 The freshening of the program sample for FACES 2014–2018 will use well-established methods that ensure that the refreshed sample can be treated as a valid probability sample, giving new programs a chance of selection into the sample, while retaining the original programs still in operation.

6 The procedure offers all the advantages of the systematic sampling approach but eliminates the risk of bias associated with that approach. The procedure makes independent selections within each of the sampling intervals while controlling the selection opportunities for units crossing interval boundaries. Chromy, J.R. “Sequential Sample Selection Methods.” Proceedings of the Survey Research Methods Section of the American Statistical Association. Alexandria, VA: American Statistical Association, 1979, pp. 401–406.

7 If there are fewer than four FSS in a program’s sampled centers, we will sample from among all FSS in the program.

8 We will round the stratum sizes as needed.

9 If parents do not provide an email address, we will send hard copy invitations for the parent survey.

10 Plus studies may also include additional participants completing Core instruments such as direct child assessments or parent or staff surveys.

11 As response rates decrease, the risk for nonresponse bias for an estimate increases if nonrespondents would have responded differently from respondents. Bias usually cannot be directly measured; in this case, however, we can do so. We have key outcomes (outcome data from the child assessments) for nearly all sampled children, so we examined what happens to estimates of those outcomes with and without children whose parents completed the parent survey.

12 Although there is no rule of thumb for how large a bias is acceptable, the larger it is, the more caution is merited in analysis. In a modeling context, potential bias due to nonresponse can be mitigated by controlling for any possibly problematic variables in an analysis. For analyses that require a completed fall parent survey, a conservative approach would be to control for teacher-reported child disability status in the fall, which was more likely to be present for respondents. For analyses that require a completed spring parent survey, one could control for whether the Head Start program was under public school auspices (as reported on the Head Start PIR), from which parents were more likely to respond.

13 We have no information beyond the program level (for example, centers, classrooms, children) for nonresponding programs.

