FACES 2019 Data Collection OMB Part B




Head Start Family and Child Experiences Survey 2019 (FACES 2019) OMB Supporting Statement for Data Collection



OMB Information Collection Request

0970-0151


Supporting Statement
Part B

November 2018

Submitted By:

Office of Planning, Research, and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services


4th Floor, Mary E. Switzer Building

330 C Street, SW

Washington, DC 20201


Project Officers:

Mary Mueggenborg and Meryl Barofsky


CONTENTS

B1. Respondent Universe and Sampling Methods

B2. Procedures for Collection of Information

B3. Methods to Maximize Response Rates and Deal with Nonresponse

B4. Tests of Procedures or Methods to be Undertaken

B5. Individual(s) Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data



TABLES

Table B.1. FACES 2019 minimum detectable differences – Regions I-X

Table B.2. AI/AN FACES 2019 minimum detectable differences – Region XI

Table B.3. FACES 2019 expected response rates and final response rates for FACES 2014-2018

Table B.4. AI/AN FACES 2019 expected response rates and final response rates for AI/AN FACES 2015-2016



FIGURES

Figure B.1. Flow of sample selection procedures for FACES 2019

Figure B.2. Flow of sample selection procedures for AI/AN FACES 2019



APPENDICES

Appendix A: Program Information Packages

Appendix B: OMB History

Appendix C: Mathematica Confidentiality Pledge

Appendix D: AI/AN FACES 2019 Confidentiality Agreement

Appendix E: AI/AN FACES 2019 Agreement of Collaboration and Participation

Appendix F: AI/AN FACES 2019 Tribal Presentation Template

Appendix G: Authorizing Statutes

Appendix H: Conceptual Model

Appendix I: FACES 2019 Classroom Observation Components

Appendix J: AI/AN FACES 2019 Classroom Observation Components

Appendix K: FACES 2019 Respondent Materials

Appendix L: AI/AN FACES 2019 Respondent Materials

Appendix M: FACES 2019 Screen Shots

Appendix N: AI/AN FACES 2019 Screen Shots

Appendix O: Nonresponse Bias Analysis for the FACES Core Study Parent Survey in Fall 2014 and Spring 2015



ATTACHMENTS

Attachment 1: Telephone Script and Recruitment Information Collection for Program Directors, Regions I Through X

Attachment 2: Telephone Script and Recruitment Information Collection for Program Directors, Region XI

Attachment 3: Telephone Script and Recruitment Information Collection for On-Site Coordinators, Regions I Through X

Attachment 4: Telephone Script and Recruitment Information Collection for On-Site Coordinators, Region XI

Attachment 5: FACES 2019 Classroom Sampling Form from Head Start Staff

Attachment 6: FACES 2019 Child Roster Form from Head Start Staff

Attachment 7: FACES 2019 Parent Consent Form

Attachment 8: FACES 2019 Head Start Parent Survey

Attachment 9: FACES 2019 Head Start Child Assessment Components

Attachment 10: FACES 2019 Head Start Teacher Child Report

Attachment 11: FACES 2019 Head Start Teacher Survey

Attachment 12: FACES 2019 Head Start Program Director Survey

Attachment 13: FACES 2019 Head Start Center Director Survey

Attachment 14: AI/AN FACES 2019 Classroom Sampling Form from Head Start Staff

Attachment 15: AI/AN FACES 2019 Child Roster Form from Head Start Staff

Attachment 16: AI/AN FACES 2019 Parent Consent Form

Attachment 17: AI/AN FACES 2019 Head Start Parent Survey

Attachment 18: AI/AN FACES 2019 Head Start Child Assessment Components

Attachment 19: AI/AN FACES 2019 Head Start Teacher Child Report

Attachment 20: AI/AN FACES 2019 Head Start Teacher Survey

Attachment 21: AI/AN FACES 2019 Head Start Program Director Survey

Attachment 22: AI/AN FACES 2019 Head Start Center Director Survey



B1. Respondent Universe and Sampling Methods

The Head Start Family and Child Experiences Survey (FACES) will provide data on a set of key indicators in Head Start Regions I–X (FACES 2019) and Region XI (AI/AN FACES 2019). The sample designs for FACES 2019 and AI/AN FACES 2019 (whose Head Start grants are awarded to tribal governments or consortia of tribes) differ slightly. ACF proposes to contact 230 Head Start programs in Regions I-X and 30 Head Start programs in Region XI that will be selected to participate in FACES 2019 or AI/AN FACES 2019 for the purpose of gathering information that will be used (1) to develop a sampling frame of Head Start centers in each program, (2) to facilitate the selection of the center and classroom samples, and (3) for a subsample of programs, to facilitate the selection of child samples. In this package, we present the sampling plans for all the data collection activities, including selecting classrooms and children for the study and gathering consent for children; the data collection instruments and procedures; the data analyses; and the reporting of study findings. A previous information collection request (also 0970-0151) was approved on August 31, 2018, for all the information collection activities needed to recruit Head Start programs and centers into FACES 2019 and AI/AN FACES 2019.

Target population. The target population for FACES 2019 and AI/AN FACES 2019 is all Region I through XI Head Start programs in the United States (in all 50 states plus the District of Columbia), their classrooms, and the children and families they serve. The sample design is similar to the one used for FACES 2014 and AI/AN FACES in 2015. FACES 2019 and AI/AN FACES 2019 will use a stratified multistage sample design with four stages of sample selection: (1) Head Start programs, with programs defined as grantees or delegate agencies providing direct services; (2) centers within programs; (3) classes within centers; and (4) for a random subsample of programs, children within classes. To minimize the burden on parents/guardians who have more than one child selected for the sample, we will also randomly subsample one selected child per parent/guardian, a step that was introduced in FACES 2009.

Sampling frame and coverage of target population. The frame that will be used to sample programs is the latest available Head Start Program Information Report (PIR), which has been used as the frame for previous rounds of FACES. Because we are recruiting programs for AI/AN FACES 2019 from Region XI in fall 2018, we will use the most current PIR (most likely the 2016-2017 PIR) to select those programs. FACES 2019 recruitment will begin in spring 2019, so we will likely use the 2017-2018 PIR for the programs from Regions I-X. We will exclude from the sampling frame: Early Head Start programs, programs in Puerto Rico and other U.S. territories, Region XII migrant and seasonal worker programs, programs that do not directly provide services to children, programs under transitional management, and programs that are (or will soon be) defunded.1 The center frame will be developed through contacts with the sampled programs. Similarly, the classroom and child frames will be constructed after center and classroom samples are drawn, respectively. All centers, classrooms, and children in study-eligible, sampled programs will be included in the center, classroom, and child frames, respectively, with three exceptions. Centers that provide services to Head Start children through child care partnership arrangements are excluded. Classrooms that receive no Head Start funding (such as prekindergarten classrooms in a public school setting that also has Head Start-funded classrooms) are ineligible. Also, sampled children who leave Head Start between fall and spring of the 2019-2020 program year become ineligible for spring 2020.

Sample design

FACES 2019. The sample design for the new FACES study is based on the one used for FACES 2014, which in turn was based on the designs of the five previous studies. Like FACES 2014, the sample design for FACES 2019 will involve sampling for two study components: Classroom + Child Outcomes Core and Classroom Core. The Classroom + Child Outcomes Core study will involve sampling at all four stages (programs, centers, classrooms, and children), and the Classroom Core study will involve sampling at the first three stages only (excluding sampling of children within classes). Proposed sample sizes were determined with the goal of achieving accurate estimates of characteristics at multiple nested levels (the program, center, classroom, and child levels) given various assumptions about the validity and reliability of the selected measurement tools with the sample, the expected variance of key variables, the expected effect size of group differences, and the sample design and its impact on estimates. At the program and center levels, we gather information primarily via surveys administered to the directors. At the classroom level, we measure quality using the observational measures Classroom Assessment Scoring System (CLASS) and the short form of the Early Childhood Environment Rating Scale-Revised (ECERS-R) and gather additional information from surveys administered to the teacher. At the child level, we conduct direct assessments of child outcomes using a battery of validated tools and gather additional information from surveys administered to parents and teachers. To determine appropriate sample sizes at each nested level, we explored thresholds at which sample sizes could support answering the study's primary research questions. The study's research questions determine both the characteristics we aim to describe about programs, centers, classrooms, and children and the subgroup differences we expect to detect. We selected sample sizes appropriate for point estimates with adequate precision to describe the key characteristics at each level. Furthermore, we selected sample sizes appropriate for detection of differences between subgroups of interest (if differences exist) on key variables at each level. We evaluated the expected precision of these estimates after accounting for sample design complexities. (See the section on "Degree of Accuracy" below.)

As was the case for FACES 2014, the child-level sample for FACES 2019 will represent children enrolled in Head Start for the first time and those who are attending a second year of Head Start. This allows for a direct comparison of first- and second-year program participants and analysis of child gains during the second year. Prior to 2014, FACES followed newly enrolled children through one or two years of Head Start and then through spring of kindergarten. FACES 2019 will follow all children, but only through the fall and spring of one program year.

To minimize the effects of unequal weighting on the variance of estimates, we propose sampling with probability proportional to size (PPS) in the first two stages. At the third stage we will select an equal probability sample of classrooms within each sampled center and, in centers where children are to be sampled, an equal probability sample of children within each sampled classroom. The measure of size for PPS sampling in each of the first two stages will be the number of classrooms. This sampling approach maximizes the precision of classroom-level estimates and allows for easier in-field sampling of classrooms and children within classrooms. We will select a total of 180 programs across both Core study components for Regions I-X. Sixty of the 180 programs sampled for the Core study will be randomly subsampled with equal probability within strata to be included in the Classroom + Child Outcomes study. Within these 60 programs, we will select, if possible, two centers per program, two classes per center, and a sufficient number of children to yield 10 consented children per class, for a total of about 2,400 children at baseline.

Based on our experience with earlier FACES studies, we estimate that 70 percent of the 2,400 baseline children (about 1,680) will be new to Head Start. We expect a program and study retention rate of 90 percent from fall to spring, for a sample of 2,160 study children in both fall 2019 and spring 2020, of which about 1,512 (70 percent) are estimated to have completed their first Head Start year.

The FACES 2019 Classroom Core study component will include the 60 programs where children are sampled plus the remaining 120 programs from the sample of 180 in Regions I-X. We will select, from the additional 120 programs, two centers per program, and two classrooms per center. Across both study components, we will have a total of 360 centers and 720 classrooms for spring 2020 data collection. For follow-up data collection in spring 2022, we will select a refresher sample2 of programs and their centers so that the new sample will be representative of all programs and centers in Regions I-X at the time of follow-up data collection, and we will select a new sample of classrooms in all centers. Figure B.1 is a diagram of the sample selection and data collection procedures. At each sampling stage, we will use a sequential sampling technique based on a procedure developed by Chromy.3

We will initially select double the target number of programs, and pair adjacent selected programs within strata. (These paired programs would be similar to one another with respect to the implicit stratification variables.) We will then randomly select one from each pair to be released as part of the main sample of programs. After the initially released programs are selected, we will ask the Office of Head Start (OHS) to confirm that these programs are in good standing. If confirmed, each program will be contacted and recruited to participate in the study: the 60 programs subsampled for the Classroom + Child Outcomes Core will be recruited in spring 2019 (to begin participation in fall 2019); the remaining 120 programs will be recruited in fall 2019 (to begin participation in spring 2020). If the program is not in good standing or refuses to participate, we will release into the sample the other member of the program’s pair and go through the same process of confirmation and recruitment with that program. All released programs will be accounted for as part of the sample for purposes of calculating response rates and weighting adjustments. At subsequent stages of sampling, we will release all sampled cases, expecting full participation among the selected centers and classes. At the child level, we estimate that out of 12 selected children per class, we will end up with 10 eligible children with parental consent, which is our target. We expect to lose, on average, two children per class because they are no longer enrolled, because parental consent was not granted, or because of the subsampling of selected siblings.
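As a concrete illustration of the pair-and-release logic described above, the sketch below shows one simplified way it could be implemented. It is an illustration only, not the study's production sampling code, and the function and variable names are hypothetical.

```python
import random

def release_programs(paired_programs, seed=2019):
    """Illustrative sketch of the pair-and-release approach: adjacent
    selections within a stratum are paired, one member of each pair is
    randomly released to the main sample, and the other is held in reserve."""
    rng = random.Random(seed)
    main_sample, reserves = [], []
    for pair in paired_programs:
        idx = rng.randrange(2)              # randomly pick which member to release
        main_sample.append(pair[idx])
        reserves.append(pair[1 - idx])
    return main_sample, reserves

def final_program(released, reserve, in_good_standing, participates):
    """If a released program is not in good standing or declines to participate,
    release its reserve partner; all released programs still count toward
    response-rate calculations and weighting adjustments."""
    if in_good_standing(released) and participates(released):
        return released
    return reserve
```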

Figure B.1. Flow of sample selection procedures for FACES 2019

We will select centers PPS within each sampled program using the number of classrooms as the measure of size, again using the Chromy procedure. For the Classroom + Child Outcomes Core, we will randomly select classrooms and home visitors within each center with equal probability. Classrooms with very few children will be grouped with other classrooms in the same center for sampling purposes to ensure a sufficient sample yield.4 Once classrooms are selected, we will select an equal probability sample of 12 children per class, with the expectation that 10 will be eligible and will receive parental consent. For the additional 120 programs in the Classroom Core, we will randomly select classrooms within each center. Home visitors will not be sampled in this study component, and classroom grouping will not be necessary, as child sample sizes are not relevant for this set of programs.
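To illustrate the idea of PPS selection with the number of classrooms as the measure of size, the sketch below shows ordinary systematic PPS sampling on a sorted (implicitly stratified) frame. It is a simplified stand-in for the Chromy sequential procedure cited above, not an implementation of it, and the center records shown are hypothetical.

```python
import random

def pps_systematic_sample(units, size_key, n_sample, seed=2020):
    """Select n_sample units with probability proportional to size (PPS)
    using systematic selection on a sorted frame. `size_key` returns each
    unit's measure of size (here, the number of classrooms per center)."""
    rng = random.Random(seed)
    total = sum(size_key(u) for u in units)
    interval = total / n_sample
    start = rng.uniform(0, interval)
    hits = [start + k * interval for k in range(n_sample)]

    selected, cumulative, i = [], 0.0, 0
    for u in units:
        cumulative += size_key(u)
        while i < n_sample and hits[i] < cumulative:
            selected.append(u)              # unit covers this selection point
            i += 1
    return selected

# Hypothetical example: select 2 centers from a program, PPS by classroom count.
centers = [{"id": "A", "classrooms": 3}, {"id": "B", "classrooms": 6},
           {"id": "C", "classrooms": 2}, {"id": "D", "classrooms": 5}]
chosen = pps_systematic_sample(centers, lambda c: c["classrooms"], n_sample=2)
```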

AI/AN FACES 2019. In this section, we highlight differences in the sample design relative to FACES 2019 as described above. We will select a total of 22 programs for Region XI. Within the 15 sampled programs with two or more centers, we will select two centers per program, two classes per center, and a sufficient number of children to yield 10 consented children per class. Within the seven sampled programs with one center, we will select up to four classrooms and a sufficient number of children to yield 10 consented children per class. We will have a total of about 800 children from Region XI. Based on our experience with AI/AN FACES 2015, we expect a program and study retention rate of 90 percent from fall to spring, for a sample of 720 study children in both fall 2019 and spring 2020. We will have a total of 37 centers and 80 classrooms for spring 2020 data collection. Figure B.2 is a diagram of the sample selection and data collection procedures. The primary stratification of the program sample is based on program structure (number of centers per program, and number of classrooms for single-center programs, as described in the next section in more detail). At the child level, we estimate that out of 13 selected children per class, we will end up with 10 eligible children with parental consent, which is our target. We expect to lose, on average, three children per class because they are no longer enrolled, because parental consent was not granted, or because of the subsampling of selected siblings.

Figure B.2. Flow of sample selection procedures for AI/AN FACES 2019

B2. Procedures for Collection of Information

B2.A. Sampling and estimation procedures

Statistical methodology for stratification and sample selection. The sampling methodology is described under section B1 above. When sampling programs from Regions I-X, we will form explicit strata using census region, metro/nonmetro status, and percentage of racial/ethnic minority enrollment. Sample allocation will be proportional to the estimated fraction of eligible classrooms represented by the programs in each stratum.5 We will implicitly stratify (sort) the sample frame by other characteristics, such as percentage of dual language learner (DLL) children (categorized), whether the program is a public school district grantee, and the percentage of children with disabilities. No explicit stratification will be used for selecting centers within programs, classes within centers, or children within classes, although some implicit stratification (such as the percentage of children who are dual language learners) may be used for center selection. For sampling programs from Region XI, we will form three explicit strata using program structure (number of centers and classrooms), and further divide the largest of these three strata (multicenter programs) into five geographic regions to form a total of seven strata. Prior to the AI/AN FACES 2015 data collection, the AI/AN FACES Workgroup provided guidance on how to divide the states in which Region XI Head Start programs exist into five geographic groups. We will implicitly stratify the frame by the percentage of children in the program who are AI/AN (from PIR information) to ensure we get a range of programs with respect to the percentage of children who are AI/AN.

Estimation procedure. For both FACES 2019 and AI/AN FACES 2019, we will create analysis weights to account for variations in the probabilities of selection and variations in the eligibility and cooperation rates among those selected. For each stage of sampling (program, center, class, and child) and within each explicit sampling stratum, we will calculate the probability of selection. The inverse of the probability of selection within stratum at each stage is the sampling or base weight. The sampling weight takes into account the PPS sampling approach, the presence of any certainty selections, and the actual number of cases released. We treat the eligibility status of each sampled unit as known at each stage. Then, at each stage, we will multiply the sampling weight by the inverse of the weighted response rate within weighting cells (defined by sampling stratum) to obtain the analysis weight, so that the respondents’ analysis weights account for both the respondents and nonrespondents.

Thus, the program-level weight adjusts for the probability of selection of the program and response at the program level; the center-level weight adjusts for the probability of center selection and center-level response; and the class-level weight adjusts for the probability of selection of the class and class-level response. For FACES 2019 only, the child-level weights adjust for the subsampling probability of programs for the Classroom + Child Outcomes Core. For FACES 2019 and AI/AN FACES 2019, we then adjust for the probability of selection of the child within classroom, whether parental consent was obtained, and whether various child-level instruments (for example, direct child assessments and parent surveys) were obtained. The formulas below represent the various weighting steps for the cumulative weights through prior stages of selection, where P represents the probability of selection and RR the response rate at that stage of selection.

W_program = 1 / (P_program × RR_program)

W_center = W_program × 1 / (P_center × RR_center)

W_class = W_center × 1 / (P_class × RR_class)

W_child = W_class × 1 / (P_child × RR_child)
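As a small illustration of how these cumulative weights could be computed, the sketch below multiplies the stage-specific factors together. It is a minimal sketch with hypothetical selection probabilities and response rates, not the study's actual weighting code.

```python
def analysis_weight(stage_probs, stage_response_rates):
    """Cumulative analysis weight across nested sampling stages.

    stage_probs: selection probabilities (P) at each stage, e.g.
        [P_program, P_center, P_class, P_child].
    stage_response_rates: weighted response rates (RR) within the weighting
        cell at each stage. The weight is the product over stages of
        1 / (P * RR), so respondents also account for nonrespondents.
    """
    weight = 1.0
    for p, rr in zip(stage_probs, stage_response_rates):
        weight *= 1.0 / (p * rr)
    return weight

# Hypothetical example: program, center, class, and child stages.
w_child = analysis_weight(stage_probs=[0.05, 0.50, 0.40, 0.80],
                          stage_response_rates=[0.90, 1.00, 1.00, 0.85])
```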
Degree of accuracy needed to address the study's primary research questions. The complex sampling plan, which includes several stages, stratification, clustering, and unequal probabilities of selection, requires using specialized procedures to calculate the variance of estimates. Standard statistical software assumes an independent and identically distributed sample, which would be the case with a simple random sample. A complex sample, however, generally has larger variances than those calculated by standard software. Two approaches for estimating variances under complex sampling, Taylor series linearization and replication methods, are available in SUDAAN and in the special survey procedures of SAS, Stata, and other packages.
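To make the replication idea concrete, the sketch below shows a delete-one-PSU jackknife (JK1) standard error for a weighted mean, treating programs as the primary sampling units and ignoring strata. It is an illustration only; production estimates would come from survey software such as SUDAAN or the survey procedures in SAS or Stata using the actual stratum and PSU structure.

```python
import numpy as np

def jackknife_mean_se(values, weights, psu_ids):
    """Delete-one-PSU jackknife (JK1) standard error for a weighted mean."""
    values, weights, psu_ids = map(np.asarray, (values, weights, psu_ids))
    full_estimate = np.average(values, weights=weights)

    psus = np.unique(psu_ids)
    n_psu = len(psus)
    replicates = []
    for psu in psus:
        keep = psu_ids != psu                       # drop one PSU at a time
        replicates.append(np.average(values[keep], weights=weights[keep]))
    replicates = np.array(replicates)

    # JK1 variance: ((n-1)/n) * sum of squared deviations from the full estimate
    variance = (n_psu - 1) / n_psu * np.sum((replicates - full_estimate) ** 2)
    return full_estimate, np.sqrt(variance)
```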

Most of the analyses will be at the child and classroom levels. Given various assumptions about the validity and reliability of the selected measurement tools with the sample, the hypothesized variation expected on key variables, the expected effect size of group differences, and the sample design and its impact on estimates, the sample size should be sufficiently large to detect meaningful differences. In Tables B.1 and B.2 (for Regions I-X and Region XI, respectively), we show the minimum detectable differences with 80 percent power (and alpha = 0.05) and various sample and subgroup sizes, assuming different intraclass correlation coefficients for classroom- and child-level estimates at the various stages of clustering (see table footnote).

For point-in-time estimates, we are making the conservative assumption that there is no covariance between estimates for two subgroups, even though the observations may be in the same classes, centers, and/or programs. By conservative, we mean that smaller differences than those shown will likely be detectable. For pre-post estimates, we do assume covariance between the estimates at two points in time. Evidence from another survey shows expected correlations between fall and spring estimates of about 0.5. Using this information, we applied another design effect component to the variance of estimates of pre-post differences to reflect the fact that it is efficient to have many of the same children or classes at both time points.
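To illustrate the general form of this adjustment (the study's exact calculation follows the Kish formula for a difference with partial overlap cited in the table notes), the variance of a fall-to-spring difference for an overlapping sample can be written as

Var(spring − fall) = Var(fall) + Var(spring) − 2 × r × SE(fall) × SE(spring),

where r is the fall-spring correlation (approximately 0.5, based on evidence from another survey). A positive correlation reduces the variance of the estimated gain, which is why observing many of the same children or classes at both time points makes pre-post differences easier to detect.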

The top section of Tables B.1 and B.2 (labeled “Point in Time Subgroup Comparisons”) shows the minimum differences that would be detectable for point-in-time (cross-sectional) estimates at the class and child levels. We have incorporated the design effect attributable to clustering. We show minimum detectable differences between point-in-time child subgroups defined two different ways: (1) assuming the subgroup is defined by program-level characteristics, and (2) assuming the subgroup is defined by child-level characteristics (which reduces the clustering effect in each subgroup). The bottom section (labeled “Estimates of Program Year Gains”) shows detectable pre-post difference estimates at the child level. Examples are given below.

The columns farthest to the left (“Subgroups” and “Time Points”) show several sample subgroup proportions (for example, a comparison of male children to female children would be represented by “50, 50”). The child-level estimates represent two scenarios: (1) all consented children in fall 2019 (n = 2,400) and (2) all children in spring 2020 who remained in Head Start (n = 2,160). For example, the n = 2,400 row within the “33, 67” section represents a subgroup comparison involving children at the beginning of data collection for two subgroups, one representing one-third of that sample (for example, children in bilingual homes), the other representing the remaining two-thirds (for example, children from English-only homes).

The last few columns (“MDD”) show various types of variables from which an estimate might be made; the first two are estimates in the form of proportions, the next is an estimate for a normalized variable (such as an assessment score) with a mean of 100 and standard deviation of 15 (for child-level estimates only), and the last shows the minimum detectable effect size—the MDD in standard deviation-sized units. The numbers for a given row and column show the minimum underlying differences between the two subgroups that would be detectable for a given type of variable with the given sample size and design assumptions. The MDD numbers given in Tables B.1 and B.2 assume that all sampled classrooms will have both completed observations and completed teacher surveys. Similarly, they assume that all sampled children who have parental consent will have completed child assessments, parent surveys, and teacher child reports.



Table B.1. FACES 2019 minimum detectable differences – Regions I-X

POINT IN TIME SUBGROUP COMPARISONS FOR CLASSROOMS AND CHILDREN

Classroom subgroups (minimum detectable difference)

Time point | Percentage in group 1 | Percentage in group 2 | Classes in group 1 | Classes in group 2 | Proportion of 0.1 or 0.9 | Proportion of 0.5 | Minimum detectable effect size
Spring 2020 | 50 | 50 | 360 | 360 | .084 | .140 | .280
Spring 2020 | 33 | 67 | 238 | 482 | .090 | .149 | .298
Spring 2020 | 15 | 85 | 108 | 612 | .119 | .198 | .392

Child subgroups (minimum detectable difference; program-defined subgroups / child-defined subgroups)

Time point | Percentage in group 1 | Percentage in group 2 | Children in group 1 | Children in group 2 | Proportion of 0.1 or 0.9 | Proportion of 0.5 | Normalized variable (mean = 100, s.d. = 15) | Minimum detectable effect size
Fall 2019 | 50 | 50 | 1,200 | 1,200 | 0.091/0.067 | 0.151/0.112 | 4.5/3.4 | 0.301/0.224
Fall 2019 | 33 | 67 | 792 | 1,608 | 0.096/0.068 | 0.161/0.113 | 4.8/3.4 | 0.320/0.227
Fall 2019 | 40 | 30 | 960 | 720 | 0.110/0.070 | 0.183/0.117 | 5.5/3.5 | 0.364/0.233
Spring 2020 | 50 | 50 | 1,080 | 1,080 | 0.091/0.068 | 0.152/0.113 | 4.5/3.4 | 0.303/0.226

ESTIMATES OF PROGRAM YEAR GAINS FOR CHILDREN

Time 1 | Time 2 | Percent subgroup at both times | Children at time 1 | Children at time 2 | Proportion of 0.1 or 0.9 | Proportion of 0.5 | Normalized variable (mean = 100, s.d. = 15) | Minimum detectable effect size
Fall 2019 | Spring 2020 | 100 | 2,400 | 2,160 | 0.048 | 0.079 | 2.375 | 0.158
Fall 2019 | Spring 2020 | 70 | 1,680 | 1,512 | 0.057 | 0.095 | 2.839 | 0.189
Fall 2019 | Spring 2020 | 40 | 960 | 864 | 0.075 | 0.126 | 3.756 | 0.250

Note: Conservative assumption of no covariance for point-in-time subgroup comparisons. Covariance adjustment made for pre-post difference (Kish, p. 462, Table 12.4.II, Difference with Partial Overlap). Assumes α = .05 (two-sided), .80 power. For classroom-level estimates, assumes 180 programs, 360 centers, between-program ICC = .2, between-center ICC = .2. For child-level estimates, assumes 60 programs, 120 centers, between-program ICC = .10, between-center ICC = .05, between-classroom ICC = .12.

Note: The minimum detectable effect size is the minimum detectable difference in standard deviation-sized units.


Table B.2. AI/AN FACES 2019 minimum detectable differences – Region XI

POINT IN TIME SUBGROUP COMPARISONS FOR CHILDREN

Child subgroups (minimum detectable difference; program-defined subgroups / child-defined subgroups)

Time point | Percentage in group 1 | Percentage in group 2 | Children in group 1 | Children in group 2 | Proportion of 0.1 or 0.9 | Proportion of 0.5 | Normalized variable (mean = 100, s.d. = 15) | Minimum detectable effect size
Fall 2019 | 50 | 50 | 400 | 400 | 0.155/0.115 | 0.258/0.191 | 7.7/5.7 | 0.511/0.380
Fall 2019 | 33 | 67 | 264 | 536 | 0.165/0.116 | 0.274/0.194 | 8.1/5.8 | 0.543/0.385
Fall 2019 | 40 | 30 | 320 | 240 | 0.187/0.120 | 0.312/0.200 | 9.3/6.0 | 0.617/0.397
Spring 2020 | 50 | 50 | 360 | 360 | 0.155/0.116 | 0.259/0.193 | 7.7/5.8 | 0.514/0.385

ESTIMATES OF PROGRAM YEAR GAINS FOR CHILDREN

Time 1 | Time 2 | Percent subgroup at both times | Children at time 1 | Children at time 2 | Proportion of 0.1 or 0.9 | Proportion of 0.5 | Normalized variable (mean = 100, s.d. = 15) | Minimum detectable effect size
Fall 2019 | Spring 2020 | 100 | 800 | 720 | 0.081 | 0.135 | 4.0 | 0.269
Fall 2019 | Spring 2020 | 70 | 560 | 504 | 0.097 | 0.162 | 4.8 | 0.321
Fall 2019 | Spring 2020 | 40 | 320 | 288 | 0.129 | 0.215 | 6.4 | 0.425

Note: Conservative assumption of no covariance for point-in-time subgroup comparisons. Covariance adjustment made for pre-post difference (Kish, p. 462, Table 12.4.II, Difference with Partial Overlap). Assumes α = .05 (two-sided), .80 power. Assumes 22 programs, 37 centers, 80 classrooms, between-program ICC = .10, between-center ICC = .05, between-classroom ICC = .12.

Note: The minimum detectable effect size is the minimum detectable difference in standard deviation-sized units.


If we were to compare two equal-sized subgroups of the 720 classrooms (Regions I-X) in spring 2020, our design would allow us to detect a minimum difference of .280 standard deviations with 80 percent power. At the child level, if we were to compare normalized assessment scores with a sample size of 2,400 children in fall 2019, and two approximately equal-sized child-defined subgroups (such as boys and girls), our design would allow us to detect a minimum difference of 3.4 points with 80 percent power. If we were to compare these two subgroups again in the spring of 2020, our design would allow us to detect a minimum difference of 3.4 points.

If we were to perform a pre-post comparison (fall 2019 to spring 2020) for the same normalized assessment measure, we would be able to detect a minimum difference of 2.4 points. If we were to perform the same pre-post comparison for a subgroup representing 40 percent of the entire sample (n = 960 in fall 2019; n = 864 in spring 2020), we would be able to detect a minimum difference of 3.8 points.

The main purpose of the AI/AN FACES 2019 is to provide descriptive statistics for this population of children. Comparisons between child subgroups are a secondary purpose, given the smaller sample size. If we were to compare normalized assessment scores for two equal-sized subgroups (say, boys and girls) of the 800 children in Region XI (AI/AN) in fall 2019, our design would allow us to detect a minimum difference of 5.7 points with 80 percent power. If we were to compare these two subgroups again in the spring of 2020, our design would allow us to detect a minimum difference of 5.8 points. If we were to perform a pre-post comparison (fall 2019 to spring 2020) for the same normalized assessment measure, we would be able to detect a minimum difference of 4.0 points.
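As a rough illustration of the kind of calculation behind the figures in Tables B.1 and B.2, the sketch below computes a standard minimum detectable difference for two independent subgroups with 80 percent power and a two-sided alpha of .05. The design effect value is a placeholder; the study's actual calculations incorporate the multilevel clustering and ICC assumptions listed in the table notes, so the output will not reproduce the table values exactly.

```python
from scipy.stats import norm

def mdd_two_groups(sd, n1, n2, deff=1.0, alpha=0.05, power=0.80):
    """Minimum detectable difference between two independent subgroups.

    sd: standard deviation of the outcome (e.g., 15 for a normalized score,
        or sqrt(p * (1 - p)) for a proportion).
    deff: design effect reflecting clustering and unequal weighting
        (a placeholder here; the study's values depend on the ICCs and
        clustering assumptions in the table notes).
    """
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    se_diff = sd * (deff * (1 / n1 + 1 / n2)) ** 0.5
    return (z_alpha + z_power) * se_diff

# Example: normalized score (sd = 15), two child-defined subgroups of 1,200 each,
# with an assumed design effect of 1.5.
mdd = mdd_two_groups(sd=15, n1=1200, n2=1200, deff=1.5)
```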

Unusual problems requiring specialized sampling procedures. We do not anticipate any unusual problems that require specialized sampling procedures.

Any use of periodic (less frequent than annual) data collection cycles to reduce burden. We do not plan to collect data less frequently than once per year to reduce burden.

B2.B. Analysis Plans

This section describes the plans for data analysis. Please see section A.16 for information on the time schedule for tabulation and publication based on these findings.

FACES 2019

The analyses will aim to (1) describe Head Start programs, centers, and classrooms; (2) describe children and families served by Head Start, including children’s outcomes; (3) relate classroom, center, and program characteristics to classroom quality; and (4) relate family, classroom, center, and program characteristics to children’s outcomes. Analyses will employ a variety of methods, including cross-sectional and longitudinal approaches, descriptive statistics (means, percentages), simple tests of differences across subgroups and over time (t-tests, chi-square tests), and multivariate analysis (regression analysis, hierarchical linear modeling [HLM]). For all analyses, we will calculate standard errors that take into account multilevel sampling and clustering at each level (program, center, classroom, child) as well as the effects of unequal weighting. We will use analysis weights, taking into account the complex multilevel sample design and nonresponse at each stage.

Cross-sectional Analyses. Descriptive analyses will provide information on characteristics at a single point in time, overall and by various subgroups. For example, for questions on the characteristics of Head Start programs, classrooms, or teachers (for example, average quality of classrooms or current teacher education levels) and the characteristics of Head Start children and families (for example, family characteristics or children’s skills at the beginning of the Head Start year), we will calculate averages (means) and percentages. We will also examine differences in characteristics (for example, children’s outcomes or classroom quality), by various subgroups. We will calculate averages and percentages, and use t-tests and chi-square tests to assess the statistical significance of differences between subgroups.

Longitudinal Analyses. Analyses will also examine changes in characteristics during the program year, overall and by various subgroups. For questions about changes in children’s outcomes during a year of Head Start, we will calculate the average differences in outcomes from fall to spring for all children and for selected subgroups (for example, children who are dual language learners). We will use a similar approach for changes in family characteristics during the year. Outcomes that have been normed on broad populations of preschool-age children (for example, the Woodcock-Johnson IV Letter-Word Identification or the Peabody Picture Vocabulary Test, 5th Edition) will be compared with the published norms to assess how Head Start children compare with other children their age in the general population and how they have progressed relative to national norms.

Multivariate Analyses. We will use multiple approaches for questions relating characteristics of the classroom, teacher, or program to children’s outcomes at single points in time, changes during a year in Head Start, or relationships among characteristics of classrooms, teachers, programs, and classroom quality. Many of the questions can be addressed by estimating hierarchical linear models that take into account that children are nested within classrooms that are nested within centers within programs.
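As one concrete illustration of the nested-model approach, the sketch below fits a simple two-level random-intercept model (children within classrooms) with statsmodels. It is only a sketch with fabricated example values; the study's models would include additional nesting levels (centers and programs), survey weights, and design-appropriate standard errors.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per child, with a child outcome,
# a classroom quality score, and a classroom identifier.
data = pd.DataFrame({
    "assessment_score": [98, 104, 99, 91, 95, 93, 110, 102, 107, 101, 96, 105],
    "classroom_quality": [4.2, 4.2, 4.2, 3.1, 3.1, 3.1, 5.0, 5.0, 5.0, 3.8, 3.8, 3.8],
    "classroom_id": ["c1"] * 3 + ["c2"] * 3 + ["c3"] * 3 + ["c4"] * 3,
})

# Two-level model: child outcomes regressed on classroom quality,
# with a random intercept for each classroom.
model = smf.mixedlm("assessment_score ~ classroom_quality",
                    data, groups=data["classroom_id"])
result = model.fit()
print(result.summary())
```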

AI/AN FACES 2019

Similar to the FACES 2019 Classroom + Child Outcomes Core, the AI/AN FACES 2019 analyses will aim to describe children and families served by Region XI Head Start, including children's outcomes, and to relate family, classroom, and program characteristics to children's outcomes. Analyses will employ a variety of methods, including cross-sectional and longitudinal approaches and descriptive statistics (means, percentages). Analyses will be conducted to identify patterns overall and for key groups (for example, for AI/AN children only). For questions about changes in children's outcomes during a year of Head Start, we will calculate the average differences in outcomes from fall to spring for all children. For all analyses, we will calculate standard errors that take into account multilevel sampling and clustering at each level (program, center, classroom, child) as well as the effects of unequal weighting. We will use analysis weights, taking into account the complex multilevel sample design and nonresponse at each stage.

B2.C. Data Collection Procedures

Many data collection features are the same as or build on procedures that proved successful for FACES 2014 or AI/AN FACES 2015. The period of each field data collection wave for FACES 2019 will be ten weeks long, beginning in September for the fall 2019 wave and in March for the spring 2020 wave. A member of the study team (led by Mathematica Policy Research), in conjunction with the Head Start program's on-site coordinator (a designated Head Start program staff member who will work with the study team to recruit teachers and families and help schedule site visits), will schedule the data collection week based on the program's availability. The study team will schedule no more than ten sites for visits each week. FACES data collection starts in fall 2019 with site visits to Head Start centers (120 for FACES 2019 and 37 for AI/AN FACES 2019) to sample classrooms and children for participation in the study. Approximately two to three weeks later, site visits to these centers will take place in fall 2019 to directly assess the school readiness skills of children sampled to participate in FACES (2,400 for FACES 2019 and 800 for AI/AN FACES 2019). In the fall, parent surveys will be released as soon as consent is obtained. In spring, all parent surveys will be released at the beginning of the field period. The study team will send out parent emails on a rolling basis as consents are received (Appendix K).6 Head Start teachers will rate each sampled child (approximately 10 children per classroom). These activities will occur a second time in spring 2020. The FACES 2019 program sample size will increase to 180 programs in the spring to collect program- and classroom-level data (AI/AN FACES 2019 will remain at 22 programs). During this phase, for both FACES 2019 and AI/AN FACES 2019, site visitors will conduct observations of classroom quality. Head Start program directors, center directors, and teachers will also complete surveys.

Below we outline the procedures for each of the FACES 2019 and AI/AN FACES 2019 data collection instruments. Anticipated marginal response rates are noted in Table B.3 for FACES 2019 and Table B.4 for AI/AN FACES 2019. The instruments being used in FACES 2019 and AI/AN FACES 2019 are updated versions of those used in FACES 2014 and AI/AN FACES 2015, respectively. The advance materials are similar to those used in previous rounds but have been modified based on changes to the study design and input from outside experts (see Supporting Statement A8 for additional information). Below is a list of the instruments that are currently being submitted. Items one through nine will be administered as part of FACES 2019 in fall 2019 and spring 2020. Items 10 through 18 will be administered as part of AI/AN FACES 2019 in fall 2019 and spring 2020.

  1. FACES 2019 classroom sampling form from Head Start staff (Attachment 5; fall and spring). Upon arrival at a selected center in fall 2019, a Field Enrollment Specialist (FES) will request a list of all Head Start-funded classrooms from Head Start staff (typically the on-site coordinator). Head Start staff may provide this information in various formats, such as printouts from an administrative record system or photocopies of hard copy lists or records. For each classroom, the FES will enter the teacher's first and last names, the session type (morning, afternoon, full day, or home visitor), and the number of Head Start children enrolled into a web-based sampling program on a laptop computer. The sampling program will select about two classrooms for participation in the study. We plan to use this classroom sampling form with the 360 centers in all 180 programs participating in the fall 2019 or spring 2020 data collection.

  2. FACES 2019 child roster form from Head Start staff (Attachment 6; fall only). For each selected classroom in fall 2019, the FES will request from Head Start staff (typically the on-site coordinator) a list of the names, dates of birth, and funding type of each child enrolled in the selected classroom. Head Start staff may provide this information in various formats, such as printouts from an administrative record system or photocopies of hard copy lists or records. The FES will use a laptop computer to enter this information into a web-based sampling program. The program will select up to 12 children per classroom for participation in the study. For these selected children only, the FES will then enter each child's sex, home language, and parent's name into the sampling program. Finally, the FES will ask Head Start staff (typically the on-site coordinator) to identify any siblings among the 24 selected children per center. The FES will identify the sibling groups in the sampling program, and the sampling program will then drop all but one member of each sibling group, leaving one child per family.

  3. FACES 2019 parent consent form (Attachment 7; fall only). After sampling in fall 2019, the FES (in conjunction with the on-site coordinator) will attempt to obtain parental consent for all sampled children before the data collection visit.

  4. FACES 2019 Head Start parent survey (Attachment 8; fall and spring). On average, each parent survey is approximately 25 minutes long. In fall 2019 and spring 2020, parents will be invited to complete the survey on a rolling basis after we receive the consent form; invitations will be sent by hard copy letter and by email (for parents who provide an email address on their consent form). The invitations will include a website address, login ID, and password for completing the survey online and a toll-free telephone number should parents choose to complete the survey by telephone (Appendix K). If needed, we will send parents an email or hard copy letter approximately three weeks after we receive the consent form to remind them to complete the survey. The reminders will contain the same information provided in the invitation, including the toll-free telephone number offering the option to complete the survey by telephone (Appendix K). We will have laptops available during the visit week for parents to complete the web survey at the center if they choose. Parents will receive a gift card in the amount of $30 for their participation.

  5. FACES 2019 Head Start child assessment (Attachment 9; fall and spring). The study team will conduct direct child assessments with each consented FACES 2019 child during the scheduled data collection week in fall 2019 and spring 2020. The on-site coordinator will schedule child assessments at the Head Start center. Parents will be reminded of the child assessments the week before the field visit via flyers displayed in the center and a reminder notification sent home in children's backpacks (Appendix K). On average, child assessments will take approximately 60 minutes. A trained assessor will use computer-assisted personal interviewing (CAPI) with a laptop to conduct the child assessments one-on-one, asking questions and recording the child's responses. Before the field visit, we will discuss center internet availability and strength with the on-site coordinator to determine whether we will be able to conduct the web-based assessments without interruption. The child will receive a children's book valued at $10 for his or her participation. Due to the nationwide outbreak of COVID-19, spring 2020 child assessment data collection was canceled.

  6. FACES 2019 Head Start teacher child report (TCR) (Attachment 10; fall and spring). In fall 2019 and spring 2020, Head Start teachers will be asked to complete a TCR for each consented FACES 2019 child in their classroom. The study team will send teachers a letter containing a website address, login ID, and password for completing the TCRs online and a study FAQ (Appendix K).7 During the onsite field visit, field staff will have hard copies of the TCR forms for teachers who would prefer to complete the forms on paper. Each TCR takes approximately 10 minutes to complete. Teachers will have approximately 10 FACES 2019 children in their classroom and will receive a $10 gift card per completed TCR.

  7. FACES 2019 Head Start teacher survey (Attachment 11; spring only). On average, each teacher survey will take approximately 30 minutes to complete. It will be a self-administered web instrument with a paper-and-pencil option in spring 2020. These cases will be released during the center’s data collection week once the FES completes classroom sampling. Information about the teacher survey will be included in the letter and study FAQ teachers receive describing the TCR (Appendix K). In the spring, teachers will receive one login ID and password to complete the teacher survey and immediately after they complete the teacher survey they will be automatically directed to begin the TCRs. During the onsite field visit, field staff will have hard copies of the surveys for teachers who prefer to complete the survey on paper.

  8. FACES 2019 Head Start program director survey (Attachment 12; spring only). On average, each program director survey is approximately 30 minutes in length. It will be a self-administered web instrument with a paper-and-pencil option in spring 2020. These cases are released at the beginning of the spring data collection period. The study team will send program directors a letter containing a website address, login ID, and password for completing the program director survey (Appendix K). FACES liaisons will follow up with directors requesting paper forms as needed.

  9. FACES 2019 Head Start center director survey (Attachment 13; spring only). On average, each center director survey is approximately 30 minutes in length. In spring 2020, it will be a self-administered web instrument released at the start of the field period with a paper-and-pencil option provided during the field visit. The study team will send center directors a letter containing a website address, login ID, and password for completing the center director survey (Appendix K). During the onsite field visit, field staff will have hard copies of the surveys for directors who prefer to complete the survey on paper.

  10. AI/AN FACES 2019 classroom sampling form from Head Start staff (Attachment 14; fall only). Upon arrival at a selected center in fall 2019, a FES will request a list of all Head Start-funded classrooms from Head Start staff (typically the on-site coordinator). Head Start staff may provide this information in various formats, such as printouts from an administrative record system or photocopies of hard copy lists or records. For each classroom, the FES will enter the teacher's first and last names, the session type (morning, afternoon, full day, or home visitor), and the number of Head Start children enrolled into a web-based sampling program on a laptop computer. The sampling program will select two to four classrooms for participation in the study (depending on the program structure as described above). We plan to use this classroom sampling form with the 37 centers in 22 programs participating in the fall 2019 data collection.

  11. AI/AN FACES 2019 child roster form from Head Start staff (Attachment 15; fall only). For each selected classroom in fall 2019, the FES will request from Head Start staff (typically the on-site coordinator) a list of the names and dates of birth of each child enrolled in the selected classroom. Head Start staff may provide this information in various formats, such as printouts from an administrative record system or photocopies of hard copy lists or records. The FES will use a laptop computer to enter this information into a web-based sampling program. The program will select 13 children per classroom for participation in the study. For these selected children only, the FES will then enter each child's sex, home language, and parent's name into the sampling program. Finally, the FES will ask Head Start staff (typically the on-site coordinator) to identify any siblings among the 26 to 52 selected children per center. The FES will identify the sibling groups in the sampling program, and the sampling program will then drop all but one member of each sibling group, leaving one child per family.

  12. AI/AN FACES 2019 parent consent form (Attachment 16; fall only). After sampling, the FES (in conjunction with the on-site coordinator) will attempt to obtain parental consent for all sampled children before the data collection visit in fall 2019.

  13. AI/AN FACES 2019 Head Start parent survey (Attachment 17; fall and spring). On average, each parent survey is approximately 30 minutes long. Similar to the FACES 2019 fall 2019 and spring 2020 data collection (see item four above), we will send parents an email and hard copy letter invitation on a rolling basis after receiving their consent form (parents who provide an email address on their consent form will receive the email) (Appendix L). If needed, we will send parents an email or hard copy letter approximately three weeks after we receive the consent form to remind them to complete the survey (Appendix L). Telephone interviewing will begin immediately after parents receive the advance letter asking them to answer the parent survey. We will have laptops available during the visit week for parents to complete the web survey if they choose.

  14. AI/AN FACES 2019 Head Start child assessment (Attachment 18; fall and spring). The study team will conduct direct child assessments with each consented AI/AN FACES 2019 child during the scheduled data collection week in fall 2019 and spring 2020. On average, child assessments will take approximately 60 minutes. The same procedures for the FACES 2019 child assessments will be followed (item five above). In particular, parents will be reminded of the child assessments the week before the field visit via flyers displayed in the center, and a reminder notification that will be sent home in children’s backpacks (Appendix L). Because of the remote locations of some programs, before the field visit, we will discuss center internet availability and strength with the on-site coordinator to determine whether we will be able to conduct the web-based assessments without interruption. Due to the nationwide outbreak of COVID-19, spring 2020 child assessment data collection was canceled.

  15. AI/AN FACES 2019 Head Start teacher child report (Attachment 19; fall and spring). Head Start teachers will be asked to complete a TCR for each consented AI/AN FACES 2019 child in their classroom in fall 2019 and spring 2020 following the same procedures used in the FACES 2019 study (item six above). Each TCR takes approximately 10 minutes to complete. In particular, the study team will send teachers a letter containing a website address, login ID, and password for completing the TCRs online and a study FAQ (Appendix L).8

  16. AI/AN FACES 2019 Head Start teacher survey (Attachment 20; spring only). On average, each teacher survey will take approximately 35 minutes to complete. The AI/AN FACES 2019 teacher surveys will be released during the center’s spring 2020 data collection week following the same procedures used in the FACES 2019 study (item seven above) (Appendix L).

  17. AI/AN FACES 2019 Head Start program director survey (Attachment 21; spring only). On average, each program director survey is approximately 20 minutes in length. Data collection for the AI/AN FACES program director survey will follow the same procedures used in spring 2020 for FACES 2019 (item eight above), with cases released at the beginning of the data collection period (Appendix L; also includes a letter for program directors with just one center).

  18. AI/AN FACES 2019 Head Start center director survey (Attachment 22; spring only). On average, each center director survey is about 20 minutes in length. The same procedures for the spring 2020 FACES 2019 center director survey will be followed (item nine above). (Appendix L; also includes a letter for staff who serve as directors of both selected centers.)

Note that in FACES 2019, the parent consent form, parent survey, and child assessment are available in English and Spanish. All other FACES 2019 instruments are available only in English. In AI/AN FACES 2019, all instruments are available only in English.

B3. Methods to Maximize Response Rates and Deal with Nonresponse

Expected Response Rates

For FACES 2019 we expect high response rates for all respondents (see Table B.3). These expected response rates are based on those achieved in prior FACES studies. Expected response rates for FACES 2019 and final response rates for each wave of FACES 2014-2018 are shown in Table B.3.

Table B.3. FACES 2019 expected response rates and final response rates for FACES 2014-2018

Data Collection | FACES 2019 Expected Response Rate a | Final Response Rates Fall 2014 | Final Response Rates Spring 2015 | Final Response Rates Spring 2017
Head Start program study participation b | 100% | 90% | 90% | 86%
Head Start center study participation c | 100% | 100% | 100% | 100%
Parent consent form d | 85% | 95% | n/a | n/a
Head Start parent survey e | 85% | 77% | 74% | n/a
Head Start child assessment e | 85% | 95% | 95% | n/a
Head Start teacher child report e | 85% | 98% | 95% | n/a
Head Start teacher survey | 85% | n/a | 92% | 91%
Head Start program director survey | 85% | n/a | 96% | 93%
Head Start center director survey | 85% | n/a | 93% | 91%

a We expect the same response rate in fall 2019 and spring 2020.

b Among participating programs in fall 2014, but among new programs sampled for spring 2015 and spring 2017

c Among participating programs in fall 2014, but among new participating programs for spring 2015 and spring 2017

d Among eligible children

e Among eligible, consented children

n/a = instrument not in field during this time period

Table B.4 reports the expected response rates for AI/AN FACES 2019 and the final response rates for fall 2015 and spring 2016 data collection for AI/AN FACES 2015. Our expected response rates for AI/AN FACES 2019 are based on these results.

Table B.4. AI/AN FACES 2019 expected response rates and final response rates for AI/AN FACES 2015-2016

Data Collection | AI/AN FACES 2019 Expected Response Rate a | Final Response Rates Fall 2015 | Final Response Rates Spring 2016
Head Start program study participation | 80% | 65% | n/a
Head Start center study participation | 100% | 97% | n/a
Parent consent form b | 85% | 95% | n/a
Head Start parent survey c | 85% | 83% | 82%
Head Start child assessment c | 85% | 96% | 96%
Head Start teacher child report c | 85% | 95% | 97%
Head Start teacher survey | 85% | n/a | 96%
Head Start program director survey | 85% | n/a | 100%
Head Start center director survey | 85% | n/a | 97%

a We expect the same response rate in fall 2019 and spring 2020.

b Among eligible children

c Among eligible, consented children

n/a = instrument not in field during this time period

Dealing with Nonresponse and Nonresponse Bias

On most survey instruments, past experience in FACES and AI/AN FACES suggests we can expect very high response rates (particularly for instruments completed by Head Start staff: program and center directors and teachers) and very low item nonresponse. Surveys will be available on the web, which will make completing them easier for respondents. For those who have not responded or who have not completed their surveys, we will conduct nonresponse follow-up using in-person methods when data collection staff are on site, followed by reminder emails, postcards, letters (with paper versions of the surveys), phone calls, and possibly text messages. Finally, we will provide $500 in fall 2019 and $250 in spring 2020 to thank programs for participating in the study (approved under 0970-0151 on August 31, 2018, as part of the recruitment information package). This is to encourage participation across the program, centers, and staff, and the program director can use the amount to support the program at his or her discretion. We expect that high participation rates and weighting procedures will reduce the risk of nonresponse bias.

The AI/AN FACES 2015 Head Start program participation rate of 68 percent in fall 2015 fell below our expected target of 80 percent, which was based on our experience recruiting programs in FACES. In addition to expected local Institutional Review Board (IRB) requirements, many Region XI programs selected for AI/AN FACES 2015 also required the approval of a tribal council or other representative body in order to participate in the study. This contributed to the lower response rate when the tribal body declined to participate or when the time allotted for recruitment expired before tribal approval was obtained. Given that the program participation rate was less than 80 percent for Region XI Head Start programs sampled for AI/AN FACES 2015, we conducted an analysis of the potential for nonresponse bias for estimates from programs participating in the AI/AN FACES 2015 study. We will conduct a similar analysis for AI/AN FACES 2019 as needed. (For more information, see the memorandum entitled “Nonresponse Bias Analysis for AI/AN FACES Program Participation,” submitted November 22, 2016.)

Nonresponse weights. As described in Section A16 of Supporting Statement Part A, as well as Section B1 above, we will produce analysis weights for surveys and other data collection activities that account for selection probabilities and differential nonresponse patterns, even when response rates are high. We will construct these weights in a way that will mitigate the risk for nonresponse bias (using the limited number of data elements that we have for both responding and nonresponding sample members, most likely program-level characteristics). Should response rates fall below 80 percent, we will conduct a nonresponse bias analysis, in accordance with OMB guidelines. See “maximizing response rates” later in this section for more information.
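For illustration only, the sketch below shows the general shape of a weighting-class nonresponse adjustment in Python: respondents' base weights (inverse selection probabilities) are inflated so that they carry the weight of nonrespondents in the same class. The column names, the single grouping variable, and the example data are assumptions for the sketch, not the study's actual weighting specification.

```python
import pandas as pd

def nonresponse_adjusted_weights(frame: pd.DataFrame) -> pd.Series:
    """Adjust base weights (inverse selection probabilities) for nonresponse
    within weighting classes built from characteristics known for both
    respondents and nonrespondents (e.g., program-level characteristics)."""
    def adjust(group: pd.DataFrame) -> pd.Series:
        total = group["base_weight"].sum()                      # all sampled cases
        responding = group.loc[group["responded"], "base_weight"].sum()
        factor = total / responding if responding > 0 else 0.0  # inflation factor
        # Respondents absorb the weight of nonrespondents in their class;
        # nonrespondents end up with a weight of zero.
        return group["base_weight"] * factor * group["responded"]
    return frame.groupby("weighting_class", group_keys=False).apply(adjust)

# Example: two hypothetical weighting classes with different response patterns
sample = pd.DataFrame({
    "weighting_class": ["A", "A", "A", "B", "B"],
    "base_weight":     [10.0, 10.0, 20.0, 15.0, 15.0],
    "responded":       [True, False, True, True, True],
})
sample["analysis_weight"] = nonresponse_adjusted_weights(sample)
print(sample)
```

Within each class, the adjusted weights of respondents sum to the total base weight of all sampled cases in that class, which is what limits nonresponse bias when the class variables are related to both response propensity and the survey outcomes.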

Parent surveys. Given that the FACES 2014-2018 parent survey response rate was less than 80 percent at both the fall 2014 and spring 2015 data collection waves, we conducted an analysis to assess nonresponse bias.9 We will conduct nonresponse bias analysis as needed in FACES 2019. (For more information, see the memorandum entitled “Nonresponse Bias Analysis for the FACES Core Study Parent Survey in Fall 2014 and Spring 2015,” submitted November 22, 2016.)

In light of the difficulties we experienced completing parent surveys in FACES 2014-2018, we made several changes to the survey invitation and incentive structure for AI/AN FACES 2015. We simplified the incentive structure to a single amount of $30, removed the delay before active calling began, and offered additional on-site opportunities for parents to complete the survey. We will apply those procedures in FACES 2019. For all instruments discussed below, Tables B.3 and B.4 show prior response rates.

Teacher Child Reports. These are brief reports that teachers complete about each study child in their classrooms. Staff can complete these reports on the web or on paper, which we believe facilitates completion. In FACES 2014-2018 and AI/AN FACES 2015, we achieved high response rates using in-person follow-up and a $10 gift card for each completed teacher child report; we believe continuing this approach is important for maintaining high response rates in this round.

Teacher surveys. We plan to employ procedures similar to those that resulted in high response rates in FACES 2014-2018 and AI/AN FACES 2015. We believe that these very high response rates reduce the potential for nonresponse bias. The survey is available on the web, with a paper option. In the past, this approach resulted in high response rates and low item nonresponse.

Program and center director surveys. In FACES 2014-2018 and AI/AN FACES 2015, we obtained response rates of at least 90 percent on the director surveys, leaving little potential for nonresponse bias. We hope to continue achieving these high response rates by using the same procedures we have used previously. The web version makes it easier for respondents to complete the survey at their convenience.

Maximizing Response Rates

There is an established, successful record of gaining program cooperation and obtaining high response rates with center staff, children, and families in research studies of Head Start, Early Head Start, and other preschool programs. To achieve high response rates, we will continue to use the procedures that have worked well on FACES, such as multi-mode approaches, e-mail as well as hard copy reminders, and incentives. Marginal response rates for FACES 2014-2018 ranged from 74 percent to 98 percent across instruments. As outlined in a previous OMB clearance package for program recruitment (also 0970-0151, approved August 31, 2018), ACF will send selected programs a letter, signed by the OHS deputy director, describing the importance of the study, outlining the study goals, and encouraging their participation. Head Start program staff and families will be motivated to participate because they are invested in the success of the program. For AI/AN FACES 2019, experienced Mathematica site liaisons will receive training with additional sections on cultural awareness co-facilitated by Workgroup members. Each liaison will partner with AI/AN Workgroup members who will serve as ongoing cultural mentors. Workgroup members also advised on the approaches for reaching out to parents and other sample members before the fall 2015 data collection for AI/AN FACES. If programs or centers are reluctant to participate in either study, Mathematica senior staff will contact them to encourage their participation.

Additionally, the study team will send correspondence to remind Head Start staff and parents about upcoming surveys and child assessments (Appendix K for FACES 2019; Appendix L for AI/AN FACES 2019). Web administration of the Head Start staff and parent surveys will allow respondents to complete the surveys at their convenience. The study team will ensure that the language of study forms and instruments is at a comfortable reading level for respondents. Paper-and-pencil survey options will be available for Head Start staff who have no computer or Internet access, and parent surveys can be completed via computer (with the study team providing additional access at the center during the data collection visit) or by telephone. CATI and field staff will be trained on refusal conversion techniques, and for AI/AN FACES 2019 they will also receive training with additional sections on cultural awareness.

These approaches, most of which have been used in prior FACES studies, will help ensure a high level of participation. Attaining the high response rates we expect reduces the likelihood of nonresponse bias, which in turn makes our conclusions more generalizable to the Head Start population. We will calculate both unweighted and weighted, marginal and cumulative, response rates at each stage of sampling and data collection. Following the American Association for Public Opinion Research (AAPOR) industry standard for calculating response rates, the numerator of each response rate will include the number of eligible completed cases, and the denominator will include the number of eligible selected cases. We define a completed instrument as one in which all critical items for inclusion in the analysis are complete and within valid ranges.
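As a concrete illustration of that calculation, the short sketch below computes unweighted and weighted response rates as the (weighted) number of eligible completed cases divided by the (weighted) number of eligible selected cases. The field names and example values are assumptions for the illustration, not the study's data structure.

```python
import pandas as pd

def response_rates(cases: pd.DataFrame) -> dict:
    """Compute unweighted and weighted response rates among eligible cases.
    'complete' marks cases with all critical items answered within valid ranges."""
    eligible = cases[cases["eligible"]]
    completed = eligible[eligible["complete"]]
    return {
        "unweighted": len(completed) / len(eligible),
        "weighted": completed["weight"].sum() / eligible["weight"].sum(),
    }

# Example: five eligible cases (one incomplete) and one ineligible case
cases = pd.DataFrame({
    "eligible": [True, True, True, True, True, False],
    "complete": [True, True, True, False, True, False],
    "weight":   [1.0, 2.0, 1.5, 1.0, 2.5, 1.0],
})
print(response_rates(cases))  # unweighted 0.80, weighted 0.875
```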

B4. Tests of Procedures or Methods to be Undertaken

The study procedures and methods, and most of the scales and items in the proposed parent survey, child assessment, and teacher child report, have been successfully administered in FACES 2014 and AI/AN FACES 2015. For AI/AN FACES 2019, we will use procedures, methods, and measures similar to those used successfully in AI/AN FACES 2015, which the AI/AN FACES 2015 Workgroup reviewed and determined to be appropriate for AI/AN children and families. For FACES 2019, we pretested the teacher survey, program director survey, and center director survey with no more than 9 respondents each to estimate the overall duration of the surveys and to test question clarity and flow in new areas covering content and curriculum and state and local systems. We revised new questions across all staff surveys as a result of the pretest feedback. Additionally, we proposed removing some questions from the teacher survey because of the length of time it took some teachers to complete the survey during the pretest. We will pretest the child assessment with no more than 9 respondents to test procedures for the new executive function measure and updated measures.

B5. Individual(s) Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

The team is led by Mary Mueggenborg and Dr. Meryl Barofsky, Federal Contracting Officer’s Representatives (CORs); Dr. Nina Philipsen Hetzner, Contract Project Specialist; Dr. Laura Hoard, Federal Project Officer; Dr. Lizabeth Malone, project director; Drs. Louisa Tarullo and Nikki Aikens, co-principal investigators; and Annalee Kelly, survey director. Additional staff consulted on statistical issues include Barbara Carlson, a senior statistician at Mathematica, and Drs. Margaret Burchinal and Marty Zaslow, consultants to Mathematica on statistical and analytic issues.

1 We will work with the Office of Head Start (OHS) to update the list of programs before finalizing the sampling frame. Grantees and programs that were known by OHS to have lost their funding or otherwise closed between the latest PIR and the time of sampling will be removed from the frame, and programs associated with new grants awarded since then will be added to the frame.

2 The freshening of the program sample for FACES 2019 will use well-established methods that ensure that the refreshed sample can be treated as a valid probability sample. Spring 2022 data collection will be included in a future information collection request.

3 The procedure offers all the advantages of the systematic sampling approach but eliminates the risk of bias associated with that approach. The procedure makes independent selections within each of the sampling intervals while controlling the selection opportunities for units crossing interval boundaries. Chromy, J. R. “Sequential Sample Selection Methods.” Proceedings of the Survey Research Methods Section of the American Statistical Association, Alexandria, VA: American Statistical Association, 1979, pp. 401–406.
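For background only, the sketch below illustrates ordinary systematic probability-proportional-to-size (PPS) selection with a random start, the baseline approach that Chromy's sequential procedure refines; it is not an implementation of the sequential method itself, and the enrollment figures are invented.

```python
import random

def systematic_pps(sizes, n, seed=None):
    """Systematic probability-proportional-to-size selection with a random start.
    `sizes` are measures of size (e.g., program enrollment); `n` is the number
    of selections. Units larger than the sampling interval can be selected more
    than once (certainty selections). Returns the indices of selected units."""
    rng = random.Random(seed)
    total = sum(sizes)
    interval = total / n                      # one selection per interval
    start = rng.uniform(0, interval)          # random start within the first interval
    targets = [start + k * interval for k in range(n)]
    selected, cumulative, i = [], 0.0, 0
    for target in targets:
        # advance until the cumulative size reaches the current target
        while cumulative + sizes[i] <= target:
            cumulative += sizes[i]
            i += 1
        selected.append(i)
    return selected

# Example: select 3 of 8 hypothetical programs with probability proportional to enrollment
enrollments = [120, 45, 300, 80, 60, 210, 150, 35]
print(systematic_pps(enrollments, n=3, seed=1))
```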

4 If the number of children per class is not available at the time of classroom sampling, we will randomly sample three classrooms, and then randomly subsample two for initial release. If these two classrooms are not likely to yield 20 children, we will release the third classroom as well.

5 We will stochastically round the stratum sizes as needed.
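Stochastic rounding rounds a non-integer stratum size up with probability equal to its fractional part and down otherwise, so the expected value equals the unrounded allocation. A minimal sketch, using an invented allocation:

```python
import random

def stochastic_round(value, rng=random):
    """Round `value` up with probability equal to its fractional part,
    down otherwise, so the expected result equals `value`."""
    lower = int(value // 1)
    fraction = value - lower
    return lower + (1 if rng.random() < fraction else 0)

# Example: a stratum allocation of 12.4 rounds to 13 about 40 percent of the time
print([stochastic_round(12.4) for _ in range(5)])
```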

6 If parents do not provide an email address, we will send hard copy invitations for the parent survey.

7 The web-based TCR is administered after the teacher survey. Teachers who complete the teacher survey online will be automatically routed to the TCRs without having to log in a second time to complete them.

8 The web-based TCR is administered after the teacher survey, similar to FACES 2019. Teachers who complete the teacher survey online will be automatically routed to the TCRs without having to log in a second time to complete them.

9 As response rates decrease, the risk for nonresponse bias for an estimate increases if nonrespondents would have responded differently from respondents. Bias usually cannot be directly measured; in this case, however, we can do so. We have key outcomes (outcome data from the child assessments) for nearly all sampled children, so we examined what happens to estimates of those outcomes with and without children whose parents completed the parent survey.
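As a sketch of that comparison (with invented scores and response indicators), one can compare the mean assessment outcome among all assessed children with the mean among children whose parents completed the parent survey; the difference approximates the bias that restricting analyses to parent-survey respondents would introduce.

```python
import pandas as pd

# Invented example data: assessment scores for all assessed children,
# with a flag for whether the child's parent completed the parent survey
children = pd.DataFrame({
    "assessment_score": [88, 92, 75, 81, 95, 70, 84, 90],
    "parent_responded": [True, True, False, True, True, False, True, False],
})

overall_mean = children["assessment_score"].mean()
respondent_mean = children.loc[children["parent_responded"], "assessment_score"].mean()

# The difference estimates the nonresponse bias for this outcome if analyses
# were limited to children whose parents completed the survey
print(f"All assessed children:     {overall_mean:.1f}")
print(f"Parent-survey respondents: {respondent_mean:.1f}")
print(f"Estimated bias:            {respondent_mean - overall_mean:+.1f}")
```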

