The Early Head Start Family and Child
Experiences Survey (Baby FACES)—2018
OMB Information Collection Request
0970-0354

Supporting Statement
Part B
July 2017

Submitted By:
Office of Planning, Research and Evaluation
Administration for Children and Families
U.S. Department of Health and Human Services
4th Floor, Mary E. Switzer Building
330 C Street, SW
Washington, D.C. 20201
Project Officer:
Amy Madigan, Ph.D.

SUPPORTING STATEMENT PART B

CONTENTS

B1. RESPONDENT UNIVERSE AND SAMPLING METHODS
    Target population
    Sampling frame and coverage of target population
    Design of the sample
    Size of the sample and precision needed for key estimates
    Expected response rate
    Expected item nonresponse rate for critical questions

B2. PROCEDURES FOR COLLECTION OF INFORMATION

B3. METHODS TO MAXIMIZE RESPONSE RATES AND DEAL WITH NONRESPONSE
    Expected Response Rates
    Dealing with Nonresponse and Nonresponse Bias
    Maximizing Response Rates

B4. TESTS OF PROCEDURES OR METHODS TO BE UNDERTAKEN

B5. INDIVIDUAL(S) CONSULTED ON STATISTICAL ASPECTS AND INDIVIDUALS COLLECTING AND/OR ANALYZING DATA

REFERENCES

TABLES

B.1  Expected sample counts for Baby FACES 2018
B.2  Precision of estimates and minimum detectable correlations
B.3  Minimum detectable effect sizes (between subgroups)
B.4  Expected response rates and number of responses
B.5  Instruments and data collection mode
B.6  Baby FACES 2009 annual response rates, by cohort
B.7  Baby FACES 2018 pretests


Contents of OMB Information Collection Request for Baby FACES 2018
Supporting Statement Part A
Supporting Statement Part B

Appendices
Appendix A. 60-Day Federal Register Notice
Appendix B. Comments Received on 60-Day Federal Register Notice
Appendix C. Conceptual Frameworks and Research Questions
Appendix D. Mathematica Confidentiality Pledge
Appendix E. Advance Materials
Appendix F. Brochure
Appendix G. Screen Shots
Appendix H. Baby FACES 2009 Instrument Information

Attachments (Study Instruments)
Attachment 1. Classroom/home visitor sampling form from Early Head Start staff
Attachment 2. Child roster form from Early Head Start staff
Attachment 3. Parent consent form
Attachment 4. Parent survey
Attachment 5. Parent Child Report
Attachment 6a. Staff survey (Teacher survey)
Attachment 6b. Staff survey (Home Visitor survey)
Attachment 7a. Staff Child Report (Teacher)
Attachment 7b. Staff Child Report (Home Visitor)
Attachment 8. Program director survey
Attachment 9. Center director survey


B1. Respondent Universe and Sampling Methods

The Administration for Children and Families (ACF) at the U.S. Department of Health and
Human Services (HHS) seeks approval to collect descriptive information for the Early Head
Start Family and Child Experiences Survey 2018 (Baby FACES 2018). The proposed data
collection builds upon a prior study (Baby FACES 2009; also under OMB 0970-0354) that
longitudinally followed two cohorts of children through their experience in the Early Head Start
(EHS) program. We learned a great deal about program participation over time and about
services received by children and families. However, the earlier design did not allow for
national-level estimates of service quality, nor inferences about children who enter the program
after 15 months of age. To fill these knowledge gaps and to answer additional questions about
how programs function, the Baby FACES 2018 design will include a cross-section of a
nationally representative sample of programs, centers, home visitors, teachers, classrooms, and
children and families (including pregnant women). This will allow for nationally representative
estimates at all levels at a point in time and will include the entire age span of enrolled children.
It will also aid ACF with planning, training and technical assistance, management, and policy,
which is particularly important given the recent implementation of the new Head Start Program
Performance Standards and adoption of the Head Start Early Learning Outcomes Framework.
We anticipate another information collection with a new cross-sectional sample of EHS
programs in 2020 (Baby FACES 2020); however, we expect to focus on different research areas
for that collection. Therefore, we will submit a separate request for the information collection for
2020.
Data collection activities for Baby FACES 2018 include a web-based survey of EHS
program directors (140) and center directors (493), in-person survey interviews with teachers
(798) and home visitors (599), telephone survey interviews with parents (2,310), and self-administered
surveys about sampled children’s development, which both parents (2,310) and staff (2,743)¹ will
complete. These are the numbers of responses we expect after accounting for nonresponse. We will
also conduct observations of classroom quality in 840
classrooms. Below we provide information about the sampling methods to be used in Baby
FACES 2018. For information about sampling methods for Baby FACES 2009, see the approved
information collection request under OMB number 0970-0354 (approved 10/21/2008). Baby
FACES 2020 sampling methods will be similar to those described below for Baby FACES 2018;
detailed plans will be provided in a future request.
Target population
The target population for Baby FACES 2018 consists of EHS programs, centers, home visitors,
classrooms, and teachers nationwide, as well as the families, children, and pregnant women they
serve. The study will draw a nationally representative sample from this population.

¹ We expect that 95 percent of staff (teachers and home visitors) with sampled and consented children will provide
reports on children, meaning that 1,097 staff will complete an average of 2.5 reports each for a total of 2,743
children receiving reports overall.


Sampling frame and coverage of target population
We plan to select a probability proportional to size (PPS) sample of EHS programs from a
sample frame derived from the latest available Head Start Program Information Report (PIR),
which includes information from all Head Start and EHS programs (grantees and delegate
agencies). Before selecting the sample, we will exclude all Head Start-only programs (i.e.,
programs serving only preschool-aged children) as well as any EHS programs that are overseen
by ACF regional offices XI (American Indian and Alaska Native) and XII (Migrant and
Seasonal),² any programs that are under transitional management, any programs outside the 50
states and the District of Columbia, and any programs that do not directly provide services to
children and families. According to the 2015-16 PIR, about 1,200 EHS programs remain after
these exclusions.

² Separate studies of Migrant/Seasonal Head Start and American Indian/Alaskan Native Head Start programs are ongoing.

The first stage of sample selection is EHS programs. Within each program we will select a
sample of centers and/or home visitors, depending on the type(s) of services the program
provides. Within each center we will select a sample of classrooms (and their associated
teachers), and a sample of children within classrooms. For each subsampled home visitor we will
select a sample of pregnant women and children from their caseloads.
Design of the sample
Baby FACES 2018 uses a cross-sectional sample design, with data collection occurring in spring
2018, following OMB approval. It includes a nationally representative sample of programs,
centers, home visitors, classrooms, and teachers and the families, children, and pregnant women
they serve, and will provide a comprehensive point-in-time snapshot of EHS programs, services,
and the population served.
We will select the sample in a way that balances the desire for precise and unbiased
estimates with the logistical realities of data collection. We will accomplish this using a complex
sample design that incorporates multistage sampling, stratification, and unequal selection
probabilities. We plan to select a sample of EHS programs with PPS from a sample frame
derived from the latest available Head Start PIR, excluding the programs described above.
We will use program characteristics from the PIR as explicit and implicit stratification
variables. In explicit stratification, the sample is allocated and selected separately within each
stratum, allowing for more control over how the sample is distributed across important
characteristics, which can include oversampling. The explicit stratification variables will be
whether the program is an EHS-Child Care Partnership grantee; then among non-grantees, we
will look at programs that are center-based only, home-based only, or provide both service
options. In implicit stratification, the sampling frame is sorted by one or more additional
important characteristics within explicit strata before selecting the sample, as a way of further
enhancing the sample’s representativeness. The implicit variables will be whether the program
has majority Spanish-speaking enrollees, whether they are located in a metropolitan or nonmetropolitan area, and the program’s ACF region. Although we are not planning to use explicit
stratification to oversample any type of program, we do plan to use it to ensure that the sample of
programs represents the most important characteristics in a way that is proportional to their
distribution in the population. The way in which we will allocate the sample across explicit strata
will attempt to maximize precision at the end of the multistage sampling process. We will then
select a PPS sample of programs using a sequential sampling procedure in SAS developed by
Chromy (1979). As with any PPS selection, we will appropriately account for programs that are
so large relative to others in their stratum that they are selected with certainty. Later in this
section, we discuss the measure of size we will use to allocate and select the program sample.
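To make the selection mechanics concrete, the sketch below shows a simplified probability-proportional-to-size selection: after sorting the frame by the implicit stratification variables, it takes a random-start, fixed-interval pass over the cumulated measures of size. This is a stand-in for, not a reproduction of, the Chromy (1979) sequential procedure we will run in SAS, and the frame fields and measure of size shown are hypothetical.

```python
import random

def pps_systematic_select(frame, n, size_key):
    """Select n units with probability proportional to size using a systematic
    pass (random start, fixed skip) over the cumulated sizes.  Assumes certainty
    units (size >= skip interval) have already been removed and selected with
    probability 1, as described in the text."""
    total = sum(unit[size_key] for unit in frame)
    skip = total / n                      # selection interval
    target = random.uniform(0, skip)      # random start
    selected, cum = [], 0.0
    for unit in frame:
        cum += unit[size_key]
        while target < cum and len(selected) < n:
            selected.append(unit)
            target += skip
    return selected

# Hypothetical frame for one explicit stratum; sorting by the implicit
# stratification variables spreads the sample across them.
frame = [
    {"program_id": i,
     "spanish_majority": i % 2,                        # hypothetical fields
     "metro": (i // 2) % 2,
     "acf_region": (i % 10) + 1,
     "funded_enrollment": random.randint(40, 400)}     # measure of size
    for i in range(300)
]
frame.sort(key=lambda u: (u["spanish_majority"], u["metro"], u["acf_region"]))
program_sample = pps_systematic_select(frame, n=35, size_key="funded_enrollment")
```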
Baby FACES 2018 aims to sample 140 EHS programs. To help achieve this sampling goal,
first we will select an augmented PPS sample of more than 300 programs. Second, we will form
pairs of adjacent selections within strata (those with similar implicit stratification characteristics).
Finally, we will randomly select one program from each pair to release initially, with the other
member of the pair becoming its back-up. We will release the back-up only if the main release
turns out to be ineligible (e.g., closed or in imminent danger of losing its funding) or refuses to
participate. We will also select one or more extra pairs within each stratum, should any main pair
yield no eligible and participating sampled programs. Any programs released into the sample
initially or as a back-up will be properly accounted for in the weights and in the response rates.
As each sampled program is recruited into the study, we will obtain from them a list of all of
their centers and home visitors, along with characteristics such as number of classrooms (for
centers) and size of caseload and whether they provide services to pregnant women (for home
visitors). Based on a set of test samples, we expect 88 percent of programs to provide center-based
services and 67 percent to provide home-based services (with 55 percent providing both). This will
result in a sample of 140 programs that includes 123 programs offering center-based services and 94
programs offering home-based services. Not all programs will provide center-based services and not
all will provide home visiting services; we expect about 77 programs will provide both types of services.
We will sample centers and home visitors on a rolling basis; we describe this process in the
following several paragraphs.
To achieve desired precision of estimates at the center and classroom levels, we will sample
an average of 4 centers per program with center-based services. However, we expect that about
half of center-based programs have fewer than 4 centers, so we plan to sample more than 4
centers in other programs to achieve our target of 493. If a program has up to 4 centers, we will
include all centers in the sample. Otherwise, we will select 4 or more centers with PPS, selecting
centers with certainty if their size (number of classrooms per center) is large enough relative to
the other centers. Within centers we will sample on average 1.7 classrooms (for a total of 840
classrooms).
Similarly, we expect that about two-thirds of the 140 EHS programs in our sample (94
programs) will have home-based services, and we are aiming for 630 sampled home visitors in
all to achieve desired precision of estimates. To reach this target of 630 sampled home visitors,
we will need to sample an average of 6 to 7 home visitors per program. However, we expect that
about half of home-based programs have fewer than 6 to 7 home visitors, so we plan to sample
more than 6 to 7 in other programs to achieve our target of 630. If a program has up to 6 to 7
home visitors, we will include all in the sample. Otherwise, we will select 6 to 7 or more home
visitors with PPS, selecting them with certainty if their size (home visitor caseload) is large
enough relative to the other home visitors. Within the sample of home visitors in each program,
we will randomly subsample half of the selected home visitors from which to sample children.
We will sample children from each service type (center or home-based) by selecting 3 children
per sampled classroom and subsampled home visitor.
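The within-program selection rules above (take all centers or home visitors when a program has no more than the target number, otherwise select the target number with size-related probabilities) can be sketched as follows. The size-weighted draw shown here is a simplified stand-in for PPS selection with certainty units, and the field names are hypothetical.

```python
import random

def select_within_program(units, target, size_key):
    """Take-all when a program has no more than `target` units; otherwise draw
    `target` units without replacement with probability related to size
    (Efraimidis-Spirakis exponential keys), a simplified stand-in for the
    PPS-with-certainty selection described in the text."""
    if len(units) <= target:
        return list(units)
    keyed = sorted(units, key=lambda u: random.expovariate(1.0) / u[size_key])
    return keyed[:target]

# Hypothetical program with 9 centers, sized by number of classrooms.
centers = [{"center_id": i, "n_classrooms": random.randint(1, 6)} for i in range(9)]
sampled_centers = select_within_program(centers, target=4, size_key="n_classrooms")
```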
The specific procedures for sampling at levels below the program level are described here. A
few weeks before the first data collection visit, we will send a field enrollment specialist (FES)
to each selected center to obtain a list of all classrooms in each sampled center and the age range
of children in each classroom using the classroom/home visitor sampling form from EHS staff
(Attachment 1). The FES will enter this information into a laptop sampling program. If a center
has only one or two classrooms, the sampling program will include all classrooms in the sample;
otherwise, it will select a systematic sample of two classrooms, implicitly stratified by whether
the room is predominantly an infant or toddler classroom. A systematic sample is a sample in
which one selects every nth element from a list after a random starting point. We expect this
process will yield 840 center-based classrooms in the sample. Using the child roster form from
EHS staff (Attachment 2) the FES will then obtain classroom rosters for each of the two sampled
classrooms and enter that information into the laptop sampling program. The laptop sampling
program will then select a systematic sample of three children per classroom (implicitly
stratifying by date of birth); we expect that about two children per classroom will have parental
consent and complete the data collection instruments. If we happen to sample more than one
child from the same household in a given center, the laptop program will randomly subsample
one to remain in the study sample to minimize burden on the family.
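A minimal sketch of the equal-probability systematic selection described above, assuming a hypothetical roster structure: the roster is sorted by date of birth (the implicit stratifier) and every kth child is taken after a random fractional start.

```python
import random

def systematic_sample(units, n):
    """Equal-probability systematic sample of n units: every (len(units)/n)-th
    element after a random fractional starting point."""
    if len(units) <= n:
        return list(units)          # take-all when the roster is small
    skip = len(units) / n
    start = random.uniform(0, skip)
    return [units[int(start + k * skip)] for k in range(n)]

# Hypothetical classroom roster, sorted by date of birth (implicit stratification).
roster = sorted(
    [{"child_id": i, "dob": f"2016-{random.randint(1, 12):02d}-15"} for i in range(9)],
    key=lambda child: child["dob"],
)
sampled_children = systematic_sample(roster, n=3)
```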
Around the same time, the FES will visit each program office for those sampled programs
that provide home-based services. Using the classroom/home visitor sampling form from EHS
staff (Attachment 1), he or she will obtain from the program a list of children and pregnant
women to whom each subsampled home visitor provides services, along with their date of
enrollment and either their date of birth (children) or due date (women). After the FES enters this
information into the laptop sampling program, the program will select a systematic sample of
three children and/or pregnant women per subsampled home visitor (implicitly stratifying by
pregnant woman versus child, and by age within the child category). We expect that two children
and/or pregnant women per home visitor will have study consent and complete all or most data
collection instruments. If we happen to sample more than one child or pregnant woman from the
same household, the laptop program will randomly subsample one to remain in the sample to
minimize burden on the family.
Size of the sample and precision needed for key estimates
Sample size. After sampling, the FES (in conjunction with EHS program staff) will obtain
parental consent for all sampled children and from sampled pregnant women before the data
collection visit. After accounting for the sibling subsampling, lack of consent, and instrument
nonresponse, we expect to have 1,680 center-based children and 630 home-based children and
pregnant women in the sample, for a total of 2,310 across all 140 EHS programs (Table B.1). We
expect the sample will include 140 program directors (one per program), 493 center directors,
840 teachers (798 responding), and 630 home visitors (599 responding).


Table B.1. Expected sample counts for Baby FACES 2018

Sampling stage                                                      | All programs | Programs with only center-based services | Programs with both center- and home-based services | Programs with only home-based services
Programs: total                                                     | 140          | 46.2                                     | 77                                                  | 16.8
Home visitors: mean per program                                     | NA           | 0                                        | 6.7                                                 | 6.7
Home visitors: total                                                | 630          | 0                                        | 517                                                 | 113
Subsampled home visitors for child sampling: mean per program       | NA           | 0                                        | 3.4                                                 | 3.4
Subsampled home visitors for child sampling: total                  | 315          | 0                                        | 258.5                                               | 56.5
Centers: mean per program                                           | NA           | 4                                        | 4                                                   | 0
Centers: total                                                      | 493          | 185                                      | 308                                                 | 0
Classrooms: per center                                              | NA           | 1.7                                      | 1.7                                                 | 0
Classrooms: total                                                   | 840          | 315                                      | 525                                                 | 0
Participating home-based children/pregnant women: per home visitor  | NA           | 0                                        | 2                                                   | 2
Participating home-based children/pregnant women: total             | 630          | 0                                        | 517                                                 | 113
Participating center-based children: per classroom                  | NA           | 2                                        | 2                                                   | 0
Participating center-based children: total                          | 1,680        | 630                                      | 1,050                                               | 0

Precision needed for key estimates. Baby FACES 2018 has a complex, multistage
clustered sample design. Such a design has many advantages, but there is a cost in terms of the
precision of estimates. Clustering and unequal weighting increase the variance of estimates, and
this can be quantified in terms of the design effect.³ Table B.2 shows the precision of estimates
for each cross-section after accounting for expected design effects; Table B.3 shows the
minimum detectable effect sizes for comparing subgroups (with approximated subgroup sizes).
In the tables, we make the following assumptions. We assume a type I error rate of 0.05
(two-sided) and power of 0.80. For estimates shown in the tables, we assume a design effect due
to unequal weighting of 1.2, mostly due to nonresponse adjustment, assuming that our multistage
PPS sample results in fairly even cumulative sampling weights for these estimates. Based on
previous findings from similarly designed studies of Head Start (Aikens et al. 2012), we assume
the following intraclass correlation coefficients (ICC) to estimate the design effect due to
clustering:


•  For estimates of classroom quality
   -  ICC = 0.20 for between-program variation

•  For estimates of home visitors (or home visitors combined with classroom teachers)
   -  ICC = 0.20 for between-center within-program variation
   -  ICC = 0.20 for between-program variation

•  For estimates of home-based children (or home- and center-based children combined)
   -  ICC = 0.05 for between-program variation
   -  ICC = 0.05 for between-home visitor (or center) within-program variation

•  For estimates of center-based children
   -  ICC = 0.05 for between-program variation
   -  ICC = 0.05 for between-center within-program variation
   -  ICC = 0.05 for between-classroom within-center variation

³ The design effect is the ratio of the variance of the estimate (properly accounting for the impact of the sample design on the variance) to the variance of the estimate one would have obtained from a simple random sample of the same size. For example, a design effect of 1.5 means that the complex sample design inflated the variance of a particular estimate by 50 percent, effectively reducing the sample size by one-third.
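To illustrate how the ICC assumptions above combine with the unequal-weighting assumption, one common approximation multiplies the weighting design effect by a clustering design effect of 1 plus the sum over sampling stages of (m - 1) x ICC, where m is the average number of respondents per cluster at that stage; the effective sample size is the responding sample divided by the total design effect. The sketch below uses this approximation with round numbers taken from the tables; it lands near, but will not exactly reproduce, the effective sample sizes in Table B.2, which reflect the full multistage calculation.

```python
def effective_sample_size(n_respondents, deff_weighting, stages):
    """Approximate effective sample size under clustering and unequal weighting.
    `stages` is a list of (average cluster size, ICC) pairs; the clustering
    design effect is approximated as 1 + sum((m - 1) * ICC)."""
    deff_cluster = 1 + sum((m - 1) * icc for m, icc in stages)
    return n_respondents / (deff_weighting * deff_cluster)

# Home visitors: about 599 respondents clustered in roughly 94 programs
# (about 6.4 per program), between-program ICC = 0.20, weighting deff = 1.2.
n_eff_home_visitors = effective_sample_size(599, 1.2, [(599 / 94, 0.20)])
# Roughly 240 -- in the same range as the 232.7 reported in Table B.2.
```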

Table B.2. Precision of estimates and minimum detectable correlations

Group                                         | Sampled | Responding sample | Effective sample size | 95 percent confidence interval half-width for an outcome proportion of 0.50 | Minimum detectable correlation
Home visitors                                 | 630     | 599               | 232.7                 | .064                                                                         | .184
Teachers and home visitors                    | 1,470   | 1,397             | 416.4                 | .048                                                                         | .137
All children                                  | 3,465   | 2,310             | 1,030.6               | .031                                                                         | .087
Home-based children                           | 945     | 630               | 393.0                 | .049                                                                         | .141
Subsampled home visitors with study children  | 315     | 299               | 169.5                 | .075                                                                         | .215

In Table B.2, we can see that, under the Baby FACES 2018 sample design, we will be able
to make percentage-based estimates of home visitors within plus or minus 6.4 percentage points
with 95 percent confidence. For estimates of teachers plus home visitors, or of home-based
children, we will be able to make percentage-based estimates within plus or minus 4.8 or 4.9
percentage points. For all children, we will be able to make estimates within plus or minus 3.1
percentage points.
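The half-widths and minimum detectable correlations in Table B.2 follow from the effective sample sizes by standard formulas: the 95 percent half-width for a proportion of 0.50 is 1.96 x sqrt(0.25 / n_eff), and the minimum detectable correlation at 80 percent power and a two-sided alpha of .05 uses the Fisher z approximation (1.96 + 0.84) / sqrt(n_eff - 3). A brief check, assuming these are the formulas underlying the table (they reproduce its entries to within rounding):

```python
import math

def ci_half_width(n_eff, p=0.50, z=1.96):
    """Half-width of the 95 percent confidence interval for a proportion p."""
    return z * math.sqrt(p * (1 - p) / n_eff)

def min_detectable_correlation(n_eff, z_alpha=1.96, z_power=0.8416):
    """Smallest correlation detectable with 80 percent power at a two-sided
    alpha of .05, via the Fisher z approximation."""
    return math.tanh((z_alpha + z_power) / math.sqrt(n_eff - 3))

# Home visitors, using the effective sample size of 232.7 from Table B.2:
round(ci_half_width(232.7), 3)               # 0.064
round(min_detectable_correlation(232.7), 3)  # approximately 0.183-0.184
```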


Table B.3. Minimum detectable effect sizes (between subgroups)

Group                      | Subgroup 1 (proportion)               | Subgroup 2 (proportion)              | Minimum detectable effect
Home visitors              | More than 5 years of experience (.70) | 5 or fewer years of experience (.30) | .320
Teachers and home visitors | More than 5 years of experience (.70) | 5 or fewer years of experience (.30) | .229
All children               | Lower risk (.75)                      | High risk (.25)                      | .166
All children               | Not DLL (.60)                         | DLL (.40)                            | .152
Home-based children        | Lower risk (.75)                      | High risk (.25)                      | .291

Note:  Effect sizes are in standard deviation-sized units.
DLL = dual language learner.

In Table B.3, we show some examples of the precision of various subgroup comparisons.
For example, we can see that we will be able to detect underlying differences of .229 standard
deviations between teachers and home visitors with more than 5 years of experience and teachers
and home visitors with 5 or fewer years of experience, with 80 percent power. For all children,
we will be able to detect underlying differences of .166 standard deviations between lower risk
and higher risk children and underlying differences of .152 standard deviations between dual
language learner (DLL) and non-DLL children.
Weighting. The purpose of analysis weights is to enable us to compute unbiased estimates
based on sample survey responses from the study population. Weights take into account both the
probability of selection into the sample and differential response patterns that may exist in the
respondent sample. After data collection, we will construct weights at the program, center, home
visitor, classroom/teacher, and child levels. We will know the selection probabilities for each
stage of sampling from the original sample selection, and we will adjust them for any back-up
sample releases. The inverse of the selection probability is the sampling weight. The
nonresponse (nonparticipation) adjustments at each stage will attempt to mitigate the risk of
nonresponse bias by adjusting the sampling weights for each participant or respondent to account
for other similar sample members that did not participate or respond. We will do this through
weighting class adjustments. In this technique, we will be essentially using the inverse of the
response rate (or response propensity) to inflate the respondents’ sampling weights to account for
non-responding sample members with similar characteristics. Although this method is used to
reduce bias, it will also increase the design effect due to unequal weighting, over and above the
design effect from the complex sample design itself.
We will use the program weights as components of center- and home visitor-level weights,
and the center weights as components of classroom-level weights.
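A minimal sketch of the weighting-class adjustment described above, with hypothetical class definitions: within each class, respondents' sampling weights are inflated by the inverse of the class's weighted response rate so that respondents also represent similar nonrespondents.

```python
from collections import defaultdict

def weighting_class_adjustment(sample, class_key, weight_key="base_weight"):
    """Inflate each respondent's sampling weight by the inverse of the weighted
    response rate within its weighting class."""
    sampled_wt = defaultdict(float)
    responded_wt = defaultdict(float)
    for unit in sample:
        sampled_wt[unit[class_key]] += unit[weight_key]
        if unit["responded"]:
            responded_wt[unit[class_key]] += unit[weight_key]
    adjusted = []
    for unit in sample:
        if unit["responded"]:
            rate = responded_wt[unit[class_key]] / sampled_wt[unit[class_key]]
            adjusted.append({**unit, "analysis_weight": unit[weight_key] / rate})
    return adjusted

# Hypothetical example with weighting classes defined by program service type.
sample = [
    {"id": 1, "service_type": "center", "base_weight": 10.0, "responded": True},
    {"id": 2, "service_type": "center", "base_weight": 10.0, "responded": False},
    {"id": 3, "service_type": "home",   "base_weight": 12.0, "responded": True},
]
weights = weighting_class_adjustment(sample, class_key="service_type")
# Respondent 1's analysis weight becomes 20.0, so it also represents unit 2.
```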


Expected response rate
We expect response rates of 100 percent for the program and center director surveys,
80 percent for the parent survey and the Parent Child Report, and at least 95 percent for data
collection activities conducted during the visit week (see Section B3 for details on the basis for
these expected response rate estimates). Table B.4 provides expected response rates and expected
number of responses for each study instrument.
Table B.4. Expected response rates and number of responses

Data source                                               | Number of consented sample members                    | Expected response rate (percentage) | Expected number of responses
1. Parent survey                                          | 2,887                                                 | 80                                  | 2,310
2. Parent Child Report                                    | 2,887                                                 | 80                                  | 2,310
3. Staff survey (Teacher survey and Home Visitor survey)  | 1,470 (840 classroom teachers and 630 home visitors)  | 95                                  | 1,397
4. Staff Child Report                                     | 1,155                                                 | 95                                  | 2,743 (each of the 1,097 respondents will report on 2.5 children on average)
5. Program director survey                                | 140                                                   | 100                                 | 140
6. Center director survey                                 | 493                                                   | 100                                 | 493

Note:  We have assumed that 33 percent of the programs have centers only, 12 percent have home visiting only,
and 55 percent have both centers and home visitors. We will be selecting an average of 4 centers per
program and 2 classrooms per center. This yields a total of 840 classrooms. For home visitors, we will
select an average of 6.7 home visitors per program, for a total of 630, but will subsample 3 per program for
sampling families.

Expected item nonresponse rate for critical questions
This data collection does not contain any especially critical questions that would require
follow-up if missing. Furthermore, based on our experience with the previous round of Baby
FACES, we expect a very low item nonresponse rate (5 percent or less) in general. Although
some of the more sensitive questions, such as those concerning race, income, and depression,
may garner higher item nonresponse, none of these is critical.
B2. Procedures for Collection of Information

Baby FACES 2018 will collect data from EHS parents and several sources among EHS staff
(program directors, on-site coordinators, center directors, teachers, and home visitors). Many
data collection features are the same or build on procedures that proved successful for Baby
FACES 2009, with enhancements to increase efficiency and lower costs. We will introduce the
role of the Field Enrollment Specialist (FES), a field staff member who will conduct sampling of
home visitors, classrooms, and children/families on site. All instruments were pretested and
revised as necessary to ensure that items were behaving as expected and were of the targeted
length. Table B.5 lists the instruments, sample size, and data collection mode. We will use
computer-assisted telephone interviews for parent surveys, as these have been successful in the
past, and studies of similar populations have observed comparatively low proportions of
responses to web-based surveys. We will similarly provide a paper copy of the Parent Child
Report to parents by mail ahead of the data collection visit and will follow up with parents who
do not respond by the data collection week. We will conduct surveys with teachers and home
visitors in person, as this method resulted in high response rates in Baby FACES 2009 and is
efficient because field staff are already on site. To make it more convenient for staff to complete
forms on multiple study children, we will create a web-based version of the Staff Child Reports,
with an option for a paper-and-pencil version if preferred. Finally, we will create web-based
survey instruments for the program and center directors because we expect they will complete
the surveys before the visit (while still allowing for the possibility of in-person follow-up by
field staff for program and center directors who do not complete the web version). Appendix G
shows screen shots of the web surveys.
Program recruitment will begin upon receipt of OMB clearance, expected by October 1,
2017. Field data collection will last 14 weeks, beginning in February 2018. A member of the
study team, in conjunction with the EHS program’s on-site coordinator (a designated EHS
program staff member who will work with the study team to recruit teachers and families and
help schedule site visits), will schedule the data collection week based on the program’s
availability. The study team will schedule an average of 11 site visits each week. Site visits to
programs with centers will average four days, whereas visits to home visiting-only programs will
average two days.
Below, we outline the procedures for each of the data collection instruments (and anticipated
marginal response rates). The instruments used in Baby FACES 2018 reflect the conceptual
framework and research questions for this round (see Section A2 in Supporting Statement Part
A). We drew some items and measures from Baby FACES 2009 and other similar surveys; we
also developed some new items and measures when needed. The survey instruments and forms
(Attachments 1-9) are annotated to identify sources of questions from existing studies as well as
questions we developed for this study. The supplemental materials—advance letters, invitations,
and reminders—are similar to those used in previous rounds but have been modified based on
changes to the study design. We include these materials in the format respondents will receive
them in Appendix E. We will also use a brochure (Appendix F) explaining the study to recruit
programs and inform participants. The current information collection request covers spring 2018
instruments only.


Table B.5. Instruments and data collection mode

Instrument                                                | Respondents | Data collection mode
1. Classroom/home visitor sampling form (from EHS staff)  | 587         | Computer-assisted data entry
2. Child roster form (from EHS staff)                     | 587         | Computer-assisted data entry
3. Parent consent form                                    | 2,887       | Paper and pencil
4. Parent survey                                          | 2,310       | Computer-assisted telephone survey
5. Parent Child Report                                    | 2,310       | Paper and pencil
6. Staff survey (Teacher survey and Home Visitor survey)  | 1,397       | In person
7. Staff Child Report                                     | 2,743ᵃ      | Web-based or paper and pencil
8. Program director survey                                | 140         | Web-based
9. Center director survey                                 | 493         | Web-based

Note:  The table assumes total numbers of respondents, taking into account expected response rates.
ᵃ Refers to the actual number of Staff Child Reports. Each of 1,097 responding staff will complete 2.5 reports on
average.



Classroom/home visitor sampling form (from EHS staff; Attachment 1). The processes for
selecting home visitors and classrooms are similar; however, home visitors are selected at
the program level and classrooms at the center level. To select home visitors, the FES will
request a list of all EHS-funded home visitors (if applicable) from EHS staff (typically the
on-site coordinator) upon arrival at the program. EHS staff may provide this information in
the format most appropriate to the program’s record keeping system. The FES will
separately enter home visitor information into a tablet computer. The web-based sampling
program will draw a sample of home visitors as described in Section B1, including a sample
of six to seven home visitors per program to complete the staff survey (Attachment 6b) and,
from within that group, a sub-sample of three per program from whom we will sample
children and/or pregnant women on their caseloads (described below). To select classrooms,
upon arrival at a Head Start center, the FES will request a list of all EHS-funded classrooms
from EHS staff. A web-based sampling program will then draw a sample of classrooms (for
programs offering the center-based option) as described in Section B1.



Child roster form (from EHS staff; Attachment 2). For each selected classroom, EHS staff
(typically the on-site coordinator) will provide the names and dates of birth for all EHS
enrolled children. Likewise, for each of the home visitors in the selected subsample
(described above), EHS staff will provide the names and dates of birth (or due dates) of each
child/pregnant woman in the home visitors’ case load. EHS staff may provide this
information in the format most appropriate to the program’s record keeping system. The
FES will use a tablet computer to enter this information into the web-based sampling
program. The sampling program will select a systematic sample of three children/pregnant
women per classroom and per home visitor (implicitly stratified by date of birth to ensure
we include a wide range of ages in the sample). If any of the sampled children are siblings or
otherwise from the same household, we will use the program to randomly select one of them
to remain in the sample.




Parent consent form (Attachment 3). After sampling, the FES (in conjunction with the on-site coordinator) will attempt to obtain parental consent for all sampled children and consent
from sampled pregnant women before the data collection visit. We expect about 83 percent
of parents will consent to the study. The consent form will be available in English and
Spanish.



Parent survey (Attachment 4). On average, we expect the parent survey will last 30
minutes. Parents will complete the computer-assisted telephone interview with a trained
interviewer. Trained telephone interviewers will administer the parent survey after obtaining
consent, about one or two weeks before the on-site data collection week. Data collection for
the parent survey will continue throughout the data collection period until we achieve a
response rate of 80 percent. Overall, after accounting for both lack of consent and nonresponse, we expect to have two consented, responding children per classroom and two
consented, responding children or pregnant women per home visitor. Respondents may
choose to complete the survey in English or Spanish.



Parent Child Report (Attachment 5). On average, each Parent Child Report will take 15
minutes to complete. We will mail hard copies of the instrument to parents one week before
the scheduled visit to their EHS program. During the data collection week, on-site data
collectors will collect completed instruments and distribute extra copies as necessary. They
will also remind parents (either in person or on the phone) to complete the instrument and
return it (to the on-site data collectors). The Parent Child Report will be available in both
English and Spanish and the version mailed to the parents will be based on the language
indicated on the signed consent form. In addition, there are four versions of the Parent Child
Report based on the age of the child (younger than 8 months, 8 to 16 months, 17 to 30
months, and 31 months and older). We will mail the correct age version based on the date of
birth of the child. We anticipate a response rate of 80 percent.



Staff (Teacher/Home Visitor) survey (Attachments 6a and 6b). We expect each staff
survey will take approximately 30 minutes to complete. Trained data collectors will conduct
the Teacher and Home Visitor surveys in person using paper and pencil during the on-site
data collection week. We will administer the staff survey by telephone interview to any staff
who could not complete the surveys in-person with our data collectors. We anticipate a
response rate of 95 percent for these surveys.



Staff Child Report (Attachments 7a and 7b). We will ask teachers and home visitors to
complete a Staff Child Report for each consented child/pregnant woman in their classroom
or caseload. Hard-copy forms, along with instructions for staff to complete the web version
of the forms will be distributed during the on-site data collection week. We expect that each
Staff Child Report will take 15 minutes to complete. All Staff Child Reports will be in
English. We will offer four versions of the form based on the age of the child (younger than
8 months, 8 to 16 months, 17 to 30 months, and 31 months and older). We will offer a fifth
version for home visitors to complete about sampled pregnant women on their caseloads.
We anticipate a 95 percent response rate for the Staff Child Reports.



Program director survey (Attachment 8). We expect that each program director survey
will take 30 minutes to complete. One week before the scheduled visit to their EHS
program, we will send program directors a link for completing the survey online. During the
on-site data collection week, trained data collectors will follow up in person with program
directors who have not completed the survey online and administer the survey using a paper
form. We anticipate a 100 percent response rate for program director surveys, with 50
percent completed by web and 50 percent completed in person.


Center director survey (Attachment 9). We expect that each center director survey will
take 20 minutes to complete. One week before the scheduled visit to their EHS program, we
will send center directors a link for completing the survey online. During the on-site data
collection week, trained data collectors will follow up in person with center directors who
have not completed the survey online and administer the survey using a paper form. We
anticipate a 100 percent response rate for center director surveys, with 50 percent completed
by web and 50 percent in person.

B3. Methods to Maximize Response Rates and Deal with Nonresponse

Expected Response Rates
As described in Sections B1 and B2, we expect high response rates for all respondents: 100
percent for the program and center director surveys, 80 percent for the parent survey and the
Parent Child Reports, and at least 95 percent for the staff surveys, Staff Child Reports, and
classroom observations conducted during the visit week (see Table B.4). These expected
response rates are based on those achieved in prior rounds of Baby FACES 2009 data collection,
with a similar population, using similar modes and incentive levels (Vogel et al. 2015).
Unweighted response rates for each wave of Baby FACES 2009 by cohort are shown in Table
B.6.
Table B.6. Baby FACES 2009 annual response rates, by cohort

Entries are percentages, with the number completed in parentheses. Within each year, values are shown for the
Newborn cohort / 1-year-old cohort / Both cohorts; in 2012 only the Newborn cohort remained in the study. Where a
single value is shown for a year, the rate was reported for both cohorts combined.

Instrument (mode)                                 | 2009                                 | 2010                                 | 2011                                 | 2012
Program director interview (telephone interview)  | 100 (89)                             | 100 (89)                             | 100 (89)                             | Not administered
Parent Interview (telephone interview)            | 90.2 (175) / 91.9 (719) / 91.6 (894) | 80.0 (108) / 79.1 (475) / 79.3 (583) | 87.5 (84) / 77.9 (361) / 79.6 (445)  | 72.6 (61)
Parent SAQ (paper and pencil)                      | --                                   | -- / 89.5 (537) / --                 | 90.6 (87) / 85.1 (394) / 86.0 (481)  | 83.3 (70)
Teacher Interview (in-person interview)            | 93.1 (229)                           | 98.9 (267)                           | 98.7 (232)                           | 100 (44)
Home Visitor Interview (in-person interview)       | 96.7 (323)                           | 97.0 (225)                           | 99.4 (174)                           | 100 (29)
Staff Child Report (paper and pencil)              | 95.3 (185) / 98.1 (748) / 95.5 (933) | 94.8 (128) / 95.8 (575) / 95.6 (703) | 96.6 (93) / 96.1 (445) / 96.2 (538)  | 97.6 (82)
CLASS-T (observation)                              | --                                   | -- / 98.7 (220) / --                 | 99.1 (231)                           | 95.5 (42)

Note:  Center directors were not part of Baby FACES 2009.
SAQ = Self-Administered Questionnaire; CLASS-T = Classroom Assessment Scoring System, Toddler version.


These response rates are at or above those that OMB recommends to minimize nonresponse
bias, and we believe they will be more than adequate to address the research questions.
Dealing with Nonresponse and Nonresponse Bias
On most survey instruments, past experience in Baby FACES 2009 suggests we can expect
very high response rates (particularly for those from EHS staff—program and center directors
and teachers and home visitors) and very low item nonresponse. Because response rates in the
past were high and the risk of nonresponse bias was therefore low, we did not conduct a nonresponse bias analysis;
however, the weights did take into account any differential response patterns across children by
program type (based on service type and program size). Adjusting weights by other
characteristics was limited by the lack of child-level data elements from the sample frame
available for both responding and nonresponding sample members. Our currently proposed
approach to weighting is similar to what we have used in the past and, with some enhancements
in survey operations, we believe we will maintain or increase response rates and limit differential
response. In particular, we plan to implement web versions of several surveys, which will make
completing them easier for respondents. We will use in-person follow-up when data collection
staff are on site for those who have not responded or who have not completed their surveys.
Finally, we will provide $250 to thank programs for participating in the study. This is to
encourage participation across the program, centers, and staff, and the program director can use
the $250 to support the program at his or her discretion. We expect that high participation rates
and weighting procedures will reduce the risk for nonresponse bias.
Parent surveys and Parent Child Reports. We have attempted to make the parent survey
and Parent Child Report even easier to complete than in Baby FACES 2009 by making them
considerably shorter for Baby FACES 2018. The parent survey in Baby FACES 2009 was 30
minutes long with a 27-minute self-administered questionnaire (the analogue to the Parent Child
Report) that parents completed during a one-hour in-home child assessment. Now, the Parent
Child Report is 15 minutes long and there is no in-home assessment component. We are
proposing to provide gifts of appreciation linked to completing each piece ($20 for the survey
and $5 for the Parent Child Report); we believe this is necessary to achieve desired response
rates and reduce differential response by respondent characteristics that may introduce bias (see
Table A.3 and discussion of literature that finds small incentives decrease nonresponse bias in
Supporting Statement Part A). We achieved high parent response rates in the longitudinal follow-up in Baby FACES 2009, even with longer instruments, using a similar incentive structure (see
Table B.6), and we anticipate similar response rates in Baby FACES 2018.
Staff surveys (Teacher surveys and Home Visitor surveys). We plan to employ
procedures that are similar to those from Baby FACES 2009 that resulted in response rates of 93
percent or higher (in some cases 100 percent; see Tables A.3 and B.6). We believe that these
very high rates of response reduce the potential for nonresponse bias. On-site data collection
staff will arrange for times to administer the surveys in person during the data collection week.
In the past, this resulted in high response rates and low item nonresponse. To further encourage
participation, we propose to give a children’s book (approximately $10 value) to each teacher or
home visitor who completes a survey.


Staff Child Reports. These are brief reports about the children and families who are part of
the study and served by each staff member. We have implemented a web option in Baby FACES
2018 which we believe will facilitate completion by staff. We will also use in-person follow-up
to collect these reports while data collection staff are on site. In Baby FACES 2009 we achieved
response rates of 95 percent or higher using in-person follow up and a small gift for each
completed Staff Child Report ($5), which we believe is important to continue in this round to
maintain high response rates.
Program director surveys. In Baby FACES 2009 we achieved 100 percent response to
program director surveys, and thus no nonresponse bias. We hope to continue achieving these
high response rates and are planning to implement a web version for the first time. The web
version will make it easier for respondents to complete the survey at their convenience. We have
also shortened the surveys for program directors in Baby FACES 2018 compared to the past,
which we believe will also help reduce nonresponse and risk of bias. We propose providing $250
to programs⁴ to reflect our appreciation for the program’s overall participation in the study (e.g.,
helping to coordinate visits by field staff and to obtain parental consent). We also believe it will
help to establish a strong relationship with the programs and encourage their participation in
2020.
Center director surveys. Center director surveys are new in Baby FACES 2018; we expect
that their availability on the web will produce high response rates, similar to those for the
program director survey.
Nonresponse weights. As described in Section A16 of Supporting Statement Part A, as well
as Section B1 above, we will produce analysis weights for surveys and other data collection
activities that account for selection probabilities and differential nonresponse patterns, even
when response rates are high. We will construct these weights in a way that will mitigate the risk
for nonresponse bias (using the limited number of data elements that we have for both
responding and nonresponding sample members, most likely program-level characteristics).
Should response rates fall below 80 percent, we will conduct a nonresponse bias analysis, in
accordance with OMB guidelines.
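If such an analysis is needed, one standard check, sketched below with hypothetical frame variables, compares base-weighted means of frame characteristics for respondents against the full eligible sample; large gaps flag characteristics on which nonresponse may bias estimates.

```python
def weighted_mean(units, var, weight_key):
    total = sum(u[weight_key] for u in units)
    return sum(u[var] * u[weight_key] for u in units) / total

def nonresponse_bias_check(sample, frame_vars, weight_key="base_weight"):
    """Compare base-weighted means of frame characteristics for respondents
    versus all eligible sample members."""
    respondents = [u for u in sample if u["responded"]]
    return {var: {"full_sample": weighted_mean(sample, var, weight_key),
                  "respondents": weighted_mean(respondents, var, weight_key)}
            for var in frame_vars}

# Hypothetical frame variables known for both respondents and nonrespondents.
members = [
    {"responded": True,  "base_weight": 8.0, "urban": 1, "ehs_ccp_grantee": 0},
    {"responded": False, "base_weight": 8.0, "urban": 0, "ehs_ccp_grantee": 1},
    {"responded": True,  "base_weight": 6.0, "urban": 1, "ehs_ccp_grantee": 0},
]
report = nonresponse_bias_check(members, ["urban", "ehs_ccp_grantee"])
```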
Maximizing Response Rates
Past research studies of EHS and similar programs have demonstrated an established,
successful record of gaining program cooperation and obtaining high response rates with EHS
staff and parents. To achieve high response rates, we will continue to use the procedures that
worked well on Baby FACES 2009, such as multimodal approaches (often with in-person
follow-up), email as well as hard-copy reminders, and gifts of appreciation.
These approaches, most of which we have used in prior rounds of Baby FACES (and
FACES), will help ensure a high level of participation. We expect that, using these approaches,
we can achieve response rates of 80 percent or higher for each data collection activity. We
discussed expected response rates for each activity in Section B1 and listed response rates for
similar activities in Baby FACES 2009 (see Table B.6).

⁴ The $250 will be provided to program directors for the benefit of the program (not for the directors’ personal use).


Obtaining the high response rate we expect to attain makes the possibility of nonresponse
bias less likely, which in turn makes our conclusions more generalizable to the EHS population.
We will calculate both unweighted and weighted, as well as marginal and cumulative, response
rates at each stage of sampling and data collection. Following the American Association for
Public Opinion Research (AAPOR) industry standard for calculating response rates, the
numerator of each response rate will include the number of eligible completed cases.
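As a simple illustration of the marginal and cumulative rates we will report, the cumulative response rate at any stage is the product of the marginal rates (eligible completes over eligible cases) at that stage and all earlier stages. The figures below are drawn from the expected counts discussed earlier and are illustrative only.

```python
def marginal_rate(completes, eligible):
    """Marginal response rate for one stage: eligible completes / eligible cases."""
    return completes / eligible

def cumulative_rate(rates):
    """Cumulative response rate: product of the marginal rates across stages."""
    product = 1.0
    for rate in rates:
        product *= rate
    return product

# Illustration: ~94 percent program participation, ~83 percent parent consent,
# 80 percent completion of the parent survey among consented parents.
stage_rates = [
    marginal_rate(132, 140),       # programs
    marginal_rate(2887, 3465),     # consent among sampled children
    marginal_rate(2310, 2887),     # parent survey completion
]
overall_rate = cumulative_rate(stage_rates)   # about 0.63
```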
To maximize response rates for this information collection, we will take the following steps:


Recruiting programs and centers. After sampling, we will contact programs and provide a
full-color brochure and brief study description to introduce them to the study (Appendix E).
We will use the same approach used in Baby FACES 2009 by assigning a “Baby FACES
liaison” to recruit and be the point of contact for each program. The Baby FACES liaison
will be a specific member of the Mathematica study team. This process worked well, and
Baby FACES 2009 achieved a 94 percent program consent rate. The Baby FACES liaison
will work with the program director to identify an on-site coordinator to assist with
contacting selected centers and informing them of the study, obtaining enrollment lists, and
scheduling the on-site data collection visit. Two to three weeks before the on-site visit, the
Baby FACES liaison will also send center directors a letter informing them of the study.



Advance notification for the web-based and self-administered surveys. Program and
center directors will receive an advance email notification inviting them to take part in the
study (see materials in Appendix E). The advance email includes a brief overview of the
study purpose, a description of the data collection activity in which we are asking directors
to participate, and an estimate of the amount of time required to complete the activity. It will
also include information needed to complete the web-survey (such as log-in credentials).
Respondents will also receive a number they can call should they have any questions about
their participation in the study. Approximately one week before the scheduled on-site data
collection week, we will send teachers, home visitors, and parents the Staff Child Reports
and Parent Child Reports, respectively. The advance notification to teachers and home
visitors will include instructions to access the Staff Child Reports online. We will request
that participants complete these before the on-site visit week and give any completed hard
copies to the study team during the on-site visit.



Reminder notifications. Over the course of the data collection period, we will send up to
six email reminders to program and center directors who are invited to complete the survey;
we will also make up to two reminder calls to nonresponders. We will make the first call one
week before our scheduled visit, asking directors if we could conduct the survey as an in-person interview during our on-site data collection week. We will make a second reminder
call after the on-site data collection week to anyone who did not complete the survey either
online or in person. We will also make reminder calls regarding the parent surveys. Finally,
on-site data collectors will remind staff and parents to complete the surveys as well as Staff
Child Reports or Parent Child Reports and will have additional copies on hand.




Trained and experienced data collection staff. The Baby FACES liaison assigned to the
program will conduct reminder calls to program and center directors. All liaisons will be
trained members of the study team, and many have significant experience from similar
studies. All staff assigned to the study will participate in extensive project-specific training
to ensure they are ready to respond effectively to respondents’ questions and conduct the
survey interview by phone if requested. The training will also focus on developing skills for
securing respondents’ cooperation as well as averting and converting refusals.



Flexibility in language of administration. Spanish versions of the parent survey and the
Parent Child Report will be available to Spanish-speaking respondents. During telephone
contact, interviewers will identify Spanish-speaking respondents and connect them to speak
with a certified Spanish-language interviewer. Mathematica employs staff who have
experience conducting interviews in Spanish.



Incorporating in-person administration into the study design. We expect to administer
up to approximately half of the web-based program and center director surveys in person
during our on-site visit week. For the Staff Child Reports, we will distribute and collect the
hard-copy self-administered versions of the surveys to any teacher or home visitor who has
not completed the survey online.



Gifts of appreciation. As described in Section A9 of Supporting Statement Part A, we plan
to offer respondents a gift of appreciation for responding to several data collection activities.

B4. Tests of Procedures or Methods to be Undertaken

Many of the scales and items in the proposed parent survey, staff survey, Staff Child Report,
Parent Child Report, and program director survey have been successfully administered in Baby
FACES 2009. Measures new to Baby FACES 2018, including those in the program director and
center director surveys, were selected in part because they had been validated and shown to have
good psychometric properties with populations similar to the Baby FACES sample. The study
team has also developed new items for measuring constructs for which existing measures are not
currently available. These items have drawn ideas for phrasing and language from prior research
on EHS and child care. The survey instruments and forms (Attachments 1-9) are annotated to
identify sources of questions from existing studies as well as questions we developed for this
study. In addition, in winter 2017, we conducted pretests with parents, teachers, home visitors,
program directors, and center directors using a using a variety of modes: in-person, on the
telephone, as well as self-administered. We conducted a debrief after each pretest to (1) ensure
that questions were understandable, used language familiar to respondents, and were consistent
with the concepts they aimed to measure; (2) identify typical instrumentation problems such as
question wording and incomplete or inappropriate response categories; (3) measure the response
burden; and (4) confirm there were no unforeseen difficulties in administering the instrument.
Instruments were revised as needed after the pretests. Table B.7 provides the type of respondent,
number of pretests conducted, and the mode of the pretest. The same question was not asked of
more than 9 people.


Table B.7. Baby FACES 2018 pretests

Instrument                                                 | Pretests completed | Mode
1. Classroom/home visitor sampling form (from EHS staff)   | NA                 | Not applicable
2. Child roster form (from EHS staff)                      | NA                 | Not applicable
3. Parent survey                                           | 3                  | Telephone
4. Parent Child Report                                     | 5                  | Self-administered
5. Staff survey (Teacher)                                  | 3, 1               | In person, Telephone
6. Staff survey (Home Visitor)                             | 4                  | Telephone
7. Staff Child Report (Teacher)                            | 4                  | Self-administered
8. Staff Child Report (Home Visitor)                       | 4                  | Self-administered
9. Program director survey                                 | 4, 2               | Telephone, Self-administered
10. Center director survey                                 | 4                  | Telephone

B5. Individual(s) Consulted on Statistical Aspects and Individuals Collecting
and/or Analyzing Data

Mathematica Policy Research and consultants Dr. Margaret Burchinal of the Frank Porter
Graham Child Development Center at the University of North Carolina-Chapel Hill, Dr. Jon
Korfmacher of the Erikson Institute, and Dr. Virginia Marchman of Stanford University are
conducting this project under contract number HHSP233201500035I. Mathematica developed
the plans for statistical analyses for this study. To complement the study team’s knowledge and
experience, we also consulted with a technical working group of outside experts, as described in
Section A8 of Supporting Statement Part A.
The following individuals at ACF and Mathematica are leading the study team:
Amy Madigan, Ph.D.
Project Officer
Office of Planning, Research and Evaluation
Nina Hetzner, Ph.D.
Contract Social Science Research Analyst
Business Strategy Consultants
Cheri Vogel, Ph.D.
Project Director
Mathematica Policy Research
Yange Xue, Ph.D.
Co-principal Investigator
Mathematica Policy Research
Harshini Shah, Ph.D.
Deputy Survey Director
Mathematica Policy Research
Barbara Carlson, M.A.
Senior Statistician
Mathematica Policy Research

Amanda Clincy, Ph.D.
Social Science Research Analyst
Office of Planning, Research and Evaluation
Jenessa Malin, Ph.D.
SRCD Policy Fellow
Office of Planning, Research and Evaluation
Kimberly Boller, Ph.D.
Co-principal Investigator
Mathematica Policy Research
Laura Kalb, B.A.
Survey Director
Mathematica Policy Research
Eileen Bandel, Ph.D.
Measurement Task Lead
Mathematica Policy Research


REFERENCES

Aikens, N., Moiduddin, E., Xue, Y., Tarullo, L., and West, J. (2012). Data Tables for Child
Outcomes and Classroom Quality in FACES 2009 Report. OPRE Report 2011-37b.
Washington, DC: Office of Planning, Research and Evaluation, Administration for Children
and Families, U.S. Department of Health and Human Services.

Chromy, J. R. (1979). Sequential sample selection methods. Proceedings of the American
Statistical Association, Survey Research Methods Section, 401–406.

Vogel, C. A., Caronongan, P., Xue, Y., Thomas, J., Bandel, E., Aikens, N., Boller, K., and
Murphy, L. (2015). Toddlers in Early Head Start: A Portrait of 3-Year-Olds, Their Families,
and the Programs Serving Them: Technical Appendices. OPRE Report No. 2015-28.
Washington, DC: Office of Planning, Research and Evaluation, Administration for Children
and Families, U.S. Department of Health and Human Services.
