Head Start Family and Child Experiences Survey (FACES 2014–2018) OMB Supporting Statement for Data Collection

Part A: Justification

May 7, 2014

Update December 2016

Updated March 2017



CONTENTS

A. JUSTIFICATION

A.1. Circumstances Making the Information Collection Necessary

A.2. Purpose and Use of the Information Collection

A.3. Use of Improved Information Technology

A.4. Efforts to Identify Duplication and Use of Similar Information

A.5. Impact on Small Businesses or Other Small Entities

A.6. Consequences of Not Collecting Information or Collecting Information Less Frequently

A.7. Special Circumstances Relating to the Guidelines of 5 CFR 1320.5

A.8. Comments in Response to the Federal Register Notice and Efforts to Consult Outside the Agency

A.9. Explanation of Any Gift to Respondents

A.10. Assurance of Privacy Provided to Respondents

A.11. Justification for Sensitive Questions

A.12. Estimates of Annualized Burden Hours and Costs

A.13. Estimates of Other Total Cost Burden to Respondents and Record Keepers

A.14. Cost to the Federal Government

A.15. Explanation for Program Changes or Adjustments

A.16. Plans for Tabulation and Publication and Project Time Schedule

A.17. Reason(s) Display of OMB Expiration Date is Inappropriate

A.18. Exceptions to Certification for Paperwork Reduction Act Submissions

REFERENCES



APPENDICES

Appendix A: Authorizing Statutes

Appendix B: Conceptual Models

Appendix C: Study Introduction Materials

Appendix D: Classroom Observation

Appendix E: OMB History

Appendix F: OMB Public Comments

Appendix G: Confidentiality Pledge

Appendix H: Advance Materials

Appendix I: Screenshots

Appendix J: Spring 2015 Advance Materials

Appendix K: AI/AN FACES Fall 2015 Advance Materials

Appendix L: Comparison of Core FACES and AI/AN FACES Instruments

Appendix M: FACES 2014 Parent Experiment Results Memo

Appendix N: AI/AN FACES Spring 2016 Advance Materials

Appendix O: AI/AN FACES Confidentiality Agreement

Appendix P: Spring 2017 Advance Materials

Appendix Q: Spring 2017 Program Information Package



TABLES

A.1 FACES 2014–2018 Core Instruments, Sample Size, Type of Administration, and Periodicity

A.2 FACES 2014–2018 Family Engagement Expert Panel Members

A.3 AI/AN FACES Workgroup Members

A.4 FACES 2014–2018 Previously Approved Token of Appreciation Structure Compared to Structure of Prior Rounds

A.5.a Approved Estimated Information Gathering Annual Response Burden and Approved Information Gathering Annual Cost-Completed

A.5.b Approved Estimated Information Gathering Annual Response Burden and Approved Information Gathering Annual Cost-Ongoing

A.6 Final Response Rates for Fall 2014 Approved Information Requests

A.7 Final Response Rates for Spring 2015 Approved Information Requests

A.8 Final Response Rates for Fall 2015 AI/AN FACES Approved Information Requests

A.9 Final Response Rates for Spring 2016 AI/AN FACES Approved Information Requests

A.10 Estimated Current Annual Response Burden and Current Annual Cost

A.11 Estimated Future Annual Response Burden and Future Annual Cost

FIGURES

A.1 FACES 2014–2018 Study Structure



ATTACHMENTS

ATTACHMENT 1: Classroom Sampling Form from Head Start Staff

ATTACHMENT 2: Child Roster Form from Head Start Staff

ATTACHMENT 3: Head Start Core Child Assessment

ATTACHMENT 4: Head Start Core Parent Survey

ATTACHMENT 5: Head Start Fall Supplemental Parent Survey

ATTACHMENT 6: Head Start Core Teacher Child Report

ATTACHMENT 7: Head Start Spring Supplement Parent Survey

ATTACHMENT 8: Head Start Core Teacher Survey (Revised Spring 2017)

ATTACHMENT 9: Head Start Core Program Director Survey (Revised Spring 2017)

ATTACHMENT 10: Head Start Core Center Director Survey (Revised Spring 2017)

ATTACHMENT 11: Head Start Parent Qualitative Interview (Family Engagement)

ATTACHMENT 12: Head Start Staff Qualitative Interview (FSS Engagement)

ATTACHMENT 13: Head Start Staff (FSS) Roster Form

ATTACHMENT 14: Early Care and Education Providers Survey for Plus Study (5E-Early Ed Pilot)

ATTACHMENT 15: Early Care and Education Providers Survey for Plus Study (FPTRQ)

ATTACHMENT 16: Head Start Child Assessment for Plus Study (AI/AN FACES)

ATTACHMENT 17: Head Start Parent Survey for Plus Study (AI/AN FACES)

ATTACHMENT 18: Head Start Teacher Child Report for Plus Study (AI/AN FACES)

ATTACHMENT 19: Head Start Core Parent Survey for Plus Study (AI/AN FACES Spring 2016)

ATTACHMENT 20: Head Start Core Teacher Survey for Plus Study (AI/AN FACES)

ATTACHMENT 21: Head Start Program Director Core Survey for Plus Study (AI/AN FACES)

ATTACHMENT 22: Head Start Center Director Core Survey for Plus Study (AI/AN FACES)

ATTACHMENT 23: Early Care and Education Administrator Survey for Plus Study (Head Start Program Performance Standards)

ATTACHMENT 24: Early Care and Education Providers Survey for Plus Study (5E-Early Ed)

ATTACHMENT 25: Telephone Script for Program Directors (Revised Spring 2017)

ATTACHMENT 26: Telephone Script for On-Site Coordinators (Revised Spring 2017)

ATTACHMENT 27: OPRE Privacy Impact Assessment

A. JUSTIFICATION

The Office of Planning, Research and Evaluation (OPRE), Administration for Children and Families (ACF), U.S. Department of Health and Human Services (HHS), is collecting data for the Head Start Family and Child Experiences Survey (FACES). FACES 2014–2018 features a new “Core Plus” study design. Through this design, FACES will provide data on a set of key indicators more rapidly and with greater frequency than in past rounds of FACES (Core studies), and will allow for studying more complex issues and topics in greater detail and with increased efficiency (Plus studies). The overall design of the FACES 2014–2018 Core and the procedures that are used to select and recruit the sample and conduct data collection are, for the most part, similar to those used in FACES 2009 (OMB number 0970-0151).

The proposed FACES design includes multiple components as noted above, and therefore will involve multiple information collection requests. The current information collection request is for spring 2017 Core and Plus Study data collection, including surveys with teachers, program directors, and center directors.

Previously approved information collection requests for FACES 2014-2018 include the following:

  • Sampling plans for Head Start programs, centers, classrooms, and children, as well as the procedures for recruiting programs and selecting centers (approved April 7, 2014).

  • Fall 2014 data collection activities, including selecting classrooms and children for the study, conducting child assessments and parent interviews, and obtaining Head Start teacher reports on children’s development (approved July 7, 2014).1

  • Spring 2015 core data collection activities that included selecting classrooms in additional Head Start programs; conducting classroom observations; surveying teachers, center directors, and program directors; and interviewing parents and staff for FACES Plus studies (approved February 20, 2015).

  • Fall 2015 AI/AN FACES Plus Study data collection activities that included selecting Head Start classrooms and children for the study2, conducting child assessments and parent surveys, and obtaining Head Start teacher reports on children’s development (approved August 7, 2015).3

  • Spring 2016 AI/AN FACES Plus Study data collection activities that included conducting classroom observations; surveying teachers, center directors, and program directors; and surveying parents (Approved March 2, 2016).

A.1. Circumstances Making the Information Collection Necessary

a. Background

ACF has contracted with Mathematica Policy Research (Mathematica) and its subcontractors, Juárez and Associates and Educational Testing Service, under contract number HHSP23320095642WC/HHSP2337052T, to collect information on Head Start Performance Measures. FACES 2014–2018 extends a previously approved data collection program (OMB number 0970-0151) to a new sample of Head Start programs, families, and children. FACES 2014–2018, similar to previous FACES rounds, will collect information from a national probability sample of Head Start programs to ascertain what progress Head Start has made toward meeting program performance goals. There are two legislative bases for the FACES data collection: the Government Performance and Results Act of 1993 (P.L. 103-62), requiring that the Office of Head Start (OHS) move expeditiously toward development and testing of Head Start Performance Measures, and the Improving Head Start for School Readiness Act of 2007 (P.L. 110-134), outlining requirements on monitoring, research, and standards for Head Start (Appendix A). FACES provides the mechanism for collecting data on nationally representative samples of programs, children, and families served by Head Start in order to provide OHS, other federal government agencies, local programs, and the public with valid and reliable national information.

b. Overview of the Study

In 2014, FACES enters its 17th year of serving as a source of timely, periodic, contextualized data about the national Head Start program and its participants. OPRE and OHS engaged in a comprehensive redesign process to renovate FACES for improved effectiveness and efficiency. Enhanced flexibility and responsiveness are central features of the new design, making FACES a fluid data collection system that can meet the evolving policy and programmatic needs of Head Start. Built on a foundation constructed to report on key characteristics and indicators of programs, classrooms, and child outcomes (Core studies), FACES 2014–2018 also provides the opportunity for several types of integrated Plus studies, which could include topical studies and special studies of greater complexity. More explicitly than past rounds of FACES, the Core Plus study design meets the need for a systems change perspective—one designed to measure an interconnected system in which decisions at one level act as drivers or inhibitors at the next level. It also embodies a continuous program-improvement ethic: the elements measured are those that Head Start has the capacity to change and refine over time. Thus, FACES 2014–2018 represents a major step toward supporting the development of improved services at all levels of the Head Start program.

Approximately 230 Head Start programs and 460 Head Start centers will be selected to participate in FACES 2014–2018. The Core will include a nationally representative sample of 180 programs; an additional 50 programs may be selected for Plus studies. As presented in Figure A.1, the Core Plus design features two Core studies—the Classroom + Child Outcomes Core and the Classroom Core—and Plus studies to include additional survey content of policy or programmatic interest to be determined.

While the Plus studies were originally intended to be conducted in tandem with Core data collection activities, the additional time needed for consultation around the AI/AN FACES Plus Study data collection (see Section A.8) warranted a schedule off-cycle from the Classroom + Child Outcomes Core study, such that AI/AN FACES data collection began in fall 2015.

Figure A.1. FACES 2014–2018 Study Structure


[Figure A.1 depicts the study timeline from fall 2013 through spring 2018. The Core row shows a design phase, Classroom + Child Outcomes data collection in fall 2014 and spring 2015 followed by reporting, and a second design phase leading to Classroom data collection in spring 2017 followed by reporting. The Plus row shows parallel design, data collection (topical modules and/or special studies, topics TBD), and reporting cycles.]

The Classroom + Child Outcomes Core occurred in fall 2014 and spring 2015. At both time points, FACES assessed the school readiness skills of 2,400 Head Start children from 60 of the 180 programs, surveyed their parents, and asked the children’s teachers to rate children’s social and emotional skills (see Table A.1). In spring 2015, the number of programs in the FACES sample increased from the 60 used to collect data on children’s school readiness outcomes to all 180 programs, for the purpose of conducting observations in 720 Head Start classrooms. Surveys of program directors, center directors, and teachers were also conducted in the spring. The Classroom + Child Outcomes Core therefore collected child-level data along with program and classroom data from 60 programs, while only program and classroom data were gathered from the additional 120 programs. In spring 2017, the Classroom Core will be conducted, focusing on program and classroom data collection only, for all 180 programs.

Table A.1. FACES 2014–2018 Core Instruments, Sample Size, Type of Administration, and Periodicity

| Instrument | Sample Size | Type of Administration | Fall 2014 | Spring 2015 | Spring 2017 |
| Classroom + Child Outcomes Core | | | | | |
| Classroom sampling form from Head Start staff^a | 180 | CADE on the web | X | X | |
| Child roster form from Head Start staff | 60 | CADE on the web | X | | |
| Direct child assessment^b | 2,400 | CAPI with tablet computer | X | X | |
| Head Start teacher child rating^b | 2,400 | Web with paper option | X | X | |
| Parent survey^b | 2,400 | Web/CATI | X | X | |
| Head Start classroom observation | 720 | CADE with tablet computer | | X | |
| Head Start teacher survey | 720 | Web with paper option | | X | |
| Program director survey | 180 | Web with paper option | | X | |
| Center director survey | 360 | Web with paper option | | X | |
| Classroom Core | | | | | |
| Classroom sampling form from Head Start staff | 180 | CADE on the web | | | X |
| Head Start classroom observation | 720 | CADE with tablet computer | | | X |
| Head Start teacher survey | 720 | Web with paper option | | | X |
| Program director survey | 180 | Web with paper option | | | X |
| Center director survey | 360 | Web with paper option | | | X |

^a Classrooms for the 60 programs participating in the Classroom + Child Outcomes Core were sampled in fall 2014. Classrooms for the 120 programs participating in the Classroom Core were sampled in spring 2015.

^b Information gathered from 60 programs; all other components are collected from all 180 programs.

CAPI = Computer-assisted personal interviewing; CATI = Computer-assisted telephone interviewing; CADE = Computer-assisted data entry

The goal of both Core studies is to describe (1) the quality and characteristics of Head Start classrooms, programs, and staff for specific program years; (2) the changes or trends in the quality and characteristics of classrooms, programs, and staff over time; and (3) the factors or characteristics that predict differences in classroom quality. The Classroom + Child Outcomes Core study also adds a focus on describing (4) the school readiness skills and family characteristics of Head Start children for specific program years; (5) the changes or trends in children’s outcomes and family characteristics over time; and (6) the factors or characteristics at multiple levels that predict differences in children’s outcomes. Across the two Core studies, several types of questions will be addressed (see Appendix B for the FACES conceptual frameworks), to include the following:

  • What are the characteristics and observed quality of Head Start classrooms? Are these improving over time?

  • What are the characteristics and qualifications of Head Start teachers and management staff? Are these changing over time?

  • What are the characteristics of Head Start programs? Are these changing over time?

  • Does classroom quality vary by characteristics of programs, teachers, or classrooms?

  • What are the demographic characteristics and home environments of children and families served by Head Start? Are these changing over time?

  • What are the average school readiness skills of the population of Head Start children in fall and spring of the Head Start year? How do Head Start children compare with children of similar ages in the general population?4

  • What is the association between observed classroom quality and children’s school readiness skills? Between child and family characteristics and children’s school readiness skills?

In spring 2015, FACES included a Plus topical module focused on family engagement. This Plus feature was conducted within the 60 programs participating in child-level data collection in the Classroom + Child Outcomes Core study. Within each of these 60 programs, we randomly selected three family services staff (FSS) from among those working in the program. We also selected a subsample of six parents per program (within the two sampled centers), for a total of 360 parents and 180 FSS. The topical module included one-hour interviews with these FSS and with this random subsample of parents. There was an additional 5 minutes of parent survey content for all 2,400 parents participating in the child-level data collection (i.e., the Head Start spring parent supplement survey), and an additional 5 minutes of teacher survey content for all 240 teachers participating in the child-level data collection (i.e., the Family and Provider Teacher Relationship Questionnaire [FPTRQ]; Attachment 15).

Although the experiences and participation of families have always played a central role in Head Start, recent years have seen a growing emphasis on developing and using strategies to make parent and family engagement activities systematic and integrated within Head Start programs. Several activities contribute to this goal, including the development of the Office of Head Start’s Parent, Family, and Community Engagement Framework; the provision of resources by the National Center for Parent, Family, and Community Engagement; and the piloting of instruments focused on parent engagement and parent-staff relationships in Head Start. The Family Engagement Plus Study provides information about the engagement and service provision experiences of Head Start families. It also provides information about the direct providers of services to parents and families, whose voices have not been captured in national studies. With the exception of a case study component in the 1997 cohort, FACES has not collected in-depth qualitative data on the experiences of families participating in Head Start programs or the staff who provide family support services to them.

The family engagement study explores several questions:

  • What does family engagement look like in Head Start?

  • How do FSS work with families, and what program supports do they receive?

  • How are comprehensive services provided in Head Start?

  • Do family engagement and/or service provision differ by family characteristics?

  • What changes do families identify as a result of Head Start?

  • What are the background characteristics of FSS?

Additionally, in spring 2015, FACES included a Plus study to pilot a new measure of program functioning. This Plus feature was conducted within the 120 programs participating in classroom-only-level data collection. The 480 classroom teachers participating were asked to complete the pilot version of the Five Essentials Measurement System for Early Education (5E-Early Ed) educator survey.

In fall 2015, FACES introduced a new Plus study—a descriptive study of children and families who attend Head Start tribal programs in Head Start Region XI, referred to as the American Indian and Alaska Native Head Start Family and Child Experiences Survey (AI/AN FACES). Historically, FACES has not included Region XI programs, children, and families in its national Head Start samples. As a result, we have little data that can be used to assess the service needs of the children and families in Region XI—approximately 87 percent of whom are American Indian or Alaska Native—and to help inform policies and practices for addressing these needs. AI/AN FACES has been designed to fill this information gap. AI/AN FACES will address the following research questions:

  • What are the demographic characteristics and home environments of children and families served by Region XI Head Start? What are the needs of the children and families who are being served?

  • What are the average school readiness skills of the population of Region XI Head Start children in fall and spring of the Head Start year? How do Head Start children compare with children of similar ages in the general population? How do they perform relative to other Head Start children attending programs in Regions I-X?

  • What characteristics of children’s Head Start experiences and home life are associated with better child outcomes?

In spring 2017, FACES will include a Plus topical module focused on programs’ perceptions of and planning for implementing the new Head Start program performance standards (https://eclkc.ohs.acf.hhs.gov/policy/presenting). This Plus module will be conducted in all 180 programs participating in the Classroom Core study. The topical module will include additional survey items asked as part of the program and center director surveys. ACF has undertaken a major effort to revise and simplify the standards to reflect current research and best practices for supporting quality services to children and families. This module will provide information on how programs are thinking about implementing policies and practices to meet the new standards (if not already in place), and it will enrich understanding of Head Start program organization and functioning.

The module on programs’ planning for the Head Start program performance standards explores several questions:

  • In what areas will programs need to make changes to meet the new standards?

  • What are challenges associated with meeting the new standards?

  • What planning is taking place to ensure programs meet the new standards?

  • How are programs implementing the new standards focused on service duration (the number of hours of services that must be offered to children each program year) and curriculum (having a curriculum that meets the new standards, providing support for implementing the new curriculum)?

Additionally, in spring 2017, FACES plans to continue its exploration of program functioning by asking teachers about their center’s climate, professional development, teaching, family engagement, and program leadership. All teachers in the Classroom Core will be asked to complete measures from the Five Essentials Measurement System for Early Education (5E-Early Ed) educator survey. They will be randomly assigned to receive a subset of measures.

A.2. Purpose and Use of the Information Collection

Major study activities to address the FACES 2014–2018 research questions will include:

  • Selecting a nationally representative sample of Head Start programs, recruiting them to participate in the study, gathering information from those programs to develop a center sampling frame, and selecting a nationally representative sample of Head Start centers (approved under OMB #0970-0151 on April 7, 2014)5

  • Sampling classrooms within those centers (approved under OMB #0970-0151 on July 7, 2014)

  • Sampling children and recruiting families of Head Start enrollees to participate in the study (approved under OMB #0970-0151 on July 7, 2014)

  • Collecting data from children and families, Head Start staff, and Head Start classrooms (approved under OMB #0970-0151 for child and family data on July 7, 2014 and for spring 2015 data collection on February 20, 2015)

  • Collecting data as part of potential Plus studies to include topical studies and special studies of greater complexity (approved under OMB #0970-0151 for spring 2015 and fall 2015 Plus data collection, on February 20, 2015 [spring 2015 data collection] and August 7, 2015 [fall 2015 data collection]).

  • Analyzing and reporting findings (approved under OMB #0970-0151 for fall 2014, on July 7, 2014, for spring 2015 data collection on February 20, 2015, for AI/AN FACES fall 2015 data collection on August 7, 2015, and for AI/AN FACES spring 2016 data collection on March 2, 2016.)

The overall design of FACES 2014–2018—the sampling plan, instruments, procedures, and data analysis plan—draws from the design of FACES 2009 and earlier rounds, but we propose some changes in approach and instruments. Like previous rounds, FACES 2014–2018 uses a multi-stage sample design with four stages: (1) Head Start programs, (2) centers within programs, (3) classrooms within centers, and (4) children within classrooms. We describe sampling procedures more fully in Section B.1 and data collection procedures more fully in Section B.2.
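To make the nesting of the four stages concrete, the sketch below illustrates in Python how a sample of this shape could be drawn from a nested frame. It is a minimal illustration only: the frame structure, stage sizes, and equal-probability draws are assumptions chosen for clarity, not the study's actual selection procedures, which are described in Part B.

```python
import random

def draw_four_stage_sample(frame, n_programs=60, n_centers=2,
                           n_classes=2, n_children=10, seed=2014):
    """Illustrative four-stage draw: programs -> centers -> classrooms -> children.

    `frame` is a hypothetical nested dict:
    {program: {center: {classroom: [child, ...]}}}.
    Equal-probability draws are used here for simplicity; the actual FACES
    procedures (Part B, Section B.1) differ.
    """
    rng = random.Random(seed)  # fixed seed so the illustrative draw is reproducible

    def pick(units, n):
        return rng.sample(sorted(units), min(n, len(units)))

    sample = []
    for program in pick(frame, n_programs):                            # stage 1: programs
        for center in pick(frame[program], n_centers):                 # stage 2: centers
            for classroom in pick(frame[program][center], n_classes):  # stage 3: classrooms
                children = frame[program][center][classroom]
                for child in pick(children, n_children):               # stage 4: children
                    sample.append((program, center, classroom, child))
    return sample

# Tiny hypothetical frame: 2 programs, each with 1 center, 1 classroom, 3 children.
frame = {f"P{p}": {f"C{p}.1": {f"K{p}.1": [f"ch{p}.{i}" for i in range(3)]}}
         for p in (1, 2)}
print(draw_four_stage_sample(frame, n_programs=2, n_centers=1,
                             n_classes=1, n_children=2))
```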

We will use the data collected as part of the FACES 2014–2018 Core to provide descriptions of the characteristics, experiences, and outcomes for children and families served by Head Start and to observe the relationships among family and program characteristics and outcomes. We will use the data collected as part of the spring 2015 and spring 2017 Plus topic modules, similarly, to provide descriptions of family engagement and program functioning. Findings from FACES 2014–2018 will provide information on Head Start Performance Measures and help guide OHS, national and regional training and technical assistance providers, and local programs in supporting policy development and program improvement.

Further, we will use the data collected as part of AI/AN FACES to provide rich, descriptive information about Region XI children, their parents, programs, classrooms, and teachers—with a particular focus on the gains children make in critical school readiness skills over the course of one Head Start year. These data will inform decisions made for the good of children and families in Region XI and, because including Region XI yields a more complete picture of Head Start overall, at the national level as well.

  1. Approved Information Collection Requests Concluded

In fall 2014, the field enrollment specialists (FESs) visited each sampled center to gather information to select the sample of classrooms (previously approved) and, for the 60 programs involved in the child-level data collection, the child sample (previously approved).6 For these 60 programs, visits occurred three weeks before the scheduled date of the fall 2014 data collection. FESs worked with center staff and the on-site coordinator (OSC), a liaison between the program and the study team, to distribute consent materials to parents of selected children. Consent materials included a consent letter and form (Appendix C.1 and C.2), a set of frequently asked questions (study FAQ) (Appendix C.3), and a study brochure (Appendix C.4). FESs also distributed study FAQs to teachers of selected classrooms. Finally, FESs provided centers with study flyers (Appendix C.5) for staff to display during the weeks prior to the data collection visit to remind staff and parents about the upcoming data collection visit. For the remaining 120 programs, FES visits (focusing on only classroom sampling) occurred at the start of the classroom observation week in spring 2015. This same procedure will occur with all programs in spring 2017.

Direct child assessments (previously approved) in fall 2014 and spring 2015, as well as teacher ratings (previously approved), documented children’s cognition and general knowledge, language use and emerging literacy, social and emotional development, approaches to learning, and physical development. Parent surveys obtained data on parents’ and children’s activities, experiences with health care, and parents’ feelings and attitudes about themselves (previously approved).

In spring 2015, we added program director (previously approved), center director (previously approved), and teacher (previously approved) surveys to the fall 2014 activities; these provided data on respondents’ employment and educational background, program goals and philosophy, and curriculum and classroom activities.7 A subsample of teachers was asked to complete a second survey about their center’s climate, professional development, teaching, family engagement, and program leadership (under the “Early Care and Education Providers Survey for Plus Study” master burden request, previously approved) for the purpose of examining a new measure’s psychometric properties for future use in FACES to gather descriptive information on program functioning.

Additionally, a Plus topical module was included for studying family engagement; it involved all parents completing additional survey items as part of the Core survey (i.e., the Head Start spring supplement parent survey, previously approved) and interviews with a subsample of Head Start staff (FSS; previously approved) and parents (previously approved). The spring supplement survey asked questions about parent-staff relationships and communication and about community support. Interviews with a subsample of Head Start parents and staff included modules on parent involvement in Head Start and program outreach and engagement practices, along with open-ended questions on various family engagement topics. Staff were selected from a roster of all FSS in the program, gathered from the on-site coordinator (previously approved), focusing on those in the two selected centers. (See Part B for a description of the sampling approach for staff and parents for the Plus study.) Consent was gathered from individuals participating in the interviews (previously approved). All 240 teachers selected for FACES in those programs were asked to complete a set of items about parent-staff relationships (under the “Early Care and Education Providers Survey for Plus Study” master burden request, previously approved).

The primary goal of the family engagement module is to highlight patterns regarding practices and experiences overall and for key subgroups—for exploratory and hypothesis-generating purposes. Any suggestive findings will be used to help generate hypotheses about family engagement efforts and service provision and to inform future research efforts. For example, while program performance standards and policies mandate the types of family engagement efforts that are initiated (the “what” of engagement), we are currently limited in our knowledge of the “how”—the ways in which staff perform the day-to-day work of engaging with families, their successes and challenges, and the ways in which they individualize their practices. By providing suggestive information about the mechanisms of program efforts and identifying potential areas of strength and need, findings from the data can point to areas for study in future FACES data collections.

The AI/AN FACES data collection activities and instruments mirror those of Core FACES (fall 2014 and spring 2015)8: field enrollment specialists (FESs) visited each sampled center to gather information to select the sample of classrooms (previously approved) and the child sample (previously approved).9 Visits occurred a few weeks before the scheduled date of the fall 2015 data collection. FESs worked with center staff and the OSC to distribute recruitment and consent materials to parents of selected children, including a consent letter and form (Appendix K.1), a set of frequently asked questions (study FAQ) (Appendix K.2), and a study brochure (Appendix K.3). FESs also distributed study FAQs to teachers of selected classrooms. Finally, FESs provided centers with study flyers (Appendix K.4) for staff to display during the weeks prior to the data collection visit to remind staff and parents about the upcoming data collection visit.

Fall 2015 AI/AN FACES data collection activities included direct child assessments (previously approved), as well as teacher ratings (previously approved), to document children’s language use and emerging literacy, mathematics knowledge and skills, executive functioning, social and emotional development, approaches to learning, and physical health and development. Parent surveys obtained data on parents’ and children’s activities, experiences with health care providers, and parents’ feelings and attitudes about themselves (previously approved).

Spring 2016 AI/AN FACES data collection activities included parent surveys to obtain data on parents’ and children’s activities, family health, cultural connections, and neighborhood characteristics (previously approved), and staff surveys for program directors (previously approved), center directors (previously approved), and teachers (previously approved) that provided data on their employment and educational background, program goals and philosophy, curriculum and classroom activities, and native culture and language at the center.10

  2. Current Information Collection Requests

Spring 2017 FACES data collection activities will follow the FACES Classroom Core study design for the 180 selected programs, beginning with a field enrollment specialist (FES) visit to each sampled center at the start of the classroom observation week to gather information to select the sample of classrooms (Attachment 1, previously approved)11. FESs will work with center staff and the on-site coordinator (OSC), a liaison between the program and the study team, to distribute study FAQs to teachers of selected classrooms. Data collection activities will include classroom observations (Appendix D) as well as staff surveys for program directors (Attachment 9, revised), center directors (Attachment 10, revised), and teachers (Attachment 8, revised) that will provide data on their employment and educational background, program goals and philosophy, and curriculum and classroom activities. Revisions to these instruments consist of minor updates to improve the clarity of questions and responses, based on updates made for the AI/AN FACES instruments or on spring 2015 experience. Additionally, some questions have been dropped because of a lack of item variability or because little change in responses is anticipated over time.

Additionally, a Plus topical module is planned for studying programs’ planning for the new Head Start program performance standards. Program and center directors will complete additional survey items as part of the Core survey (under the “Early Care and Education Administrators Survey for Plus Study,” Attachment 23). The topical module will provide data on programs’ responses to the new Head Start program performance standards; specifically, program and center directors will report on areas where they need to make changes to meet the new standards, the challenges associated with doing so, and how they are going about implementing standards focused on service duration (the number of hours of services that must be offered to children each program year) and curriculum (having a curriculum that meets the new standards and providing support for implementing the new curriculum). Finally, teachers will be asked to complete additional items as part of the Core survey (i.e., the “Early Care and Education Providers Survey for Plus Study,” Attachment 24) about their center’s climate, professional development, teaching, family engagement, and program leadership (using subscales from the 5E-Early Ed educator survey).

  3. Future Information Collection Requests

Future information collection requests will cover remaining components of the FACES study. Head Start staff or parents may be selected for other Plus topical modules or special studies that would involve qualitative interviews or supplemental surveys for additional content. For Plus studies, the study team may collect data (for future collection requests) through direct child assessments, web-based surveys, or telephone interviews, depending on the nature of the study. Quantitative or qualitative data collection methods may be used.

The instruments to support the Plus studies anticipated for future submission were described in the first Federal Register notice for the FACES 2014–2018 data collection, published in the Federal Register, Volume 79, pp. 11445-11446 on February 28, 2014 (Reference number FR 2014-04032). We will submit these future requests directly to OMB and allow for a 30-day public comment period under the Paperwork Reduction Act prior to use when these materials are fully developed.

A.3. Use of Improved Information Technology

The proposed data collection builds on the techniques that reduced burden in FACES 2009 while adding enhancements to further reduce burden. As in FACES 2009, the study team will administer child assessments using computer-assisted personal interviewing (CAPI) to facilitate the routing and calculation of basal and ceiling rules, thereby lessening the amount of time required to administer the assessments and reducing burden on the child. To further enhance the assessment experience for the child and reduce assessment time, we will also present the child with assessment images on a second tablet screen (separate from the computer screen viewed by the assessor) rather than on an easel.

Parent surveys will be web-based or administered using computer-assisted telephone interviewing (CATI). With the introduction of web-based surveys with a low-income population, we conducted an experiment in fall 2014 to understand how response rates and costs are affected by this new option. In particular, we were interested in whether it is cost-effective to use a web survey as compared to a telephone-administered survey with a low-income population and whether parents’ choice of a web survey is a function of how this option is introduced to them. Each program’s parents were randomly assigned in the fall to one of two groups to complete the parent survey: (1) a web-first group or (2) a choice group. The web-first group received a web-based survey initially, with CATI follow-up after three weeks. The choice group received the option of either web-based or CATI administration starting at the beginning of data collection. Please see Part B, Section B.2, Data Collection Procedures, for more details. Appendix M presents a summary of the experiment’s findings and recommendations. Based on the fall 2014 results, we recommended in spring 2015 (1) giving all parents the choice between telephone and web, (2) reducing the delay in active calling from three weeks to two weeks, and (3) continuing to offer a $5 bonus for responding early and another $5 for responding online. For fall 2015 data collection, we eliminated the delay in active calling and eliminated the $5 bonus opportunities (see Section A.9).

We gave Head Start teachers the option of completing their Head Start Teacher Child Report (TCR) forms on the web or on paper. Head Start teachers, program directors, and center directors will have the option of completing their spring 2015 and spring 2017 surveys on the web or on paper. Spring 2015 Plus study interviews with parents and staff were administered by interviewers using semi-structured paper-and-pencil guides or CATI. Appendix I provides example screenshots of the parent and TCR web-based instruments. AI/AN FACES data collection mirrors the Core: parents are given the option of web or telephone completion, and Head Start staff are given the option of web or paper-and-pencil completion.
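As an illustration of why CAPI administration shortens assessments, the sketch below shows generic basal/ceiling routing logic of the kind assessment software automates. The specific rules here (three consecutive correct responses to establish a basal; three consecutive incorrect responses to discontinue) are hypothetical placeholders, not the FACES assessments' actual rules.

```python
def administer(items, respond, start, basal_run=3, ceiling_misses=3):
    """Illustrative basal/ceiling routing; rule sizes are placeholders.

    `items` is an ordered list of item IDs (easiest to hardest), and
    `respond` is a callable returning True if the child answers an item
    correctly. Items below the basal are credited without being asked, and
    testing stops at the ceiling, so only a slice of the item set is given.
    """
    results = {}

    # Basal phase: step backward from the start item until `basal_run`
    # consecutive correct responses establish a floor (or item 0 is reached).
    i, streak = start, 0
    while i >= 0 and streak < basal_run:
        results[i] = respond(items[i])
        streak = streak + 1 if results[i] else 0
        i -= 1
    basal = i + 1  # lowest item actually administered

    # Ceiling phase: step forward past the start item until `ceiling_misses`
    # consecutive incorrect responses show the items are now too hard.
    i, misses = start + 1, 0
    while i < len(items) and misses < ceiling_misses:
        results[i] = respond(items[i])
        misses = 0 if results[i] else misses + 1
        i += 1

    # Raw score: credit for unasked items below the basal plus observed corrects.
    return basal + sum(results.values())
```

The point of such routing is simply that the software skips items the child would almost surely pass or fail, which is tedious and error-prone to track with a paper easel.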

A.4. Efforts to Identify Duplication and Use of Similar Information

There is no evidence of other studies that offer comprehensive information on program quality, child outcomes, services, and characteristics of Head Start staff, children, and families. Previous cohorts of FACES would not have captured new program initiatives or changes to the population served by Head Start in the past few years.

Although we identified and adapted many useful survey items from other studies for use in FACES, none of those studies have collected comparable data on a nationally representative sample of Head Start children and families. No available studies combine the four sources of primary data (staff surveys, classroom observations, and, if part of child-level data collection, child assessments and parent surveys) that will be collected in FACES 2014–2018. Also, there is no other source for detailed child-level information that may be used to describe changes in the population served by Head Start over time. However, FACES captures information for children attending the population of Head Start centers, as opposed to other studies, such as Head Start CARES, which examines a randomized trial of interventions.

A.5. Impact on Small Businesses or Other Small Entities

No small businesses are impacted by the data collection in this project.

A.6. Consequences of Not Collecting Information or Collecting Information Less Frequently

From the start of FACES in 1997 through the most recent round in 2009, FACES was fielded at three-year intervals as a descriptive study of the population served by Head Start and as a means of monitoring program performance, examining both continuity and change. During the FACES redesign process, stakeholders expressed a desire for more timely data (Moiduddin et al. 2012). FACES 2014–2018 will help to enhance the timeliness and accessibility of information by collecting classroom and program data every two years and child-level data every four years. This periodicity is necessary to examine trends and changes over time. Each round of data collection occurs within a single program year and so could not be conducted less frequently.

A.7. Special Circumstances Relating to the Guidelines of 5 CFR 1320.5

There are no special circumstances requiring deviation from these guidelines.

A.8. Comments in Response to the Federal Register Notice and Efforts to Consult Outside the Agency

The first Federal Register notice for the FACES 2014–2018 data collection was published in the Federal Register, Volume 79, pp. 11445-11446, on February 28, 2014 (reference number FR 2014-04032). Two substantive public comments and three requests to see the study instruments were received during the 60-day comment period. Draft instruments were sent as requested. The first comment, from a retired Head Start teacher, emphasized the importance of including rural programs in the study. A response to the commenter noted that FACES is designed to be a nationally representative study and includes programs in both urban and rural areas; additionally, the FACES Core now includes a larger sample of programs than in the past, which will allow additional analyses across program types. The second comment, from the executive director of the National Head Start Association, proposed recommendations to help document the two-generation work Head Start does with families. A response to the commenter noted that the recommended topics are under consideration for inclusion in spring 2015 or future Plus study data collection materials (see Appendix F for a copy of this public comment and the response).

The second Federal Register notice for the FACES 2014–2018 data collection was published in the Federal Register, Volume 79, pp. 27620-27621, on May 14, 2014 (reference number FR 2014-11054). No comments or requests were received during that comment period. The third Federal Register notice was published in Volume 79, pp. 73077-73078, on December 9, 2014 (reference number FR 2014-28776). Two requests for instruments were received, and draft instruments were sent as requested. The fourth Federal Register notice was published in Volume 80, pp. 30250-30251, on May 27, 2015 (reference number FR 2015-12726). No comments or requests were received during that comment period. The fifth Federal Register notice was published in Volume 80, p. 70231, on November 13, 2015 (reference number FR 2015-28870). No comments or requests were received during that comment period. Copies of the 60-day notice and previous 30-day notices are included in Appendix E.

Previous rounds of FACES and the FACES redesign involved many individuals and organizations. The new FACES Core Plus study design and content reflect the redesign project, which gathered information from key stakeholders, examined programmatic and policy priorities, and reviewed study design and measurement strategies. The redesign project held two expert panel meetings—one on research priorities and one on methods—which led to the design options. For FACES 2014–2018, we will engage outside experts on particular topics as they emerge. We will obtain their feedback through written products, telephone conversations, or webinars. To date, we have consulted experts concerning the measurement of family engagement in the parent survey. Members of the family engagement expert panel are listed below in Table A.2.

Table A.2. FACES 2014–2018 Family Engagement Expert Panel Members

| Member Name | Affiliation |
| Oscar Barbarin | Tulane University |
| Juliet Bromer | Herr Research Center for Children and Social Policy, Erikson Institute |
| Toni Porter | Innovation, Policy and Research, Bank Street College of Education |
| Joshua Sparrow | Harvard University |
| Heather Weiss | Harvard University |

Additionally, for the AI/AN FACES Plus study, a workgroup was formed in recognition of the unique nature of conducting research with this population. The Workgroup includes Region XI Head Start directors, community leaders, and early childhood researchers with experience in tribal communities (Table A.3). Together with FACES study senior staff and federal officials, the Workgroup has undertaken a collaborative effort to address cultural issues and to advise on how the study should be designed and carried out (including recruitment approaches). The group has met regularly to (1) identify key research questions and information needs, (2) define the population of interest and develop the sampling design, (3) select appropriate measures to assess the growth and development of children served by Region XI Head Start programs and to describe characteristics of children’s homes and families and of Head Start classrooms and programs, (4) identify research methods and practices that would be effective in tribal communities, and (5) develop effective strategies for securing the participation of programs in the study and the approval of the tribal communities. Workgroup members provide input on the design and reporting of the analyses of study data, helping to ensure that the questions of greatest interest to Region XI and tribal communities are answered.

Table A.3. AI/AN FACES Workgroup Members

| Member Name | Affiliation |
| Jessica Barnes-Najor | Michigan State University, Tribal Early Childhood Research Center |
| Ann Belleau | Inter-Tribal Council of Michigan; NIHSDA |
| Patty Brown | National Center for Tribal Child Care Innovation |
| Myrna Dingman | Pueblo of San Felipe |
| Hiram Fitzgerald | Michigan State University, Tribal Early Childhood Research Center |
| Jacki Haight | Port Gamble S’Klallam; NIHSDA |
| Kirstin (Hisatake) Nilles | Confederated Tribes of Warm Springs |
| Charmaine Lundy | Kenaitze Indian Tribe |
| Racquel Martinez | Tanana Chiefs; NIHSDA |
| Laura McKechnie | Sault Ste. Marie Tribe |
| Douglas Novins | University of Colorado Anschutz Medical Campus, Centers for American Indian & Alaska Native Health, Tribal Early Childhood Research Center |
| Michelle Sarche | University of Colorado Anschutz Medical Campus, Centers for American Indian & Alaska Native Health, Tribal Early Childhood Research Center |
| Sharon Singer | Navajo Nation; NIHSDA |
| Teresa Smith | Kenaitze Indian Tribe |
| Lana Toya | Pueblo of Jemez |
| Monica Tsethlikai | Arizona State University |
| Mavany Calac Verdugo | Rincon Band of Luiseno Indians; NIHSDA |
| Nancy Whitesell | University of Colorado, Tribal Early Childhood Research Center |

A.9. Explanation of Any Gift to Respondents

Current request – Spring 2017 Data Collection

No tokens of appreciation are planned for Spring 2017.

Previously reviewed and approved

Participation in FACES will place some burden on program staff, families, and children. To offset this burden, we have developed a structure for respondents to receive tokens of appreciation that is based on the one used effectively in FACES 2009 and that attempts to acknowledge respondents’ efforts in a respectful way. Table A.4 presents the approved structure. The token of appreciation values for teachers and parents have been modified since FACES 2006 and 2009. The token of appreciation for teachers completing a TCR is higher than the amount used in prior rounds. In FACES 2006 and FACES 2009, teachers received $5 for each TCR they completed, and an additional $2 per form if they completed it on the web. This was to encourage teachers’ use of the web option, since web-based surveys contain built-in range and logic checks and branching instructions, effectively eliminating most of the errors inherent in paper instruments. In both studies, a majority of teachers opted for the web option; therefore, we did not feel a differential incentive amount was needed in FACES 2014-2018. Teachers will receive $10 for each TCR they complete, a $3 increase over the amount provided for web completes in FACES 2009, in recognition of the fact that teachers in FACES 2014 are being asked to complete both parts of the teacher survey (teacher background and classroom information and the TCRs) online. Previously, only the TCRs were completed online.

Parents receive a token of appreciation for each survey they complete. In FACES 2006 and 2009, parents received $35 after completing their parent interview in person or by phone. Because the length of the parent survey has been reduced from 60 minutes to 20 minutes (plus the 5-minute parent supplement survey), the amount of the token has been reduced. FACES 2014 also now uses a tiered approach, with small additional amounts offered for web and early completion, to reflect the lower costs associated with web completion and the reduction in the number of follow-ups required. We believe that increasing the number of surveys completed by web rather than by phone will lower the overall data collection cost.

Table A.4. FACES 2014–2018 Previously Approved Token of Appreciation Structure Compared to Structure of Prior Rounds



| FACES Component | Respondent | FACES 2006 Token of Appreciation | FACES 2009 Token of Appreciation | FACES 2014-2018 Token of Appreciation |
| Previously approved | | | | |
| Data collection site visit | Program in child-level data collection | Fall and Spring: $500 | Fall: $500; Spring: $250 | Fall: $500; Spring: $250 |
| Data collection site visit | Program in class/program-only data collection | NA | NA | Spring: $250 (to include FES visit for classroom sampling) |
| Teacher child report | Teacher | Fall and Spring: $7 per web form, $5 per paper form | Fall and Spring: $7 per web form, $5 per paper form | Fall and Spring: $10 per form |
| Parent survey | Parent | Fall and Spring: $35 | Fall and Spring: $35 | Fall and Spring: $15 (additional $5 if completed within 3 weeks^a of receiving survey; additional $5 if completed on the web) |
| Child assessment | Child | Fall and Spring: Children’s book (valued at $10) | Fall and Spring: Children’s book (valued at $10) | Fall and Spring: Children’s book (valued at $10) |
| Family engagement parent interview | Parent | NA | NA | Spring: $25 |
| 5E-Early Ed pilot educator survey | Teacher | NA | NA | Spring: $20 |
| Teacher child report (AI/AN FACES) | Teacher | NA | NA | Fall and Spring: $10 per form |
| Parent survey (AI/AN FACES) | Parent | NA | NA | Fall and Spring: $25 |
| Child assessment (AI/AN FACES) | Child | NA | NA | Fall and Spring: Children’s book (valued at $10) |

^a Three weeks in fall 2014, two weeks in spring 2015. See Appendix M.

NA = Not Applicable

In spring 2015, parents received a token of appreciation of $25 for completing the Family Engagement Study parent interview. Additionally, teachers completing the 5E-Early Ed pilot survey received a token of appreciation of $20. As with the Teacher Child Reports, completing the second survey requires effort beyond the typical work day.

For AI/AN FACES, we offered parents $25 regardless of how the survey was completed (online, by phone, or in person) and when it was completed (parents who completed the survey immediately and those who did so later were offered the same token). In light of the lower response rates experienced in the FACES Core parent survey, we believe that offering parents several different ways of responding to the survey, together with the different ways they could earn incentives tied to how and when they responded, may have been confusing. Offering parents a token of appreciation of $25 for a 30-minute survey is consistent with what we have offered them in the past for a longer survey. Teachers receive $10 for each TCR they complete. Children receive a book valued at $10 for participating in the child assessments.

A.10. Assurance of Privacy Provided to Respondents

Respondents will receive information about privacy protections before they are asked to participate in the study. The study team will repeat this information at the start of each survey and interview. All interviewers and data collectors will be knowledgeable about privacy procedures and will be prepared to describe them in detail or to answer any related questions respondents raise.

We have crafted carefully worded consent forms (Appendix C.1 and C.2; Appendix J.5 and J.7; Appendix K.1) that explain in simple, direct language the steps we will take to protect the privacy of the information each sample member provides. We will assure parents both as they are recruited and before each wave of data collection that their responses and their child’s assessment scores will not be shared with the Head Start program staff or the program. We will assure both parents and staff that their responses will be reported only as part of aggregate statistics across all participants (see Appendix H.10; Appendix J.1, J.2, J.3, J.4, J.7, and J.8; Appendix K.9; Appendix N.1, N.2, N.3, N.4, N.5; and Appendix P.1, P.2, P.3, P.4 for staff in particular). ACF will obtain signed, informed consent from all parents before their participation and obtain their consent to assess their children. The FACES study FAQ and brochure (Appendix C.3, C.4; Appendix K.2, K.3 for AI/AN FACES) make it clear that parents may withdraw their consent at any time.

To further ensure privacy, the study team will remove personal identifiers that could be used to link individuals with their responses from all completed questionnaires and store the hard copy questionnaires under lock and key at the study team offices. The study team has extensive corporate administrative and security systems to prevent the unauthorized release of personal records, including state-of-the-art hardware and software for encryption that meets federal standards; physical security, including limited key card access and locked data storage areas; and other methods of data protection (for example, requirements for regular password updating). Mathematica secures individually identifiable and other sensitive project information and strictly controls access to sensitive information on a need-to-know basis. Data on tablet computers will be secured through hard drive encryption that meets federal standards, as well as through operation and survey system configuration and a password. Any computer files that contain this information will also be locked and password-protected. Survey, interview and data management procedures that ensure the security of data and privacy of information will be a major part of training. Additionally, Mathematica will require its entire staff to sign a confidentiality statement (Appendix G, Appendix O for AI/AN FACES).

We have obtained Institutional Review Board clearance and a National Institutes of Health certificate of confidentiality to help ensure the privacy of study participants. OPRE completed a Privacy Impact Assessment (PIA) (Attachment 27). Information will not be maintained in a paper or electronic system from which it is actually or directly retrieved by an individual’s personal identifier.

A.11. Justification for Sensitive Questions

To achieve its primary goal of describing the characteristics of the children and families served by Head Start, we will be asking parents and teachers a few sensitive questions, including some aimed at assessing feelings of depression, use of services for emotional or mental health problems, and reports of family violence or substance abuse. We have used this information in past FACES reports to describe the Head Start population and staff and to examine child outcomes and change in those outcomes over time. Parents will also be asked about household income and all staff will be asked to report their salaries. The sensitive questions have been used in previous rounds of FACES and obtain important information for understanding behaviors and family needs. The invitation will inform participating parents that the survey will ask sensitive questions (Appendices H.2 through H.5; Appendix K.5 and K.6). The invitation will also inform parents and staff that they do not have to answer questions that make them uncomfortable and that none of the responses they provide will be reported back to program staff.

Additionally, we recognize that AI/AN families and staff can be suspicious of research given past violations of trust. We will therefore highlight the efforts taken to collaborate with Region XI Head Start directors, community leaders, and early childhood researchers with experience in tribal communities to create culturally appropriate and relevant measures that will benefit AI/AN Head Start programs.

A.12. Estimates of Annualized Burden Hours and Costs

The proposed data collection does not impose a financial burden on respondents, and respondents will not incur any expense other than the time spent participating.

a. Approved Information Collection Requests: Completed and Ongoing

The total annual burden for the previously approved and completed instruments is estimated to be 3,749 hours, and the total annual burden for the previously approved and ongoing information collection requests is estimated to be 295 hours. Table A.5.a includes data collection activities for fall 2014 and spring 2015 in the 60 programs participating in child-level data collection (child assessments, parent surveys, and teacher child reports), as well as data collection activities in programs participating in the AI/AN Plus study during fall 2015 and spring 2016. Table A.5.b lists the ongoing instruments: telephone scripts for a study team member to speak with program directors and on-site coordinators about the centers in their Head Start program, and study review materials for center directors.

Final response rates for fall 2014 are provided in Table A.6. The parent response rate of 77 percent fell below our expected target of 86 percent. The parent survey experiment (described in Section A.3) included a three-week delay before study staff began actively contacting parents to complete the survey by phone. This delay may have adversely affected the response rate, especially in the later weeks of the data collection period. All consented parents were contacted in the spring, even if they did not complete the fall survey. To remediate the fall response rate issues for the spring data collection, we released fall nonrespondent cases first, allowing more time to contact these cases and complete data collection. We also shortened the interval between when a parent was invited to complete the survey and when active calling began from three weeks to two.

Table A.7 presents the final response rates for spring 2015 data collection. The data collection included recruiting an additional 120 programs, continuing fall activities in the original 60 programs (child assessments, parent surveys, and teacher child reports), conducting Plus interviews in those programs, and administering staff surveys in all 180 programs. As in fall 2014, the final spring 2015 parent survey response rate of 73 percent was lower than we expected based on our experience surveying parents in FACES 2006 and 2009. In light of the difficulties we experienced completing parent surveys in FACES this past year, we made several changes to the approach for AI/AN FACES: we simplified the incentive structure to a single amount (described in A.9), removed the delay in active calling, and offered parents additional on-site opportunities to complete the survey.



Table A.5.a Approved Estimated Information Gathering Annual Response Burden and Approved Information Gathering Annual Cost-Completed

Instrument | Total Number of Respondents | Annual Number of Respondents | Number of Responses per Respondent | Average Burden Hours per Response | Estimated Annual Burden Hours | Average Hourly Wage^a | Total Annual Cost
Head Start core parent consent form | 2,400 | 800 | 1 | 0.17 | 136 | $16.20 | $2,203.20
Head Start core child assessment | 2,400 | 800 | 2 | 0.75 | 1,200 | n.a. | n.a.
Head Start core parent survey | 2,400 | 800 | 2 | 0.33 | 528 | $16.20 | $8,553.60
Head Start fall parent supplement survey | 2,400 | 800 | 1 | 0.08 | 64 | $16.20 | $1,036.80
Head Start spring parent supplement survey | 2,400 | 800 | 1 | 0.08 | 64 | $16.65 | $1,065.60
Head Start core teacher child report | 240 | 80 | 20 | 0.17 | 272 | $28.28 | $7,692.16
Head Start core teacher survey | 720 | 240 | 1 | 0.50 | 120 | $27.45 | $3,294.00
Head Start program director core survey | 180 | 60 | 1 | 0.50 | 30 | $27.45 | $823.50
Head Start center director core survey for Plus study | 360 | 120 | 1 | 0.42 | 50 | $27.45 | $1,372.50
Head Start parent qualitative interview (Family Engagement) | 360 | 120 | 1 | 1.00 | 120 | $16.65 | $1,998.00
Head Start staff qualitative interview (FSS Engagement) | 180 | 60 | 1 | 1.00 | 60 | $27.45 | $1,647.00
Head Start staff (FSS) roster form | 60 | 20 | 1 | 0.17 | 3 | $27.45 | $82.35
Head Start parent engagement interview consent form | 360 | 120 | 1 | 0.17 | 20 | $16.65 | $333.00
Head Start staff engagement interview consent form | 180 | 60 | 1 | 0.17 | 10 | $27.45 | $274.50
Early care and education providers survey for Plus study (5E-Early Ed pilot) | 480 | 160 | 1 | 0.33 | 53 | $27.45 | $1,454.85
Early care and education providers survey for Plus study (FPTRQ) | 240 | 80 | 1 | 0.08 | 6 | $27.45 | $164.70
Classroom sampling form from Head Start staff | 397 | 133 | 1 | 0.17 | 23 | $28.28 | $650.44
Child roster form from Head Start staff | 157 | 52 | 1 | 0.33 | 17 | $28.28 | $480.76
Head Start parent consent form for Plus study (AI/AN FACES) | 1,034 | 345 | 1 | 0.17 | 59 | $16.60 | $979.40
Head Start child assessment for Plus study (AI/AN FACES) | 1,000 | 334 | 2 | 0.75 | 501 | n.a. | n.a.
Head Start parent survey for Plus study (AI/AN FACES) | 800 | 267 | 1 | 0.50 | 134 | $16.60 | $2,224.40
Head Start teacher child report for Plus study (AI/AN FACES) | 80 | 27 | 24 | 0.17 | 110 | $28.82 | $3,170.20
Head Start core parent survey for Plus study (AI/AN FACES Spring 2016) | 880 | 294 | 1 | 0.50 | 147 | $16.95 | $2,491.65
Head Start core teacher survey for Plus study (AI/AN FACES) | 80 | 27 | 1 | 0.58 | 16 | $28.25 | $452.00
Head Start program director core survey for Plus study (AI/AN FACES) | 22 | 7 | 1 | 0.33 | 2 | $28.25 | $56.50
Head Start center director core survey for Plus study (AI/AN FACES) | 37 | 12 | 1 | 0.33 | 4 | $28.25 | $113.00
Estimated Total |  |  |  |  | 3,749 |  | $42,614.11

a Average hourly wage is based on the most recent Current Population Survey weekly earnings available at the time of the original request.

n.a. = not applicable

Table A.5.b Approved Estimated Information Gathering Annual Response Burden and Approved Information Gathering Annual Cost-Ongoing

Instrument | Total Number of Respondents | Annual Number of Respondents | Number of Responses per Respondent | Average Burden Hours per Response | Estimated Annual Burden Hours | Average Hourly Wage^a | Total Annual Cost
(25) Telephone script for program directors | 230 | 77 | 2 | 1.00 | 154 | $27.53 | $4,239.62
(26) Telephone script for on-site coordinators | 230 | 77 | 2 | 0.75 | 116 | $27.53 | $3,193.48
Letter for center director | 460 | 154 | 2 | 0.08 | 25 | $27.53 | $688.25
Estimated Total |  |  |  |  | 295 |  | $8,121.35

a Average hourly wage is based on the most recent Current Population Survey weekly earnings available at the time of the original request.



Table A.6. Final Response Rates for Fall 2014 Approved Information Requests

Data Collection | Expected Response Rate | Final Response Rate
Head Start program | 100% | 90%
Head Start center^a | 100% | 100%
Head Start core parent consent form^b | 90% | 95%
Head Start core child assessment^c | 92% | 95%
Head Start core parent survey^c | 86% | 77%
Head Start fall parent supplement survey^c | 86% | 77%
Head Start core teacher child report^c | 93% | 98%

a Among participating programs
b Among eligible children
c Among eligible, consented children

Table A.7. Final Response Rates for Spring 2015 Approved Information Requests

Data Collection | Expected Response Rate | Final Response Rate
Head Start program^a | 100% | 92%
Head Start center^b | 100% | 99%
Head Start core child assessment^c | 92% | 95%
Head Start core parent survey^c | 75% | 73%
Head Start spring parent supplement survey^c | 75% | 73%
Head Start core teacher child report^c | 93% | 95%
Head Start core teacher survey | 83% | 93%
Head Start core program director survey | 100% | 97%
Head Start core center director survey | 100% | 93%
Head Start parent engagement interview consent form | n.a.^d | 59%
Head Start parent qualitative interview (Family Engagement) | 85% | 83%
Head Start staff engagement interview consent form | n.a.^d | 90%
Head Start staff qualitative interview (FSS Engagement) | 90% | 89%
Early care and education providers survey for Plus study (5E-Early Ed Pilot) | 80% | 91%
Early care and education providers survey for Plus study (FPTRQ) | 83% | 95%

a Among the new programs sampled for spring 2015 Classroom Core
b Among participating new spring 2015 programs
c Among eligible, consented children
d Family Engagement study had a target of 360 parent and 180 Head Start staff completed interviews.


Because the response rate for the Core parent survey was less than 80 percent for both the fall 2014 and spring 2015 data collection waves, we conducted an analysis to assess nonresponse bias for the survey.12 In this analysis, we compared estimates of child outcomes for parent survey respondents and nonrespondents and looked for significant differences between the two groups. We then examined whether the child-level nonresponse-adjusted weights mitigated the bias. We conducted this analysis separately for the fall and spring parent surveys. To examine differences between respondents and nonrespondents on program-level characteristics and parent survey contact options obtained from the consent form (for example, whether they had the ability to send or receive text messages), we focused on all sampled and consented children. To examine differences between respondents and nonrespondents on child outcomes, we focused on those sampled and consented children with completed child assessments.

More than three-quarters of the variables we examined did not have significantly different distributions between respondents and nonrespondents, even before nonresponse adjustments to the weights. Among those that did, nonresponse adjustments to the weights generally either resolved or lessened the differences (to 2 percentage points or less).13 Notably, among the parents of the 2,462 children who were in the study in the fall, 2,105 (85.5 percent) completed at least one of the two surveys; among the parents of the 2,206 children who were in the study in both fall 2014 and spring 2015, 1,951 (88.4 percent) completed at least one of the two surveys. This matters because parents who completed the spring survey but not the fall survey were asked key demographic questions from the fall instrument in the spring, so most spring or program-year weights require that either the fall or spring parent interview be completed, but not necessarily both. For these reasons, we believe researchers can be confident making child-level estimates from the FACES 2014 Classroom + Child Outcomes Core study using the appropriate weights. (For more information, see the memorandum entitled “Nonresponse Bias Analysis for the FACES Core Study Parent Survey in Fall 2014 and Spring 2015,” submitted November 22, 2016.)
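
To illustrate the respondent/nonrespondent comparison described above, the following Python sketch computes the weighted gap in a child outcome between the two groups and the bias that remains after applying nonresponse-adjusted weights. The file and column names (consented_children.csv, parent_survey_complete, assessment_score, base_weight, nr_adjusted_weight) are hypothetical placeholders rather than actual FACES variable names, and the sketch omits the design-based significance tests used in the full analysis.

import numpy as np
import pandas as pd

# One row per sampled, consented child (hypothetical file and columns).
df = pd.read_csv("consented_children.csv")

def weighted_mean(values, weights):
    """Weighted mean, ignoring missing values."""
    mask = values.notna()
    return np.average(values[mask], weights=weights[mask])

# Split children by whether a parent survey was completed.
resp = df[df["parent_survey_complete"] == 1]
nonresp = df[df["parent_survey_complete"] == 0]

# Gap in a child outcome between respondents and nonrespondents
# under the base (pre-adjustment) weights.
diff_base = (weighted_mean(resp["assessment_score"], resp["base_weight"])
             - weighted_mean(nonresp["assessment_score"], nonresp["base_weight"]))

# Bias remaining after adjustment: respondent estimate under
# nonresponse-adjusted weights versus the full-sample estimate.
full = weighted_mean(df["assessment_score"], df["base_weight"])
adj = weighted_mean(resp["assessment_score"], resp["nr_adjusted_weight"])
print(f"respondent-nonrespondent gap: {diff_base:.2f}")
print(f"remaining bias after weighting: {adj - full:.2f}")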

Table A.8 presents final response rates for fall 2015 AI/AN FACES data collection. The Head Start program response rate of 68 percent fell below our expected target of 80 percent, which was based on our experience recruiting programs in FACES 2006 and 2009 in Regions I–X. In addition to the expected requirements, many Region XI programs selected for AI/AN FACES also required the approval of a tribal council or other representative body in order to participate in the study. This contributed to the lower response rate when a tribal body declined to participate or when the time allotted for recruitment expired.

Table A.8. Final Response Rates for Fall 2015 AI/AN FACES Approved Information Requests

Data Collection | Expected Response Rate | Fall 2015 Sample Size | Final Response Rate
Head Start program | 80% | 31 | 68%
Head Start center^a | 100% | 36 | 97%
Head Start core parent consent form | 90% | 1,034 | 95%
Head Start core child assessment^b | 83% | 984 | 95%
Head Start core parent survey^b | 83% | 984 | 83%
Head Start core teacher child report^b | 83% | 984 | 97%

a Among participating programs. One program’s visit for selecting classrooms and children was delayed until spring 2016 in order to complete local tribal approval processes.
b Among eligible, consented children

Because the program participation rate was less than 80 percent for Region XI Head Start programs sampled for AI/AN FACES, we conducted an analysis of the potential for nonresponse bias in estimates from participating programs. We examined whether the distributions of a set of program-level variables from the Head Start Program Information Report differed between participating and nonparticipating programs.14 None of the variables we examined had statistically significantly different distributions between participating and nonparticipating programs before nonresponse adjustments were made to the sampling weights. That is, we were unable to reject the null hypotheses that participating programs did not differ from nonparticipating programs, although, given the small effective sample size, we likely did not have sufficient power to reject any of the null hypotheses. However, some estimated percentages did appear to differ between participating and nonparticipating programs before weights were applied. Nonresponse adjustments to the weights mostly improved these distributions, with most differences becoming less than 3 percentage points, although for one variable (percentage of children with disabilities) they resulted in greater deviations than initially observed. Because of the small sample size used for this nonresponse bias analysis, researchers should be cautious in interpreting its findings. For program size, urbanicity, and the percentage of children who are AI/AN, we saw small differences before nonresponse adjustments and even smaller differences after those adjustments. This likely means that program-level nonparticipation will have minimal impact on child-level estimates from these participating programs, because child-level weights are built upon the final adjusted program weights. Furthermore, the study was designed to produce child-level, not program-level, estimates, with a primary focus on point estimates rather than comparisons between child subgroups. Our child sample size exceeded the targets laid out in the study design. Therefore, we believe researchers should feel comfortable using the AI/AN child-level data, along with the appropriate weights. (For more information, see “Nonresponse Bias Analysis for AI/AN FACES Program Participation,” submitted November 22, 2016.)

Table A.9 presents final response rates for spring 2016 AI/AN FACES data collection, which met or exceeded expectations for each instrument.

Table A.9. Final Response Rates for Spring 2016 AI/AN FACES Approved Information Requests

Data Collection | Expected Response Rate | Spring 2016 Sample Size | Final Response Rate
Head Start core child assessment^b | 95% | 980 | 96%
Head Start core parent survey^b | 80% | 980 | 82%
Head Start core teacher child report^b | 95% | 980 | 97%
Head Start core teacher survey | 90% | 74 | 96%
Head Start core program director survey | 90% | 21 | 100%
Head Start core center director survey | 90% | 36 | 97%

a Among participating programs
b Among eligible, consented children



b. Current Information Collection Requests

Table A.10 presents the current request to cover data collection activities related to selection of classrooms and surveys with Head Start staff. We expect the total annual burden to be 275 hours for all of the instruments in the current information collection request.

Table A.10. Estimated Current Annual Response Burden and Current Annual Cost

Instrument | Total Number of Respondents | Annual Number of Respondents | Number of Responses per Respondent | Average Burden Hours per Response | Estimated Annual Burden Hours | Average Hourly Wage | Total Annual Cost
(1) Classroom sampling form from Head Start staff | 360 | 120 | 1 | 0.17 | 20 | $31.23 | $624.60
(8) Head Start core teacher survey | 720 | 240 | 1 | 0.50 | 120 | $31.23 | $3,747.60
(9) Head Start core program director survey | 180 | 60 | 1 | 0.50 | 30 | $31.23 | $936.90
(10) Head Start core center director survey | 360 | 120 | 1 | 0.42 | 50 | $31.23 | $1,561.50
(23) Early care and education administrators survey for Plus study (Head Start Program Performance Standards) | 540 | 180 | 1 | 0.08 | 14 | $31.23 | $437.22
(24) Early care and education providers survey for Plus study (5E-Early Ed) | 720 | 240 | 1 | 0.17 | 41 | $31.23 | $1,280.43
Estimated Total |  |  |  |  | 275 |  | $8,588.25



c. Total Burden Hour Request

Including this new request, the total burden of approved information collection is 4,319 hours per year over three years. Annual burden for ongoing and new data collection under 0970-0151 will be 570 hours following approval of this ICR.

d. Future Information Collection Requests

Table A.11 presents future data collection activities related to the spring 2017 program and classroom components as well as to potential Plus study activities. We expect the total annual burden to be 1,068 hours for all of the instruments in the future data collection. The estimated future burden reflects the original 60-day master request remaining after accounting for the current information requests.

Table A.11. Estimated Future Annual Response Burden and Future Annual Cost

Instrument | Total Number of Respondents | Annual Number of Respondents | Number of Responses per Respondent | Average Burden Hours per Response | Estimated Annual Burden Hours | Average Hourly Wage | Total Annual Cost
Head Start parent qualitative interview | 441 | 147 | 1 | 1.00 | 147 | $17.25 | $2,535.75
Head Start staff qualitative interview | 420 | 140 | 1 | 1.00 | 140 | $31.23 | $4,372.20
Head Start child assessment for Plus study | 348 | 116 | 2 | 0.75 | 174 | n.a. | n.a.
Head Start parent supplemental survey for Plus study | 1,350 | 450 | 2 | 0.08 | 72 | $17.25 | $1,242.00
Head Start teacher child report for Plus study | 70 | 23 | 20 | 0.17 | 78 | $31.23 | $2,435.94
Head Start teacher survey for Plus study | 102 | 34 | 2 | 0.50 | 34 | $31.23 | $1,061.82
Head Start program director survey for Plus study | 42 | 14 | 2 | 0.50 | 14 | $31.23 | $437.22
Head Start center director survey for Plus study | 60 | 20 | 2 | 0.42 | 17 | $31.23 | $530.91
Early care and education administrators survey for Plus study | 558 | 186 | 2 | 0.50 | 186 | $31.23 | $5,808.78
Early care and education providers survey for Plus study | 618 | 206 | 2 | 0.50 | 206 | $31.23 | $6,433.38
Estimated Total |  |  |  |  | 1,068 |  | $24,858.00

n.a. = not applicable

e. Estimates of Annualized Costs

To compute the total estimated annual cost, we multiplied the total annual burden hours by the average hourly wage for each adult participant, based on median weekly earnings from the Bureau of Labor Statistics, Current Population Survey estimates (second quarter of 2016). The results appear in Tables A.10 (current requests) and A.11 (future requests) above. For teachers, program directors, center directors, Head Start staff, and other early care and education program staff, we used the median salary for full-time employees over age 25 with a bachelor’s degree ($31.23 per hour). For parents, we used the median salary for full-time employees over age 25 who are high school graduates with no college experience ($17.25 per hour).
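
The burden and cost arithmetic behind Tables A.10 and A.11 can be reproduced directly: annual burden hours are the annual number of respondents, times the number of responses per respondent, times the average burden hours per response (rounded to whole hours as in the tables), and annual cost is burden hours times the hourly wage. A minimal Python sketch using two rows from Table A.10:

rows = [
    # (instrument, annual respondents, responses per respondent,
    #  hours per response, hourly wage)
    ("Head Start core teacher survey", 240, 1, 0.50, 31.23),
    ("Head Start core center director survey", 120, 1, 0.42, 31.23),
]
for name, n, k, hours_per, wage in rows:
    hours = round(n * k * hours_per)  # round to whole hours, as in the tables
    print(f"{name}: {hours} hours, ${hours * wage:,.2f}")
# Prints 120 hours / $3,747.60 and 50 hours / $1,561.50, matching Table A.10.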



A.13. Estimates of Other Total Cost Burden to Respondents and Record Keepers

Not applicable.

A.14. Cost to the Federal Government

The total cost for the spring 2017 FACES data collection related to the instruments is $3,559,867. These costs include sampling, data collection, data processing, and analysis. Including the previously approved costs (OMB Number 0970-0151, approved April 7, 2014, for recruitment; July 7, 2014, for fall 2014 data collection; February 20, 2015, for spring 2015 data collection; August 6, 2015, for fall 2015 data collection; and March 2, 2016, for spring 2016 data collection), the total cost of data collection to date would be $19,202,292, or $6,401,431 annually.15

A.15. Explanation for Program Changes or Adjustments

The changes made to the FACES 2014–2018 data collection reflect OPRE’s and OHS’s interest in ensuring that FACES provides timely and ongoing information about Head Start program performance, including program improvement efforts, program quality, and outcomes for children and families. As detailed above, unlike FACES 2009, FACES 2014–2018 will consist of three Core waves of data collection (fall 2014, spring 2015, and spring 2017), encompassing the Classroom + Child Outcomes Core and the Classroom Core, with Plus studies being part of those waves or occurring at other time points to be determined. At the program level, the FACES 2014–2018 sample is larger than the FACES 2009 sample (180 programs rather than 60), allowing for more powerful analysis of program quality. At the child level, the Classroom + Child Outcomes Core sample will represent all children enrolled in Head Start at baseline, not just those entering the program for the first time, as in FACES 2009. Previously, FACES followed newly enrolled children through one or two years of Head Start and then through the spring of kindergarten; the FACES Core design will focus more on children’s experiences and outcomes during their time in Head Start. FACES 2014–2018 will also differ from FACES 2009 in the mode and length of parent and Head Start staff surveys (more web-based and shortened surveys) to reduce respondent burden and support reporting on key indicators. Additionally, Plus studies have been incorporated in response to policy and programmatic interest (for example, family engagement and program functioning) and to expand the Head Start population studied with a nationally representative sample of Region XI (AI/AN) Head Start programs.

A.16. Plans for Tabulation and Publication and Project Time Schedule

a. Analysis Plan

The analyses will aim to (1) describe Head Start programs and classrooms; (2) describe children and families served by Head Start, including children’s outcomes; (3) relate classroom and program characteristics to classroom quality; and (4) relate family, classroom, and program characteristics to children’s outcomes. Analyses will employ a variety of methods, including cross-sectional and longitudinal approaches, descriptive statistics (means, percentages), simple tests of differences across subgroups and over time (t-tests, chi-square tests), and multivariate analysis (regression analysis, hierarchical linear modeling [HLM]). For all analyses, we will calculate standard errors that take into account multilevel sampling and clustering at each level (program, center, classroom, child) as well as the effects of unequal weighting. We will use analysis weights, taking into account the complex multilevel sample design and nonresponse at each stage.

Cross-sectional Analyses. Descriptive analyses will provide information on characteristics at a single point in time, overall and by various subgroups. For example, for questions on the characteristics of Head Start programs, classrooms, or teachers (for example, average quality of classrooms or current teacher education levels) and the characteristics of Head Start children and families (for example, family characteristics or children’s skills at the beginning of the Head Start year), we will calculate averages (means) and percentages. We will also examine differences in characteristics (for example, children’s outcomes or classroom quality) by various subgroups, using t-tests and chi-square tests to assess the statistical significance of differences between subgroups.
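
As a minimal illustration of these subgroup tests, the sketch below applies an unweighted t-test and chi-square test using hypothetical file and column names (children.csv, dual_language_learner, fall_score, parent_education); production analyses would instead use design-based standard errors that reflect the weights and clustering described in this plan.

import pandas as pd
from scipy import stats

df = pd.read_csv("children.csv")  # hypothetical child-level file

# t-test: do fall assessment scores differ between two subgroups?
dll = df[df["dual_language_learner"] == 1]["fall_score"].dropna()
non_dll = df[df["dual_language_learner"] == 0]["fall_score"].dropna()
t, p = stats.ttest_ind(dll, non_dll, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")

# chi-square: is a categorical characteristic distributed differently by subgroup?
table = pd.crosstab(df["dual_language_learner"], df["parent_education"])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")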

Changes or Trends over Time. Analyses will also examine changes or trends in characteristics over time, overall and by various subgroups. For questions about changes in children’s outcomes during a year of Head Start, we will calculate the average differences in outcomes from fall to spring for all children and for selected subgroups (for example, children who are dual language learners). We will use a similar approach for changes in family characteristics during the year. Outcomes that have been normed on broad populations of preschool-age children (for example, the Woodcock-Johnson III Letter-Word Identification or the Peabody Picture Vocabulary Test, 4th Edition) will be compared with the published norms to judge how Head Start children compare with other children their age in the general population and how they have progressed relative to those norms.
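
The sketch below illustrates the fall-to-spring change analysis and a comparison against published norms, assuming hypothetical column names and standard scores normed to a mean of 100; the actual analyses would apply the study weights and design-based variance estimation.

import pandas as pd
from scipy import stats

# Children with both fall and spring standard scores (hypothetical columns).
df = pd.read_csv("children.csv").dropna(subset=["fall_std_score", "spring_std_score"])

# Paired t-test of spring versus fall standard scores for the same children.
t, p = stats.ttest_rel(df["spring_std_score"], df["fall_std_score"])
gain = (df["spring_std_score"] - df["fall_std_score"]).mean()
print(f"mean gain = {gain:.1f}, t = {t:.2f}, p = {p:.3f}")

# One-sample comparison of spring scores with an assumed norming mean of 100.
t_norm, p_norm = stats.ttest_1samp(df["spring_std_score"], popmean=100.0)
print(f"vs. norm of 100: t = {t_norm:.2f}, p = {p_norm:.3f}")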

To examine changes in classroom or program-level characteristics across years, we will use t-tests and chi-square tests for simple comparisons—one year versus another. We will use trend analysis for examining whether child outcomes or family, classroom, or program characteristics are changing across multiple years and rounds of FACES. To compare children’s outcomes across prior and current FACES cohorts, we will employ a regression framework to examine the relationships between children’s outcomes and the year in which the outcomes were measured, controlling for child and family characteristics.
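
To illustrate the regression framework for cross-cohort comparisons, the sketch below fits a weighted regression of an outcome on cohort year and child and family covariates, with standard errors clustered at the program level. The file and column names (pooled_cohorts.csv, outcome, cohort_year, analysis_weight, program_id, and the covariates) are hypothetical, and clustering at a single level is a simplification of the full multilevel design-based variance estimation.

import pandas as pd
import statsmodels.formula.api as smf

# Stacked child-level records from two or more FACES cohorts (hypothetical file).
df = pd.read_csv("pooled_cohorts.csv")

# Weighted regression of the outcome on cohort year, controlling for child and
# family characteristics; C() treats cohort year as categorical.
model = smf.wls(
    "outcome ~ C(cohort_year) + child_age_months + C(home_language)",
    data=df,
    weights=df["analysis_weight"],
)

# Cluster-robust standard errors at the program level.
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["program_id"]})
print(result.summary())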

Multivariate Analyses. We will use multiple approaches for questions relating characteristics of the classroom, teacher, or program to children’s outcomes at single points in time, changes during a year in Head Start, or relationships among characteristics of classrooms, teachers, programs, and classroom quality. Many of the questions can be addressed by estimating hierarchical linear models that take into account that children are nested within classrooms that are nested within centers within programs. Analyses examining whether there are policy-relevant thresholds or cut points of classroom quality will also use HLMs to account for the clustering of children within classrooms and of classrooms within programs.
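
A minimal sketch of such a model, using Python’s statsmodels with hypothetical column names, fits a two-level random-intercept model (children nested in classrooms); the production HLMs would add the center and program levels and incorporate the analysis weights.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("children.csv")  # hypothetical child-level file

# Random intercept for classroom; classroom quality enters as a
# classroom-level (level-2) predictor of the child outcome.
model = smf.mixedlm(
    "spring_score ~ fall_score + classroom_quality + child_age_months",
    data=df,
    groups=df["classroom_id"],
)
result = model.fit()
print(result.summary())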

For the first time, children returning for a second year of Head Start will be included in the FACES sample, so gains for children in their second year can now be measured directly. Gains across two years in the program, however, will be synthetically estimated by piecing together the gains of first- and second-year children, assuming that two groups of children in one program year can adequately represent one group of children in two consecutive years of the Head Start program. To the extent this assumption does not hold, it can be addressed in the analysis by controlling for key covariates. We can also devise a weighting scheme that models, based on children’s characteristics, which children are likely to stay in Head Start for a second year.

Plus Topical Module Analysis. Interviews with parents and Head Start staff for the family engagement Plus study will be conducted using semi-structured paper-and-pencil guides and audio recorded for transcription and later coding. Analysis of the collected data will involve qualitative coding for themes or patterns, as well as descriptive analysis of survey data. Analyses will be conducted to identify themes and patterns overall and for key subgroups. Data will be coded by trained staff using ATLAS.ti.

Plus Pilot Survey Analysis. In spring 2015, the 5E-Early Ed educator survey data collected through FACES were used to test the measurement characteristics of this set of new measures. We used Rasch analysis to assess the reliability of the measures and modified the set of items and the response categories to maximize reliability (0.80 or higher) while keeping the set of items as small as possible. We examined item fit statistics (calculated as aggregations of individual residuals across people within items) to determine how well the survey responses fit the Rasch model; individual item fit statistics greater than about 1.3 indicate possible multidimensionality or another violation of the principles of good measurement. We calculated item difficulties to confirm that the order of the items agrees with our understanding of the concept. The data were analyzed to identify areas for improvement, such as misfitting items and poorly constructed response scales. We also examined differential item functioning among subgroups of teachers to verify that the items are uniformly applicable across all groups (for example, by age or teaching experience). The developers’ analyses of FACES data, in combination with the developers’ other local data, demonstrated overall acceptable reliability for subscales. These analyses also identified possible improvements, including revision, deletion, or addition of items for some subscales (Ehrlich et al. 2016), and informed changes for the next version of the instrument currently being tested by the developer.
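
The item fit statistics mentioned above (infit and outfit mean squares) can be computed from the Rasch model’s expected scores and response variances. The sketch below assumes person abilities and item difficulties have already been estimated and uses simulated data; values near 1.0 indicate good fit, and values above roughly 1.3 flag possible misfit.

import numpy as np

def rasch_fit_stats(X, theta, b):
    """X: (n_persons, n_items) 0/1 responses; theta: abilities; b: difficulties."""
    P = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))  # expected scores
    W = P * (1.0 - P)                                 # model variance per response
    resid_sq = (X - P) ** 2
    outfit = (resid_sq / W).mean(axis=0)              # mean squared standardized residual
    infit = resid_sq.sum(axis=0) / W.sum(axis=0)      # information-weighted fit
    return infit, outfit

# Simulate responses that actually follow the Rasch model (placeholder data).
rng = np.random.default_rng(0)
theta = rng.normal(size=500)
b = np.linspace(-1.5, 1.5, 10)
P = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
X = (rng.random((500, 10)) < P).astype(float)

infit, outfit = rasch_fit_stats(X, theta, b)
print(np.round(infit, 2))  # near 1.0 for well-fitting items; > ~1.3 flags misfit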

AI/AN FACES Plus Study Analysis. As with the Classroom + Child Outcomes Core, the analyses will aim to describe children and families served by Region XI Head Start, including children’s outcomes, and to relate family, classroom, and program characteristics to children’s outcomes. Analyses will employ a variety of methods, including cross-sectional and longitudinal approaches and descriptive statistics (means, percentages). Analyses will be conducted to identify patterns overall and for key subgroups (for example, by age, gender, or geographic region, or for AI/AN children only) through simple tests of differences across subgroups and over time (t-tests, chi-square tests) and multivariate analysis. Additionally, we will explore comparisons between children in Region XI and children in Regions I–X (using the FACES fall 2014–spring 2015 data). For all analyses, we will calculate standard errors that take into account multilevel sampling and clustering at each level (program, center, classroom, child) as well as the effects of unequal weighting. We will use analysis weights, taking into account the complex multilevel sample design and nonresponse at each stage.

Further, there is interest in understanding how standardized measures of children’s development function with an AI/AN population. To learn about the effectiveness of using these early childhood measures with AI/AN children, the following psychometric information can be examined for the various child assessment scores: reported response range, weighted mean and standard deviation, unweighted mean and standard deviation, and Cronbach’s alpha. Additional psychometric analyses may explore whether the items perform the same way for AI/AN children in Region XI as for the children participating in FACES (Regions I–X). For example, analysis may entail examining the difficulty of each item and its position relative to other items (most of these measures assume the items increase in difficulty).
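
For illustration, the sketch below computes the descriptive psychometrics listed above (weighted and unweighted means and standard deviations, and Cronbach’s alpha) on simulated placeholder data; the variable names and data are hypothetical.

import numpy as np

def weighted_mean_sd(x, w):
    """Weighted mean and weighted standard deviation."""
    mean = np.average(x, weights=w)
    sd = np.sqrt(np.average((x - mean) ** 2, weights=w))
    return mean, sd

def cronbach_alpha(items):
    """items: (n_children, n_items) array of item scores, no missing values."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Simulate correlated dichotomous items driven by a latent trait (placeholder).
rng = np.random.default_rng(0)
ability = rng.normal(size=(300, 1))
difficulty = np.linspace(-1.0, 1.0, 12)
p_correct = 1.0 / (1.0 + np.exp(-(ability - difficulty)))
items = (rng.random((300, 12)) < p_correct).astype(float)

scores = items.sum(axis=1)
weights = rng.uniform(0.5, 2.0, size=300)  # placeholder analysis weights
print("unweighted:", scores.mean(), scores.std(ddof=1))
print("weighted:", weighted_mean_sd(scores, weights))
print(f"alpha = {cronbach_alpha(items):.2f}")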

Spring 2017 Plus Topical Module Analysis. Descriptive analyses for survey items collected from the Spring 2017 program functioning modules (directors’ planning for new Head Start program performance standards and teachers’ report on the 5E-Early Ed) will provide information on characteristics at a single point in time, overall and by various subgroups, such as program auspice and size. We will calculate averages (means) and percentages. We will also examine differences in characteristics, by various subgroups. We will calculate averages and percentages, and use t‑tests and chi-square tests to assess the statistical significance of differences between subgroups.

b. Time Schedule and Publications

We expect to develop the following products after each round of Core study data collection (fall 2014, spring 2015, and spring 2017): (1) a set of descriptive tables on key indicators, (2) a technical report, (3) up to three 10- to 12-page research spotlights, and (4) up to three 1- to 2-page practice/policy snapshots.

The fall 2014 report will describe the population of children enrolled in Head Start that year. Likely topics include children’s demographic and language characteristics; children’s cognitive, social-emotional, and physical well-being; and family economic well-being at the beginning of the program year. The spring 2015 report will include data on children, teachers, classrooms, and programs. It will include children’s data from the fall and spring to examine development across the program year both for the full sample and for subgroups based on, for example, family background or length of children’s Head Start experience. The spring 2017 report will include data on programs and classrooms as well as from teachers. Using data from the director surveys and the program information report, it will provide a profile of Head Start programs—both structural features and approaches to implementing particular policies and processes. It will present information on classroom quality and teacher practices, summarize quality at the program level, and examine whether classroom quality varies by characteristics of programs, teachers, or classrooms. Classroom + Child Outcomes Core issue briefs will examine topics introduced in the fall 2014 and spring 2015 reports with greater depth or for particular subgroups. Classroom Core issue briefs will focus on specific topics related to program quality and services or on classroom quality over time.

We will also prepare a report focused on family engagement to highlight patterns and themes from the family engagement questions included in (1) the spring 2015 parent supplement survey and (2) interviews with a subsample of Head Start staff and parents. The report will address the study questions and provide descriptive information about what is happening in programs around family engagement and service provision, the background characteristics of FSS, how families and staff work together, and how practices and experiences may differ across families (or staff). Results from the psychometric analyses of the 5E-Early Ed pilot educator survey may be developed into scholarly publications or technical manuals.

For the AI/AN FACES Plus study, a similar set of products would be prepared. The exact topics for a given report or brief will be determined in collaboration with the AI/AN FACES Workgroup but would include descriptions of Region XI programs, classrooms, children, and families.

At the end of each Core study in spring 2015 and spring 2017, we will also produce the following products: (1) public use files and (2) technical reports/user’s guides that detail the study design, analysis methods, nonresponse and nonresponse bias, and the psychometric properties of the measures. We will follow a similar approach to documentation and reporting for Plus studies. We will integrate the documents and data from the Plus studies with the Core documents and data when they occur in the same data collection period.16 The AI/AN FACES Plus study will have its own public use file and user’s guide produced at the end of the Plus study in spring 2016.

A.17. Reason(s) Display of OMB Expiration Date is Inappropriate

The OMB number and expiration date will be displayed at the top of the cover page or first Web page for each instrument used in the study. For CATI or CAPI instruments, we will display this information on the introduction screens.

A.18. Exceptions to Certification for Paperwork Reduction Act Submissions

No exceptions are necessary for this data collection.


REFERENCES

Administration on Children, Youth, and Families. “Charting our Progress: Development of the Head Start Program Performance Measures.” Washington, DC: U.S. Department of Health and Human Services, 1995.

Ehrlich, S. B., D. M. Pacchiano, A. G. Stein, and S. Luppescu. “Essential Organizational Supports for Early Education: The Development of a New Survey Tool to Measure Organizational Conditions.” Chicago, IL: University of Chicago Consortium on School Research and Ounce of Prevention Fund, 2016.

Love, John M., Louisa B. Tarullo, Helen Raikes, and Rachel Chazan-Cohen. “Head Start: What Do We Know About Its Effectiveness? What Do We Need to Know?” In Blackwell Handbook of Early Childhood Development, edited by Kathleen McCartney and Deborah Phillips. Malden, MA: Blackwell Publishing, 2006.

Moiduddin, Emily. “A Portrait of Head Start Program and Center Leaders.” Presentation at the National Research Conference on Early Childhood, Washington, DC, July 2016.

Moiduddin, Emily, Julia Lyskawa, Louisa Tarullo, Jerry West, and Elizabeth Cavadel. “FACES Redesign: Stakeholder Input on Information Needs.” Final report submitted to the Office of Planning, Research and Evaluation, Administration for Children and Families, U.S. Department of Health and Human Services. Washington, DC: Mathematica Policy Research, January 19, 2012.

1 The fall 2014 approval included spring 2015 data collection for child assessments and teacher child reports (TCRs).

2 One program’s visit for selecting classrooms and children was delayed until spring 2016 in order to complete local tribal approval processes.

3 The fall 2015 approval included spring 2016 data collection for child assessments and teacher child reports (TCRs).

4 Head Start children will be compared to publisher normative data for same-age children.

5 FACES recruitment materials, approved in April 2014, have been revised to remove references to past data collection activities to maintain focus on the current data collection activities for spring 2017. Revised materials can be found in Appendix Q and Attachments 25 and 26.

6 The FES will ask Head Start staff (typically the On-Site Coordinator) for a list of all Head Start-funded classrooms and for each classroom, the teacher’s first and last names, the classroom session type (morning, afternoon, full day, or home visitor), and the number of Head Start children enrolled.

7 In the spring waves of data collection, classroom observations will assess the quality of equipment, materials, teacher-child interactions, and instructional practices in the classroom. No burden is associated with the observation, and thus we will not discuss it further in this package; see Appendix D for the components of the classroom observation.

8 AI/AN FACES instruments are based on the Core instruments, with some modifications. See Appendix L for more information.

9 The FES asked Head Start staff (typically the On-Site Coordinator) for a list of all Head Start-funded classrooms and for each classroom, the teacher’s first and last names, the classroom session type (morning, afternoon, full day, or home visitor), and the number of Head Start children enrolled.

10 In spring 2016 data collection, classroom observations were conducted as done in the Core in spring 2015 as noted in the prior section.

11 FES visit procedures and materials (Attachment 1) will remain unchanged from spring 2015 for spring 2017 data collection.

12 As response rates decrease, the risk for nonresponse bias for an estimate increases if nonrespondents would have responded differently from respondents. Bias usually cannot be directly measured; in this case, however, we can do so. We have key outcomes (outcome data from the child assessments) for nearly all sampled children, so we examined what happens to estimates of those outcomes with and without children whose parents completed the parent survey.

13 Although there is no rule of thumb for how large a bias is acceptable, the larger it is, the more caution is merited in analysis. In a modeling context, potential bias due to nonresponse can be mitigated by controlling for any possibly problematic variables in an analysis. For analyses that require a completed fall parent survey, a conservative approach would be to control for teacher-reported child disability status in the fall, which was more likely to be present for respondents. For analyses that require a completed spring parent survey, one could control for whether the Head Start program was under public school auspices (as reported on the Head Start PIR), from which parents were more likely to respond.

14 We have no information beyond the program level (for example, centers, classrooms, children) for nonresponding programs.

15 The total costs include modifications for previously approved data collection efforts.

16 The 5E-Early Ed pilot educator survey conducted in spring 2015 will not be included on such files.

