OMB: 0970-0508




Variations in Implementation of Quality Interventions (VIQI)



OMB Information Collection Request

New Collection




Supporting Statement

Part A

Original Submission: November 2017

Updated as of April 2018


Submitted By:

Office of Planning, Research, and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services


4th Floor, Mary E. Switzer Building

330 C Street, SW

Washington, D.C. 20201


Project Officers:


Ivelisse Martinez-Beck

Amy Madigan



A1. Necessity for the Data Collection

The Administration for Children and Families (ACF) at the U.S. Department of Health and Human Services (HHS), Office of Planning, Research, and Evaluation (OPRE), has launched the Variations in Implementation of Quality Interventions (VIQI): Examining the Quality-Child Outcomes Relationship in Child Care and Early Education Project. VIQI is a large-scale experimental study that aims to inform policymakers, practitioners, and stakeholders about effective ways to support the quality and effectiveness of early care and education (ECE) centers in promoting young children’s learning and development. It will build rigorous evidence to: 1) identify dimensions of quality within ECE settings that are key levers for promoting children’s outcomes; 2) determine what levels of quality are necessary to successfully support children’s developmental gains; 3) identify drivers that facilitate and inhibit successful implementation of interventions aimed at strengthening quality; and 4) understand how these relationships vary across different ECE settings, staff, and children – all noted gaps in the knowledge base guiding policy, investments, and practice in the ECE field. VIQI is being conducted by MDRC in partnership with Abt Associates, the Frank Porter Graham Child Development Institute, and MEF Associates.


Background

Prompted by converging evidence about the importance of early childhood for creating a foundation for lifelong success, and by concern that children from low-income and racially and ethnically diverse families tend to face greater risk of poorer outcomes than their higher-income peers, public support and government investments in ECE are at an all-time high. The literature and theory point to classroom quality – the quality of children’s learning opportunities and experiences in the classroom – as potentially influential for promoting child outcomes. Yet there is considerable variation in the overall quality of ECE services, with instructional quality – a hypothesized key driver of children’s gains – often being low across ECE programs nationally despite a focus on quality improvement at the national, state, and local levels. Further, these relationships may vary for children of different ages. Indeed, there are still many open questions about how best to design and target investments to ensure that children, particularly low-income children, receive and benefit from high-quality ECE programming on a large scale.


There is a growing, but imperfect, knowledge base about which dimensions of quality are most important to strengthen and what levels of quality need to be achieved to promote child outcomes across ECE settings. The ECE literature has identified several basic dimensions of classroom quality – such as structural, process, and instructional quality – that are hypothesized to promote child outcomes. Nonexperimental evidence portrays an intriguing pattern of correlational findings suggesting that quality may need to reach certain levels before effects on child outcomes become evident, and that different dimensions of quality may interact in synergistic ways to affect child outcomes. But existing evidence has not pinpointed the exact levels that are consistently linked with child outcomes. Further, there is relatively little causal evidence showing that efforts to strengthen ECE quality will yield improvements in child outcomes. Without such rigorous evidence, it is difficult to draw policy and practice implications. What the field needs is a stronger causal evidence base that provides a better understanding of the quality-child outcomes relationship, the dimensions of quality most related to child outcomes, and the program and classroom factors that aid delivery of quality teaching and caregiving in ECE settings.


VIQI aims to fill a particular gap in the ECE literature by conducting a rigorous experimental study of two promising interventions – each consisting of curricular and professional development supports and targeting different dimensions of classroom quality – to build evidence about the effectiveness of the interventions and to investigate the relationship between classroom quality (and each of its dimensions) and children’s outcomes in mixed-age ECE classrooms that serve both three- and four-year-old children. VIQI focuses in particular on three-year-old children served in mixed-age ECE classrooms, because much of the ECE literature to date has centered on four-year-old children. Below we situate the study design within the broader literature about the quality of ECE and child outcomes.


Legal or Administrative Requirements that Necessitate the Collection

There are no legal or administrative requirements that necessitate the collection. ACF is undertaking the collection at the discretion of the agency.


A2. Purpose of Survey and Data Collection Procedures

Overview of Purpose and Approach


To achieve these aims, the VIQI Project will partner with community-based child care and Head Start programs and centers that serve 3- and 4-year-old children. VIQI will be conducted in two phases (see Table 1 for an overview of the study phases). The first phase is a year-long pilot study of two curricular and professional development models. The information generated from the pilot study will be used to refine the design and data collection instruments for the second phase of the project. The second phase is a year-long impact evaluation and process study that tests two curricular and professional development models aimed at strengthening teacher practices, the quality of classroom processes, and children’s outcomes.


The curricular and professional development models were selected based on prior evidence and theories of change underscoring their potential for generating impacts on child outcomes – and differential impacts on different dimensions of quality – on a large scale, allowing us to rigorously examine the nature of the quality-to-child outcomes relationship.


The pilot study will take place in about 40 community-based and Head Start ECE centers (evenly split, to the extent possible) located in about three metropolitan areas in the United States. The impact evaluation and process study will take place in about 165 community-based and Head Start ECE centers spread across seven different metropolitan areas in the United States.


To rigorously test the extent to which the curricular and professional development models affect different dimensions of quality and child outcomes, the VIQI project will use a 3-group experimental design in both the pilot study and the impact evaluation and process study, in which the initial quality and other characteristics of ECE centers are measured. In the impact evaluation and process study, the centers will be stratified based on selected information collected – setting type (i.e., Head Start vs. community-based ECE centers) and initial levels of quality – and randomly assigned either to one of the intervention conditions, where they will be offered different curricular and professional development supports aimed at strengthening the quality of classroom and teacher practices, or to a business-as-usual comparison condition.


In the pilot study, about 40 centers in about three metropolitan areas will participate in the VIQI Project. Information about center and staff characteristics and classroom and teacher practices will be collected: 1) to recruit and randomly assign centers; 2) to describe how the different interventions are implemented and experienced by centers and teachers; 3) to assess the extent to which the interventions are implemented with fidelity, including identifying challenges and barriers to implementing them with fidelity; 4) to assess the extent to which our data collection strategies and measures appropriately capture key constructs of interest; 5) to explore changes over time in different dimensions of quality for each of the interventions, to gauge the interventions’ potential for achieving sufficiently large impacts on quality in the impact evaluation and process study; and 6) to describe the characteristics of families and children served by centers participating in the pilot study. These insights will indicate whether it is possible to generate sufficiently large impacts on quality – and, in turn, child outcomes – to address the guiding questions of the impact evaluation and process study, and what adjustments to the activities and supports provided to install the interventions may be necessary. In addition, information about the background characteristics of all consented families and children served in the centers will be collected in the pilot. We may also collect measures of children’s skills at the beginning and end of the pilot study for a subset of children in these centers. This information will then be used to adjust and refine the research design and measures for the impact evaluation and process study the following year.


In the impact evaluation and process study, about 165 centers in seven metropolitan areas will participate in the VIQI project. Information about center and staff characteristics and classroom and teacher practices will be collected: 1) to recruit, stratify, and randomly assign centers; 2) to identify subgroups of interest; 3) to describe how the interventions are implemented and experienced by centers and teachers; 4) to document the treatment differentials across research conditions; and 5) to assess the impacts of each of the interventions on different dimensions of quality, compared to a business-as-usual comparison condition, for all participating centers in the impact evaluation and separately for subgroups of interest. In addition, information about the background characteristics of families and children served in the centers will be collected, as well as measures of children’s skills at the beginning and end of the year-long impact evaluation for a subset of children in these centers. This information will also be used: 1) to define subgroups of interest based on family and child characteristics; and 2) to assess the impacts of each of the interventions on children’s skills for all participating centers in the impact evaluation and separately for subgroups of interest. Lastly, the information on quality, teacher practices, and children’s skills, and the impacts on these outcomes, will be used in a set of analyses that rigorously examine the nature of the quality-to-child outcomes relationship for all participating centers in the impact evaluation and separately for subgroups of interest.






Table 1. Overview of the VIQI Pilot Study, Impact Evaluation and Process Study

Goals
  Pilot Study: Refine and assess the feasibility of the Impact Evaluation and Process Study designs; anticipate and address implementation barriers; document the potential treatment contrast and counterfactual conditions; pilot and streamline measures; describe the participating centers and classrooms, and the children served, during the pilot study; explore the extent to which the interventions have the potential to generate differences in dimensions of quality (and possibly children’s outcomes, if the interventions are implemented with strong enough fidelity to warrant doing so).
  Impact Evaluation and Process Study: Build rigorous evidence; describe the participating centers and classrooms, and the children served, during the impact evaluation and process study; determine the effects of the interventions on quality and children; examine the effects of quality on children; explore subgroups to inform when and for whom the interventions are more or less effective; document counterfactual conditions and treatment contrast; document implementation fidelity and its drivers.

Timing of data collection instruments
  Pilot Study: Q1 2018 (or upon receipt of approval of this OMB package) - Q2 2019
  Impact Evaluation and Process Study: Q3 2019 - Q2 2021

Metropolitan areas
  Pilot Study: About 3 cities
  Impact Evaluation and Process Study: About 7 cities (including pilot cities)

Sample
  Pilot Study: 40 centers; 120 classrooms (3 classrooms/center); 480 3-year-olds (4 per classroom); split across CBO and HS
  Impact Evaluation and Process Study: 165 centers; 495 classrooms (3 classrooms/center); 1,980 3-year-olds (4 per classroom); split across CBO and HS; stratified by initial quality

Duration of implementation of interventions
  Pilot Study: 1 academic year (Q3 2018 - Q2 2019)
  Impact Evaluation and Process Study: 1 academic year (Q3 2020 - Q2 2021)

Conditions
  Pilot Study: 3 conditions: centers randomly assigned within each locality, across CBO and HS, to one of 2 interventions with PD or to a BAU control condition; 15 centers (45 classrooms) in each intervention condition and 10 centers (30 classrooms) in the BAU control condition
  Impact Evaluation and Process Study: 3 conditions: centers randomly assigned within each locality, across CBO and HS and stratified by initial quality, to one of 2 interventions with PD or to a BAU control condition; 55 centers (165 classrooms) in each condition (sample split 33:33:33 across conditions in the 3-group design)

Data sources (identical across the two phases)
  • Instruments for Screening and Recruitment of ECE Centers (protocols for phone calls and in-person visits for landscaping, screening, and recruitment of centers)
  • Baseline Instruments (administrator, teacher/assistant teacher, and coach surveys; two time points of classroom observations; parent/guardian information form for children served in the center; child assessments)
  • Follow-up Instruments (administrator, teacher/assistant teacher, and coach surveys; three time points of classroom observations; child assessments; teacher reports on children)
  • Fidelity of Implementation Instruments (weekly teacher/assistant teacher logs; bi-weekly coach logs in intervention conditions only; fidelity observations; one-on-one or small-group interviews with administrators, coaches, and teachers/assistant teachers)


The study received a generic OMB clearance on 02/25/2017 to gather information from staff responsible for Head Start and child care programs on existing services and populations served to better understand the landscape of early care and education programs and to aid in the refinement of the study design (OMB #0970-0356). The current request covers all of the data collection activities for the pilot study, impact evaluation, and process study. The data collection activities will include the following:


  1. Instruments for Screening and Recruitment of ECE Centers, which will be used in the pilot study, impact evaluation, and process study to assess ECE centers’ eligibility, to inform the sampling strategy, and to recruit ECE centers to participate in the VIQI Project;

  2. Baseline Instruments for the Pilot Study, Impact Evaluation, and Process Study, which will be used to collect background information about centers, classrooms, center staff, and the families and children served in the centers. All of these instruments will be administered at the beginning of the pilot study, impact evaluation, and process study;

  3. Follow-Up Instruments for the Pilot Study, Impact Evaluation, and Process Study, which will be used to document how centers, classrooms, teachers, and children changed and to assess the impacts of each of the interventions over the course of the pilot study, impact evaluation, and process study. All of these instruments will be administered at the end of the pilot study, impact evaluation, and process study; and

  4. Fidelity of Implementation Instruments for the Pilot Study and Process Study, which will be used to document administrators’, coaches’, teachers’, and assistant teachers’ experiences with the curricular and professional development models they are being trained on and are delivering, to document treatment differentials across research conditions, and to provide context for interpreting the findings of the impact evaluation.


The data collection activities described in this request are intended to obtain complementary, but not overlapping, information. Each of the planned data sources will gather unique and crucial information to capture implementation of the interventions, the drivers of implementation, and the differential impacts of the interventions on children’s learning experiences in ECE classrooms and their developmental outcomes, in order to inform the quality-to-child outcomes relationship. In designing the data collection instruments, the study team has aimed to minimize duplication of information, except where multiple perspectives provide different lenses and insights into the constructs and processes of interest. In all cases, attention has been paid to leveraging existing or administrative data sources whenever possible. However, as is often the case in early childhood education settings, there are few or no existing administrative data sources that can inform the constructs and processes of interest. Each of the planned data collection activities under this request is therefore needed to gather the information required to achieve the research aims and goals of the VIQI Project. Detailed information for each data collection activity is provided in the section: Universe of Data Collection.



Data collection timeline


The timeline of the two phases of the VIQI Project and planned data collection activities is shown in Table 1, and more details are provided below in Section A16. The Pilot Study is scheduled to begin in Winter 2018 (or following OMB approval of this request) and end in Spring 2019. The activities are as follows:


  • Starting Winter 2018 (or following OMB approval of this request), VIQI will begin screening potential centers to assess their eligibility against the sampling criteria for the Pilot Study, until the targeted number of centers (about 40) has been successfully recruited;


  • Beginning in Fall 2018, Baseline Instruments will be collected, including self-administered surveys distributed to center administrators (e.g., directors or executive directors), lead and assistant teachers, and coaches serving the participating centers, as well as two time points of classroom observations. In addition, self-administered parent/guardian information forms will be distributed to parents/guardians of children served in the participating classrooms and centers, and a subset of these children may be asked to complete a set of direct child assessments. The baseline data collection will be conducted in Fall 2018;


  • Beginning in Fall 2018, Fidelity of Implementation Instruments will be collected. This will include collection of logs completed by lead teachers and assistant teachers, as well as logs completed by coaches. Administrators, lead teachers, assistant teachers, and coaches will also be asked to participate in semi-structured interviews conducted in small group or one-on-one format in Winter 2019 to talk about their experiences installing the interventions and completing the data collection instruments. A subset of classrooms will be observed by an external observer using an implementation fidelity observation protocol to document their fidelity of implementation and the treatment contrast in teaching practices across different conditions; and


  • Beginning in Winter 2019 and ending in Spring 2019, Follow-Up Instruments will be collected. This will consist of classroom observations at three points in time in the Winter and/or Spring, as well as self-administered surveys collected from administrators, lead teachers, assistant teachers, and coaches in Spring. In addition, a subset of children being served in participating centers may be asked to complete a set of direct child assessments and lead teachers will be asked to complete reports on those selected children in Spring.


The Impact Evaluation and Process Study are scheduled to begin in Summer/Fall 2019 and end in Spring 2021. However, the timing and design of the Impact Evaluation and Process Study will be finalized based upon learnings gained from the Pilot Study. These activities are as follows:


  • Starting Summer/Fall 2019, VIQI will begin screening potential centers to assess their eligibility for meeting the sampling criteria for the Impact Evaluation and Process Study until all 165 centers are successfully recruited;


  • Upon centers being recruited into the VIQI Project, Baseline Instruments will be collected. This battery of instruments includes self-administered surveys distributed to center administrators, lead and assistant teachers, and coaches serving the participating centers and two time points of classroom observations. These baseline data collection activities will begin in Winter 2020 and end in Fall 2020, with the majority of data being collected prior to random assignment. In addition, in Fall 2020, self-administered parent/guardian information forms will be distributed to parents/guardians of children being served in the participating centers and a subset of these children will be asked to complete a set of direct child assessments.


  • Beginning in Fall 2020, Fidelity of Implementation Instruments will be collected. This will include collection of logs completed by lead teachers and assistant teachers, as well as logs completed by coaches. Administrators, lead teachers, assistant teachers, and coaches will also be asked to participate in semi-structured interviews conducted in small group or one-on-one format in Winter 2021 to talk about their experiences installing the interventions and completing the data collection instruments. A subset of classrooms will be observed to document their fidelity of implementation and the treatment contrast in teaching practices across different conditions; and,


  • Beginning in Winter 2021 and ending in Spring 2021, Follow-Up Instruments will be collected. This will consist of classroom observations at three points in time in the Winter and/or Spring, and self-administered surveys collected from administrators, lead teachers, assistant teachers, and coaches in Spring. In addition, a subset of children being served in participating centers will be asked to complete a set of direct child assessments and lead teachers will be asked to complete reports on those selected children in Spring.


Research Questions


The VIQI project will aim to address a set of fundamental questions for the early care and education field across several key areas. The specific questions vary with the phase of the VIQI project, as follows:


Pilot Study Research Questions

  • Did screening and recruitment protocols operate as expected in recruiting the targeted group of study participants during the Pilot Study? What modifications to the screening and recruitment protocols are necessary for the Impact Evaluation and Process Study?

  • Did data collection protocols and procedures operate as expected during the Pilot Study? What modifications to data collection procedures are necessary for the Impact Evaluation and Process Study?

  • To what extent did the measures operate as expected during the Pilot Study? What modifications to the measures are necessary for the Impact Evaluation and the Process Study?

  • How were the interventions implemented during the Pilot Study? How were the critical components of the interventions that are hypothesized to drive effects of the interventions on quality implemented during the Pilot Study? What successes and challenges were encountered for each research condition over the course of the Pilot Study? What is the readiness and capacity of the developers to support the installation of the interventions during the pilot year? What modifications to the protocols and procedures for installing the interventions are necessary for the Impact Evaluation and Process Study?

  • Did quality improve during the pilot year in centers implementing the interventions? What do these improvements in quality suggest about the likelihood of achieving impacts on quality in the Impact Evaluation and Process Study that are sufficient in magnitude to assess the nature of the quality-child outcomes relationship?

  • Who are the families and children being served by the centers participating in the Pilot Study? What are the potential effects of each intervention on children’s outcomes?

  • Based upon insights from the Pilot Study, what adjustments to the study design and research questions for the Impact Evaluation and the Process Study are recommended?



Impact Evaluation Research Questions

  • What are the effects of the interventions on different dimensions of quality, teacher, and child outcomes? For whom and under what circumstances are the interventions more or less effective?

  • What are the causal effects of different dimensions of quality on children’s outcomes?

  • Are there thresholds in the effects of quality on child outcomes?

  • Do the effects of quality on child outcomes differ, depending on child, staff and center characteristics, including centers that vary in their initial levels of quality?









Process Study Research Questions

  • What are the characteristics of the participants at the center, staff, and child levels in the Impact Evaluation? Which of these characteristics are drivers of fidelity of implementation? How do these drivers relate with each other?

  • What are the implementation systems (e.g., professional development, training, coaching, assessments) that support the delivery of the interventions in classrooms? How much variation is there in participation in these supports? What drivers seem to support or inhibit participation?

  • To what degree are the interventions delivered in the classrooms as intended? How much variation is there in fidelity of implementation of the interventions? What drivers seem to facilitate or inhibit successful implementation and fidelity to the intended intervention model(s)?

  • What is the relative treatment contrast achieved in teacher practices targeted by the intervention(s)?



Brief Review of Scientific Literature

Below, we situate the study design within the broader literature on what we know – and what we still need to know – about how to deliver quality early care and education at scale to produce positive impacts for young children, particularly those from low-income or disadvantaged backgrounds.


Impact of ECE on children’s outcomes. As ECE policy has come to the forefront, healthy debate has ensued about how best to ensure that investments reliably and effectively support young children. Some research shows that ECE produces substantial impacts on children’s early learning and longer-term outcomes (Heckman et al., 2013; Yoshikawa et al., 2013). Seminal studies, such as the often-cited HighScope Perry Preschool and Abecedarian projects, show that some model, high-quality, intensive programs can have large and lasting impacts for disadvantaged children, with returns of $4 to $10 in benefits per dollar spent, by preventing later risky behavior and boosting academic and labor market success (Currie, 2001; Heckman et al., 2010; Masse & Barnett, 2002). In the current context, several recent large-scale evaluations of publicly funded ECE programs have also produced positive short-term effects on a range of children’s early outcomes (Burchinal et al., 2015), yet these effects are generally smaller in magnitude than those of the earlier studies. Thus, despite evidence of positive effects in earlier studies, the research does not provide clear guidance about how to design, target, and implement public investments in ECE programming to ensure that young children benefit from the programs on a large scale, particularly across the mix of settings that provide such services.


Relationship between classroom quality and child outcomes. The ECE field has broadly defined classroom quality along several dimensions:


  1. Structural quality features, or how programs and classrooms are designed and configured, as well as the characteristics of classrooms and the staff that work directly with children (e.g., teacher-student ratios, class size and composition, teacher compensation, education and training, classroom safety);

  2. Process quality features, or the quality of children’s interactions with teachers and others in the classroom, including the warmth and sensitivity of these interactions and the overall classroom management and organization; and,

  3. Instructional quality features, which include a constellation of developmentally appropriate, intentional teaching practices and organized activities that aim to: create a language-rich environment through discourse and use of vocabulary; promote children’s higher-order skills (e.g., broader world knowledge, language and vocabulary, deeper understanding of concepts within domains, and critical analytic and problem-solving skills) and extend children’s learning; and intentionally align or individualize children’s instructional experiences to their skill level in different developmental domains (e.g., language, literacy, math, science, or social-emotional), with instructional content that is rich and broad in scope and follows a developmental sequence within specific domains.


A review of large-scale studies and evaluations reveals considerable variation in quality (Burchinal et al., 2015); almost all evaluations of Head Start (HS) and pre-k programs report adequate process quality (as measured by the CLASS), but the quality of instructional support is low to moderate (Burchinal et al., 2015; Moiduddin et al., 2012). Further attempts to understand how these different dimensions of quality influence children’s gains have yielded intriguing patterns of findings that raise critical questions about which dimensions matter for young children, at what thresholds, and how they might interact with each other. The literature suggests that structural features set the stage for supporting more positive interactions rich in content and stimulation but do not guarantee such interactions will occur; accordingly, they are not consistently linked with children’s gains (Cassidy et al., 2005; NICHD ECCRN, 2002; Zaslow et al., 2010). Aspects of process quality are hypothesized to be more closely linked with children’s gains. Yet recent studies have found that commonly used measures of quality, which capture a mix of structural and/or process quality features, are not consistently or strongly linked with child outcomes (Burchinal, Kainz & Cai, 2011; Weiland, Ulvestad, Sachs & Yoshikawa, 2013). Some non-experimental work suggests that certain levels or thresholds of quality must be met before associations between quality and child outcomes become evident, while other work underscores the potential interplay between thresholds and the quality dimensions underlying child development (Burchinal et al., 2016). Elsewhere, measures of instruction within specific content areas (such as math or language) show stronger associations with children’s outcomes in those domains than do more global measures of process quality (Burchinal et al., 2016). Emerging findings like these highlight specific instructional “moves” or teacher practices that might be more directly linked with improvements in children’s learning and development. But the evidence to date has not been consistent. Therefore, while the literature suggests that the quality-child outcome relationship may be nonlinear and that there are interactions among quality dimensions, further work is needed to more clearly document the exact nature of these effects on children’s learning and development.
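To make the threshold idea above concrete, the following minimal sketch fits a piecewise-linear ("hockey-stick") regression of a child outcome on an observed quality rating and grid-searches the location of the knot. It is purely illustrative of the kind of nonlinear analysis the literature points to: the 1-7 rating scale, variable names, and simulated data are hypothetical, not VIQI's specified analytic model.

    # Illustrative threshold analysis: outcome ~ 1 + quality + max(quality - knot, 0).
    # All data are simulated; in a real analysis, quality and outcome would come
    # from classroom observations and direct child assessments.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    quality = rng.uniform(1, 7, n)  # hypothetical 1-7 observational rating
    # Simulate a threshold at 5: quality relates to the outcome only above the knot.
    outcome = 0.4 * np.maximum(quality - 5, 0) + rng.normal(0, 1, n)

    def piecewise_fit(knot):
        """Fit the piecewise-linear model at a candidate knot; return (SSE, coefficients)."""
        X = np.column_stack([np.ones(n), quality, np.maximum(quality - knot, 0)])
        beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
        resid = outcome - X @ beta
        return float(resid @ resid), beta

    # Grid-search candidate knots; the best-fitting knot estimates the quality level
    # at which the quality-outcome association changes slope.
    knots = np.linspace(2, 6, 41)
    results = [piecewise_fit(k) for k in knots]
    best = int(np.argmin([sse for sse, _ in results]))
    print(f"estimated threshold: {knots[best]:.2f}; "
          f"slope below: {results[best][1][1]:.2f}; added slope above: {results[best][1][2]:.2f}")

In practice, such a model would also need to account for the clustering of children within classrooms and centers; the sketch omits this for brevity.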


Curricular and professional development models for strengthening the quality of ECE. A review of the ECE literature concludes that one of the most effective strategies for strengthening the quality of ECE is to implement curricular models aligned with ongoing professional development supports such as coaching and training.


Curricula. Implementing explicit, intentional curricula – which provide content, tools, and specific pedagogical guidance for instruction (Stein, Remillard, & Smith, 2007) – is thought to promote children’s gains because it ensures continuing emphasis on particular skills needed for children’s school success, keeps children engaged and challenged, and maintains classroom quality (Klein & Knitzer, 2006). A review of ECE curricula finds three general categories with varying theories of change and levels of effectiveness across a variety of ECE settings, populations, and circumstances:

  • “Whole-child” curricula (e.g., Creative Curriculum, HighScope) provide a range of activities aimed at creating safe, organized, well-managed, warm, and nurturing environments that promote a wide range of child outcomes. Typically, teachers select activities without explicitly focusing on a scope and sequence of learning;

  • Domain-specific curricula, often delivered as curricular enhancements, focus on enhancing instructional quality in a single content area and follow a set scope and sequence where prerequisite skills are taught and mastered before more complex skills are introduced; and,

  • Integrated or interdisciplinary curricula purposefully aim to enhance instructional quality in more than one content area and follow a set scope and sequence, like single domain-specific curricula, where prerequisite skills are taught and mastered before more complex skills are introduced. The content and activities aim to have a high degree of interconnections across areas through the creation of a language-rich environment aimed at reinforcing children’s learning and skill development across domains, but also developing children’s higher-order thinking skills that cut across multiple content and skill areas.

Professional development (PD). A review of the PD literature suggests that targeted PD models can enhance classroom quality and teachers’ practices, especially when well-designed training is followed by in-classroom coaching that helps teachers transfer what they learned in training to their work with children (Joyce & Showers, 1981). Within the PD literature, coaching models typically expect coaches to: 1) build relationships with teachers; 2) observe, model, and advise in the classroom; 3) meet with teachers to discuss classroom practices, provide support and feedback, and assist with problem-solving; and 4) monitor progress toward identified goals. Coaching differs from typical ECE PD, which generally consists of “one-shot” workshops that do not allow for breadth or depth of exploration of a particular topic. In fact, prior studies indicate that training alone is not enough to improve teacher practices over time (Koh & Neuman, 2009; Neuman & Cunningham, 2009). Across the evaluation literature, most curricula with demonstrated effectiveness are coupled with integrated, intensive PD. However, this review also suggests that while such interventions can generate robust impacts on teacher practice, they do not always yield corresponding improvements in child outcomes – suggesting that more evidence is needed to guide which combination of targets (dimensions of quality, content areas, and children’s skills) should be the focus of curricular and PD supports to reliably strengthen the effectiveness of ECE programming at scale.


Challenges to replication at scale and implications for the VIQI project. Lastly, scaling effective, high-quality ECE services can be challenging, in part due to the variety of service systems providing ECE in the U.S. Multiple federal, state, and private funding streams, laws, and regulations oversee and support ECE services delivered through a mix of settings and by teachers with varying education and training. Studies often find that effects are not replicated when an intervention is scaled and tested. Two potential explanations are a decrease in fidelity to the program model when an intervention is scaled up in relatively uncontrolled real-world settings (Hulleman & Cordray, 2009), and changes in the counterfactual context. Regardless of how well an intervention is implemented, programs operating in the “business-as-usual” or counterfactual condition may implement something similar and, subsequently, exhibit similar outcomes. This is particularly relevant in today’s evaluation context. Earlier studies like Abecedarian and Perry Preschool compared intervention groups to control groups that received few, if any, services. Due to ECE growth in the ensuing decades, more recent studies typically compare intervention groups to control groups of children also heavily served by ECE. As found in the Head Start Impact Study and the later Head Start Variation Study, the changing counterfactual condition likely depresses the size of net impacts when interventions are tested at scale (Bloom & Weiland, 2015; U.S. Department of Health & Human Services, 2010). Despite these issues, evaluations rarely collect detailed information on how well the intervention is being delivered, or parallel assessments of key outputs in the counterfactual condition, to understand whether the desired strength of treatment contrast has been achieved. This makes it difficult to know whether inconsistency in impacts is due to problems with fidelity, the intervention’s theory of change, relatively strong counterfactual conditions, or some combination. The implementation research and measurement plans described in this package are designed to meet these needs.


Conceptual framework underlying the VIQI project. The VIQI project aims to leverage two types of variation to build evidence for the field – the variation across settings, staff, and children already evident across the ECE landscape nationally, and experimentally induced variation in quality dimensions. At the core of the VIQI project is a multi-group randomized controlled experimental design in which the effectiveness of two interventions (supported by intensive professional development) targeting different dimensions of quality will be tested across community-based and Head Start centers that vary in their initial levels of quality. In doing so, the project addresses a pressing need in the ECE field for evidence that rigorously unpacks the “black box” of quality by illuminating: 1) the causal effects of different dimensions of quality on child outcomes; 2) the effectiveness of different interventions (combinations of curricular and PD supports) across a variety of ECE settings, populations, and circumstances; and 3) the reasons why the curricular and PD supports are or are not effective at shaping quality and child outcomes, and what factors might drive those results. The VIQI project focuses primarily on Head Start and community-based child care centers, since these are the settings for which the literature on the effects of different dimensions of quality and on the effects of curricular and PD supports is most developed. The project will not focus on home-based or other informal child care settings, such as family-based child care, as the literature provides much less guidance about which aspects of quality and which interventions might be more or less important and effective in those settings.


The conceptual framework (shown in Exhibit 1), developed by taking stock of the ECE evidence base, implementation science, and developmental theory and research, serves as the foundation for the VIQI project and guides our planned approach to the study design, data sources, and measurement plan.


In the middle of the figure, we depict the “black box” of quality – the structural, process, and instructional quality features thought to characterize classrooms’ overall global quality and functioning – that we aim to unpack in the VIQI project. The features are grouped to represent two pathways that are hypothesized to promote child outcomes in ECE programs – one through a combined construct of structural and process quality features and one through instructional quality features. This represents the number of pathways, and in turn dimensions of quality, that can reasonably be teased apart from each other to rigorously unpack the effects of different dimensions of quality on child outcomes, given the operational realities of conducting a study of this magnitude within the project’s existing resources.


To the left of these dimensions of quality in Exhibit 1, two interventions (each a combination of curricula and professional development supports) are shown that are hypothesized to differentially influence these pathways, directly and through a set of associated mechanisms (discussed in more detail below). Intervention Approach A is conceptualized as a whole-child, global intervention approach that is expected to affect children’s experiences in the classrooms primarily through structural and process quality features. The Creative Curriculum has been selected as the intervention that best fits the whole-child, global approach. In contrast, Intervention Approach B is conceptualized as an integrated or interdisciplinary approach with broad scope and explicit sequencing of content that is hypothesized to affect children’s experiences in the classrooms primarily through instructional quality features. Connect4Learning has been selected as the intervention that best fits the integrated or interdisciplinary approach. The dotted lines from each of the intervention approaches to the other dimensions of quality are meant to show how the interventions primarily target one pathway over the other.


Stemming from this, at the core of the VIQI project is a multi-group randomized controlled experimental design in which the effectiveness of the two interventions is tested. The design can be thought of as a test of two interventions that take different approaches to, and focus on different aspects of, quality. Such a test allows us to rigorously compare two contrasting mechanisms for strengthening the quality of children’s experiences in the classrooms, to determine which approach – and in turn, which dimensions of quality – yields the greatest improvements in children’s development.


The planned data collection activities and measures covered under this request are also guided by this conceptual framework, which provides a theory of change for how ECE programs achieve their intended effects on children’s experiences in the classrooms and on child outcomes. Moving from left to right in Exhibit 1, it is hypothesized that multilevel drivers and inputs influence the outputs – the activities and services delivered through installation of the interventions – which lead to a set of shorter-term outcomes both for teachers (in terms of knowledge, beliefs, understanding, and co-teacher collaboration) and for classrooms (in terms of multiple aspects of quality), as well as longer-term outcomes for children (in terms of their cognitive, pre-academic, behavioral, and social-emotional competencies, including both basic skills and higher-order skills). Below we further describe each of these aspects of the conceptual framework.


Implementation drivers. On the far left side of Exhibit 1, we denote the implementation drivers that have been identified in the literature as promoting high-fidelity implementation (Fixsen, Blase, Naoom, & Wallace, 2009; Han & Weiss, 2005; Metz et al., 2014). The conceptual model takes an ecological systems approach—by focusing on various multilevel influences—and utilizes the National Implementation Research Network (NIRN) model (NIRN, 2016) as a base to provide a wider picture of the proximal and distal drivers of implementation. In doing so, we identify three categories of drivers at the organizational level, or in our case, ECE center level. These drivers are conceptualized as interactive processes that will help ECE centers and their staff to support effective and consistent implementation of an intervention as intended, leading to reliable benefits in quality and teacher practice and ultimately in children’s outcomes. They include:


  1. Competency Drivers – This set of drivers posits that the background, experience, attitudes, and knowledge of staff selected to implement, along with the training and coaching provided to them, are essential for changing educator behavior and supporting implementation.

  2. Organization Drivers – This set of drivers assumes that a well-developed infrastructure, including a positive climate and readiness to take on an initiative, is essential for enabling and supporting the competency drivers and implementation.

  3. Leadership Drivers – This set of drivers posits that leadership, including good management and an effective leadership style that can resolve adaptive and technical issues and problems, can help manage change and, in turn, support implementation.

We nest an Implementation System within the center-level drivers. The Implementation System identifies specific processes that are embedded within the competency and organization drivers – namely, existing professional development supports (e.g., training, coaching infrastructure), curricular activities and materials, and data support systems already in place within centers – as these existing capacities and processes are thought to facilitate the installation and implementation of the VIQI interventions.

Because implementation will take place in classrooms within centers, many of which are in turn nested within larger umbrella organizations or grantee agencies (which we refer to as the program administrative level) and within a changing landscape of ECE priorities, policies, and practices at the community, district, state, and federal levels (e.g., Durlak & Dupre, 2008; Han & Weiss, 2005), the figure also depicts a layer of influences within the larger macro-system context. The influence of drivers at these higher levels of the ecological system on implementation is largely thought to be indirect, filtered through the center-level drivers. Therefore, our planned measurement strategy focuses more heavily on the center-level drivers.


Interventions. To the right of the drivers in Exhibit 1, the interventions being tested (i.e., Interventions A and B) are depicted as key implementation inputs within our conceptual framework. In VIQI, each intervention entails a curricular model that includes a particular set of curricular materials and activities, a professional development model that includes teacher training and coaching, and ongoing provision of technical assistance and support – all with the aim of promoting the dimension of quality that the intervention is hypothesized to target.

Outputs. The middle portion of the figure—outputs—represents the extent to which the intervention services are delivered and received as intended, which in VIQI represents implementation of the curricular and professional development models as well as monitoring of that implementation and provision of technical assistance. This box implicitly highlights the importance of fidelity of implementation—delivering the intervention as intended—as a necessary link in the chain for the interventions to achieve the intended effects on quality and children’s outcomes.

Fidelity of implementation is multidimensional and consists of two main aspects: (1) implementation fidelity, or the extent to which the professional development model and other supports are received as intended; and (2) intervention fidelity, or the extent to which the curricular model is delivered by teachers as intended (Hulleman et al., 2012). Typically assessed fidelity constructs in the literature include adherence, which refers to whether teachers conform to the curriculum “protocol” or deliver the component pieces of a curriculum (e.g., whether teachers used the materials that developers intended them to use); dosage, which refers to an index of quantity of delivery such as number of coaching sessions implemented or the length of sessions; quality, which refers to the qualitative aspects of the manner in which the intervention is delivered, or the skill with which facilitators (e.g., trainers, teachers) deliver material and interact with participants (e.g., teachers, children); and participant responsiveness, which refers to the level of involvement displayed by intervention recipients (e.g., were children actively engaged with curricular components) (Dane & Schneider, 1998).
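As a concrete illustration, the sketch below shows one way the four constructs named above (adherence, dosage, quality of delivery, and participant responsiveness) could be scored from weekly teacher-log data. The record layout, field names, and scoring rules are hypothetical and do not represent the actual coding of the VIQI instruments.

    # Hypothetical scoring of the four standard fidelity constructs from weekly logs.
    from dataclasses import dataclass
    from statistics import mean

    @dataclass
    class WeeklyLog:
        activities_delivered: int   # curricular activities taught this week
        activities_planned: int     # activities the model prescribes for the week
        coaching_minutes: int       # minutes of coaching received this week
        delivery_quality: float     # rated skill of delivery, 1-5
        child_engagement: float     # rated engagement of children, 1-5

    def fidelity_summary(logs: list[WeeklyLog]) -> dict[str, float]:
        """Aggregate one classroom's logs into the four fidelity constructs."""
        return {
            # Adherence: share of prescribed components actually delivered
            "adherence": mean(l.activities_delivered / l.activities_planned for l in logs),
            # Dosage: quantity of delivery (here, average weekly coaching minutes)
            "dosage": mean(l.coaching_minutes for l in logs),
            # Quality of delivery: skill with which material is delivered
            "quality": mean(l.delivery_quality for l in logs),
            # Participant responsiveness: involvement of intervention recipients
            "responsiveness": mean(l.child_engagement for l in logs),
        }

    print(fidelity_summary([WeeklyLog(8, 10, 45, 4.0, 3.5),
                            WeeklyLog(9, 10, 30, 4.5, 4.0)]))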

Mechanisms for changing short-term and longer-term outcomes. Continuing from left to right in the conceptual framework, Exhibit 1 then shows the general theory of change stemming from the installation of an intervention like those planned in VIQI: (1) short-term improvements in teacher outcomes, such as their knowledge, beliefs, and relationships with co-teachers; and (2) improvements in different dimensions of classroom quality, which in turn result in (3) increases in children’s competencies. As discussed earlier, two primary pathways are depicted, whereby the two interventions being tested in the VIQI project are expected to have differential impacts on different dimensions of classroom outcomes (i.e., a combined construct of structural and process quality features vs. instructional quality features). The two dimensions of quality are in the same gray box because we view them as interrelated, but not completely overlapping; together, they are thought to characterize classrooms’ overall quality and functioning. Lastly, Exhibit 1 depicts that the two hypothesized pathways are thought to directly shape a range of children’s cognitive, pre-academic, behavioral, and social-emotional competencies.

Study Design

As discussed above, the VIQI project will be conducted in two phases: a year-long pilot study followed by a year-long impact evaluation and process study.

Screening and Recruitment for the Pilot Study, Impact Evaluation and Process Study

The design for each of these phases will involve a screening and recruitment process in which eligible centers will be asked to participate in the respective phase of the VIQI project. The goal for the pilot study is to successfully recruit about 40 centers, and the goal for the impact evaluation and process study is to recruit 165 centers. Although the results of this study are designed to be generalizable to the center-classroom-child combinations eligible for the study, the study will not provide results that are statistically representative of populations of children, classrooms, or centers. Convenience methods and qualitative judgment are necessary to ensure both diversity and feasibility.

The eligibility screening and recruitment process will aim to identify and engage centers in the study that meet the sampling criteria for the VIQI project. To identify the pool of eligible centers for the VIQI project, a staged and tiered screening and recruitment process is planned. This approach will be tested in the pilot study. Information from the pilot study will then be used to adjust and refine the approach for screening and recruitment for the impact evaluation and process study.

In planning for the VIQI project, the team has reviewed publicly available data sources recommended by OPRE, the Office of Child Care, and the Office of Head Start. Yet, these sources are limited in aiding the identification of localities and centers that meet the sampling criteria for the VIQI project. For this reason, the study team will need to contact key informants to gather information to enable us to successfully screen and recruit centers that meet our sampling criteria. A combination of phone calls and in-person visits using semi-structured protocols to ask a series of questions tailored to informants will be used to gather detailed information to inform the recruitment and selection of centers for the respective phase of the study.


A first step in the plan is to gather detailed information about the ECE landscape at the state and local levels to explore whether particular localities will be a good fit for the VIQI project. The list of key informants will be identified through a purposeful, snowball sampling process and information gathered through the generic clearance package, in addition to recommendations from the Contracting Officer’s Representative (COR) for VIQI, organizations recognized as experts in the ECE field, other recommendations for state and local experts within large metropolitan areas, and independent internet searches and reviews of existing, publicly available information (such as reports and websites) about the landscape of ECE programming at the national, state, and local levels. Many of these landscaping activities are currently being conducted under the generic clearance package. From this, we expect to identify promising metropolitan areas that will be targeted for more focused screening and recruitment activities for the pilot study or the impact evaluation and process study, depending upon the phase of the project.


As the list of candidate metropolitan areas is narrowed down, the study team will reach out to key informants at local administrative entities that are connected to large numbers of Head Start and community-based child care centers. In some cases, these entities may be Head Start grantee or delegate agencies that receive funding directly from the Office of Head Start. In others, these entities may be community-based child care oversight agencies that operate or oversee multiple child care centers. Together, we refer to these entities as “umbrella agencies”. Upon obtaining initial screening and eligibility information, the study team will then refine and narrow the list of prospective umbrella agencies and begin outreach to informants at individual centers (as warranted). The study team will continue gathering information to further refine and narrow the list of potential centers in an iterative fashion where we take stock of the information learned from sets of conversations to guide the next set of conversations and contacts.


Our screening and recruitment activities will aim to generate a group of participants that has geographic variation of centers in several metropolitan areas across the United States, but the study participants will not be a probability sample; and, therefore, we will not be able to statistically generalize our findings to a broader population of ECE centers in the United States. However, our extensive landscaping effort of ECE programming conducted under the generic clearance will allow us to describe how the centers recruited into the study fit in the context of the broader ECE landscape across the United States.

Once centers agree to participate in the VIQI project, classrooms will be identified and selected to participate in the pilot study or the impact evaluation and process study. Information gathered from the screening and recruitment activities will be used to inform the selection of classrooms within participating centers. In line with the target populations for the interventions being tested, classrooms serving a mix of 3- and 4-year-old children that provide a full day of Head Start or child care services will be selected. (For the VIQI project, “full-day” means providing early care and education for at least six consecutive hours per day, five days per week, with a consistent lead teacher for that schedule.) We anticipate identifying about 3 classrooms per center that meet these criteria, and we expect that most centers will not have more than 3 classrooms serving a mix of 3- and 4-year-old children. As such, the selected classrooms are expected to closely represent the universe of classrooms serving 3- and 4-year-old children in the centers participating in the VIQI project. We also aim to collect data from all of the administrators, lead and assistant teachers, and coaches serving these classrooms. Therefore, any analysis of data from these classrooms will be generalizable to all of the classrooms serving 3- and 4-year-old children, and related staff, in participating centers. We do not expect the findings to generalize to classrooms and staff in the participating centers that serve children of other combinations of age groupings.
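As a simple illustration, the sketch below encodes the classroom eligibility rule stated above (a mix of 3- and 4-year-olds, at least six consecutive hours per day, five days per week, with a consistent lead teacher). The field names and sample data are hypothetical.

    # Hypothetical encoding of the "full-day, mixed-age" classroom eligibility rule.
    from dataclasses import dataclass

    @dataclass
    class Classroom:
        serves_3s_and_4s: bool       # serves a mix of 3- and 4-year-old children
        consecutive_hours: float     # consecutive hours of care per day
        days_per_week: int
        consistent_lead_teacher: bool

    def is_eligible(c: Classroom) -> bool:
        return (c.serves_3s_and_4s
                and c.consecutive_hours >= 6
                and c.days_per_week >= 5
                and c.consistent_lead_teacher)

    classrooms = [Classroom(True, 6.5, 5, True), Classroom(True, 4.0, 5, True)]
    print([is_eligible(c) for c in classrooms])  # [True, False]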


After centers agree to participate in the respective phase of the VIQI project and eligible classrooms are identified and selected, lead and assistant teachers in these classrooms will be asked to participate in any training and professional development activities related to the installation of the interventions, depending upon the center’s assigned research condition. Potential study participants in each of the centers will be asked to complete a set of data collection activities to address the research questions guiding each phase of the study. At baseline, the study team will administer baseline instruments that gather background information about centers, classrooms, and center staff. The targeted group for these data collection activities consists of all administrators in participating centers, who will be asked to complete an administrator baseline survey; all lead and assistant teachers in classrooms selected to participate in this study, who will be asked to complete a teacher/assistant teacher baseline survey; and all coaches serving the participating centers, who will be asked to complete a coach baseline survey. All participating classrooms will also participate in two time-points of classroom observations at baseline to capture information about the classrooms’ initial quality. A baseline information form will also be administered to parents or guardians of children enrolled in participating ECE centers at the beginning of the pilot study and the impact evaluation and process study. The baseline information form will be used to screen and select eligible children to participate in the protocol for baseline assessments of children’s skills. (A more detailed description of the recruitment and screening process for families and children is provided below.)


Together, the baseline instruments will gather information that will be used to describe the group of participants in the pilot study, impact evaluation and process study. The information will be used to: (a) assess the extent to which screening and recruitment processes in the pilot study, impact evaluation and process study generated participants that met the selection criteria for the VIQI project, and compare the resulting group of participants with the samples of other ECE studies and the broader set of centers across the ECE landscape nationally; (b) stratify the group of participating centers based upon the extent to which they provide Head Start or community-based child care services and their levels of initial quality (in the impact evaluation and process study only); (c) assess the equivalence of research conditions prior to the installation of the interventions, describe subgroups of interest, and serve as covariates in any impact or subsequent analyses aimed at causally estimating the quality-to-child-outcome relationships; and (d) explore the potential influence of drivers in facilitating or inhibiting successful installation and implementation of the interventions in the process study.


After baseline information is collected, a stratified, cluster-based, multi-group random assignment design will be used. In the pilot study, centers will be stratified by locality and then randomly assigned to one of the research conditions. In the impact evaluation and process study, centers will first be stratified by whether they provide center-based child care or Head Start services and by their initial levels of quality, and will then be randomly assigned to one of the research conditions. For the pilot study, each of the participating centers (about 40) will be randomly assigned to one of three groups: a group that receives Intervention 1 (Group 1), a group that receives Intervention 2 (Group 2), or a group that continues to conduct “business as usual” (Control). About three classrooms per center serving 3- and 4-year-old children will be asked to participate in the study. In the pilot study, centers will be unevenly assigned across groups, such that 30 centers install one of the targeted interventions (split evenly across the two interventions) and 10 centers are in a business-as-usual control condition. For the impact evaluation and process study, centers will be randomly assigned in equal proportions to one of the two intervention conditions or the business-as-usual control condition. All of the classrooms serving 3- and 4-year-old children that are selected to participate in the study will be asked to participate in the training and professional development activities prescribed for the assigned research condition.
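To make the assignment procedure concrete, the sketch below illustrates stratified, cluster-based random assignment in Python. It is a minimal illustration only, not the study’s actual assignment code: the function name, center identifiers, strata, and seed are hypothetical, and the real procedure may handle allocation ratios and small strata differently.

```python
# Illustrative sketch of stratified, cluster-based random assignment.
# All names (assign_centers, center IDs, strata) are hypothetical.
import itertools
import random
from collections import defaultdict

def assign_centers(centers, allocation, seed=0):
    """Randomly assign whole centers (clusters) to conditions within strata.

    centers: list of (center_id, stratum) pairs; a stratum might combine
        program type (Head Start vs. child care) and initial quality level.
    allocation: dict mapping condition to its relative share of centers,
        e.g. {"Intervention 1": 3, "Intervention 2": 3, "Control": 2} for
        the pilot's uneven 30/10 split, or equal shares for the evaluation.
    """
    rng = random.Random(seed)
    strata = defaultdict(list)
    for center_id, stratum in centers:
        strata[stratum].append(center_id)

    # Base pattern of condition slots in proportion to the allocation.
    base = [cond for cond, share in allocation.items() for _ in range(share)]

    assignments = {}
    for ids in strata.values():
        # One condition slot per center, preserving the target ratio; shuffle
        # slots and centers independently before pairing them.
        slots = list(itertools.islice(itertools.cycle(base), len(ids)))
        rng.shuffle(slots)
        rng.shuffle(ids)
        assignments.update(zip(ids, slots))
    return assignments

# Example: 40 pilot centers stratified by locality, assigned roughly 3:3:2.
centers = [(f"center_{i:02d}", f"locality_{i % 4}") for i in range(40)]
groups = assign_centers(
    centers, {"Intervention 1": 3, "Intervention 2": 3, "Control": 2})
```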


Over the course of the year of the pilot study and process study, the study team will collect a set of fidelity of implementation instruments across research conditions. All lead and assistant teachers across research conditions will be asked to complete weekly teacher logs. All coaches who support the installation of the interventions will be asked to complete coach logs upon completing coaching sessions with teachers on their caseload. A randomly selected subset of classrooms across research conditions will also be asked to participate in implementation fidelity observations to gather information about the implementation and delivery of the interventions and the treatment differential across research conditions. This subset of classrooms will be selected with an eye towards equal representation of classrooms across all of the research conditions and representation of classrooms in centers that provide Head Start and community-based child care services and high and low initial levels of quality. In the pilot study, this information will be used to (a) describe the implementation of the interventions; (b) identify potential factors that appear to either facilitate or inhibit successful installation of the interventions; (c) revise and finalize the data collection activities and measures that will be included in the impact evaluation and process study; and (d) guide the selection of the piloted interventions that have reasonable prospects for generating impacts on dimensions of quality and child outcomes and will be included in the impact evaluation and process study. In the process study, the information will be used to (a) document how the interventions are delivered and received by staff; (b) inform the degree to which the interventions are implemented with fidelity in line with the intended intervention models; and (c) identify factors at the center, staff and child levels that appear to facilitate or inhibit successful implementation of the intervention models to provide context for interpreting the findings of the impact evaluation.
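As a rough illustration of how such a subset might be drawn, the sketch below samples classrooms evenly across cells defined by research condition and center type. The function name, cell definitions, and per-cell counts are hypothetical stand-ins, not the study’s actual selection procedure.

```python
# Hypothetical sketch of drawing the implementation-fidelity observation
# subset with roughly equal representation across research conditions and
# center types (and, by extension, initial quality levels).
import random

def sample_classrooms(classrooms, per_cell, seed=0):
    """classrooms: list of (classroom_id, condition, center_type) tuples.
    per_cell: how many classrooms to draw from each condition-by-type cell."""
    rng = random.Random(seed)
    cells = {}
    for classroom_id, condition, center_type in classrooms:
        cells.setdefault((condition, center_type), []).append(classroom_id)
    # Draw up to per_cell classrooms at random from each cell.
    return {cell: sorted(rng.sample(ids, min(per_cell, len(ids))))
            for cell, ids in cells.items()}
```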


At follow-up, the study team will administer follow-up instruments to capture how centers, classrooms, teachers and children changed as a function of the interventions over the course of the pilot study, impact evaluation, and process study. All of the instruments will be administered at the end of the respective study phase. The targeted group for these data collection activities consists of all administrators in participating centers, who will be asked to complete an administrator follow-up survey; all lead and assistant teachers in classrooms selected to participate in this study, who will be asked to complete a teacher/assistant teacher follow-up survey; and all coaches serving the participating centers, who will be asked to complete a coach follow-up survey. All participating classrooms will also participate in three time-points of classroom observations at follow-up to capture information about the classrooms’ quality. For the pilot study and the impact evaluation and process study, a protocol for follow-up assessments of children’s skills, as well as teacher reports on children’s social and behavioral skills, will also be collected for the selected group of children in each of the participating classrooms (the sampling approach for children is discussed in the “Sampling” section of B1 in Supporting Statement B).


The follow-up information will be used in the pilot study to (a) identify challenges and barriers to installing the interventions; (b) understand the extent to which the interventions are delivered with fidelity; (c) refine hypotheses about the extent to which the interventions are likely to change targeted dimensions of quality in line with the interventions’ respective theories of change; (d) inform the extent to which the interventions are likely to generate impacts on different dimensions of quality that are sufficient to explore the key questions of the impact evaluation and process study; (e) guide the finalization of procedures and supports used to install the interventions, to help ensure reasonable prospects for generating impacts on dimensions of quality and child outcomes in the impact evaluation and process study; (f) revise and finalize the data collection activities and measures that will be included in the impact evaluation and process study; (g) describe the population of families and children being served in participating classrooms and centers; and (h) assess the potential of each intervention to change child outcomes. For the impact evaluation, the information will be used to (a) describe the settings, classrooms and child outcomes and, for information collected at both time points, how they may have changed from baseline to follow-up; (b) document treatment contrasts across research conditions; (c) estimate the impacts of the interventions on quality and child outcomes for all participating centers and classrooms and subgroups of interest; and (d) estimate the causal effects of quality on child outcomes for all participating centers and classrooms and subgroups of interest.


As mentioned above, the design of the pilot study and impact evaluation and process study will involve recruiting, screening, and selecting a group of families and children who are open to participating in the data collection activities and who meet the study’s eligibility and selection criteria. See the “Sampling” Section of B1 in Supporting Statement B for more details on selection of families and children.


Universe of Data Collection Efforts


The VIQI study will take a multi-method, multi-informant approach to collecting critical and detailed information necessary to address the research questions underlying the VIQI project and to fulfill ACF’s learning agenda. See Exhibit 2 for a matrix detailing the data collection instruments.

Exhibit 2. VIQI Data Collection Instruments

Data Collection Instrument | Data Type
Landscaping protocol with Stakeholder Agencies | Site call/visit
Screening protocol for phone calls | Site call/visit
Protocol for in-person visits for screening and recruitment activities | Site call/visit
Baseline administrator survey | Survey
Baseline teacher/assistant teacher survey | Survey
Baseline coach survey | Survey
Baseline classroom observation protocol | Observation
Baseline parent/guardian information form | Survey
Baseline protocol for child assessments | Direct assessments
Follow-up administrator survey | Survey
Follow-up teacher/assistant teacher survey | Survey
Follow-up coach survey | Survey
Follow-up classroom observation protocol | Observation
Follow-up protocol for child assessments | Direct assessments
Teacher reports to questions about children in classroom | Teacher report
Teacher/assistant teacher log | Log
Coach log | Log
Implementation fidelity observation protocol | Observation
Interview/Focus group protocol | Interview


See Exhibit 3 for a matrix detailing the data constructs of interest that will be collected by the data collection instruments.


Exhibit 3. Matrix of Measurement Constructs


Instruments for Screening and Recruitment of ECE Centers


Attachment A.1 – Landscaping Protocol with Stakeholder Agencies and Related Materials


The VIQI team will conduct a series of phone and in-person discussions with state and local informants, using a semi-structured discussion protocol that is tailored to the expertise and background of the informants and the gaps in the study team’s knowledge base about the ECE landscape in different metropolitan areas. The goal of these discussions is to better understand the landscape and structure of ECE programming and the populations served in different metropolitan areas, current and upcoming initiatives to improve quality, variation in levels of quality, details about state and local implementation of curricula and professional development initiatives, data infrastructure, and the feasibility of conducting the VIQI project. The group for this data collection protocol is expected to consist of Head Start (HS) grantee and community-based child care agency informants. A total of 120 individuals from different metropolitan areas will be asked to participate in these discussions across the pilot study, impact evaluation and process study of the VIQI project. The information is expected to be gathered in a discussion (and follow-up conversations, if needed) lasting a total of approximately 1.5 hours per participant.


Attachment A.2 – Screening Protocol for Phone Calls and Related Materials


The VIQI team will conduct a series of semi-structured phone discussions with staff from multiple organizations (a mix of Head Start grantee agencies, delegate agencies, organizations that operate multiple child care centers, and independent ECE centers). The conversations will be conducted using a semi-structured discussion protocol that is tailored to the expertise and background of the staff person and the gaps in the study team’s knowledge base about the ECE programming structure and services provided by organizations. Grantees and centers will be asked to provide information about the center(s)’ history, enrollment, structure and staffing, children’s demographics, curricula, and teacher professional development. Information collected during screening calls will help determine whether the grantee or center is eligible for an in-person site visit. The group for this data collection protocol is expected to consist of 132 individuals from HS grantee and child care oversight agencies and 336 individuals from Head Start and child care centers across the pilot study, impact evaluation and process study of the VIQI project. (Because we expect to speak with more respondents at the center level than at the grantee or agency level using one protocol, we note two separate burden estimates in Exhibit 5 below.) The information is expected to be gathered in a discussion (and follow-up conversations, if needed) lasting a total of approximately 1.2 to 2 hours per participant.


Attachment A.3 – Protocol for In-person Visits for Screening and Recruitment Activities and Related Materials


A subset of the grantees and centers that complete screening and recruitment phone calls, are interested in participating in the project, and meet preliminary eligibility requirements for study inclusion will receive up to two rounds of in-person site visits by the VIQI team. The visits will be conducted using a semi-structured discussion protocol that is tailored to the expertise and background of the staff person and the gaps in the study team’s knowledge base about ECE programming and services provided by organizations. During these visits, the project team will obtain further clarification on information obtained during the screening call, explore program- and center-level operations in more detail, and discuss plans for potential implementation of the selected interventions, including professional development, research procedures, and the roles and responsibilities of the programs and the project team. The group for this data collection protocol is expected to consist of 610 individuals from HS grantee and child care oversight agencies and 950 individuals from Head Start and child care centers across the pilot study, impact evaluation and process study of the VIQI project. (Because we expect to speak with more respondents at the center level than at the grantee or agency level using one protocol, we note two separate burden estimates in Exhibit 5 below.) The information is expected to be gathered in a discussion (and follow-up conversations, if needed) lasting a total of approximately 1.2 to 1.5 hours per participant.


Baseline Instruments for the Pilot Study, Impact Evaluation, and Process Study


Attachment B.1 – Baseline Administrator Survey


All center administrators in participating centers in the pilot and evaluation and process studies will be asked to complete a baseline survey to capture measures of demographics, professional experience, center staffing and services provided, training, coaching, and implementation drivers (e.g., beliefs, burnout/stress, center readiness). The target group for this data collection instrument is expected to consist of administrators who are employed by participating centers (or their overarching program) prior to random assignment. We expect some turnover in administrators, and we will seek to collect the baseline administrator survey from any newly hired administrators at the beginning of the academic or school year in which a given center is participating in the study. In total, we expect up to 246 administrators to be asked to complete the baseline survey across the pilot study, impact evaluation and process study of the VIQI project: up to 48 during the pilot study and 198 during the impact evaluation and process study.


Attachment B.2 – Baseline Teacher Survey


All lead and assistant teachers in participating classrooms in the pilot and evaluation and process studies will be asked to complete a baseline survey. Teacher surveys will collect data on teacher demographics, professional experience, and classroom resources (e.g., materials and supplies) as well as implementation drivers or moderators of implementation as indicated in the Conceptual Model (Exhibit 1). The target group for this data collection instrument is expected to consist of lead and assistant teachers who are in selected classrooms meeting the VIQI project’s eligibility criteria in participating centers prior to random assignment. We expect some turnover in lead and assistant teachers during the year, and we will seek to collect the baseline teacher survey from any newly hired lead and assistant teachers at the beginning of the academic or school year. In total, we expect up to 1,538 teachers to be asked to complete the baseline survey across the pilot study, impact evaluation and process study of the VIQI project: up to 300 during the pilot study and 1,238 during the impact evaluation and process study.


Attachment B.3 – Baseline Coach Survey


All coaches serving intervention centers in the pilot and evaluation and process studies, along with all coaches identified as serving teachers in participating control centers, will be asked to complete a baseline survey. Coach surveys include many of the same measures as the teacher surveys (e.g., demographics, professional experience, beliefs, pedagogical content knowledge), plus some questions about coaching style. Coaches serving intervention centers will also be asked questions about their motivation to implement the assigned intervention. The target group for this data collection is expected to consist of coaches serving intervention centers, along with all coaches identified as serving teachers in participating control centers, just after random assignment is conducted but prior to the installation of the interventions associated with some research conditions. We expect some turnover in coaches during the year, and we will seek to collect the baseline coach survey from any newly hired coaches over the course of the year in which a given center is participating in the study. In total, we expect up to 230 coaches to be asked to complete the baseline survey across the pilot study, impact evaluation and process study of the VIQI project: up to 22 during the pilot study and 208 during the impact evaluation and process study.


Attachment B.4 – Baseline Protocol for Classroom Observations


All selected classrooms will be asked to participate in two time-points of classroom observations at baseline for the pilot study or impact evaluation and process study. The observations will aim to obtain information about initial quality levels across different dimensions of structural, process and instructional quality. The protocol guiding the observations will be administered with lead teachers of the classrooms targeted for this data collection activity and includes guidelines for scheduling observations, a pre-observation teacher interview, multiple observational measures of classroom quality, and a post-observation teacher interview. The pre- and post-observation teacher interviews are intended to collect updated information on classroom staffing, the schedule for the day of the observation, and curricular sources used for instruction during the observation. In total, we expect up to 615 teachers to complete the baseline protocol for classroom observations: up to 120 in the pilot study and 495 during the impact evaluation and process study.


Attachment B.5 – Baseline Parent/Guardian Information Form


The study team will aim to collect a parent/guardian baseline information form at the beginning of the school year from all families of children enrolled in selected classrooms of centers participating in the pilot study and impact evaluation and process study. This data collection instrument will gather information about families’ sociodemographic characteristics, such as the parent/guardian’s level of education and income; child characteristics, such as name, birthdate, age, sex, and race and ethnicity; and contact information for the parent/guardian. This information will be used to identify a subset of children in each participating classroom that reflects the distribution of characteristics identified in the sampling frame for the VIQI project. In total, we expect 1,620 parents/guardians to be asked to complete the baseline information form in the pilot study and 6,948 parents/guardians to be asked to complete the baseline information form in the impact evaluation and process study.


Attachment B.6 – Baseline Protocol for Child Assessments


In the pilot study and the impact evaluation and process study, a subset of children will be recruited into the study and asked to participate in a set of game-like tasks aimed at capturing information about their skills and competencies at the beginning of the respective study phase. A protocol will be used to guide the assessments that includes guidelines for scheduling assessments and scripts to guide interactions with children. The measures will aim to capture children’s language/literacy and math skills. The group of children selected to participate in this data collection activity will represent a subset of children enrolled in participating classrooms whose parents/guardians consented and completed the baseline parent/guardian information form. The subset of children will be selected with an eye towards meeting the distribution of characteristics in line with the VIQI sampling frame. In total, we expect 480 children to complete the baseline protocol for child assessments in the pilot study and 1,980 children to complete the baseline protocol for child assessments in the impact evaluation and process study.


Follow-up Instruments for Pilot Study, Impact Evaluation, and Process Study


Attachment C.1 – Follow-up Administrator Survey


All center administrators who are employed by participating centers at the end of the year of the pilot study or impact evaluation and process study will be asked to complete a follow-up administrator survey. This instrument will aim to capture measures of center staffing and services provided, teacher training and coaching, and implementation drivers (e.g., beliefs, burnout/stress, center readiness). Administrators of intervention centers will also be asked questions about their perception of the implementation of the intervention and barriers to implementation. The study team does not expect to follow administrators who have left participating centers to complete the follow-up administrator survey. In total, we expect 205 administrators to be asked to complete the follow-up survey across the pilot study, impact evaluation and process study of the VIQI project: up to 40 during the pilot study and 165 during the impact evaluation and process study.


Attachment C.2 – Follow-up Teacher Survey


All lead and assistant teachers in selected classrooms at the end of the year of the pilot study or impact evaluation and process study in participating centers will be asked to complete a follow-up survey. The teacher survey will collect data on professional supports received, classroom resources (e.g., materials and supplies) as well as other implementation drivers or moderators of implementation (e.g., attitudes, organizational climate, pedagogical content knowledge, burnout/stress). The study team does not expect to follow teachers or assistant teachers who have left participating centers to complete the follow-up teacher survey. In total, we expect 1,230 teachers and assistant teachers to be asked to complete the follow-up survey across the pilot study, impact evaluation and process study of the VIQI project: up to 240 during the pilot study and 990 during the impact evaluation and process study.


Attachment C.3 – Follow-up Coach Survey


All coaches serving intervention centers, along with all coaches identified as serving teachers in participating control centers, at the end of the year of the pilot study or impact evaluation and process study will be asked to complete a follow-up survey. Coach surveys include measures of coaching competencies and style, beliefs, and pedagogical content knowledge. Coaches serving intervention centers will also be asked questions about their motivation to implement the assigned intervention and coaches serving control centers will be asked questions about the coaching they have provided. The study team does not expect to follow coaches who no longer support teachers in participating centers to complete the follow-up coach survey. In total, we expect up to 184 coaches to be asked to complete the follow-up survey across the pilot study, impact evaluation and process study: up to 18 during the pilot study and 166 during the impact evaluation and process study.


Attachment C.4 – Follow-up Classroom Observation Protocol


All selected classrooms in participating centers across research conditions will be asked to participate in three time-points of classroom observations at the end of the year of the pilot study and the impact evaluation and process study. The observations will aim to obtain information about quality levels across different dimensions of structural, process and instructional quality. The protocol guiding the observations will be administered with lead teachers of the classrooms targeted for this data collection activity and includes guidelines for scheduling observations, a pre-observation teacher interview, multiple observational measures of classroom quality, and a post-observation teacher interview. The pre- and post-observation teacher interviews are intended to collect updated information on classroom staffing, the schedule for the day of the observation, and curricular sources used for instruction during the observation. In total, we expect up to 615 teachers to complete the follow-up protocol for classroom observations: up to 120 in the pilot study and 495 during the impact evaluation and process study.


Attachment C.5 – Follow-up Protocol for Child Assessments

In the pilot study and the impact evaluation and process study, a subset of children will be recruited into the study and asked to participate in a set of game-like tasks aimed at capturing information about their skills and competencies at the end of the respective study phase. Only children who were enrolled at the centers at baseline, are present at the time of the follow-up assessments, and whose parents consented to the study will be assessed. A protocol will be used to guide the assessments that includes guidelines for scheduling assessments and scripts to guide interactions with children. The measures will aim to capture a broad range of children’s skills (e.g., language, literacy, math, science, self-regulation and executive functioning). The group of children selected to participate in this data collection activity will represent a subset of children enrolled in the classrooms participating in the pilot study and impact evaluation and process study whose parents/guardians consented and completed the baseline parent/guardian information form. The subset of children will be selected with an eye towards meeting the distribution of characteristics in line with the VIQI sampling frame. In total, we expect 2,460 children to complete the follow-up protocol for child assessments: up to 480 in the pilot study and 1,980 during the impact evaluation and process study.


Attachment C.6 – Teacher Report on Children


In the pilot study and impact evaluation and process study, lead teachers in participating classrooms will be asked to complete reports about the social-emotional and classroom behaviors of selected children in their classroom at the end of the study. The reports will be administered along with the follow-up teacher survey. In total, we expect 615 teachers to be asked to complete the report on children: up to 120 in the pilot study and 495 during the impact evaluation and process study.

Fidelity of Implementation Instruments for Pilot Study and Process Study


Attachment D.1 – Teacher Log


All lead and assistant teachers in participating classrooms will be asked to complete weekly logs throughout the course of the pilot study or process study. The logs will be used to gather information from teachers about their implementation of the curriculum components, participation in professional development activities, and teaching practices in the classrooms. Information on implementation will capture multiple aspects of fidelity, including dosage (the amount of the intervention that is delivered), adherence (the degree to which activities are done as intended), and the quality of the delivery. Additionally, because the Teacher Logs will be completed by both intervention and control teachers, treatment contrast (or the degree to which classrooms assigned to the control group resemble the classrooms assigned to either intervention group) can be examined. The logs will also be monitored by the study team to ensure that the installation and professional development model accompanying the interventions is being delivered as intended. All teachers across research conditions will be the target group for this data collection instrument. We expect some turnover in teachers during the year in participating classrooms. When this happens, we expect to shift this data collection activity to newly hired teachers in the classrooms. In total, we expect up to 1,230 teachers to be asked to complete the logs on a weekly basis over the course of the year of the pilot study or process study: up to 240 during the pilot study and 990 during the impact evaluation and process study.


Attachment D.2 – Coach Log


All coaches serving intervention centers will be asked to complete online coach logs upon completion of their coaching sessions with lead and assistant teachers assigned to deliver one of the interventions in the pilot study or process study. The logs will aim to capture information about the professional development and coaching provided to teachers, as well as lead and assistant teachers’ implementation of the intervention components and engagement in targeted teaching practices. The logs will be incorporated into the professional development model used to support the installation of the interventions and will be used by the coaches to guide their planning and preparation for coaching sessions with teachers and the types of support that they provide to teachers. The logs will also be monitored by the study team to ensure that the installation and professional development model accompanying the interventions is being delivered as intended. We expect coaches to follow a biweekly coaching model with teachers. The target group for this data collection activity is expected to be coaches serving intervention centers. We expect some turnover in coaches during the pilot study and process study. When this happens, we expect to shift this data collection activity to newly hired coaches serving intervention centers. In total, we expect up to 123 coaches to be asked to complete a log for each classroom on a biweekly basis over the course of the year of the pilot study or process study: up to 12 during the pilot study and 111 during the impact evaluation and process study.


Attachment D.3 – Implementation Fidelity Observation Protocol


A subset of classrooms across research conditions in the pilot study and process study will be randomly selected to participate in fidelity observations that aim to capture the extent to which the interventions are being delivered in classrooms in line with the intended models. These observations will be used as a check on the implementation-related data obtained via teacher and coach logs, to further describe variation in implementation of the interventions, to describe potential treatment contrasts, and to guide technical assistance and support for installation of the interventions. The protocol guiding the observations will be administered with lead teachers of the classrooms targeted for this data collection activity and includes guidelines for scheduling observations, a pre-observation teacher interview, a measure of fidelity to the intervention depending upon the classroom’s assigned research condition, and a post-observation teacher interview. The pre- and post-observation teacher protocols are intended to collect updated information on classroom staffing, the schedule for the day of the observation, curricular sources used for teaching during the observation, and general fidelity to the intervention (i.e., routine implementation of the intervention beyond the one-day observation). In total, we expect about 138 classrooms to be observed using the implementation fidelity observation protocol: about 90 classrooms in the intervention conditions during the pilot study and about 48 classrooms constituting a subset of the intervention-condition classrooms during the impact evaluation and process study.


Attachment D.4 – Interview/Focus Group Protocol


A subset of centers and classrooms across research conditions in the pilot study and process study will be asked to participate in one-on-one or focus group interviews. The purpose of these interviews is to capture insights from study participants on their experiences implementing the interventions, engaging in professional development and completing the data collection instruments. For the pilot study, a total of 86 participants are expected to participate in these interviews, including: up to 6 coaches in intervention conditions, 16 administrators across all research conditions, and 16 teachers and assistant teachers across all research conditions. For the impact evaluation and process study, a total of 236 participants are expected to participate in these interviews, including: up to 3 coaches in intervention conditions per locality, 8 administrators across all research conditions per locality, and 48 teachers and assistant teachers across all research conditions per locality across 4 localities. Each one-on-one interview or small-group interview will last up to 1.5 hours.

A3. Improved Information Technology to Reduce Burden

This study will use information technology, when possible, to minimize respondent burden and to collect data efficiently. Electronic data collection methods (e.g., email to contact study participants, web-based survey instruments and logs) will be used wherever feasible.

Information that is available from a centralized, computerized source has not been included in the data collection instruments described in this submission.

For the pilot study, impact evaluation and process study, the baseline and follow-up survey instruments for administrators and coaches will be collected using a secure, self-administered web-based system, as will the logs for teachers and coaches. Conducting the surveys and logs in this manner means that respondents can answer questions on their own, without coordinating with a member of the study team to complete the instrument. The web-based survey or log also allows for efficient administration by using skip logic to move quickly to the next appropriate question, depending upon a respondent’s previous answer. For the web-based log, information provided at previous time points of data collection will also be prepopulated into the log to minimize the extent to which the participant needs to repeatedly provide information that does not change over time. For all of the logs, participants will receive automated reminders when a log is due and will be able to log into a user-friendly interface to confidentially report their implementation activities.
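As a simple illustration of the kind of skip logic the web-based instruments use, the sketch below routes a respondent to the next question based on a prior answer. The question identifiers and rules are invented for illustration and do not reflect the actual survey content or software.

```python
# Hypothetical illustration of survey skip logic: the next question shown
# depends on the respondent's previous answer. Question IDs are invented.
DEFAULT_ORDER = {
    "Q1_receives_coaching": "Q2_coaching_frequency",
    "Q2_coaching_frequency": "Q3_coaching_topics",
    "Q3_coaching_topics": "Q4_next_section",
}

# (question_id, answer) -> question to jump to, overriding the default order.
SKIP_RULES = {
    ("Q1_receives_coaching", "no"): "Q4_next_section",
}

def next_question(question_id, answer):
    """Return the next question ID, honoring any applicable skip rule."""
    return SKIP_RULES.get((question_id, answer), DEFAULT_ORDER.get(question_id))
```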


For the pilot study, the baseline and follow-up survey instruments for teachers and assistant teachers will be completed in paper format, due to constraints in timing and available resources for this project. In the impact evaluation and process study, however, these instruments will be offered through mixed-mode delivery, permitting teachers and assistant teachers to complete the surveys using either a paper-and-pencil format or a web-based system that allows for efficient administration by using skip logic to move quickly to the next appropriate question, depending upon a respondent’s previous answer. In both cases, the surveys will be self-administered by the participant.


During the impact evaluation and process study, electronic informed consent forms (ICFs) will be available to reduce burden on center staff in the distribution and collection of completed ICFs for teachers and parents/guardians. If email addresses are available, participants will receive an email with a link to an electronic version of the consent form. Otherwise, participants will receive cover letters with a link to an electronic version of the consent form that explains the research study and voluntary nature of participation. Participants will then be able to sign the ICF electronically indicating whether they agree to participate in the study. In addition, staff surveys can be completed online, minimizing the time required for staff to collect and submit completed forms to the research team. Teacher and coach logs will be available electronically throughout the project allowing for efficient submissions of implementation data.


A4. Efforts to Identify Duplication

In the design of the planned data collection instruments and activities, attention has been paid to leveraging existing or administrative data sources whenever possible. However, as is often the case in ECE settings, there are limited or nonexistent administrative data sources that can reliably and consistently inform the constructs and processes of interest as delineated in the conceptual model underlying the design of the VIQI project. This is not a critique of the quality or reliability of existing or administrative data sources. Rather, extant data are often collected at differing times of the year, with methodologies and approaches that vary across states, localities, programs, and centers, making it difficult to assemble consistent information across the pooled group of participating centers in the pilot study, impact evaluation, and process study to address the guiding questions of the VIQI project. As such, we cannot assume that the crucial information needed to successfully achieve the research aims and goals of the VIQI project can be gathered without unique data collection activities. However, if we discover through our screening and recruitment processes and partnership with participating ECE centers that reliable, centrally located and accessible administrative data are collected in consistent ways across all of the centers involved in a particular phase of the project and can be shared with the study team to inform key constructs of interest, we will modify and adjust the data collection instruments accordingly to minimize the potential burden to study participants.


Further, much of the information collected about centers during the recruitment process is informative for the process study. Our team has designed protocols that will be complementary and informative for both recruitment efforts and the process study, which reduces the degree to which center staff are asked the same or very similar questions at both recruitment and baseline data collection. We intend to streamline data collection from center staff to reduce duplication across multiple data collection activities. For instance, if information about center or staff characteristics is collected during recruitment activities, that information will be fed into other protocols, such as the baseline surveys or center administrator interviews. That way, the same information will not be requested multiple times, provided it remains unchanged.


A5. Involvement of Small Organizations

We expect some of the participating programs will be independent, small organizations. To minimize the burden of the study on staff, the study team will provide resources for each center to facilitate the designation of liaisons for the study. The study team also will work in partnership with the center staff to identify the best opportunities for administering and collecting information with the data collection instruments to minimize interruption of their routine programming by scheduling and planning visits and activities in conjunction with center leadership.


A6. Consequences of Less Frequent Data Collection

The planned data collection activities aim to gather information only as frequently as needed to achieve the aims of the study. Eliminating any of the proposed data collection items would compromise our ability to address key research questions.

Screening and recruitment calls and site visits. The data collected through screening calls and site visits will include preliminary information about the ECE program and policy context to inform the research design, recruitment and sampling strategies used for the pilot study, impact evaluation and process study.


Teacher/assistant teacher, administrator, and coach baseline survey. The baseline survey will be administered once (at the beginning of the study or when a new staff member joins the study). Without it, we would be unable to verify that random assignment had yielded intervention and control groups that were similar in their observed background characteristics and in their baseline measures of outcomes. The baseline survey is also essential for describing the baseline characteristics of participants in the study, for providing covariates for impact analyses, and for examining implementation drivers.
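One common way to verify baseline equivalence is to compute standardized mean differences between research groups on baseline measures; the sketch below shows that calculation. It is an assumption-laden illustration, not the study’s specified analysis, which would likely also account for the clustered design.

```python
# Hypothetical sketch of a baseline equivalence check: the difference in
# group means expressed in pooled standard-deviation units. A small absolute
# value suggests the groups are similar on that baseline measure.
from math import sqrt
from statistics import mean, stdev

def standardized_difference(group_a, group_b):
    """Return the standardized mean difference between two groups."""
    pooled_sd = sqrt((stdev(group_a) ** 2 + stdev(group_b) ** 2) / 2)
    return (mean(group_a) - mean(group_b)) / pooled_sd

# Example with made-up baseline scores for two research groups:
diff = standardized_difference([3.1, 2.8, 3.4, 3.0], [2.9, 3.2, 3.1, 2.7])
```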


Teacher/assistant teacher, administrator, and coach follow-up survey. The follow-up survey will be administered only once. It is essential for allowing us to examine changes in beliefs and pedagogical content knowledge, as well as in professional development received. Lead teachers are also key informants on children’s social and emotional outcomes, which cannot readily be captured through direct assessments and therefore are being captured via the teacher survey at follow-up. There is a reasonable expectation of significant change in key measures between the baseline and follow-up surveys, particularly in the intervention conditions.

Baseline and follow-up classroom quality observations. Baseline classroom quality observations will be conducted over two days, in the fall in the pilot study and in the spring prior to implementation in the impact evaluation. Follow-up classroom quality observations will be conducted over three days in the following winter/spring in both the pilot study and the impact evaluation and process study. These observations are necessary for answering the study’s key research questions about the quality-child outcomes relationship and, for the impact evaluation and process study, for allowing us to stratify the group by high- and low-quality centers prior to random assignment. In addition, without baseline observations we would be unable to verify that random assignment yielded intervention and control groups that were similar.


Direct child assessments. Direct assessments will be administered twice – once in the fall and again in the spring of both the pilot study and impact evaluation. Direct assessments are the ideal method for capturing unbiased information about children’s pre-academic and executive functioning outcomes. Without these assessments, we would not be able to detect impacts on skills targeted by the interventions as reliable administrative records are not available for children’s learning and development at this age.


Teacher and coach logs. Logs will be collected on an ongoing basis (up to weekly for teachers and after every coaching session for coaches) to monitor implementation of the interventions and to inform the process study. More frequent collection of logs allows for the strongest intervention possible because the real-time data can inform technical assistance, training, and coaching efforts during the pilot study, impact evaluation and process study. In addition, the inclusion of several data points allows for a more robust study of implementation, indicating how implementation changed over time as a result of technical assistance, training, coaching or environmental factors. Providing this level of detail increases the value the VIQI project adds to the ECE field.


A7. Special Circumstances

There are no special circumstances for the proposed data collection efforts.


A8. Federal Register Notice and Consultation

Federal Register Notice and Comments

In accordance with the Paperwork Reduction Act of 1995 (Pub. L. 104-13) and Office of Management and Budget (OMB) regulations at 5 CFR Part 1320 (60 FR 44978, August 29, 1995), ACF published a notice in the Federal Register announcing the agency’s intention to request an OMB review of this information collection activity. This notice was published on August 10, 2017, Volume 82, Number 153, page 37454, and provided a sixty-day period for public comment. A copy of this notice is attached as Attachment E.1. During the notice and comment period, one comment was received. Our response is attached as Attachment E.2.

Consultation with Experts Outside of the Study

A panel of experts in the ECE field provided consultation to the study team and members of ACF in a meeting convened on April 24, 2017. These experts represented a range of disciplines and included both practitioners and researchers. Input we received at the meeting has informed decisions about the selected VIQI interventions and the design of the research study.

Technical expert panel members include:

  • Jennifer Brooks, Senior Program Officer, Early Learning, US Program, Bill & Melinda Gates Foundation

  • Greg Duncan, Distinguished Professor, University of California, Irvine

  • Paul McDermott, Professor, University of Pennsylvania

  • Pamela Morris, Professor, New York University

  • Helen Raikes, Willa Cather Professor, University of Nebraska-Lincoln

  • Christina Weiland, Assistant Professor, University of Michigan


In addition, we have consulted with individual experts throughout the study to receive feedback on study design, intervention selection, measures, and data collection plans. To date, these consultations have included the following individuals, categorized by topical area.


Consultations with intervention developers include:

  • Karen Bierman, Evan Pugh Professor, Pennsylvania State University

  • Douglas Clements, Kennedy Endowed Chair in Early Childhood Learning and Professor, University of Denver

  • Vincent Costanza, Superintendent in residence, Teaching Strategies, LLC

  • Nell Duke, Professor, University of Michigan

  • John Fantuzzo, Albert M. Greenfield Professor of Human Relations, University of Pennsylvania

  • Annemarie Hindman, Associate Professor, Temple University

  • Susan Landry, Albert & Margaret Alkek Distinguished Chair in Early Childhood Development, and the Michael Matthew Knight Memorial Professor in the Department of Pediatrics, University of Texas Health Science Center at Houston

  • Breeyn Mack, Senior Director, Educational Content, Teaching Strategies, LLC

  • Christine McWayne, Professor, Tufts University

  • Jason Sachs, Director of Early Childhood Education, Boston Public Schools

  • Julie Sarama, Kennedy Endowed Chair in Innovation Learning and Technologies and Professor, University of Denver

  • Jonah Stuart, Vice President, Public Policy and State Partnerships, Teaching Strategies

  • Barbara Wasik, Professor, Temple University


Consultations on measurement include:

  • Dale Farran, Antonio M. and Anita S. Gotto Chair in Teaching & Learning, Professor of Psychology & Human Development, Emerita, Vanderbilt University

  • Linda Platas, Assistant Professor, San Francisco State University

  • Kathryn Tout, Co-Director of Early Childhood Research, Child Trends


Consultations on landscape of Head Start and community-based child care programming:

  • Key staff at Office of Head Start and Office of Child Care national and regional offices

  • Shannon Burroughs-Campbell, Catholic Charities of Central Maryland – Baltimore

  • FloJean Speck, Maryland Family Network

  • Sara Bosley, Child Resource Center Baltimore County at Abilities Network/Project ACT

  • Margareth Legaspe, Office of State Superintendent of Education (OSSE)

  • Steven Barnett, National Institute for Early Education Research

  • Natalie Renew, Public Health Management Corporation

  • Joel Ryan, Washington State Association of Head Start and ECEAP

  • Karin Ganz, Washington Department of Early Learning


A9. Incentives for Respondents

The data collection plan includes small tokens of appreciation for children who are asked to participate in assessments at baseline and follow-up. Exhibit 4 shows the incentives for participation.


Exhibit 4. Incentives for Participation

Research Activity | Length | Incentive Amount | Timing
Baseline Protocol for Child Assessments | 30 min | Stickers/child-attempted assessment session | Fall 2018 in Pilot Study; Fall 2020 in Impact Evaluation
Follow-up Protocol for Child Assessments | 50 min | Stickers/child-attempted assessment session | Spring 2019 in Pilot Study; Spring 2021 in Impact Evaluation

Prior studies often offered similar tokens of appreciation when conducting assessments with young children. Examples include Head Start CARES [0970-0364], Supporting Healthy Marriage [0970-0299, 0970-0339], and the Enhanced Services to the Hard-to-Employ Project – Kansas and Missouri sites [0970-0276]. There is no experimental evidence to suggest that this incentive is effective at increasing response rates or reducing non-response bias. In line with the game-like nature of the child assessments, our hope is that providing stickers to children will make them feel good about themselves regardless of how they perform.


We currently do not plan to offer incentives to participants in any of the other planned data collection activities. However, we may consider conducting an incentives substudy for the parent baseline information form. If incorporated, the incentives substudy during the pilot study would be designed to examine whether providing a small incentive (such as a $10 gift card) reduces nonresponse bias overall and across particular subgroups of interest. Centers would be randomly assigned to one of two conditions: (a) an incentive condition; or (b) a no-incentive condition. Prior to random assignment, we would stratify the centers within localities by treatment condition (Creative Curriculum, Connect4Learning, Business as Usual). Parents in the incentive condition would be offered an incentive after they submitted their survey. Through this substudy, we would examine the effect of each condition on overall survey response rates by comparing the response rates (i.e., whether a baseline information form was submitted) of the centers randomly assigned to the incentive condition to those of the centers randomly assigned to the no-incentive condition. We would also explore survey response rates by subgroups of interest (i.e., racial-ethnic background, lower vs. higher income). As such, the findings could help inform the extent to which the incentive increased response rates overall and reduced any differential response bias, informing whether a parent incentive is warranted in the impact evaluation. If we decide to incorporate such a substudy, we will submit the proposed study to OMB for review and approval as a nonsubstantive change request. We will work with OMB at that time to coordinate appropriate review and approval.
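For illustration, the primary contrast in such a substudy could be tested with a two-proportion z-test comparing response rates across the incentive and no-incentive conditions, as sketched below. The counts are invented, and an actual analysis would need to account for random assignment occurring at the center level (clustering), which this simple test ignores.

```python
# Hypothetical sketch of comparing baseline-form response rates between the
# incentive and no-incentive conditions with a two-proportion z-test.
# Counts are invented; a real analysis would adjust for center-level
# clustering, which this naive test does not.
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(responded_1, n_1, responded_2, n_2):
    """Return (z, two-sided p-value) for the difference in response rates."""
    p1, p2 = responded_1 / n_1, responded_2 / n_2
    pooled = (responded_1 + responded_2) / (n_1 + n_2)
    se = sqrt(pooled * (1 - pooled) * (1 / n_1 + 1 / n_2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# e.g., 520 of 800 forms returned with an incentive vs. 440 of 800 without:
z, p = two_proportion_ztest(520, 800, 440, 800)
```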


A10. Privacy of Respondents

Information collected will be kept private to the extent permitted by law. Respondents will be informed of all planned uses of data, that their participation is voluntary, and that their information will be kept private to the extent permitted by law.


Two different strategies will be used for obtaining consent to participate in the research study, depending on the participant. (Consent forms are not included as standalone appendices, but are attached as related materials to the baseline data collection instruments referenced below.) In the pilot study, impact evaluation and process study, active teacher consent will be obtained that informs participants about the survey instruments and logs that will be collected, why and how the information will be used, and how the information will be handled in order to maintain their privacy. If the teacher would like to participate, s/he will sign the consent form and return it to the study team. The teacher can keep a copy of the consent form for their reference. Active parent/guardian consent will be obtained on behalf of children in the classrooms participating in the study as well. Parents/guardians will be informed of the types of data collection activities that will be conducted, why and how the information will be used, and how the information gathered will be handled in order to maintain their privacy and that of their children. This will happen prior to any data collection activities being conducted with children. If a parent/guardian would like their child to participate, s/he will sign the consent form, return it to the study team, and will also be asked to complete and return the parent/guardian baseline information form that accompanies the consent form. However, the parent/guardian can choose not to complete the baseline information form. The parent/guardian can also keep a copy of the consent form for their reference.


In the pilot study, impact evaluation and process study, administrators and coaches will be informed that their participation is completely voluntary, that their responses will only be used for research and program improvement purposes, and how their information will be stored and handled. This will be done with an introductory statement at the beginning of each data collection instrument where the potential respondent is provided a brief description of the types of information that will be collected on the instrument, why and how the information will be used, and how the information gathered will be protected.


Due to the sensitive nature of this research (see A.11 for more information), the study will obtain a Certificate of Confidentiality. The study team has applied for this Certificate and will provide it to OMB once it is received. The Certificate of Confidentiality helps to assure participants that their information will be kept private to the fullest extent permitted by law.


As specified in the contract, the study team shall protect respondent privacy to the extent permitted by law and will comply with all Federal and Departmental regulations for private information. The study team is developing a Data Safety and Monitoring Plan that assesses all protections of respondents’ personally identifiable information. The study team shall ensure that all of its employees, subcontractors (at all tiers), and employees of each subcontractor, who perform work under this contract/subcontract, are trained on data privacy issues and comply with the above requirements. Every MDRC, MEF Associates, Frank Porter Graham Child Development Institute, and Abt Associates employee, including field staff employed for data collection, is required to sign a privacy pledge as an assurance of nondisclosure of private information. Field staff will also be trained in maintaining strict privacy and data security.


As specified in the evaluator’s contract, the study team shall use Federal Information Processing Standard-compliant encryption (Security Requirements for Cryptographic Modules, as amended) to protect all instances of sensitive information during storage and transmission. The study team shall securely generate and manage encryption keys to prevent unauthorized decryption of information, in accordance with the Federal Information Processing Standard. The study team shall ensure that this standard is incorporated into the study team’s property management/control system and shall establish a procedure to account for all laptop computers, desktop computers, and other mobile devices and portable media that store or process sensitive information. Any data stored electronically will be secured in accordance with the most current National Institute of Standards and Technology (NIST) requirements and other applicable Federal and Departmental regulations. In addition, the Contractor must submit a plan for minimizing, to the extent possible, the inclusion of sensitive information on paper records, and for the protection of any paper records, field notes, or other documents that contain sensitive or personally identifiable information, that ensures secure storage and limits on access.


Information will not be maintained in a paper or electronic system from which data are actually or directly retrieved by an individual’s personal identifier.


A11. Sensitive Questions

Our baseline and follow-up surveys of teachers and center administrators will contain questions on some sensitive topics, such as salary, feelings about the workplace, and work-related stress and burnout. These questions will be answered in a self-administered format, which should minimize discomfort. The introduction to each survey will state that participation is voluntary, that respondents may skip any questions they do not wish to answer, that their answers will be protected to the extent permitted by law, and that their responses will not affect their jobs. The sensitive questions included in the surveys are necessary for understanding the variability in center and staff characteristics that potentially influence implementation and the quality levels achieved in the course of the study. Organizational climate and staff stress and burnout have been linked to lower levels of implementation and classroom quality in previous empirical studies (Han & Weiss, 2005) and, therefore, are critical constructs to measure as implementation drivers.


Personally identifiable information will be collected, such as contact information (e.g., name, address, phone numbers, e-mail address) for the administrator, teacher, coach, and the parent (and in the case of the parent consent, additional contacts that could help the study team find the child/family). The collection of personal identifiers is necessary for participant tracking for follow-up surveys and to allow us to access and match administrative records data.


A12. Estimation of Information Collection Burden

Exhibit 5 shows the annual burden of the activities described in this supporting statement. Attachment E.2 explains how the burden estimates were calculated for each instrument in the table. The total annual burden for participants is estimated to be 8,418 hours.



Exhibit 5. Total Burden Requested Under this Information Collection

Instrument | Total Number of Respondents | Annual Number of Respondents | Number of Responses Per Respondent | Average Burden Hours Per Response | Annual Burden Hours | Average Hourly Wage | Total Annual Cost

Instruments for Screening and Recruitment of ECE Centers
Attachment A.1: Landscaping protocol with Stakeholder Agencies (staff burden in Head Start (HS) grantee and community-based child care agencies) | 120 | 40 | 1 | 1.50 | 60 | $22.83 | $1,369.80
Attachment A.2: Screening protocol for phone calls (staff burden in HS grantees and community-based child care agencies) | 132 | 44 | 1 | 2 | 88 | $22.83 | $2,009.04
Attachment A.2: Screening protocol for phone calls (HS and community-based child care center staff burden) | 336 | 112 | 1 | 1.20 | 134 | $22.83 | $3,059.22
Attachment A.3: Protocol for in-person visits for screening and recruitment activities (staff burden in HS grantees and community-based child care agencies) | 610 | 203 | 1 | 1.50 | 305 | $22.83 | $6,963.15
Attachment A.3: Protocol for in-person visits for screening and recruitment activities (HS and community-based child care center staff burden) | 950 | 317 | 1 | 1.20 | 380 | $22.83 | $8,675.40

Baseline Instruments for the Pilot Study, Impact Evaluation, and Process Study
Attachment B.1: Baseline administrator survey | 246 | 82 | 1 | 0.60 | 49 | $22.83 | $1,118.67
Attachment B.2: Baseline teacher/assistant teacher survey | 1,538 | 513 | 1 | 0.60 | 308 | $13.52 | $4,164.16
Attachment B.3: Baseline coach survey | 230 | 77 | 1 | 0.60 | 46 | $22.83 | $1,050.18
Attachment B.4: Baseline classroom observation protocol (teacher burden) | 615 | 205 | 2 | 0.30 | 123 | $13.52 | $1,662.96
Attachment B.5: Baseline parent/guardian information form | 8,568 | 2,856 | 1 | 0.20 | 571 | $17.95 | $10,249.45
Attachment B.6: Baseline protocol for child assessments (child burden) | 2,460 | 820 | 1 | 0.50 | 410 | -- | --

Follow-Up Instruments for Pilot Study, Impact Evaluation, and Process Study
Attachment C.1: Follow-up administrator survey | 205 | 68 | 1 | 0.50 | 34 | $22.83 | $776.22
Attachment C.2: Follow-up teacher/assistant teacher survey | 1,230 | 410 | 1 | 0.75 | 308 | $13.52 | $4,164.16
Attachment C.3: Follow-up coach survey | 184 | 61 | 1 | 0.50 | 31 | $22.83 | $707.73
Attachment C.4: Follow-up classroom observation protocol (teacher burden) | 615 | 205 | 3 | 0.30 | 185 | $13.52 | $2,501.20
Attachment C.5: Follow-up protocol for child assessments (child burden) | 2,460 | 820 | 1 | 1 | 820 | -- | --
Attachment C.6: Teacher reports to questions about children in classroom (administered as part of the follow-up teacher survey) | 615 | 205 | 1 | 0.67 | 137 | $13.52 | $1,852.24

Fidelity of Implementation Instruments for Pilot Study and Process Study
Attachment D.1: Teacher/assistant teacher log | 1,230 | 410 | 36 | 0.25 | 3,690 | $13.52 | $49,888.80
Attachment D.2: Coach log | 123 | 41 | 55 | 0.25 | 564 | $22.83 | $12,876.12
Attachment D.3: Implementation fidelity observation protocol (teacher burden) | 138 | 46 | 1 | 0.30 | 14 | $13.52 | $189.28
Attachment D.4: Interview/Focus group protocol (administrator, teacher/assistant teacher, and coach burden) | 322 | 107 | 1 | 1.5 | 161 | $18.18 (average of $22.83 for administrators and coaches and $13.52 for teachers/assistant teachers) | $2,926.98

Estimated Total Cost | | | | | | -- | $116,204.76



Total Annual Cost


To compute the total estimated annual cost, the total annual burden hours were multiplied by the average hourly wage for four labor categories. The Head Start grantee- and Head Start and community-based child care center-level administrator hourly wages were determined using the national mean wage for preschool and child care center and program administrators ($22.83/hour) from the Bureau of Labor Statistics National Occupational Employment and Wage Estimates, 2016. Teacher/assistant teacher hourly wages were computed by averaging the national mean wage for preschool teachers ($16.01/hour) and child care providers in child care services ($11.02/hour) from the same source, yielding an estimated teacher/assistant teacher hourly wage of $13.52/hour. Coach hourly wages were determined using the national mean wage for educational coordinators ($22.83/hour), also from the Bureau of Labor Statistics National Occupational Employment and Wage Estimates, 2016. For parents, we used the median salary for full-time employees over the age of 25 who were high school graduates with no college experience ($718/week) from the Current Population Survey, 2017, and assumed a 40-hour work week to estimate an hourly wage of $17.95/hour. The estimated total annual burden cost is $116,204.76.
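
As an illustration of the arithmetic described above, the wage and cost computations can be reproduced as follows. This is a sketch using only figures reported in this statement, not the official calculation tool.

```python
# Worked arithmetic behind the wage and cost estimates above (illustrative).

# Teacher/assistant teacher wage: average of two BLS 2016 national means.
preschool_teacher_wage = 16.01
child_care_provider_wage = 11.02
teacher_wage = (preschool_teacher_wage + child_care_provider_wage) / 2  # 13.515, reported as $13.52

# Parent wage: CPS 2017 median weekly earnings over an assumed 40-hour week.
parent_hourly_wage = 718.0 / 40  # $17.95/hour

# Example row of Exhibit 5 (baseline parent/guardian information form):
annual_respondents = 2856
hours_per_response = 0.20
annual_burden_hours = round(annual_respondents * hours_per_response)  # 571 hours
total_annual_cost = annual_burden_hours * parent_hourly_wage          # $10,249.45
```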


A13. Cost Burden to Respondents or Record Keepers

We expect that some of the data collection activities will pose an added burden to centers and respondents. Centers will be asked to coordinate schedules with the research team and to support and facilitate data collection activities, regardless of research condition, through the designation of a staff person to act as a liaison with the research team. These activities will be required as part of participating in any respective phase of the VIQI project and will be covered in a legally binding agreement that participating programs and centers (depending upon who has signatory authority) are asked to enter into with the research team. This agreement will bind the program and/or centers, as well as classrooms and teachers, to engaging in the designated installation activities for the respective interventions, depending upon the research condition to which the center is assigned. The agreement will also outline all of the data collection activities planned in the study. In addition, we propose honoraria for teachers for completing the baseline and follow-up surveys and logs. These honoraria are necessary because teachers are expected to complete the planned data collection instruments outside of the normal workday covered by their labor agreements, given the level of demand already made upon their time; the honoraria will help ensure that participating teachers have the capacity and ability to do so in line with the study design.


As such, to compensate participating programs, centers, and teachers for their involvement in the study, a payment of up to $2,798/center will be provided for a single year of involvement in the study. Part of this payment ($2,000/center) is for the coordination of schedules with the research team and for supporting and facilitating data collection activities by the designated staff person or liaison. The staff person who is expected to coordinate with the study team is the center or program administrator. Based upon the Bureau of Labor Statistics National Occupational Employment and Wage Estimates, 2016, the program or center administrator’s wage is estimated to be $22.83/hour. We estimate that one 8-hour day per month for 10 months (10 days total, or about $1,800/administrator at that wage) will be required to coordinate with the research team. The remainder of the $2,000 payment is meant to offset administrative costs, such as copying and mailing, incurred in supporting study activities.


Up to $798 of the per-center payment would be paid in installments upon completion of certain teacher research activities; the proposed amounts are listed in Exhibit 6. We plan to provide each lead teacher who participates in all data collection activities a total of up to $141 for a given year of the study. We plan to provide each assistant teacher who participates in all data collection activities a total of up to $125 for a given year of the study. Thus, assuming 3 classrooms per center, each with one lead and one assistant teacher ($141 + $125 = $266 per classroom), we would provide up to $798/center. The total amount provided to any given center would be adjusted depending on which teacher research activities are completed and the total number of teachers that completed them. Centers can decide how to distribute these honoraria.


Exhibit 6. Honoraria Provided to Respondents

Research Activity | Length | Honorarium Amount | Timing
Baseline Teacher/Assistant Teacher Survey | 30 min | $10/survey | Spring/Fall 2018 in Pilot Study; Spring 2020/Fall 2020 in Impact Evaluation
Teacher log | 15 min per log | $10/month | Monthly during Pilot Study and Process Study (assumed to be collected for 10 months)
Follow-up Teacher/Assistant Teacher Survey | 45 min | $15/survey | Spring 2019 in Pilot Study; Spring 2021 in Impact Evaluation
Teacher reports to questions about children in classroom | 40 min (10 min/child) | $16 for all teacher-reports on child outcomes | Spring 2019 in Pilot Study; Spring 2021 in Impact Evaluation


Exhibit 7 provides a comparison of the proposed honoraria amounts for teachers with the hourly wages and time required to complete the planned data collection activities for teachers and assistant teachers in Head Start and community-based child care settings. As described above, teacher/assistant teacher hourly wages were computed by averaging the national mean wage for preschool teachers ($16.01/hour) and child care providers in child care services ($11.02/hour) from the Bureau of Labor Statistics National Occupational Employment and Wage Estimates, 2016, yielding an estimated average teacher/assistant teacher hourly wage of $13.52/hour. This hourly wage was then adjusted for overtime pay at 1.5 times the estimated average hourly wage, to account for the expectation that teachers and assistant teachers will complete the data collection instruments in hours that fall outside of their typical, standard work hours, assuming that they are working full-time schedules. This brings the estimated average overtime teacher/assistant teacher hourly wage to $20.28/hour. We rounded this to the nearest $0.50 increment, or $20.50/hour, the estimate used in Exhibit 7 to determine the estimated pay for the time required to complete each instrument; rounding facilitates communication with centers and teachers about how teachers and assistant teachers will be compensated for their time to account for the costs of completing the instruments.
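
The overtime adjustment and rounding just described can be expressed compactly. This is an illustrative sketch using only figures from the text.

```python
# Overtime adjustment and rounding behind Exhibit 7 (illustrative only).
base_wage = 13.52                          # estimated teacher/assistant teacher hourly wage
overtime_wage = round(base_wage * 1.5, 2)  # $20.28/hour for off-hours completion

def round_to_half_dollar(amount: float) -> float:
    """Round to the nearest $0.50 increment."""
    return round(amount * 2) / 2

communicated_wage = round_to_half_dollar(overtime_wage)  # $20.50/hour, as used in Exhibit 7
baseline_survey_pay = communicated_wage * 0.5            # 30-minute survey ~ $10.25, offered as $10
```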

Exhibit 7. Honoraria Provided to Respondents Based on Estimated Hourly Wages

Research Activity | Length | Honoraria Amount | Estimated Pay for Time Required to Complete Instrument (based on an estimated hourly wage of $20.50/hour)
Baseline Teacher/Assistant Teacher Survey | 30 min per survey | $10/survey (each teacher completes one survey) | $10/response
Teacher log | 15 min per log | $10/month (assumes two logs per teacher per month) | $5/response
Follow-up Teacher/Assistant Teacher Survey | 45 min per survey | $15/survey (each teacher completes one survey) | $15/response
Teacher reports to questions about children in classroom | 40 min (10 min/child) | $16/lead teacher for all teacher-reports on child outcomes (assumes 4 children total per teacher) | $4/report per child



A14. Estimate of Cost to the Federal Government

The total cost for the data collection activities under this current request will be $4,320,000. Annual costs to the Federal government will average $1,440,000 for the proposed data collection.


A15. Change in Burden

This is a new data collection.


A16. Plan and Time Schedule for Information Collection, Tabulation and Publication


Analysis Plan


Analysis of Pilot Study Data. The pilot study will inform several aspects of the impact evaluation: (1) screening and recruitment; (2) data collection procedures; (3) creation/definition of measures; (4) interventions and their installation; and (5) study design and random assignment.


To inform screening and recruitment, we will conduct a qualitative and descriptive analysis of the strategies that were used to recruit centers for the pilot study, and specifically of the challenges that were encountered. To refine the screening criteria, we will also examine whether different types of extant data about the characteristics of centers (data available to the site recruitment team) are predictive of the measures of initial quality and of centers’ ability to implement the interventions.


To inform the data collection procedures, we will look descriptively at response rates for each data collection instrument (overall, by setting, by initial quality, and by experimental group). For the teacher logs, we will also examine response rates over time and use variance decompositions to understand the consistency of responses across time periods. We will also explore interrater reliability for measures from the classroom observations of quality and for survey items asked of more than one type of respondent (e.g., administrators and teachers). Finally, we will also leverage information based on interviews and focus groups with staff about the challenges of completing the surveys and logs.
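
Although the statistical analyses are planned in SAS, the variance decomposition for the teacher logs can be illustrated with a short Python sketch. Column names (log_measure, teacher_id) are assumptions for illustration.

```python
# Illustrative variance decomposition for the teacher logs: how much of the
# variation in a log measure is between teachers vs. across occasions?
import pandas as pd
import statsmodels.formula.api as smf

def teacher_log_icc(logs: pd.DataFrame) -> float:
    """Intraclass correlation for a repeated log measure.

    Fits an intercept-only random-effects model with teacher as the grouping
    factor; a high ICC indicates responses are consistent across time periods.
    """
    model = smf.mixedlm("log_measure ~ 1", logs, groups=logs["teacher_id"])
    fit = model.fit()
    between = float(fit.cov_re.iloc[0, 0])  # between-teacher variance
    within = float(fit.scale)               # occasion-to-occasion (residual) variance
    return between / (between + within)
```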


To inform the creation of measures, we will explore using confirmatory factor analysis to confirm that existing scales (survey and observational) load onto a factor(s) with an eigenvalue greater than one. For scales that are study-created, we will use exploratory factor analysis to gain insights into the potential factor structure. We will also explore the internal consistency of the scales (Cronbach’s alpha) to see if scales appear to achieve sufficient reliability. We will also use these types of analyses (factor analysis, Cronbach’s alpha) to explore the creation of measures of fidelity of implementation, as well as composite measures of the two quality dimensions whose effect will be examined in the impact evaluation (structural/process quality and instructional quality). We will also explore the extent to which survey and observational scales seem to have predictive validity, by assessing correspondence in their values at baseline and follow-up.
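
For example, the internal consistency of a multi-item scale can be checked with a short routine like the following. This is an illustrative Python sketch; the study may use SAS or dedicated psychometric software for this work.

```python
# Minimal Cronbach's alpha computation for a multi-item scale (illustrative).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                              # number of items in the scale
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the scale total
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
```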


To inform the interventions and their installation, we will look descriptively at measures used to assess the fidelity with which the interventions are implemented with respect to the training of staff, the coaching provided, and the delivery of the curriculum. We will also descriptively examine staff perceptions of the interventions’ effectiveness and staff motivation to implement the interventions. These descriptive analyses will be conducted by intervention, as well as by setting (Head Start, community-based) and by initial quality. We will also conduct a qualitative analysis of the challenges and facilitators of implementing each intervention. Finally, we will look descriptively at key items from the measures of fidelity of implementation that are hypothesized to be closely linked with different dimensions of quality, as well as from the measures used to assess classroom quality at baseline and follow-up. For the Pilot Study, we will conduct a descriptive analysis to examine change over time in quality dimensions, for each experimental group, and for subgroups defined by initial quality and setting. We will also explore the potential effect of each intervention on the two key dimensions of quality, for all participating centers, by looking at changes in classroom quality across experimental groups. Potential effects by subgroup of centers may also be examined, but only for exploratory purposes, because subgroup analyses will be underpowered. Further, we may explore the potential effect of each intervention on child outcomes, by looking at differences in child outcomes across experimental groups. We do not expect these analyses to be definitive; rather, they should reveal a general pattern of findings that could help gauge the potential of the interventions for differentially shaping quality dimensions and child outcomes for all participating centers in the Impact Evaluation and Process Study. All of these analyses will be considered exploratory and descriptive, because the Pilot Study will not be adequately powered to definitively answer questions about the impact of each of the interventions and the nature of quality-child outcome relationships that are central to the VIQI project.

To inform the study design, and in particular random assignment, we will use the pilot study data to explore the feasibility of different approaches for identifying the optimal cut-off for defining the “low” and “high” initial quality strata used to block random assignment (i.e., using a theory-relevant cut-off vs. using a cut-off that maximizes precision gains). We will also examine the distribution of centers across settings (Head Start, community-based) and by initial quality to assess the feasibility of blocking random assignment by both of these factors in the impact evaluation. In doing so, we will describe the characteristics of the classrooms and centers participating in the pilot study, as well as the families and children being served in these centers.


Analysis of Impact Evaluation Data. The research questions for the impact evaluation will be answered using a 3-group random assignment research design. Centers will be randomly assigned to one of three groups: a group that receives Intervention A (Group 1), a group that receives Intervention B (Group 2), or a group that continues to conduct “business as usual” (Control). Each of the two selected interventions will target a different dimension of quality—one will target structural/process quality, and the other will target instructional quality. If the interventions improve quality as intended, this design will create random (experimentally-induced) variation in the two quality dimensions that can be used to rigorously estimate their effect on children’s outcomes using an instrumental variables (IV) analysis.


Using this design, the analysis of the impact evaluation data will examine different types of effects: (1) the effect of each intervention on classroom quality and teacher and child outcomes; (2) the effect of the combined interventions (that is, both interventions pooled together) on classroom quality and child outcomes; (3) the effect of each targeted dimension of quality (structural/process quality and instructional quality) on child outcomes; and (4) the effect of global quality (i.e., a composite measure of the two quality dimensions) on child outcomes.


The analysis for each type of effect is described below. In drawing inferences about these estimated effects, standard statistical tests such as t-tests (for continuous variables and dichotomous measures) or chi-square tests (for categorical measures) will be used to determine whether estimated effects are statistically significant. Each of the analyses will be conducted for the full group of participating centers, as well as for subgroups of interest [e.g., by centers’ initial levels of quality at baseline (low vs. high) and by setting (Head Start vs. community-based)]. Subgroup analyses will be conducted either by subsetting the data to the subgroup of interest or by adding subgroup interactions to the models. Statistical analyses will be performed in SAS. Our sampling design does not require the use of survey weights.


Effect of each intervention on classroom quality and teacher and child outcomes. The effect of each intervention will be estimated by comparing the classroom quality and teacher and child outcomes of centers assigned to each intervention group (Group 1 or Group 2) to those of centers assigned to the control group. In practice, these analyses will be conducted using a model that regresses the outcome of interest (classroom quality, teacher outcome, or child outcome measure) against indicators of group membership (Group 1 and Group 2). The regression coefficient on these indicators will provide an estimate of the effect of each intervention on the outcome of interest. The model will also include a set of random assignment block indicators, to account for the random assignment design and to improve the precision of estimated effects. Because random assignment occurs at the center level, that is the highest level of clustering that needs to be accounted for in the analysis. Higher levels of potential clustering, such as locality and program, will be accounted for as covariates in the model. In addition, the model will control for measures of classroom-level (such as classroom composition and baseline quality) and child-level baseline characteristics and baseline outcomes; because of random assignment, controlling for these baseline characteristics and outcomes in the model is not strictly necessary and it will not affect the impact estimates, but we will include them to improve the precision of estimated effects (reduce their standard error). The analysis will use a multi-level modelling structure to account for the clustered nature of the data: a two-level model will be used for classroom quality (classrooms nested in centers) and a three-level model will be used for child outcomes (children nested in classrooms nested in centers).
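
As an illustration of the model just described, a minimal Python/statsmodels sketch is shown below (the study's analyses are planned in SAS). Column names (quality, baseline_quality, group1, group2, block, center_id, classroom_id) are assumptions for illustration.

```python
# Illustrative two-level impact model for a classroom quality outcome:
# classrooms nested in centers, randomized at the center level.
import pandas as pd
import statsmodels.formula.api as smf

def intervention_effects(df: pd.DataFrame):
    model = smf.mixedlm(
        # Group indicators, randomization-block fixed effects, and a baseline
        # measure of the outcome (included only to improve precision).
        "quality ~ group1 + group2 + C(block) + baseline_quality",
        data=df,
        groups=df["center_id"],  # random intercept for each center
        # A three-level child-outcome model would add a classroom variance
        # component, e.g., vc_formula={"classroom": "0 + C(classroom_id)"}.
    )
    fit = model.fit()
    # Coefficients on group1/group2 estimate each intervention's effect
    # relative to the business-as-usual control group.
    return fit.params[["group1", "group2"]], fit.bse[["group1", "group2"]]
```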

Combined intervention effect on classroom quality and child outcomes. The combined effect of the two interventions will be estimated by comparing the quality and child outcomes of centers in Group 1 and 2 to those of centers assigned to the control group. The statistical model will be similar to the one previously described, except that the key independent variable will be an indicator of assignment to Group 1 or Group 2.

Effect of each targeted quality dimension on child outcomes. The effect of structural/process quality and of instructional quality on children’s outcomes will be examined using an instrumental variables (IV) approach. In practice, the effect of these two quality dimensions will be estimated using a two-stage least squares (2SLS) analysis, where indicators of group membership (Group 1 and 2) will be used as the instruments. In the first stage models, each quality dimension (structural/process quality and instructional quality) will be regressed against the two instruments (an indicator of assignment to Group 1 and an indicator of assignment to Group 2), a set of random assignment block indicators, and classroom-level and child-level baseline characteristics. From these regressions, the predicted values of the two quality dimensions will be obtained. These predicted values represent variation in the two quality dimensions that is experimentally induced. In the second stage model, the child outcome of interest will be regressed against the two predicted quality dimensions, as well as random assignment block indicators, and classroom and child-level baseline characteristics. The regression coefficients on the predicted quality dimensions will provide estimates of the effect of the two quality dimensions on children. These estimates are unbiased if these two dimensions are the only pathways through which the interventions improve children’s outcomes. The analysis will use 3-level models to account for the clustered nature of the data (children nested in classrooms nested in centers).
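
The 2SLS logic can be made concrete with a small hand-rolled sketch (illustrative Python; the actual analysis is planned in SAS and will use multilevel models). Standard errors are omitted here because naive second-stage OLS standard errors are incorrect for 2SLS and would also need to account for clustering within centers.

```python
# Illustrative two-stage least squares (2SLS) for the IV analysis described
# above. Variable layouts are assumptions for illustration.
import numpy as np

def two_stage_least_squares(y, Q, Z, X):
    """y: child outcome (n,); Q: endogenous quality dimensions (n, 2);
    Z: instruments = Group 1/Group 2 indicators (n, 2);
    X: exogenous controls, e.g., block dummies and baseline covariates (n, p)."""
    n = len(y)
    ones = np.ones((n, 1))
    first_stage = np.hstack([ones, Z, X])
    # Stage 1: regress each quality dimension on the instruments and controls,
    # keeping the fitted (experimentally induced) values.
    coef, *_ = np.linalg.lstsq(first_stage, Q, rcond=None)
    Q_hat = first_stage @ coef
    # Stage 2: regress the child outcome on the predicted quality dimensions
    # and the same controls; the first two slopes estimate the quality effects.
    second_stage = np.hstack([ones, Q_hat, X])
    beta, *_ = np.linalg.lstsq(second_stage, y, rcond=None)
    return beta[1:3]  # effects of structural/process and instructional quality
```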

To explore whether the effect of the two quality dimensions on child outcomes is non-linear, we will estimate the effect of each quality dimension for subgroups of centers defined by their baseline quality (low versus high). If effects are larger for one group compared to the other, this would suggest that the effect of a given dimension of quality may be non-linear. This analysis is non-experimental and will be considered more exploratory.


Effect of global quality on child outcomes. The effect of global quality (a composite measure of the two dimensions of quality) on children’s outcomes will be estimated using a similar approach. In the first stage model, global quality will be regressed against the instrument (an indicator for whether a center was assigned to Group 1 or Group 2), a set of random assignment block indicators, and classroom-level and child-level baseline characteristics. From this regression, predicted values of global quality will be obtained. In the second stage model, the child outcome of interest will be regressed against predicted global quality, as well as random assignment block indicators, and classroom- and child-level baseline characteristics. The regression coefficient on predicted global quality will provide an estimate of the effect of global quality on children’s outcomes. This estimate is unbiased if global quality is the only pathway through which the interventions improve children’s outcomes. If one of the interventions has a larger effect on global quality than the other, then we will be able to rigorously examine whether the effect of global quality is nonlinear, by using treatment group membership (in Intervention 1 and in Intervention 2) as instrumental variables for global quality and its quadratic (two mediators).


Baseline analyses. Prior to conducting the impact analyses, we will compare the baseline characteristics and outcomes of centers, teachers/classrooms, and children in the three experimental groups, to confirm that random assignment has produced three groups that are similar at baseline. We will test whether the three groups differ statistically on each baseline characteristic, and whether differences appear systematically across all characteristics.
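
These checks amount to standard group-comparison tests, as in the sketch below (illustrative Python with assumed column names; for three groups, an omnibus one-way ANOVA plays the role of the t-test described above).

```python
# Illustrative baseline balance checks across the three experimental groups.
import pandas as pd
from scipy import stats

def balance_tests(df: pd.DataFrame, continuous: list, categorical: list) -> dict:
    """Return a p-value per baseline characteristic, testing whether the
    three groups (column 'group') differ on that characteristic."""
    p_values = {}
    groups = [g for _, g in df.groupby("group")]
    for var in continuous:
        # One-way ANOVA across the three experimental groups.
        _, p = stats.f_oneway(*[g[var].dropna() for g in groups])
        p_values[var] = p
    for var in categorical:
        # Chi-square test of independence between group and the characteristic.
        table = pd.crosstab(df["group"], df[var])
        _, p, _, _ = stats.chi2_contingency(table)
        p_values[var] = p
    return p_values
```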


Response analyses. As described in greater detail in Supporting Statement B, we will also conduct a set of response analyses. The first set of analyses will examine response rates by experimental group, to confirm that response rates are sufficiently high, and differential response rates sufficiently low, that the internal validity of the study is not compromised. The second set of analyses will examine the characteristics of the staff, classrooms, and children that are included versus excluded from the impact evaluation, to gauge the external validity of the results and to better understand the types of classrooms and children to which the results can be generalized. As noted earlier, our study centers will be diverse in terms of initial quality (low and high) as well as setting (Head Start and community-based), so we expect our findings to be generalizable to the broader landscape of urban ECE centers in the US. However, because the study will focus on 3-year-old children, the results may not be generalizable to other age groups or to public pre-K settings.


Handling missing data. We will not impute missing data on the outcomes of interest. However, we will impute missing data on the baseline covariates (e.g., the baseline characteristics and competencies of children, classroom quality at baseline) that are used as covariates in the statistical model for the impact analysis. In a random assignment study design, there are few (if any) drawbacks to imputing baseline covariates, for two reasons. First, the purpose of the covariates is to improve the precision of estimated effects (rather than to control for bias). Second, due to random assignment, the percentage of missing baseline data (and the characteristics of children/classrooms for whom baseline data is missing) should be very similar across the three groups in the study design. This is confirmed in a study conducted on behalf of the Department of Education (ED), which showed that in cluster randomized trials, several imputation methods produce very low bias in estimates of effects and their standard errors (Puma, Olsen, Bell, and Price, 2009). For the VIQI study, we will use one of the appropriate methods reviewed in the ED study (e.g., dummy variable imputation, maximum likelihood with multiple imputation using an EM algorithm). As a sensitivity test on the imputation, we will also estimate the models without baseline covariates; controlling for baseline covariates is not strictly necessary in a randomized experiment, so the estimated effects from an unadjusted regression should be similar to those from an adjusted regression.
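
Dummy-variable imputation, one of the methods reviewed in the ED study, can be sketched as follows (illustrative Python; column names are assumptions).

```python
# Illustrative dummy-variable imputation for a baseline covariate.
import pandas as pd

def dummy_variable_impute(df: pd.DataFrame, covariate: str) -> pd.DataFrame:
    out = df.copy()
    # Flag which observations were missing the baseline covariate...
    out[covariate + "_missing"] = out[covariate].isna().astype(int)
    # ...then fill the covariate with a constant (here, the observed mean).
    out[covariate] = out[covariate].fillna(out[covariate].mean())
    # Both the filled covariate and the missingness indicator enter the
    # impact model, so cases with missing baseline data are retained.
    return out
```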


Limitations. Our estimates of the effects of quality on children’s outcomes will only be unbiased (causally valid) if the interventions yield effects on our measures of quality dimensions and these dimensions are the only pathways through which the interventions affect children’s outcomes in our analysis. To verify these assumptions, we will examine the effect of the interventions on all available measures of center/teacher outcomes and classroom quality to understand the pathways through which the interventions might be affecting children’s outcomes. We will then define/refine the measures of the two quality dimensions in such a way as to make sure that these two measures capture all possible measured pathways. However, it remains possible that the interventions will affect children’s outcomes through unmeasured pathways, in which case estimates of the effects of quality could be confounded with these other pathways.


Analysis of Process Study Data. A variety of descriptive and comparative techniques will be used to describe implementation drivers and mean levels and variation in different dimensions (dosage, adherence, quality) of fidelity of the intervention (including the provision of professional development and delivery of the curriculum). Correlational analysis will be used to examine associations among various implementation drivers as well as among implementation drivers and aspects of fidelity. Additionally, the achieved relative strength (that is, the treatment contrast) between treatment and control classrooms will be calculated based on procedures detailed by Hulleman and Cordray (2009) (standardizing the average difference between fidelity indices from each condition). Achieved relative strength will be used to help interpret the findings from the impact analysis, and many of the constructs created as part of the process study can be used as moderators (e.g., center readiness) in the impact analysis. Analysis will be conducted with all centers and separately for subgroups with data on outcomes of interest. Our analysis will take into account the nested nature of the data as needed. We will consider methods for handling missing data (as discussed above). We will employ data reduction techniques and psychometric work (e.g., internal consistency, factor analysis, concurrent validity), particularly for fidelity of implementation variables that are collected across time and for new measures. Any qualitative data collected during center visits will also be quantified to measure specific readiness factors of interest (e.g., “rating” a site’s level of quality assurance and improvement processes as minimal, moderate, or strong).
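
The achieved relative strength index can be sketched as a standardized mean difference between fidelity indices in the two conditions. This is an illustrative reading of the Hulleman and Cordray (2009) procedure, not their exact formula.

```python
# Illustrative achieved relative strength: the treatment-control difference
# in a fidelity index, standardized by the pooled standard deviation.
import numpy as np

def achieved_relative_strength(fidelity_treatment, fidelity_control) -> float:
    t = np.asarray(fidelity_treatment, dtype=float)
    c = np.asarray(fidelity_control, dtype=float)
    # Pooled standard deviation across the two conditions.
    pooled_sd = np.sqrt(
        ((len(t) - 1) * t.var(ddof=1) + (len(c) - 1) * c.var(ddof=1))
        / (len(t) + len(c) - 2)
    )
    return (t.mean() - c.mean()) / pooled_sd
```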


One notable contribution of this process study is the ability to examine center “readiness” and which factors, individually and in combination, predict different aspects of fidelity of implementation. Profile analysis may be used to identify whether centers can be grouped based on their initial “levels” on different readiness factors. Identifying profiles of readiness can inform who may be ready to take on a new initiative and who may need more support before doing so, and can provide further insight into why (or why not) an intervention was effective at changing quality.


Finally, because implementation drivers, as well as implementation efforts and their outcomes, may naturally change over time and/or may be affected by intervention-induced changes in quality, we plan on examining changes in these constructs. This will allow us to see whether there are differences in a variety of implementation-related factors—such as beliefs, center readiness for change, and organizational climate—before and after implementation of a quality improvement intervention.


Time Schedule and Publication


The timeline of the two phases of the VIQI Project and planned data collection activities is shown in Table 1 and Exhibit 8. The Pilot Study is scheduled to begin in Winter 2018 (or following OMB approval of this request) and end in Spring 2019. Dates included in Exhibit 8 are based on OMB approval of this information collection request in January 2018. The timeline will be adjusted, if necessary. The activities are as follows:


  • Starting Winter 2018 (or following OMB approval of this request), VIQI will begin screening potential centers to assess their eligibility for meeting the sampling criteria for the Pilot Study. Screening centers for eligibility and recruitment of centers will continue for up to five months, until the targeted number of centers (about 40) is successfully recruited for inclusion in the Pilot Study;


  • Upon programs and centers being brought on board in the Pilot Study, Baseline Instruments for the Pilot Study, including self-administered surveys distributed to center administrators, lead and assistant teachers, and coaches serving the participating centers, as well as two time points of classroom observations, will be collected. In addition, self-administered parent/guardian information forms will be distributed to parents/guardians of children being served in the participating centers, and a subset of these children will be asked to complete a set of direct child assessments. The baseline data collection will be conducted in Fall 2018. Altogether, it is expected that baseline data collection will continue for approximately four months;


  • Beginning in Fall 2018, Fidelity of Implementation Instruments for the Pilot Study will be collected. This will include collection of logs completed by lead teachers and assistant teachers, as well as logs completed by coaches. This data collection effort is expected to span approximately nine months in coordination with the academic or school year calendar that most centers are expected to follow. A subset of administrators, lead teachers, assistant teachers, and coaches will also be asked to participate in semi-structured interviews conducted in small group or one-on-one format in Winter 2019 to talk about their experiences installing the interventions and completing the data collection instruments. A subset of classrooms will be observed by an external observer using an implementation fidelity observation protocol to document their fidelity of implementation and the treatment contrast in teaching practices across different conditions; and


  • Beginning in Winter 2019 and ending in Spring 2019, Follow-Up Instruments for the Pilot Study will be collected. This will consist of classroom observations at three points in time in the Winter and/or Spring 2019, as well as self-administered surveys collected from administrators, lead teachers, assistant teachers, and coaches in Spring of 2019. In addition, a subset of children being served in participating centers will be asked to complete a set of direct child assessments and lead teachers will be asked to complete reports on those selected children in Spring. This data collection effort is expected to span approximately four months.


The Impact Evaluation and Process Study are scheduled to begin in Summer/Fall 2019 and end in Spring 2021. However, the timing and design of the Impact Evaluation and Process Study will be finalized based upon lessons learned from the Pilot Study. These activities are as follows:


  • Starting Summer/Fall 2019, VIQI will begin screening potential centers to assess their eligibility for meeting the sampling criteria for the Impact Evaluation and Process Study. Screening centers for eligibility and recruitment of centers will continue for approximately nine months, until all 165 centers are successfully recruited;


  • Upon centers being recruited into the VIQI Project, Baseline Instruments for the Impact Evaluation and Process Study will be collected. This battery of instruments includes self-administered surveys distributed to center administrators (e.g., directors or executive directors), lead and assistant teachers, and coaches serving the participating centers and one time point of classroom observations. These baseline data collection activities will begin in Winter 2020 and end in Fall 2020, with the majority of data being collected prior to random assignment. In addition, in Fall 2020, self-administered parent/guardian information forms will be distributed to parents/guardians of children being served in the participating centers and a subset of these children will be asked to complete a set of direct child assessments. Altogether, it is expected that baseline data collection will continue for approximately nine months;


  • Beginning in Fall 2020, Fidelity of Implementation Instruments for the Impact Evaluation and Process Study will be collected. This will include collection of logs completed by lead teachers and assistant teachers, as well as logs completed by coaches. This data collection effort is expected to span approximately nine months in coordination with the academic or school year calendar that most centers are expected to follow. Administrators, lead teachers, assistant teachers, and coaches will also be asked to participate in semi-structured interviews conducted in small group or one-on-one format in Winter 2021 to talk about their experiences installing the interventions and completing the data collection instruments. A subset of classrooms will be observed to document their fidelity of implementation and the treatment contrast in teaching practices across different conditions; and,


  • Beginning in Winter 2021 and ending in Spring 2021, Follow-Up Instruments for the Impact Evaluation and Process Study will be collected. This will consist of classroom observations at three points in time in the Winter and/or Spring, and self-administered surveys collected from administrators, lead teachers, assistant teachers, and coaches in Spring. In addition, a subset of children being served in participating centers will be asked to complete a set of direct child assessments and lead teachers will be asked to complete reports on those selected children in Spring. This data collection effort is expected to span approximately four months.


Exhibit 8. Data Collection Timeline

Activity | Start Date | End Date

Pilot Study
Instruments for Screening and Recruitment of ECE Centers (Attachments A.1 – A.3) | January 2018 (or upon OMB approval of package) | June 2018
Baseline Instruments (Attachments B.1 – B.6) | August 2018 | November 2018
Fidelity of Implementation Instruments (Attachments D.1 – D.4) | September 2018 | June 2019
Follow-up Instruments (Attachments C.1 – C.5) | March 2019 | June 2019

Impact Evaluation and Process Study
Instruments for Screening and Recruitment of ECE Centers (Attachments A.1 – A.3) | July 2019 | June 2020
Baseline Instruments (Attachments B.1 – B.6) | March 2020 | November 2020
Fidelity of Implementation Instruments (Attachments D.1 – D.4) | September 2020 | June 2021
Follow-up Instruments (Attachments C.1 – C.5) | March 2021 | June 2021

Data Analysis Timeline

Analysis Activity | Start Date | End Date

Pilot Study
Analysis of Data from Baseline Instruments | November 2018 | January 2019
Analysis of Data from Fidelity of Implementation Instruments | September 2018 | December 2019
Analysis of Data from Follow-up Instruments | June 2019 | December 2019

Impact Evaluation and Process Study
Analysis of Data from Baseline Instruments | March 2020 | June 2022
Analysis of Data from Fidelity of Implementation Instruments | September 2020 | June 2022
Analysis of Data from Follow-up Instruments | March 2021 | June 2022

Publications Timeline

Publication | Expected Date
Report summarizing findings from pilot study and/or summarizing the design for impact evaluation and process study | 2019
Report on findings from impact evaluation and process study | 2022
Research brief summarizing supplemental findings from impact evaluation and process study | 2022
Journal article | 2022


A17. Reasons Not to Display OMB Expiration Date

All instruments will display the expiration date for OMB approval.


A18. Exceptions to Certification for Paperwork Reduction Act Submissions

No exceptions are necessary for this information collection.
