
HHS/ACF/OPRE Head Start Classroom-based Approaches and Resources for Emotion and Social skill promotion (CARES) project: Impact and Implementation Studies

OMB: 0970-0364








HHS/ACF/OPRE

HEAD START CLASSROOM-BASED APPROACHES AND RESOURCES FOR EMOTION AND SOCIAL SKILL PROMOTION (CARES) PROJECT:



2ND PACKAGE: IMPACT AND IMPLEMENTATION STUDIES





SUPPORTING STATEMENT FOR OMB CLEARANCE



February 3, 2009


Revised May 6, 2009







A. JUSTIFICATION

A1. Circumstances Necessitating Data Collection

Recently, researchers and policy makers have drawn attention to the high rate of emotional and behavioral difficulties among young, low-income children.1 Exposed to a wide range of psychosocial stressors, children in poor neighborhoods are at greater risk of developing emotional and behavioral difficulties and have less access to mental health services than their middle-income peers.2 These difficulties may compromise their chances for success in school. Children who have difficulty regulating their emotions and behaviors (e.g., children who are sad, withdrawn, or disruptive) have been found to receive less instruction, to be less engaged and less positive about their role as learners, and to have fewer opportunities for learning from peers.3 This work signals the need to build and disseminate evidence about preschool classroom processes that support, rather than compromise, young children’s emotional and behavioral development, in conjunction with and in support of practices that promote their early learning.



Recent developmental research has identified several fundamental social and emotional skills that underlie children’s competent social interactions with teachers and children as well as their academic engagement, or attention to the learning tasks of schooling. These specific skills have been the targets of a number of promising program enhancements that have been implemented and studied in a range of preschool settings.4 At the same time, these studies have largely been conducted in ideal conditions: in single cities, with programs highly motivated to take up the intervention, and with training and technical assistance provided under the direction of senior academic researchers. A well-designed project with a nationally representative sample of Head Start programs and a rigorous multi-celled cluster-analytic design holds the promise of identifying the most effective of these new approaches and providing lessons about how they can best be integrated into Head Start classrooms around the country.


The study will utilize a group-based randomized experimental design to test the effects of three very different evidence-based program enhancements designed to improve the social and emotional development of three- and four-year old children in Head Start classrooms. The study aims to provide the information federal policy makers and Head Start providers will need if they are to increase Head Start’s capacity to improve the social-emotional skills and school readiness of preschool-age children. The project is sponsored by the Office of Planning, Research, and Evaluation (OPRE) of the Administration for Children and Families (ACF), and will be conducted under a contract to MDRC.


A1.1 Overview of the CARES Project

The design and measurement of the CARES project focus primarily on four-year old children in Head Start; we refer to this component as the “core” study. We also plan to assess impacts on three-year old children present in mixed-age classrooms in an efficacy “add-on” study that can advance knowledge and inform policy and practice. This document requests OMB authorization for impact and implementation data collection activities related to the CARES project for both the four-year old and three-year old efforts.



Impact data collection. For the impact study, this submission covers five surveys and a direct child assessment. The teacher self-report surveys (baseline and follow-up) and the teacher reports on individual children will be self-administered using paper and pencil and returned by mail. The parent surveys (baseline and follow-up) will be administered over the phone using a CATI system. The direct child assessment will be administered in person and scored on-line. As we discuss in Section B1, statistical power in this group-randomized trial is improved by the inclusion of covariates measured at baseline that explain much of the variation across individuals in key outcomes of interest. Because the best predictor of future outcomes is past outcomes, our data collection plan proposes baseline measures of all major outcome constructs.
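To make the precision argument concrete, the relationship below is a standard illustration (not a formula taken from the CARES design documents, and it ignores the clustering of students within classrooms and centers): a baseline covariate, such as a pretest, that explains a proportion $R^{2}$ of the variation in an outcome reduces the variance of a simple two-group impact estimate approximately in proportion to the unexplained share of that variation.

$\operatorname{Var}(\hat{\Delta}_{\text{adjusted}}) \approx (1 - R^{2}) \cdot \operatorname{Var}(\hat{\Delta}_{\text{unadjusted}})$

For example, a pretest that correlates 0.7 with the follow-up outcome ($R^{2} = 0.49$) roughly halves the variance of the impact estimate, which is why baseline measures of each major outcome construct are proposed.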



Implementation data collection. For the implementation study, this submission covers six interview discussion guides, as well as implementation-related items that will be added to the teacher self-report surveys listed above. Two-day implementation site visits will be conducted with a subset of participating program classrooms. Site visit discussion guides will facilitate in-person, qualitative interviews with coaches, lead and assistant teachers, center directors, other center-based staff, and grantee/delegate agency directors; these interviews will be audiotaped. The trainer interview will be conducted over the telephone and will also be audiotaped. These implementation instruments are intended to capture variation in program implementation and key predictors of that variation.



Site selection. As discussed in our prior OMB submission, a sampling plan was created to provide a sample of Head Start grantees/delegate agencies and centers for the core study that represents a compromise between a pure probability sample that is nationally representative of the Head Start national child population (which is not feasible) and a purely opportunistic sample of volunteer participants. The plan is designed to produce a sample of 20 grantees/delegate agencies within which Head Start centers will be randomized to treatment groups or a control group. The sample will be divided into two cohorts of study sites. Cohort 1 (4 grantees/delegate agencies) will launch the interventions to be tested in 2009-2010 and Cohort 2 (16 grantees/delegate agencies) will launch the interventions in 2010-2011.



The five main steps of the Head Start CARES sampling plan are:

1. Define a population of Head Start grantees/delegate agencies for inclusion in the sampling frame;

2. Stratify all grantees/delegate agencies in the sampling frame;

3. Randomly sample candidate grantees/delegate agencies from within each stratum;

4. Screen selected grantees/delegate agencies from within each stratum; and

5. Prioritize and narrow the selection of grantees/delegate agencies for further recruitment based on screening and randomization criteria.



The Head Start population for inclusion in this study is defined based on the characteristics of grantees/delegate agencies and their centers. A number of exclusionary criteria are imposed on the CARES starting sample using information from the 2006-2007 Program Information Report (PIR). As a result, certain types of Head Start grantees/delegate agencies are excluded from the study population. Exclusions are listed below:

a. Grantees/delegate agencies that provide only Early Head Start programs;

b. Grantees/delegate agencies that only serve migrant children;

c. Grantees/delegate agencies located in U.S. territories, Alaska and Hawaii;

d. Grantees/delegate agencies that only provide family child care or that provide services primarily in a child’s home;

e. Grantees/delegate agencies that are more than 100 miles from what the Federal Aviation Administration defines as a “primary airport” (to minimize logistical constraints for training and data collection); and

f. Grantees/delegate agencies that operate three or fewer centers (due to our research design).



The second step in the sampling plan is to stratify all grantees/delegate agencies within the sampling frame based on three criteria: (1) the region of the country in which they are located (Northeast, South, Midwest/Plains, and West, as defined in the CARES sampling plan); (2) the demographic composition of their child enrollment; and (3) the Metro/Non-Metro nature of their location. This process results in a total of 14 strata.



The third step of our sampling plan is to randomly select a “starting sample” of multiple grantees/delegate agencies with probability proportional to their total child enrollment (size). At this point, the plan calls for imposing two additional exclusions to the study population:

g. Grantees/delegate agencies that have been in operation for less than two years (these grantees/delegate agencies may not represent stable Head Start operations and therefore data from their inclusion may not generalize to Head Start settings across the nation); and

h. Grantees/delegate agencies that should be excluded from the Head Start CARES research study for compliance or performance reasons.



These exclusions were determined by the Office of Head Start, and the excluded grantees/delegate agencies were dropped from consideration for the CARES study.
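The sketch below illustrates the mechanics of steps 2 and 3 (stratification and selection with probability proportional to size). The stratum labels, grantee identifiers, enrollment counts, and number of draws per stratum are hypothetical placeholders used only for illustration; they are not values from the CARES sampling plan.

```python
import numpy as np
import pandas as pd

# Hypothetical frame of eligible grantees/delegate agencies after exclusions a-h.
frame = pd.DataFrame({
    "grantee_id": ["G01", "G02", "G03", "G04", "G05", "G06"],
    "stratum":    ["Northeast-Metro", "Northeast-Metro", "South-NonMetro",
                   "South-NonMetro", "South-NonMetro", "West-Metro"],
    "enrollment": [420, 980, 310, 150, 610, 530],   # total child enrollment (size)
})

rng = np.random.default_rng(seed=2009)
draws_per_stratum = 1   # placeholder; the actual plan sets the number of draws by stratum

starting_sample = []
for stratum, group in frame.groupby("stratum"):
    # Probability proportional to size: weight each grantee by its share of
    # the stratum's total child enrollment.
    probs = group["enrollment"] / group["enrollment"].sum()
    chosen = rng.choice(group["grantee_id"].to_numpy(), size=draws_per_stratum,
                        replace=False, p=probs.to_numpy())
    starting_sample.extend(chosen.tolist())

print("Starting sample of grantees/delegate agencies:", starting_sample)
```

In the actual plan, the selected grantees/delegate agencies would then move to the screening and prioritization steps described below.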

The fourth step in the sampling process involves calling and screening grantees/delegate agencies to collect additional information about Head Start programming, ability and willingness to participate, and characteristics of each center. Based on information collected through this screening process, the fifth and final step will involve identifying eligible grantees/delegate agencies for site visits, classroom observations, and discussions with Head Start staff. This final round of site recruitment will ultimately determine the grantees/delegate agencies that will participate in the study. As of this writing, the CARES recruitment team has begun the process of screening sites, and no site has yet been formally accepted into the project. OMB approval for the site recruitment instruments was granted on October 27, 2008.



Evaluation component. The research design for the core Head Start CARES project with four-year old children will consist of a three-treatment design which will measure the net impacts of three interventions (treatments) relative to current Head Start practice. As described above under site selection, the design begins with a sample of Head Start grantees or delegate agencies that are eligible and willing to participate in the study. Participating Head Start centers within each grantee/delegate agency will then be randomized to a treatment group which would receive one of the interventions being tested or to a control group which would not receive any of these interventions. In this way, randomization of centers would be “blocked” by grantee/delegate agency.
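To illustrate the blocked random assignment just described, the sketch below assigns hypothetical centers to the three treatment arms and a control arm within each grantee/delegate agency block. The grantee and center identifiers and the generic arm labels are illustrative assumptions; the actual assignment will follow the study's randomization protocol.

```python
import random

# Hypothetical participating centers, keyed by grantee/delegate agency (the block).
centers_by_grantee = {
    "Grantee A": ["A-1", "A-2", "A-3", "A-4"],                               # one set of four
    "Grantee B": ["B-1", "B-2", "B-3", "B-4", "B-5", "B-6", "B-7", "B-8"],   # two sets of four
}
arms = ["Treatment 1", "Treatment 2", "Treatment 3", "Control"]

random.seed(2009)
for grantee, centers in centers_by_grantee.items():
    shuffled = centers[:]
    random.shuffle(shuffled)
    # Deal the shuffled centers across the four arms so that each set of four
    # centers within the grantee block contributes one center to every arm.
    for i, center in enumerate(shuffled):
        print(f"{grantee} / center {center} -> {arms[i % len(arms)]}")
```

Because assignment happens separately within each grantee/delegate agency, every block contributes centers to all four arms, which is what allows impacts to be estimated from within-block comparisons.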



A sample of classrooms in participating Head Start centers and all students within those classrooms will be included in the treatment group or control group to which their center is randomized. The net impacts of each intervention would then be measured by comparing measures of future outcomes for students, classrooms, or teachers for each treatment group to those for the control group. Data collection for which OMB authorization is being sought will play an important role in the impact and implementation studies, as described in Section A2.



Efficacy study with 3-year olds. In addition to the core evaluation described above, we are planning to conduct an efficacy study of the three-year old children who will be enrolled in the participating mixed-age classrooms (as we discuss below, these models have not been tested in classrooms serving only three-year olds). Because the sampling strategy provides limited power to detect impacts for this age group, this add-on study will test the effects of social-emotional program enhancements in general (rather than the effect of any specific strategy) on outcomes for children.



Notably, evidence on the effects of the three social emotional program enhancements proposed for Head Start CARES is much weaker for three-year olds than for four-year olds. Review of the efficacy evidence indicates that these three program models have not been tested in classrooms with exclusively three-year old children, and findings for children in mixed-age classrooms have focused on the four-year old children. However, we believe this is a unique opportunity for knowledge building, and it will be important to provide HHS, policy makers, and the Head Start community with this information about an age group currently being served by Head Start.



Evaluation schedule. Site recruitment of Cohort 1 will continue through the 2008-2009 school year, and we anticipate randomizing centers to program and control groups in summer 2009. Site recruitment for Cohort 2 will follow a similar schedule during the 2009-2010 school year, and we anticipate randomizing centers to program and control groups in summer 2010. Cohort 1 program implementation will occur in 2009-2010, and Cohort 2 implementation in 2010-2011. Data and findings will be issued and shared over the course of the multi-year evaluation through a final implementation report (2012), a final impact report (2013), and public use files. Note that in developing the public use file, we will implement data masking procedures to ensure that sample members cannot be identified individually. See Appendix I for an example of procedures that were developed for another DHHS project conducted by MDRC. We will implement similar masking procedures for this project.



A2. How, By Whom, and For What Purpose Are Data to be Used

Purposes of the data collection include the following:

  • To study the effects of these specific programs or practices within the Head Start population;

  • To study whether specific programs or practices are more or less effective for certain populations;

  • To study which characteristics of Head Start settings are likely to contribute to effective implementation of different programs or practices;

  • To study which factors are related to training, technical assistance, implementation and fidelity of programs or practices within Head Start settings.



A2.1 The Overall Role of Instruments in the CARES Project

The CARES impact and implementation surveys and discussion guides will yield important data not available through administrative records. The impact study will provide information on, for example: children's social-emotional well-being; academic outcomes for children; student-teacher relationships; and background characteristics of children and teachers. The implementation study will provide information on, for example: program fidelity; implementation; and adaptation. These surveys and interviews will be analyzed in conjunction with programmatic data collected as part of the administration of the program models to understand the impacts on child outcomes.

For the impact study, we are interested in assessing effects on children for a core set of key outcomes that are either direct targets of these intervention approaches (social skills, emotion skills, executive function) or outcomes we are trying to affect as a result of those changes in skills (behavior problems, approaches to learning). As discussed later, to assess these impacts it is important that baseline measures of our key constructs also be included in our proposed plan.



Exhibit A2-1 (Appendix B) presents the proposed timeline for the instruments pending OMB approval. Notably, the timing of training in late summer (in two of the three program models) necessitates the collection of teacher-level baseline data prior to the beginning of the school year. Prior research has found that, immediately after training, teachers already report differences in their practice, and those differences would be captured in baseline surveys administered after training. To preserve the pre-treatment nature of the baseline data, we propose collecting the Baseline Lead Teacher Self-Report Survey in the spring of the previous academic year.



A2.2 The Role of Specific Survey Components



Impact data collection. Whenever possible, the questions in the impact study surveys were taken or adapted from existing instruments that have been used and validated with national samples or from instruments used in other HHS evaluations. As such, comparisons with national or other evaluation findings will be possible. Section A8 provides more information about instruments used in the development of survey questions. Appendices C.1-C.5 and C.12 provide justification for the impact survey instruments and direct child assessment. We will work with the survey firm, Survey Research Management (SRM) – and if necessary with firms specializing in translation – to ensure that these surveys are translated for administration with non-English-speaking populations as needed.



A2.2a CARES Baseline Lead Teacher Self-Report Survey

Lead teachers of every participating classroom will complete a self-administered survey (Appendix A.1) at the baseline of the Head Start year devoted to gathering more specific data regarding their demographic characteristics and educational background, structural characteristics of the classroom, teacher emotion socialization, teacher burnout, and mental health. In addition to these impact-specific measures, this survey will include items assessing teacher characteristics and contextual factors that may influence implementation. These include items regarding teacher views on social emotional development, organizational climate, past training and professional development, and the dynamic of the lead teacher-teaching assistant relationship. As stated above, we would ideally prefer to administer these baseline surveys in the spring of the previous academic year to avoid contamination of baseline data by the program process, since two of the program models begin their teacher training in the summer. The approximate administration time for the baseline teacher self-report survey is estimated at 20 minutes.



A2.2b CARES Follow-up Lead Teacher Self-Report Survey

Lead teachers of every participating classroom will complete a self-administered survey (Appendix A.2) at the spring follow-up of the Head Start year, which will assess structural characteristics of the classroom, teacher emotion socialization, teacher burnout, and mental health. In addition to these impact-specific measures, this survey will include items assessing teacher characteristics and contextual factors that may influence implementation. These include items regarding teacher views on social emotional development, social emotional-related classroom practices, organizational climate, past training and professional development, and the dynamic of the lead teacher-teaching assistant relationship. Surveys completed by program group teachers will include items focusing on experiences working with the coach, perceptions of the program model, and items to assess supervisor monitoring and support. The approximate administration time for the follow-up teacher self-report survey is estimated at 20 minutes.



A2.2c CARES Lead Teacher Report on Individual Children

Teachers will assess children's social and emotional development, social and learning behaviors, and early academic skills and school readiness using several measures collected both at baseline and at follow-up. Teachers will complete a self-administered report on individual children (Appendix A.3) which includes the Cooper-Farran Behavioral Rating Scales (CFBRS),5 the Behavior Problems Index (BPI),6 the Student-Teacher Relationship Scale (STRS),7 the Social Skills Rating Scale-Social Skills scale (SSRS),8 the Academic Rating Scale (ARS),9 and parent-teacher involvement (Parent-Teacher Involvement Questionnaire)10 for the four-year old children in their classrooms, as well as the three-year old children, where applicable. The approximate administration time for the teacher report on individual children is estimated at 20 minutes.



A2.2d CARES Baseline Parent Survey

As part of the baseline assessment, parents of both three- and four-year old children will be asked to complete a telephone survey (Appendix A.4) which includes a small set of demographic questions on the family (e.g., marital status, race/ethnicity) and the child (e.g., gender, age) that will allow us to describe the sample. In addition, parents will be asked about their educational background, economic status (income, public assistance status, employment experience), and child exposure to a range of psychosocial risks such as parental depression and parenting stress. Parents will be asked to assess their children's externalizing and internalizing behaviors and children's social competence. Parent involvement in school and parent emotion socialization will also be assessed. Finally, we propose using items on financial resources, housing, and connections to social institutions to assess recency of immigration. The approximate administration time for the parent survey is estimated at 20 minutes.



A2.2e CARES Follow-up Parent Survey

A follow-up survey (Appendix A.5) will be administered to parents of both three- and four-year old children during the spring of the follow-up year. It will provide key measures of the supportiveness of children's social contexts by assessing changes in characteristics of family background such as parent employment and income, reliance on public assistance, and marital status. This survey will be completed as a phone survey or, if applicable, in the home when the follow-up direct child assessments occur. Parents will be asked to assess their children's externalizing and internalizing behaviors and children's social competence, and parent emotion socialization practices will be assessed. Again, the approximate administration time for the parent survey is estimated at 20 minutes.

A2.2f CARES Direct Child Assessment

Direct child assessments of four-year old children will be administered to provide objective assessments of children's well-being, since no administrative data on developmental outcomes for children are available.11 Children in participating classrooms will be asked by an interviewer to perform several self-regulation tasks, which assess children's working memory, motor control, impulsivity, and set shifting skills (all components of executive function) at the time of the assessment. These tasks include: 1) a pencil tapping task, in which the child is asked to tap twice when the interviewer taps once, and tap once when the interviewer taps twice; 2) matching pictures along varying dimensions (color, size); and 3) a head-to-toes task, in which children are asked to respond naturally to a command such as “touch your head” or “touch your toes” and then are instructed to switch the rules for the task by responding in the opposite way. For these same children, we will include assessments of children's cognitive development using the broad math and reading subscales of the Woodcock-Johnson III12 and the Expressive One-Word Picture Vocabulary Test (EOWPVT).13 Additionally, we will include a task which involves showing children different vignettes about peer-related hostility and conflict resolution14 and a task to assess identification of emotions.15 The administration time of the direct child assessments averages 45 minutes.



Direct child assessments will not be completed with three-year olds because some of the proposed tasks (those assessing children's executive function skills) have not been validated for use with three-year olds. For this age group, we will assess the effects of the program models through other sources of data, particularly the teacher and parent report measures, which we think are well suited to assessing impacts for an efficacy study.



Implementation data collection. The implementation study measures and discussion guides were created to inform the replication of CARES program models in other Head Start classrooms and to help interpret impacts.



To inform the replication of CARES program models in other Head Start classrooms, the implementation study needs to:

  • document the nature, extent, and variation in implementation of the training, coaching, and classroom delivery of the three program models in Head Start classrooms;

  • assess the quality of the implementation of the professional development capacity building (i.e., the training, coaching, and mentoring) and the degree to which the social emotional program models were implemented in the classroom with fidelity (“intervention fidelity”);

  • understand which contextual factors (characteristics of teachers, coaches, classrooms, and Head Start centers/grantees) may be associated with successful coaching, training, and implementation; and

  • document what is happening in Head Start classrooms regarding social emotional-related practices specifically, and regarding other instructional practices, processes, and classroom quality more generally.



With this in mind, we developed the most scientifically sound and cost-effective approach to an implementation study. Our approach reflects our review of the implementation evaluation literature and input we received from experts in the field of implementation research, researchers implementing the selected program models, and ACF and OPRE staff. Below we present those implementation measures that present a burden to participants and are in addition to data already collected as part of the program model implementation. Appendices C.6-C.11 provide justification for the implementation discussion guides.



We answer our questions by relying on two sources of data: quantitative data collected on all classrooms and teachers, and qualitative data on teachers, coaches, and trainers. As we describe below, each provides a unique perspective that allows us to address our questions:

In the qualitative interviews, we focus on getting detailed, descriptive information about each program model: to understand how the trainers and coaches conceptualize the key features of their programs, which aspects of the program they think are more or less challenging to implement on the ground, and what that means for which components of the training and models they emphasize in practice. We are using the qualitative interviews to understand what features of these models the coaches and trainers are using in practice and whether that is consistent with the theory of change underlying each model. For example, we want to learn how much Preschool PATHS coaches work with teachers on behavior management approaches as compared with emotion coaching (a central component in the development of PATHS). This kind of information will be critical to understanding differences, or lack thereof, in program impacts on aspects of classroom climate and outcomes for children across the program models. For example, if we find impacts from the PATHS model on behavior management, we will be able to understand whether that was because the other aspects of practice coaches were emphasizing led, in turn, to changes in behavior management, or whether coaches were indeed training teachers on behavior management skills. In sum, the results from the qualitative interviews will provide detailed descriptions of each program model -- and its respective implementation challenges, key features, and professional development foci -- that will be critical to understanding these models and to interpreting impacts.



By contrast, we will use the quantitative data collection to measure and understand sources of variation in implementation at the classroom level. Given our reliance on certified trainers and highly structured protocols for the delivery of this training across the three program models, it is our expectation that the greatest variation in implementation will be observed across classrooms and teachers rather than across trainers in these models. With the large number of classrooms included in the Head Start CARES project, we can answer questions about characteristics that might affect implementation—across teachers, program models, and sites. To conduct these analyses, we will want to exploit the full range of variation across classrooms and therefore propose to collect these quantitative data across all classrooms.



A2.2g CARES Site Visit: Coach Interview Guide

In-person interviews with an implementation research team member (Appendix A.6) will be conducted with every coach in the spring of the program year to obtain their perceptions of how the mentoring and coaching is proceeding, including the coach-teacher relationship, the trainer-coach relationship, successes and challenges of coaching and program implementation, and suggestions for improving the coaching and/or mentoring by trainers. Coaches will also be asked to provide their perceptions of teachers' understanding of the program model and its core principles, to assess teachers' confidence and motivation to implement the program model in the classroom, and to assess the nature of the lead teacher-teaching assistant relationship. Coaches will be asked to assess overall teacher fidelity of implementation, challenges to implementation, and whether their own coaching helps teachers maintain fidelity. Lastly, coaches will provide their assessment of important contextual factors such as staff cohesion, their own views on social emotional development, their perceptions of teachers' Head Start supervisors' monitoring of and support for teachers' implementation of the program model, and their view of workplace priorities and how CARES fits with these priorities. The approximate interview time for the coach interview is estimated at 60 minutes.



A2.2h CARES Site Visit: Teacher Interview Guide

In-person interviews with an implementation research team member (Appendix A.7) will be conducted with a subset of lead teacher-teaching assistant pairs (each will be interviewed separately) in the spring of the program year to obtain their perceptions of how the coaching is proceeding, including the coach-teacher relationship, successes and challenges of coaching and program implementation, and suggestions for improving the coaching. Teachers will also be asked to reflect on their understanding of the program model, their views on “treatment acceptability”, their confidence and motivation to implement the program model in the classroom, the lead teacher-teaching assistant relationship, and the roles each teacher plays in implementing the program model in the classroom. Teachers will be asked to discuss how often they implement various program components, challenges they face in implementation, adaptations they are using, and how coaches help them to implement with fidelity and acceptable flexibility. Additionally, teachers will be asked about important contextual factors such as their views on social emotional development, their views of their workplace's priorities, how the program model fits with these priorities, and their own Head Start supervisor's monitoring of and support for their implementation of the program model. The approximate interview time for the teacher interview is estimated at 60 minutes.



A2.2i CARES Site Visit: Center Director Interview Guide

We propose conducting interviews with center directors (Appendix A.8) to collect information to understand the context in which the implementation occurred. In-person interviews with an implementation research team member will be conducted in the spring of the program year to obtain information on important contextual factors such as center directors' views on social emotional development, their opinion of the program model, their monitoring of and support for the teachers' implementation of the program model, their involvement in the implementation of the program model, their view of workplace priorities and how CARES fits in with those priorities, factors affecting implementation, and their sense of the effect of program implementation on the center. Additionally, center directors will be asked to offer their opinions of factors in the community that may have affected the implementation of the program model. The approximate interview time for the center director interview is estimated at 60 minutes.





A2.2j CARES Site Visit: Center Staff Interview Guide

We propose conducting interviews with center staff (Appendix A.9) to collect information to understand the context in which the implementation occurred. In-person interviews with an implementation research team member will be conducted with educational coordinators, mental health consultants, and disabilities coordinators in the spring of the program year to obtain information on important contextual factors such as their views on social emotional development, their opinion of the program model, their sense of staff autonomy, openness to change, and staff cohesion, their involvement in the implementation of the program model, their view of workplace priorities and how CARES fits in with those priorities, their sense of other factors that may have affected implementation, and their perceptions of the effect that the program model implementation has had on the center. Additionally, these center staff will be asked to offer their opinions of factors in the community that may have affected the implementation of the program model. The approximate interview time for the center staff interview is estimated at 60 minutes.



A2.2k CARES Site Visit: Grantee/Delegate Agency Director Interview Guide

We propose conducting interviews with grantee/delegate agency directors (Appendix A.10) to collect information to understand the context in which the implementation occurred. In-person or telephone interviews with an implementation research team member will be conducted in the spring of the program year to understand their support for the CARES project and program models, their opinion of the program models, factors affecting implementation, and their sense of the effects of the program implementation on the centers. Additionally, grantee/delegate agency directors will be asked to offer their opinions of factors in the community that may have affected the implementation of the CARES program models. The approximate interview time for the grantee/delegate agency director interview is estimated at 60 minutes.



A2.2l CARES Trainer Interview Guide

A phone interview will be conducted with every trainer by an implementation research team member (Appendix A.11) in the spring of the program year to gather information on the coaches they are working with and the process of supervising those coaches, to better understand the focus of the professional development effort. These interviews will ask trainers to provide ratings on their coach(es) and individual teachers/classrooms, and to report on their sense of what kinds of factors are leading to good and not-so-good implementation and adaptation of the program models in the classrooms. Trainers will also be asked to provide their perceptions of teachers' understanding of the program model and its core principles, to assess teachers' confidence and motivation to implement the program model in the classroom, and to assess the nature of the lead teacher-teaching assistant relationship. Trainers will also provide a snapshot of what occurs during the mentoring supervision phone calls with coaches and discuss the challenges that coaches face. Lastly, trainers will be asked to assess overall teacher fidelity of implementation, challenges to implementation, the dynamic of the coach-teacher relationship, and teacher engagement in coaching. The approximate interview time for the trainer interview is estimated at 60 minutes.

A3. Use of Information Technology for Data Collection to Reduce Respondent Burden

The use of Computer Assisted Telephone Interviewing (CATI) has been incorporated into the data collection for the parent surveys in order to ensure accuracy of data, reduce the possibility of human error, allow for faster data analysis, and reduce respondent burden. Videotaping of direct child assessments will be conducted whenever possible. Other, non-technology efforts to reduce burden include training interviewers extensively and structuring survey sections with lead questions that enable skip patterns.



A4. Efforts to Identify Duplication

The surveys focus on information that cannot be found in administrative records or other existing sources. They will facilitate the collection of data on, for example, teacher, parent and child socio-emotional well-being, children’s behavior problems, and other child outcomes, and these types of information are not available routinely or systematically in program records. Additionally, the implementation research instruments will enable us to assess fidelity of implementation and adaptation to the program models and inform replication.


A4.1 Reasons Why Available Information Cannot Be Used

Comparable information from other sources does not exist for the variables covered in the CARES survey instruments and discussion guides for the populations included in this project.



A5. Burden on Small Business

Does not apply. All respondents are individuals.

A6. Consequences to Federal Program or Policy Activities if Data Collection is not Conducted

If the survey data are not collected, we will not be able to adequately evaluate the impact and implementation of the CARES social-emotional enhancement program models. The analysis of the short- and long-term impacts of CARES social-emotional strategies would be limited, because changes in many important outcomes cannot be captured in administrative records data. These include measures of child well-being (early verbal, literacy, and math skills; emotional knowledge; self-regulation and executive functioning skills; and social problem-solving and competence) and measures of teacher positive behavior support, instructional practices, teacher burnout, and depression. Surveys, direct assessments, and qualitative interviews are the only way of obtaining these data and are required in order to fully understand the effects of these treatment strategies. Should funding become available for a benefit-cost analysis at a later date, information on outcomes for children would be an important element of that effort as well.



A7. Special Data Collection Circumstances

No such circumstances.



A8. Form 5 CFR 1320.8(d) and Consultations Prior to OMB Submission

The 60-day Federal Register notice soliciting comments for the CARES Impact and Implementation Studies survey instruments was posted in the Federal Register, Volume 73, Number 232, pages 73334-73335 on December 2, 2008. To date, one comment has been received (see Appendix F for comment and response). A copy of the published 60-day Federal Register notice is located in Appendix G.



We have developed instruments that incorporate items and scales from other major studies. To the extent possible, the questions included in the survey instruments allow for useful comparisons between the data from this project and that from other large-scale surveys. To select these measures for the various components of the survey instruments and implementation measures, we consulted with a number of individuals outside MDRC, including: Cybele Raver, Clancy Blair, Catherine Tamis-LeMonda (New York University); Karen Bierman, Robert Nix, Mark Greenberg, Celene Domitrovich (Pennsylvania State University); Nancy Hill, Stephanie Jones, Hirokazu Yoshikawa (Harvard University); Mary Louise Hemmeter (Vanderbilt University); Todd Little (University of Kansas); Nicholas Ialongo (Johns Hopkins University); Susanne Denham (George Mason University); John Lochman (University of Alabama); George Knight (Arizona State University); Bob Pianta and Bridget Hamre (University of Virginia); Dwayne Simpson (Texas Christian University); Julie Hakim-Larson (University of Windsor); Deborah Leong (Metropolitan State College of Denver); Carolyn Webster-Stratton (University of Washington); Allison Sidle Fuligni, Carollee Howes, Sharon Ritchie (UCLA); Gary Henry (University of North Carolina at Chapel Hill); Douglas Powell (Purdue University).



A9. Justification for Respondent Payments

We recognize that participation in the CARES impact surveys will place some burden on the participating teachers, parents, and children. Although many of the techniques suggested by OMB to improve response rates have been incorporated into our carefully designed instruments and the survey effort (described in Section B3), it has been our experience that small tokens of appreciation are useful when surveying teachers and low-income populations as part of a complex study design in order to acknowledge the burden placed on participants.



To be effective, the amount of the payments must fit the burden of the survey. We have based the amount to be paid to CARES respondents on prior research, and MDRC’s and the survey firm’s prior experience interviewing similar populations. We propose that the monetary amount be $15 for each teacher self-report survey at baseline and follow-up in Cohort 1. Payment amounts for Cohort 2 will be contingent on discussions with OMB and we will propose a planned variation study on payments for this effort to OMB after the baseline for Cohort 1 is completed. We propose that the monetary amount be $7 for each report on individual children that the lead teacher completes, $20 for the parent survey, and a book or toy valued at approximately $5 for the Head Start children that attempt the direct child assessment. These amounts reflect current practice in surveys using similar instruments and may take forms other than a cash payment, such as a transportation voucher or telephone calling card for the given value.





A10. Confidentiality

Privacy will be assured to the fullest extent allowable under the law. Respondents will receive information about privacy protections at the outset of the interviews. They will be informed that all of the information they provide will be kept strictly private and that study results will be presented only in aggregate form. They will also be told that completion of the survey is voluntary and that they may choose not to answer any question. Finally, we are applying for a Certificate of Confidentiality for these data, as indicated on the consent forms (see Appendix D.1 & D.2). Once the Certificate is received, non-substantive changes will be made to the consent form to reflect that respondents’ answers will be kept confidential, and revised consent forms will be re-submitted to OMB.



The following safeguards will be employed regarding privacy assurances:

  • All staff who have access to data at MDRC and the survey subcontractor firm sign an agreement to abide by corporate policies on data security and privacy. This agreement affirms each individual's understanding of the importance of maintaining data security and privacy and abiding by procedures that implement these policies.

  • All data, both paper files and computerized files, are kept in secure areas. Paper files are stored in locked storage areas with limited access on a need-to-know basis. Computerized files are managed via password control systems to restrict access as well as physically secure the source files.

  • Merged data sources have identification data stripped from the individual records or encoded to preclude identification of individuals.

  • All reports, tables, and printed materials present aggregate numbers only.

  • Compilations of individualized data are not provided to participating agencies.

  • Agreements are executed with any participating research subcontractors, partners, and consultants who obtain access to data files.



MDRC and the SRM survey firm will maintain in-house records of names, addresses, school identification numbers (if applicable), and tracing information for all sample members. This information will not be attached to survey or assessment data or made available to anyone outside appropriate staff of MDRC and the survey firm. All records identifying respondents will be kept in locked storage at MDRC, and respondents will be identified solely by a code number. Any coding, data entry and analysis requiring identification of individuals or households will use code numbers only, and a secret password will be necessary to access the data file. No data will ever be reported in such a way that individuals can be identified.



The importance of maintaining privacy will be emphasized during interviewer training, and any interviewer who knows a respondent will not be permitted to interview him or her. All staff, including coders and computer programmers, will be required to sign a privacy pledge.



At the beginning of each interview, respondents will be informed of their rights. In addition, interviewers will attempt to conduct the interview at a time and place that allows the utmost privacy for respondents. In many cases this will be in private areas at the program sites, while in others it will be in respondents’ homes or over the phone.

A11. Questions of a Sensitive Nature

Questions in some components of the CARES impact surveys are potentially “sensitive” for respondents. Respondents are asked about personal topics, such as mental health, salary and income, and marital status. The questions we have included were selected in part because they have been widely used in previous research and are respected among experts. Moreover, all will be pilot tested prior to the survey's full implementation, and if problems arise with regard to any specific items, their inclusion will be reconsidered. Also, all survey forms will contain instructions that explain questions before they are posed. Finally, respondents will be informed by research staff prior to the start of the interviews and/or surveys that their answers are confidential, that they may refuse to answer any question, that results will only be reported in the aggregate, and that their responses will not have any effect on any services or benefits they or their family members receive.


A12. Estimates of the Hour Burden of Data Collection to Respondents

Participation in all the survey impact and implementation data collection activities is completely voluntary. No sanction or penalty will be applied to respondents receiving state or federal assistance who choose not to provide information.



The estimated response burden by instrument/component was calculated based on information on survey length obtained during the pretests (see Section B4). Assuming a response rate of 80%, the total number of respondents for the CARES Baseline and Follow-up Lead Teacher Self-Report Surveys (360), Teacher Report on Individual Children (3,648), Parent Survey (3,648), Direct Child Assessment (2,880), Site Visit: Coach Interview Guide (60), Site Visit: Teacher Interview Guide (360), Site Visit: Center Director Interview Guide (60), Site Visit: Center Staff Interview Guide (180), Grantee/Delegate Agency Director Interview (20), and Trainer Interview (60) was divided by 3 to determine the average annual number of respondents across the three years of clearance. This figure was then multiplied by the annual number of responses per respondent and by the average length of the survey/assessment/interview, divided by 60, and summed across instruments to determine the annual burden in hours. The response burden breakdown for all instruments is shown in the table below.



To compute the total estimated annual cost, the total burden hours were multiplied by the average hourly wage for six labor categories. The Head Start grantee- and center-level director wages ($30.02/hour) and Head Start center staff wages ($20.81/hour) were determined from the Head Start Program Information Reports. Teacher hourly wages were computed using the national mean wage from the Bureau of Labor Statistics ($12.40/hour). For parents, we used the mean salary for full-time employees over the age of 25 who were high school graduates with no college experience ($15.03/hour). Local coach wages were estimated at $20/hour and trainer wages were estimated at an average of $175/hour. The total estimated annual cost is $35,422.80.
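The sketch below reproduces the burden and cost arithmetic described above for two illustrative rows of the table that follows; the figures are taken from the table, and the code is only an illustration of the calculation, not part of the approved methodology.

```python
# Annual burden hours = (total respondents over 3 years / 3)
#                        x responses per respondent x average burden per response (hours).
# Annual cost = annual burden hours x average hourly wage.

rows = [
    # (instrument, total respondents over 3 years, responses per respondent,
    #  average burden per response in hours (.33 hr is roughly 20 minutes), hourly wage)
    ("Teacher Report on Individual Children", 3648, 3, 0.33, 12.40),
    ("Baseline Parent Survey",                3648, 1, 0.33, 15.03),
]

for name, total_respondents, responses, burden_hours, wage in rows:
    annual_respondents = total_respondents / 3          # average over the 3 clearance years
    annual_hours = round(annual_respondents * responses * burden_hours)
    annual_cost = annual_hours * wage
    print(f"{name}: {annual_respondents:.0f} respondents/year, "
          f"{annual_hours} hours/year, ${annual_cost:,.2f}/year")
```

Running the same calculation for every row and summing reproduces the 4,493 annual burden hours and the $35,422.80 total annual cost shown in the table.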

Instrument | Expected Number of Respondents | Number of Responses per Respondent | Average Burden per Response | Annual Burden (Hours) | Average Hourly Wage of Respondents | Annual Cost
Baseline Lead Teacher Self-Report Survey | 120 | 1 | .33 hrs | 40 | $12.40 | $496.00
Follow-up Lead Teacher Self-Report Survey | 120 | 1 | .33 hrs | 40 | $12.40 | $496.00
Teacher Report on Individual Children | 1,216 | 3 | .33 hrs | 1,204 | $12.40 | $14,929.60
Baseline Parent Survey | 1,216 | 1 | .33 hrs | 401 | $15.03 | $6,027.03
Follow-up Parent Survey | 1,216 | 1 | .33 hrs | 401 | $15.03 | $6,027.03
Direct Child Assessment | 960 | 3 | .75 hrs | 2,160 | n/a | n/a
Site Visit: Coach Interview Guide | 20 | 1 | 1 hr | 20 | $20.00 | $400.00
Site Visit: Teacher Interview Guide | 120 | 1 | 1 hr | 120 | $12.40 | $1,488.00
Site Visit: Center Director Interview Guide | 20 | 1 | 1 hr | 20 | $30.02 | $600.40
Site Visit: Center Staff Interview Guide | 60 | 1 | 1 hr | 60 | $20.81 | $1,248.60
Grantee/Delegate Agency Director Interview Guide | 7 | 1 | 1 hr | 7 | $30.02 | $210.14
Trainer Interview Guide | 20 | 1 | 1 hr | 20 | $175.00 | $3,500.00
ESTIMATED TOTALS | | | | 4,493 | | $35,422.80


A13. Estimates of Capital, Operating, and Start-Up Costs to Respondents

Not applicable. All surveys and direct child assessments will be conducted by a subcontracted survey firm.

A14. Estimates of Costs to Federal Government

The estimated cost for designing, administering, processing, and analyzing this survey impact and implementation data until the end of the project is $5,408,335. Therefore, the total estimated cost for the three years of clearance is $4,056,251.25, with an average annual cost of $1,352,083.75.

A15. Changes in Burden

Given changes to the design of the implementation study and a lower estimate of the three-year old sample, the annual burden (hours) for the CARES impact and implementation studies has decreased from 10,168.8 hours (reported in the Federal Register 60-day notice) to 4,493 hours. Site visit interviews will now be conducted with center directors and grantee/delegate agency directors, in addition to the interviews conducted with coaches, teachers, and center staff. The previously proposed trainer survey will now be conducted as a trainer phone interview. The coach survey has been omitted from the design. Different teacher self-report and parent instruments will be used at baseline and follow-up, so that change is reflected as well. Revisions for the 30-day notice also included reducing the annual number of respondents to reflect the average annual responses across the life of the project and having the average burden hours per response accurately reflect the burden time in any given year.


A16. Tabulation, Analysis, and Publication Plans and Schedule


A16.1a Assessment of Data Quality and File Construction

These surveys will go through a rigorous series of tests for completeness and quality. Professional staff at the survey firm will review the initial cases completed by each interviewer and perform occasional spot checks after that. Editing/coding staff will review questionnaires for quality and consistency after this initial period. Interviewers will be apprised of any problems found and retrained if needed. During the coding of data, coder reliability checks will be undertaken repeatedly to verify that coding procedures are being followed correctly. Data entered into computer files will be assessed for missing information, outliers, and other data problems according to standard procedures. If necessary, questionnaires will be re-coded. The survey firm will deliver data sets of completed cases at agreed-upon intervals, along with marginal frequencies. The data and frequencies will be reviewed for outliers, unusual distributions, and inconsistencies between data items.


A16.1b Impact Data Analysis

As previously indicated, the research design for the Head Start CARES project will consist of a three-treatment design which will measure the net impacts of three interventions (treatments) relative to current Head Start practice. Participating Head Start centers within each grantee/delegate agency will be randomized to a treatment group which would receive one of the interventions being tested or to a control group which would not receive any of these interventions. In this way, randomization of centers would be “blocked” by grantee/delegate agency. In half of the grantees/delegate agencies, one set of four centers will be randomized; in the other half, two sets of four centers will be randomized, thus providing replication.



A sample of classrooms in participating Head Start centers, and all students within those classrooms, will be included in analyses and assigned to the treatment group or control group to which their center is randomized. The net impacts of each intervention would then be measured by comparing measures of outcomes for students, classrooms, or teachers for each treatment group to those for the control group.



Data reduction. We will use existing approaches developed in developmental psychology for data reduction of our individual survey items into scales representing our constructs of interest.16 For example, the first step would be to identify the set of items in the survey that were intended to address the same broad topic, such as depressive symptomatology in children. We would then examine inter-item correlations for the full set of questions designed to measure this outcome and conduct a factor analysis to determine which items in the set “go together” and appear to be measuring the same underlying construct. Next, we would estimate Cronbach's alpha to assess the reliability of the scale. We would add and delete items as appropriate to maximize Cronbach's alpha. After selecting the final set of items for a given scale, we would then produce an overall scale score for each respondent by summing her scores on each of the items in the scale. The overall scale scores for all respondents would then be used as an outcome measure for the impact analysis, or for computing each evaluation site's ranking on an implementation measure, depending on the analysis. We have used this general approach successfully in several previous evaluations, especially the more recent evaluations with child outcomes data.17
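As an illustration of the scale construction step just described, the sketch below computes Cronbach's alpha for a hypothetical set of survey items; the responses and the 0-3 response scale are placeholders, not CARES data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) array of item scores."""
    n_items = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)        # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of the summed scale score
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses: 8 respondents answering 4 items intended to tap one construct.
responses = np.array([
    [2, 3, 2, 3],
    [1, 1, 0, 1],
    [3, 3, 3, 2],
    [0, 1, 1, 0],
    [2, 2, 3, 3],
    [1, 0, 1, 1],
    [3, 2, 3, 3],
    [0, 0, 1, 0],
])

print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
# Items whose removal raises alpha would be candidates for deletion; the summed
# score across the retained items becomes the respondent's overall scale score.
```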



Impact analysis. Our impact analysis will focus on the net impacts of each intervention on student, classroom, and teacher outcomes. Net impacts will be estimated by comparing mean outcomes for each intervention group to corresponding means for the control group with a regression-adjustment for selected background characteristics. Wherever possible the adjustment will control for a baseline measure of the outcome (a “pretest”), because it is usually the most powerful predictor of future outcomes and thereby typically provides the biggest boost possible to statistical precision (or power). Having baseline data is especially critical in this kind of design, in which children are nested in classrooms which are nested within centers, and randomization (our key predictor of interest) is occurring at the highest level of aggregation.



The following sections describe our proposed net impact analysis for blocking without replication and for blocking with replication. These analyses compare a single intervention to the control group. They will be conducted for each intervention tested.



The net impact estimate for a given intervention will reflect a comparison of outcomes for intervention centers and control centers in pairs that are matched by grantee/delegate agency. Because there will be random effects at four levels (students, classrooms, centers, and grantees/delegate agencies), this analysis will represent a four-level hierarchical model. Consider first the underlying four-level model of the situation.



Level 1: Students in classrooms

Y_{skcg} = \alpha_{kcg} + \sum_i \beta_i X_{iskcg} + \varepsilon_{skcg}    (1)

Level 2: Classrooms in centers

\alpha_{kcg} = \pi_{cg} + \mu_{kcg}    (2)

Level 3: Centers in grantees/delegate agencies

\pi_{cg} = \theta_g + \delta_g T_{cg} + \nu_{cg}    (3)

Level 4: Grantees/delegate agencies

\theta_g = \sum_m \gamma_m G_{mg} , \quad \delta_g = \delta_0 + \omega_g    (4)

where:

Y_{skcg} = the outcome for student s from classroom k in center c from grantee/delegate agency g,

X_{iskcg} = baseline characteristic i for student s from classroom k in center c from grantee/delegate agency g,

G_{mg} = an indicator variable for grantee/delegate agency m, which equals one if grantee/delegate agency g is grantee/delegate agency m (g = m) and zero otherwise,

T_{cg} = the treatment indicator, which equals one if center c from grantee/delegate agency g was randomized to treatment (an intervention) and zero if it was randomized to control status,

\varepsilon_{skcg} = a random error for student s from classroom k in center c from grantee/delegate agency g that is independently and identically distributed across students in classrooms,

\mu_{kcg} = a random error for classroom k in center c from grantee/delegate agency g that is independently and identically distributed across classrooms in centers,

\nu_{cg} = a random error for center c from grantee/delegate agency g that is independently and identically distributed across centers within grantees/delegate agencies, and

\omega_g = a random error for the true intervention effect at grantee/delegate agency g that is independently and identically distributed across grantees/delegate agencies.

Equations 1 – 4 imply the following composite mixed model:

Y_{skcg} = \sum_m \gamma_m G_{mg} + \delta_0 T_{cg} + \sum_i \beta_i X_{iskcg}
           + ( \omega_g T_{cg} + \nu_{cg} + \mu_{kcg} + \varepsilon_{skcg} )    (5)

The random error of this model (the parenthesized second line of Equation 5) has four components, one for each level in the data.
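To make the estimation concrete, the sketch below shows one way a blocked, four-level model of this form could be fit, using Python's statsmodels MixedLM with nested variance components. This is an illustrative sketch only: the file name, the variable names (outcome, treat, pretest, grantee, center, classroom), and the choice of software are assumptions, not elements of the CARES analysis plan.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per child, with identifiers for the
# classroom, center, and grantee/delegate agency, the center-level treatment
# indicator, a pretest, and the outcome. Center and classroom IDs are assumed
# unique within grantee.
df = pd.read_csv("cares_child_level.csv")

model = smf.mixedlm(
    "outcome ~ treat + pretest + C(grantee)",  # grantee dummies = blocking fixed effects
    data=df,
    groups="grantee",
    re_formula="0 + treat",                    # treatment effect varies randomly by grantee
    vc_formula={
        "center": "0 + C(center)",             # center random intercepts
        "classroom": "0 + C(classroom)",       # classroom random intercepts
    },
)
result = model.fit()
print(result.summary())  # the 'treat' coefficient estimates the net impact
```

In this specification the grantee/delegate-agency indicators serve as the blocking fixed effects, the random slope on treat captures grantee-level variation in the true intervention effect, and the two variance components capture center- and classroom-level clustering.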



A corresponding model will be estimated to examine intervention effects on classroom and teacher outcomes. These models will comprise three levels of random variation (for grantees/delegate agencies, centers, and classrooms).18



Level 1: Classrooms in centers

Y_{kcg} = \pi_{cg} + \sum_i \beta_i X_{ikcg} + \mu_{kcg}    (6)

Level 2: Centers in grantees/delegate agencies

\pi_{cg} = \theta_g + \delta_g T_{cg} + \nu_{cg}    (7)

Level 3: Grantees/delegate agencies

\theta_g = \sum_m \gamma_m G_{mg} , \quad \delta_g = \delta_0 + \omega_g    (8)

where:

Y_{kcg} = the outcome for classroom k from center c in grantee/delegate agency g,

X_{ikcg} = baseline characteristic i for classroom k from center c in grantee/delegate agency g,

G_{mg} = an indicator variable for grantee/delegate agency m, which equals one if grantee/delegate agency g is grantee/delegate agency m (g = m) and zero otherwise,

T_{cg} = the treatment indicator variable, which equals one if center c from grantee/delegate agency g was randomized to treatment and zero if it was randomized to control status,

\mu_{kcg} = a random error for classroom k in center c from grantee/delegate agency g that is independently and identically distributed across classrooms in centers,

\nu_{cg} = a random error for center c from grantee/delegate agency g that is independently and identically distributed across centers within grantees/delegate agencies, and

\omega_g = a random error for the true intervention effect at grantee/delegate agency g that is independently and identically distributed across grantees/delegate agencies.

Equations 6 – 8 imply the following composite mixed model:

Y_{kcg} = \sum_m \gamma_m G_{mg} + \delta_0 T_{cg} + \sum_i \beta_i X_{ikcg} + ( \omega_g T_{cg} + \nu_{cg} + \mu_{kcg} )    (9)
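A parallel sketch for the classroom-level model, under the same assumptions as above (hypothetical file and variable names, statsmodels chosen only for illustration): here the classroom is the unit of analysis, so only the center variance component and the grantee-level random treatment effect remain.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical classroom-level analysis file (one row per classroom).
cls = pd.read_csv("cares_classroom_level.csv")

model = smf.mixedlm(
    "outcome ~ treat + pretest + C(grantee)",
    data=cls,
    groups="grantee",
    re_formula="0 + treat",                  # grantee-level variation in the impact
    vc_formula={"center": "0 + C(center)"},  # center random intercepts
)
print(model.fit().summary())
```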



Subgroup analyses

We do not have sufficient power to test for subgroup differences at the level of the grantee or delegate agency (since only 20 grantees/delegate agencies are represented in our sample), but we will explore a few key subgroups that capture variation we can observe in all classrooms. Examples include subgroups defined by child characteristics, such as child gender or the level of baseline behavior problems. Differences in subgroup impacts will be tested by estimating interactions between these child characteristics (at the child level) and the experimental program (at the grantee/delegate agency level) in the multi-level model specified above.
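Operationally, a subgroup test amounts to adding a child-by-treatment interaction to the specification sketched above. The sketch below is an illustration under the same assumptions as before; the variable names (for example, a 0/1 female indicator) are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cares_child_level.csv")  # hypothetical child-level file

# Interact the center-level treatment indicator with a child-level
# characteristic (here a hypothetical 0/1 'female' indicator).
subgroup_model = smf.mixedlm(
    "outcome ~ treat * female + pretest + C(grantee)",
    data=df,
    groups="grantee",
    re_formula="0 + treat",
    vc_formula={
        "center": "0 + C(center)",
        "classroom": "0 + C(classroom)",
    },
)
result = subgroup_model.fit()
# The 'treat:female' coefficient estimates how the impact differs for girls.
print(result.summary())
```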


A16.1c Implementation Data Analysis

We will conduct two kinds of analyses to answer our questions about program implementation: descriptive qualitative analyses and nonexperimental quantitative analyses. Recall that the goal of the implementation study is to inform our understanding of the program impacts we observe and to inform replication of these models in Head Start settings.



The qualitative information will be used descriptively, to understand how each program is being implemented on the ground. We plan to use these data to characterize each program model and its implementation challenges, key features, and professional development foci, all of which are critical for understanding the models and interpreting impacts. Consistent with recommended practice in this field, we will transcribe the interviews and analyze them for key themes. We will use these interviews to inform our understanding of how the programs have been implemented in practice across the program models.



An example illustrates how important these data will be for understanding our quantitative impact analysis. We might learn, for example, that even though one model (Preschool PATHS) emphasizes a certain set of child-related emotional skills, trainers and coaches also provide professional development to teachers on basic classroom management. If we then find program impacts on teachers’ classroom management skills, we would interpret them as a result of the specific professional development on this topic, not merely as a byproduct of teachers’ work on children’s emotion skills feeding back on their classroom management. In short, without this information, it will be difficult to interpret the pattern of impacts we observe on the key targets of these intervention strategies.



Second, we will augment these qualitative, descriptive approaches with nonexperimental quantitative analyses. Specifically, we will conduct OLS regression analyses to identify the teacher, classroom, and center characteristics (including the composition of children in these centers) associated with high levels of implementation fidelity and quality. This will allow us to understand which preschool contexts facilitate, and which undermine, the implementation of these differing program models. These analyses will be specific to each program model, so we can learn whether some models are better implemented in certain kinds of contexts than others.
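A simple form of this analysis is an OLS regression of a classroom-level fidelity score on teacher, classroom, and center characteristics, run separately for each program model. The sketch below is illustrative only; the file name, the fidelity measure, and the predictor names are hypothetical placeholders.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical classroom-level file with an implementation-fidelity score
# and teacher/classroom/center characteristics; names are placeholders.
impl = pd.read_csv("implementation_classroom_level.csv")

# Run the regression separately for each program model.
for model_name, subset in impl.groupby("program_model"):
    fit = smf.ols(
        "fidelity_score ~ teacher_experience + teacher_education"
        " + class_size + pct_dual_language + center_size",
        data=subset,
    ).fit()
    print(model_name)
    print(fit.params.round(3))
```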


A16.2 Publication Plans and Schedule

In the CARES project, impact surveys and implementation measures will be administered primarily during the Head Start program year and, in some cases, in the follow-up year. Fielding of data collection will begin in the 2009-2010 academic year for Cohort 1 and in the 2010-2011 academic year for Cohort 2.

A17. Reasons for Not Displaying the OMB Approval Expiration Date

Not applicable. We intend to display the OMB approval number and expiration date on all survey materials.

A18. Exceptions to Certification Statement

Not applicable. We have no exceptions to the Certification Statement.

1 Gilliam, 2005; Raver, 2002

2 Farmer, Stangl, Burns, Costello & Angold, 1999; Dodge, Pettit, & Bates, 1994; Brooks-Gunn, Duncan, & Aber, 1997

3 Ladd, Birch, & Buhs, 1999; McClelland, Morrison, & Holmes, 2000; Raver, Garner, & Smith-Donald, 2006

4 Consortium on the School-Based Promotion of Social Competence, 1994

5 Cooper & Farran, 1991

6 Zill & Peterson, 1986

7 Pianta, 2001

8 Gresham & Elliott, 1990

9 Perry & Meisels, 1996

10 Bierman, Greenberg, & CPPRG, 1996

11 Kochanska, Murray, & Harlan, 2000; McCabe, Hernandez, Lara, & Brooks-Gunn, 2000; Mather & Woodcock, 2001a; Mather & Woodcock, 2001b; McGrew & Woodcock, 2001; Reynell & Gruber, 1990

12 McGrew & Woodcock, 2001

13 Brownell, 2000

14 Denham & Bouril, 1994

15 Pollak, Cicchetti, Hornung, & Reed, 2000

16 For a discussion of these methods, see DeVellis, R. F. 1991. Scale Development: Theory and Applications. Newbury Park, California: Sage Publications, Inc.

17 See Gennetian, L., and C. Miller. 2000. Reforming Welfare and Rewarding Work: Final Report on the Minnesota Family Investment Program, Volume 2: Effects on Children. New York: MDRC.

18 If the four-level models require excessive computational time (which is not expected), we will aggregate student outcomes to their classroom means and compute impact estimates using the resulting three-level model (for classroom means within centers within grantees). This is a valid simplification of the analysis.



