
HHS/ACF/OPRE Head Start Classroom-based Approaches and Resources for Emotion and Social skill promotion (CARES) project: Impact and Implementation Studies

OMB: 0970-0364








HHS/ACF/OPRE

HEAD START CLASSROOM-BASED APPROACHES AND RESOURCES FOR EMOTION AND SOCIAL SKILL PROMOTION (CARES) PROJECT:



2ND PACKAGE: IMPACT AND IMPLEMENTATION STUDIES





SUPPORTING STATEMENT A

FOR OMB CLEARANCE



April 2012





A. JUSTIFICATION

A1. Circumstances Necessitating Data Collection

Recently, researchers and policy makers have drawn attention to the high rate of emotional and behavioral difficulties among young, low-income children.1 Exposed to a wide range of psychosocial stressors, children in poor neighborhoods are at greater risk for developing emotional and behavioral difficulties and have less access to mental health services than their middle-income peers.2 These difficulties may compromise their chances for success in school. Children who have difficulty regulating their emotions and behaviors (e.g., those who are sad, withdrawn, or disruptive) have been found to receive less instruction, to be less engaged and less positive about their role as learners, and to have fewer opportunities for learning from peers.3 This work signals the need to build and disseminate evidence about preschool classroom processes that support, rather than compromise, young children's emotional and behavioral development, in conjunction with and in support of practices that promote their early learning.



Recent developmental research has identified several fundamental social and emotional skills that underlie children's competent social interactions with teachers and children as well as their academic engagement, or attention to the learning tasks of schooling. These specific skills have been the targets of a number of promising program enhancements that have been implemented and studied in a range of preschool settings.4 At the same time, these studies have largely been conducted under ideal conditions: in single cities, with programs highly motivated to take up the intervention, and with training and technical assistance provided under the direction of senior academic researchers. A well-designed project with a nationally representative sample of Head Start programs and a rigorous multi-arm, cluster-randomized design holds the promise of identifying the most effective of these new approaches and providing lessons about how they can best be integrated into Head Start classrooms around the country.


The study utilizes a group-based randomized experimental design to test the effects of three very different evidence-based program enhancements designed to improve the social and emotional development of three- and four-year-old children in Head Start classrooms. The study aims to provide the information federal policy makers and Head Start providers will need if they are to increase Head Start's capacity to improve the social-emotional skills and school readiness of preschool-age children. The project is sponsored by the Office of Planning, Research, and Evaluation (OPRE) of the Administration for Children and Families (ACF) and is being conducted under contract by MDRC.


A1.1 Overview of the CARES Project

The design and measurement of the CARES project focus primarily on four-year-old children in Head Start, which we refer to as the "core" study. In the spring before the preschool implementation year, baseline data on classrooms and teachers were collected through classroom observations and teacher self-surveys. In the fall of that same year, baseline data were collected on children and families via parent surveys, direct child assessments, and teacher reports on individual children. At the end of the year in which the three social-emotional enhancements were implemented, follow-up data were collected (in the spring) on classrooms, teachers, and children through classroom observations, teacher self-surveys, direct child assessments, teacher reports on individual children, and interviews with program staff and teachers.



This document requests an extension of OMB authorization to complete the follow-up impact data collection activities related to the CARES project for the four-year-olds who are now in kindergarten.



Impact data collection. For the impact study, this submission covers two surveys in the kindergarten follow-up year. The teacher report on individual children will be self-administered using paper and pencil and mailed back. The parent survey will be administered over the phone using a CATI system.



Site selection. As discussed in our original OMB submission, a sampling plan was created to provide a sample of Head Start grantees/delegate agencies and centers for the core study that represented a compromise between a pure probability sample that is nationally representative of the Head Start national child population (which is not feasible) and a purely opportunistic sample of volunteer participants. The plan produced a sample of 17 grantees/delegate agencies within which Head Start centers were randomized to treatment groups or a control group. The seventeen sites selected were:



  1. Atlantic Human Resources in Atlantic City, NJ;

  2. Shore Up! in Salisbury, MD;

  3. Self Help in Brockton, MA;

  4. Private Industry Council in Uniontown, PA;

  5. Central Mississippi, Inc. in Winona, MS;

  6. Episcopal Community Services in Chula Vista, CA;

  7. Region 7 Education Service Center in Kilgore, TX;

  8. Tyler ISD in Tyler, TX;

  9. Chicago Public Schools in Chicago, IL;

  10. Chicago Youth Centers in Chicago, IL;

  11. Child Development Council of Franklin County in Columbus, OH;

  12. LEADS Head Start in Newark, OH;

  13. WSOS Community Action Commission, Inc. in Fremont, OH;

  14. Santa Clara County Office of Education in San Jose, CA;

  15. Berkeley-Albany YMCA Head Start in Berkeley, CA;

  16. Pacific Asian Consortium In Employment in Los Angeles, CA; and

  17. Rocky Mountain SER Head Start in Denver, CO.



Evaluation component. The research design for the core Head Start CARES project with four-year old children consists of a three-treatment design that will measure the net impacts of three interventions (treatments) relative to current Head Start practice. As described above under site selection, the design began with a sample of Head Start grantees or delegate agencies that were eligible and willing to participate in the study. Participating Head Start centers within each grantee/delegate agency were then randomized to a treatment group which received one of the interventions being tested or to a control group which did not receive any of these interventions. In this way, randomization of centers was “blocked” by grantee/delegate agency into 26 blocks; some grantees were made up of two or three blocks based on the number of centers available.



A sample of classrooms in participating Head Start centers, and selected students within those classrooms, were included in the treatment group or control group to which their center was randomized. The net impacts of each intervention will then be measured by comparing follow-up outcomes for students, classrooms, or teachers in each treatment group to those in the control group. The data collection for which an extension of OMB authorization is being sought will play an important role in the impact study, as described in Section A2.



Efficacy study with three-year-olds. In addition to the core evaluation described above, we also conducted a smaller exploratory study of selected three-year-old children who were enrolled in participating mixed-age classrooms. These children are not the focus of the core analyses because prior evidence that these social-emotional programs are effective for this age group is not available. Therefore, a much more limited set of data was collected for this group of children, focusing primarily on teacher-reported data during the year of program implementation.



A2. How, By Whom, and For What Purpose Are Data to be Used

Purposes of the data collection discussed in this extension request include the following:

  • To study the effects of these specific programs or practices within the Head Start population;

  • To study whether specific programs or practices are more or less effective for certain populations.



A2.1 The Overall Role of Instruments in the CARES Project

The CARES impact surveys discussed in this extension request will yield important data not available through administrative records. The impact study will provide information on, for example, the social-emotional well-being of children, academic outcomes for children, student-teacher relationships, and background characteristics of children as they transition into kindergarten. For the impact study, we are interested in assessing effects on a core set of key child outcomes that are either direct targets of these intervention approaches (social skills, emotion skills, executive function) or outcomes we are trying to affect as a result of changes in those skills (behavior problems, approaches to learning).

A2.2 The Role of Specific Survey Components



Impact data collection. Whenever possible, the questions in the impact study surveys were taken or adapted from existing instruments that have been used and validated with national samples or from instruments used in other HHS evaluations. As such, comparisons with national or other evaluation findings will be possible. We have worked with the survey firm, Survey Research Management (SRM) – and, if necessary, with firms specializing in translation – to ensure that these surveys are translated for administration with non-English-speaking populations as needed. Versions of these surveys were fielded in the preschool data collection; in kindergarten, the surveys were adapted slightly to be more appropriate for the kindergarten setting.



A2.2a CARES Lead Teacher Report on Individual Children

Teachers will assess children's social and emotional development, social and learning behaviors, and early academic skills and school readiness during the spring of the follow-up year. Teachers will complete a self-administered report on individual children (Appendix A.1), which includes the Cooper-Farran Behavioral Rating Scales (CFBRS),5 the Behavior Problems Index (BPI),6 the Student-Teacher Relationship Scale (STRS),7 the Social Skills Rating System – Social Skills scale (SSRS),8 the Academic Rating Scale (ARS),9 and the Parent-Teacher Involvement Questionnaire,10 for the children who participated in the Head Start CARES project the previous year and are now in their classrooms. The administration time for the teacher report on individual children is estimated at 20 minutes.



A2.2b CARES Follow-up Parent Survey

A follow-up survey (Appendix A.2) will be administered to parents of the four-year-old children during the spring of the follow-up year to assess changes in family background characteristics such as parent employment and income, reliance on public assistance, and marital status. This survey will be completed by phone using CATI screens. Parents will also be asked to assess their children's externalizing and internalizing behaviors and social competence, as well as their own emotion socialization practices. The administration time for the parent survey is estimated at 20 minutes.

A3. Use of Information Technology for Data Collection to Reduce Respondent Burden

The use of Computer Assisted Telephone Interviewing (CATI) has been incorporated into the data collection for the parent surveys in order to ensure the accuracy of data, reduce the possibility of human error, allow for faster data analysis, and reduce respondent burden. Other, non-technology efforts to reduce burden include extensive interviewer training and the use of lead questions within survey sections to enable skip patterns.


The teacher report is a paper-based survey, which is the same approach used for teacher reports during the preschool year. The teacher report, which has no skip patterns and no sensitive items, lends itself well to this kind of paper-and-pencil approach. Moreover, in preschool, the response rates for paper-based teacher reports were excellent, between 93% and 96%, depending on the time point. Given that the instrument lends itself well to the paper-and-pencil format and given positive experiences with this form of administration in prior waves, we decided to avoid additional development costs by implementing the same paper-based procedure as was used at the preschool time point.



A4. Efforts to Identify Duplication

The surveys focus on information that cannot be found in administrative records or other existing sources. They will facilitate the collection of data on, for example, child socio-emotional well-being, children’s behavior problems, and other child outcomes, and these types of information are not available routinely or systematically in program records.


A4.1 Reasons Why Available Information Cannot Be Used

Comparable information from other sources does not exist for the variables covered in the CARES survey instruments for the populations included in this project.



A5. Burden on Small Business

Does not apply. All respondents are individuals.

A6. Consequences to Federal Program or Policy Activities if Data Collection is not Conducted

If the survey data are not collected, we will not be able to adequately evaluate the impact of the CARES social-emotional enhancement program models. The analysis of the long-term impacts of the CARES social-emotional strategies would be limited because many important outcomes, such as measures of child social and emotional well-being, cannot be captured in administrative records. Surveys are an important means of collecting these data and are required in order to fully understand the effects of these treatment strategies.



A7. Special Data Collection Circumstances

No such circumstances.



A8. Federal Register Notice Required by 5 CFR 1320.8(d) and Consultations Prior to OMB Submission

The 60-day Federal Register notice soliciting comments for the CARES Impact and Implementation Studies - Extension survey instruments was posted in the Federal Register, Volume 77, Number 26, page 6565 on February 8, 2012. To date, no comments have been received. A copy of the published 60-day Federal Register notice is located in Appendix D.



We have developed instruments that incorporate items and scales from other major studies. To the extent possible, the questions included in the survey instruments allow for useful comparisons between the data from this project and that from other large-scale surveys. To select these measures for the various components of the survey instruments and implementation measures, we consulted with a number of individuals outside MDRC, including: Cybele Raver, Clancy Blair, Catherine Tamis-LeMonda (New York University); Karen Bierman, Robert Nix, Mark Greenberg, Celene Domitrovich (Pennsylvania State University); Nancy Hill, Stephanie Jones, Hirokazu Yoshikawa (Harvard University); Mary Louise Hemmeter (Vanderbilt University); Todd Little (University of Kansas); Nicholas Ialongo (Johns Hopkins University); Susanne Denham (George Mason University); John Lochman (University of Alabama); George Knight (Arizona State University); Bob Pianta and Bridget Hamre (University of Virginia); Dwayne Simpson (Texas Christian University); Julie Hakim-Larson (University of Windsor); Deborah Leong (Metropolitan State College of Denver); Carolyn Webster-Stratton (University of Washington); Allison Sidle Fuligni, Carollee Howes, Sharon Ritchie (UCLA); Gary Henry (University of North Carolina at Chapel Hill); Douglas Powell (Purdue University).



A9. Justification for Respondent Payments

We recognize that participation in the CARES impact surveys will place some burden on the participating teachers and parents. Although many of the techniques suggested by OMB to improve response rates have been incorporated into our carefully designed instruments and the survey effort (described in Section B3), it has been our experience that small tokens of appreciation are useful when surveying teachers and low-income populations as part of a complex study design in order to acknowledge the burden placed on participants.



Incentives are important, especially in a longitudinal study, to gain respondents' cooperation, ensure a high response rate, and maintain their participation throughout the study, at both the baseline and the follow-up interviews (James, 1997; Mack et al., 1998; Martin et al., 2001). Incentives are most appropriately used in Federal statistical surveys with hard-to-find populations or respondents whose failure to participate would jeopardize the quality of the survey data (e.g., in panel surveys experiencing high attrition), or in studies that impose exceptional burden on respondents, such as those asking highly sensitive questions.



To be effective, the amount of the payments must fit the burden of the survey. We have based the amount to be paid to CARES respondents on prior research, and MDRC’s and the survey firm’s prior experience interviewing similar populations. We propose that the monetary amount remain the same for teacher reports and parent surveys: $7 for each teacher report on children and $20 for the parent survey. These amounts reflect current practice in surveys using similar instruments and will be provided in the form of a gift card for the given value.


The parent survey incentive is comparable to payments used in FACES ($35 for a 45- to 60-minute interview), Baby FACES ($35 for a 120-minute interview), and Building Strong Families (BSF; $50 for completing two 50-minute parent interviews). The incentive amount is sufficient to encourage families to participate in both the study and the survey but is not overly generous. Offering a lower amount could jeopardize the study and actually cost the government more, because it could result in a lower uptake of families into the study and more effort expended by the evaluation team to successfully enroll families.


The teacher report incentive payment is intended to encourage staff to complete and return the survey and will compensate staff for using time outside of their normal work activities to do this work. This amount is appropriate considering that the mean hourly wage for full-time employees over age 25 with a bachelor's degree or higher is $12.40. In Baby FACES, home visitors received a $25 gift bag for allowing themselves to be observed during a home visit and $5 for each child on their caseload for providing information on language and other outcomes.


As part of the original OMB package, MDRC agreed to conduct a planned variation study of incentives in a subset of sites to test whether teacher response rates were affected by the amount of the payment provided for completion of the self survey. Appendix G presents the background and findings from that study.





A10. Confidentiality

Privacy will be assured to the fullest extent allowable under the law. Respondents will receive information about privacy protections at the outset of the surveys. They will be informed that all of the information they provide will be kept strictly private and that study results will be presented only in aggregate form. They will also be told that completion of the survey is voluntary and that they may choose not to answer any question. Finally, we have a Certificate of Confidentiality for these data.



The following safeguards will be employed regarding privacy assurances:

  • All staff who have access to data at MDRC and the survey subcontractor firm sign an agreement to abide by corporate policies on data security and privacy. This agreement affirms each individual's understanding of the importance of maintaining data security and privacy and abiding by procedures that implement these policies.

  • All data, both paper files and computerized files, are kept in secure areas. Paper files are stored in locked storage areas with limited access on a need-to-know basis. Computerized files are managed via password control systems to restrict access as well as physically secure the source files.

  • Merged data sources have identification data stripped from the individual records or encoded to preclude identification of individuals.

  • All reports, tables, and printed materials present aggregate numbers only.

  • Compilations of individualized data are not provided to participating agencies.

  • Agreements are executed with any participating research subcontractors, partners, and consultants who obtain access to data files.



MDRC and the SRM survey firm will maintain in-house records of names, addresses, school identification numbers (if applicable), and tracing information for all sample members. This information will not be attached to survey or assessment data or made available to anyone outside appropriate staff of MDRC and the survey firm. All records identifying respondents will be kept in locked storage at MDRC, and respondents will be identified solely by a code number. Any coding, data entry and analysis requiring identification of individuals or households will use code numbers only, and a secret password will be necessary to access the data file. No data will ever be reported in such a way that individuals can be identified. Note that in developing the public use file, we will be implementing data masking procedures to ensure that sample members cannot be identified individually. See Appendix F for an example of procedures that were developed for another DHHS project conducted by MDRC. We will implement similar masking procedures for this project.



The importance of maintaining privacy will be emphasized during interviewer training, and any interviewer who knows a respondent will not be permitted to interview him or her. All staff, including coders and computer programmers, will be required to sign a privacy pledge.



At the beginning of each interview, respondents will be informed of their rights. In addition, interviewers will attempt to conduct the interview at a time and place that allows the utmost privacy for respondents over the phone.

A11. Questions of a Sensitive Nature

Questions in some components of the CARES impact surveys are potentially "sensitive" for respondents. Respondents are asked about personal topics, such as mental health, salary and income, and marital status. The questions we have included were selected in part because they have been widely used in previous research and are respected among experts. Moreover, all were pilot tested prior to the survey's full implementation. Also, all survey forms will contain instructions that explain questions before they are posed. Finally, respondents will be informed by research staff prior to the start of the interviews and/or surveys that their answers will be kept private, that they may refuse to answer any question, that results will only be reported in the aggregate, and that their responses will not have any effect on any services or benefits they or their family members receive.


A12. Estimates of the Hour Burden of Data Collection to Respondents

Participation in all the survey impact and implementation data collection activities is completely voluntary. No sanction or penalty will be applied to respondents receiving state or federal assistance who choose not to provide information.



The estimated response burden by instrument/component was calculated based on information on survey length obtained during the pretests (see Section B4). 



For the activities covered by this extension, the total original numbers of respondents for the CARES Teacher Report on Individual Children (3,648) and the Parent Survey (3,648) were each divided by 6 to determine the average number of respondents across the six months of clearance, multiplied by the annual number of responses per respondent, multiplied by the average length of the surveys in hours, and then summed to determine the annual burden in hours. The response burden breakdown for all instruments is shown in the table below.



To compute the total estimated annual cost, the total burden hours were multiplied by the average hourly wage for two labor categories. Teacher hourly wages were computed using the national mean wage from the Bureau of Labor Statistics ($12.40/hour). For parents, we used the mean hourly wage for full-time employees over the age of 25 who were high school graduates with no college experience ($15.03/hour). The total estimated annual cost is $5,518.59.



Instrument | Expected Number of Respondents | Number of Responses per Respondent | Average Burden per Response (Hours) | Annual Burden (Hours) | Average Hourly Wage of Respondents | Annual Cost
Teacher Report on Individual Children | 608 | 1 | 0.33 | 201 | $12.40 | $2,487.94
Follow-up Parent Survey | 608 | 1 | 0.33 | 201 | $15.03 | $3,030.65
ESTIMATED TOTALS | | | | 402 | | $5,518.59
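To make the arithmetic described above concrete, the following minimal sketch reproduces the burden calculation from the figures shown; because some of the figures in the table were rounded (for example, the average burden per response), the printed amounts may differ slightly from the table cells.

```python
# Minimal sketch of the annualized burden arithmetic described in Section A12.
# Inputs are taken from the table above; intermediate rounding in the original
# calculation may cause small differences in the printed figures.

instruments = [
    # (name, original respondents, responses per respondent,
    #  average burden per response in hours, average hourly wage)
    ("Teacher Report on Individual Children", 3648, 1, 0.33, 12.40),
    ("Follow-up Parent Survey",               3648, 1, 0.33, 15.03),
]

total_hours = 0.0
total_cost = 0.0
for name, n_original, responses, hours_per_response, wage in instruments:
    annual_respondents = n_original / 6          # averaged over six months of clearance
    annual_hours = annual_respondents * responses * hours_per_response
    annual_cost = annual_hours * wage
    total_hours += annual_hours
    total_cost += annual_cost
    print(f"{name}: {annual_respondents:.0f} respondents, "
          f"{annual_hours:.0f} hours, ${annual_cost:,.2f}")

print(f"Estimated totals: {total_hours:.0f} hours, ${total_cost:,.2f}")
```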



A13. Estimates of Capital, Operating, and Start-Up Costs to Respondents

Not applicable. All surveys and direct child assessments are conducted by a subcontracted survey firm.

A14. Estimates of Costs to Federal Government

The estimated cost for designing, administering, processing, and analyzing this survey impact and implementation data for the entire project is $5,408,335. The cost of the data collection for this extension is $586,508.

A15. Changes in Burden

The original OMB request was for 4,493 hours. The current request is for an extension to complete the previously approved hours.


A16. Tabulation, Analysis, and Publication Plans and Schedule


A16.1a Assessment of Data Quality and File Construction

These surveys have gone through a rigorous series of tests for completeness and quality. Professional staff at the survey firm will review the initial cases completed by each interviewer as well as perform occasional spot checks after that. Interviewers will be apprised of any problems found and retrained if needed. Data entered into computer files will be assessed for missing information, outliers, and other data problems according to standard procedures. If necessary, questionnaires will be re-coded. The survey firm will deliver data sets of completed cases at agreed-upon intervals, along with marginal frequencies. The data and frequencies will be reviewed for outliers, unusual distributions, and inconsistencies between data items.
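To illustrate the kinds of standard checks described above, the following minimal sketch flags missing information, out-of-range codes, and outliers in a delivered data set and prints marginal frequencies for review. The file name, column names, and valid ranges are hypothetical and shown only as an example of the approach.

```python
# Illustrative data-quality checks on a delivered survey file.
# The file name, column names, and valid ranges are hypothetical.
import pandas as pd

df = pd.read_csv("cares_kindergarten_teacher_report.csv")

# 1. Missing information: share of missing values per item.
missing_rates = df.isna().mean().sort_values(ascending=False)
print("Items with more than 5% missing values:")
print(missing_rates[missing_rates > 0.05])

# 2. Out-of-range codes: rating items assumed to use a 1-5 scale.
rating_items = [c for c in df.columns if c.startswith("bpi_")]
for col in rating_items:
    out_of_range = df[df[col].notna() & ~df[col].isin([1, 2, 3, 4, 5])]
    if len(out_of_range):
        print(f"{col}: {len(out_of_range)} out-of-range values")

# 3. Outliers: flag continuous measures more than 3 standard deviations from the mean.
for col in ["child_age_months"]:
    z = (df[col] - df[col].mean()) / df[col].std()
    print(f"{col}: {(z.abs() > 3).sum()} potential outliers")

# 4. Marginal frequencies for review of unusual distributions.
print(df["strs_total"].value_counts(dropna=False).sort_index())
```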


A16.1b Impact Data Analysis

As previously indicated, the research design for the Head Start CARES project consisted of a three-treatment design, which measures the net impacts of three interventions (treatments) relative to current Head Start practice. Participating Head Start centers within each grantee/delegate agency were randomized to a treatment group, which received one of the interventions being tested, or to a control group, which did not receive any of these interventions. In this way, randomization of centers was "blocked" by grantee/delegate agency. In some of the grantees/delegate agencies, more than one set of four centers was randomized.



A sample of classrooms in participating Head Start centers, and selected students within those classrooms, will be included in the analyses and assigned to the treatment group or control group to which their center was randomized. The net impacts of each intervention will then be measured by comparing outcomes for students, classrooms, or teachers in each treatment group to those in the control group.



Data reduction. We will use existing approaches developed in developmental psychology for data reduction of our individual survey items into scales representing our constructs of interest.11 For example, the first step is to identify the set of items in the survey that were intended to address the same broad topic, such as depressive symptomatology in children. We will then examine inter-item correlations for the full set of questions designed to measure this outcome and conduct a factor analysis to determine which items in the set “go together” and appear to be measuring the same underlying construct. Next, we will estimate Cronbach's alpha to assess the reliability of the scale. We will add and delete items as appropriate to maximize Cronbach's alpha. After selecting the final set of items for a given scale, we will then produce an overall scale score for each respondent by summing her scores on each of the items in the scale. The overall scale scores for all respondents will then be used as an outcome measure for the impact analysis, or for computing each evaluation site's ranking on an implementation measure, depending on the analysis. We have used this general approach successfully in several previous evaluations, especially the more recent evaluations with child outcomes data.12
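As a concrete illustration of this data-reduction approach, the following minimal sketch examines inter-item correlations, computes Cronbach's alpha for a candidate scale, and produces a summed scale score for each respondent. The item and file names are hypothetical, and the factor-analysis step described above is omitted for brevity.

```python
# Illustrative scale construction: inter-item correlations, Cronbach's alpha,
# and a summed scale score.  Item and file names are hypothetical.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a DataFrame whose columns are the items of one scale."""
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

df = pd.read_csv("cares_survey_items.csv")
candidate_items = ["sad_1", "sad_2", "sad_3", "withdrawn_1", "withdrawn_2"]

# Inspect inter-item correlations for items intended to measure the same construct.
print(df[candidate_items].corr().round(2))

# Reliability of the candidate scale; items can be added or dropped to improve alpha.
print("alpha =", round(cronbach_alpha(df[candidate_items]), 3))

# Final scale score: sum of the retained items for each respondent.
df["internalizing_scale"] = df[candidate_items].sum(axis=1)
```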



Impact analysis. Our impact analysis will focus on the net impacts of each intervention on student, classroom, and teacher outcomes. Net impacts will be estimated by comparing mean outcomes for each intervention group to corresponding means for the control group with a regression-adjustment for selected background characteristics. Wherever possible the adjustment will control for a baseline measure of the outcome (a “pretest”), because it is usually the most powerful predictor of future outcomes and thereby typically provides the biggest boost possible to statistical precision (or power). Having baseline data is especially critical in this kind of design, in which children are nested in classrooms which are nested within centers, and randomization (our key predictor of interest) is occurring at the highest level of aggregation.



The following sections describe our proposed net impact analysis. These analyses compare a single intervention to the control group. They will be conducted for each intervention tested.



The net impact estimate for a given intervention will reflect a comparison of outcomes for intervention centers and control centers in pairs that are matched by grantee/delegate agency. Because there will be random effects at three levels (students, classrooms, and centers), this analysis will represent a three-level hierarchical model. Consider first the underlying three-level model of the situation.



Level 1: Students in classrooms

Y_{skc} = \alpha_{kc} + \sum_{i} \beta_{i} X_{iskc} + e_{skc}     (1)

Level 2: Classrooms in centers

\alpha_{kc} = \gamma_{c} + \sum_{j} \delta_{j} W_{jkc} + u_{kc}     (2)

Level 3: Centers

\gamma_{c} = \sum_{b} \theta_{b} B_{bc} + \pi T_{c} + v_{c}     (3)

Where:

Y_{skc} = the outcome for student s from classroom k in center c,

X_{iskc} = baseline characteristic i for student s from classroom k in center c,

W_{jkc} = baseline characteristic j for classroom k in center c,

B_{bc} = an indicator variable for random assignment block b,

T_{c} = the treatment indicator, which equals one if center c was randomized to treatment (an intervention) and zero if it was randomized to control status,

e_{skc} = a random error for student s from classroom k in center c that is independently and identically distributed across students in classrooms,

u_{kc} = a random error for classroom k in center c that is independently and identically distributed across classrooms in centers,

v_{c} = a random error for center c that is independently and identically distributed across centers.

Equations 1 – 3 imply the following composite mixed model:

Y_{skc} = \sum_{b} \theta_{b} B_{bc} + \pi T_{c} + \sum_{j} \delta_{j} W_{jkc} + \sum_{i} \beta_{i} X_{iskc} + (v_{c} + u_{kc} + e_{skc})     (4)

The random error of this model has three components; one for each level in the data.
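A model of this form could be estimated with standard mixed-model software. The following minimal sketch uses Python's statsmodels, with the center as the grouping factor, a random intercept for centers, and a classroom variance component nested within centers; this is one common way to express the three-level specification and is offered only as an illustration under hypothetical variable and file names, not as the project's production code.

```python
# Illustrative estimation of the blocked, three-level impact model.
# Variable and file names are hypothetical; the data set has one row per
# student with columns for the outcome, the pretest, the treatment indicator,
# the random assignment block, the center, and the classroom.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cares_student_level.csv")

# Fixed effects for random assignment blocks, the treatment indicator, and the
# pretest; a random intercept for centers (the groups) plus a classroom
# variance component nested within centers.
model = smf.mixedlm(
    "outcome ~ treatment + pretest + C(block)",
    data=df,
    groups="center",
    re_formula="1",
    vc_formula={"classroom": "0 + C(classroom)"},
)
result = model.fit()
print(result.summary())

# The coefficient on `treatment` is the regression-adjusted net impact estimate.
print("Estimated net impact:", result.params["treatment"])
```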



A corresponding model will be estimated for examining intervention effects on outcomes for classrooms or teachers. These models will comprise two levels of random variation (for centers and classrooms).13



Level 1: Classrooms in centers

Y_{kc} = \gamma_{c} + \sum_{j} \delta_{j} W_{jkc} + u_{kc}     (5)

Level 2: Centers

\gamma_{c} = \sum_{b} \theta_{b} B_{bc} + \pi T_{c} + v_{c}     (6)

Where:

Y_{kc} = the outcome for classroom k in center c,

W_{jkc} = baseline characteristic j for classroom k in center c,

B_{bc} = an indicator variable for random assignment block b,

T_{c} = the treatment indicator, which equals one if center c was randomized to treatment (an intervention) and zero if it was randomized to control status,

u_{kc} = a random error for classroom k in center c that is independently and identically distributed across classrooms in centers,

v_{c} = a random error for center c that is independently and identically distributed across centers.

Equations 5 and 6 imply the following composite mixed model:

Y_{kc} = \sum_{b} \theta_{b} B_{bc} + \pi T_{c} + \sum_{j} \delta_{j} W_{jkc} + (v_{c} + u_{kc})     (7)

Subgroup analyses

We do not have sufficient power to test for subgroup differences at the level of the grantee or delegate agency (since only 17 grantees/delegate agencies are represented in our sample), but we will explore a few key subgroups that represent variation we can observe in all classrooms. Examples of such subgroups include those defined by differences in child characteristics, such as child gender or the level of baseline behavior problems. Differences in subgroup impacts will be tested by estimating interactions between these child characteristics (at the child level) and the treatment indicator (at the center level) in the multi-level model specified above, as illustrated in the sketch below.
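For example, a child-level subgroup contrast such as gender could be tested by adding an interaction term to the fixed-effects portion of the model sketched earlier; as before, the variable and file names below are hypothetical.

```python
# Illustrative subgroup test: interact a hypothetical child characteristic
# (an indicator for girls) with the treatment indicator in the same
# multi-level specification used for the main impact estimates.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cares_student_level.csv")  # hypothetical file name

subgroup_model = smf.mixedlm(
    "outcome ~ treatment * female + pretest + C(block)",
    data=df,
    groups="center",
    re_formula="1",
    vc_formula={"classroom": "0 + C(classroom)"},
)
subgroup_result = subgroup_model.fit()

# The treatment-by-female interaction estimates how the net impact differs
# between girls and boys.
print(subgroup_result.params["treatment:female"])
```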


A16.2 Publication Plans and Schedule

Program implementation for preschool classrooms was completed in spring 2011. The kindergarten follow-up data collection will occur in spring 2012 (the focus of this OMB extension request). Data and findings will be issued and shared over the course of the multi-year evaluation through: a final implementation report (2013); a final impact report (2014); and public use files.

A17. Reasons for Not Displaying the OMB Approval Expiration Date

Not applicable. We intend to display the OMB approval number and expiration date on all survey materials.

A18. Exceptions to Certification Statement

Not applicable. We have no exceptions to the Certification Statement.

1 Gilliam, 2005; Raver, 2002

2 Farmer, Stangl, Burns, Costello & Angold, 1999; Dodge, Pettit, & Bates, 1994; Brooks-Gunn, Duncan, & Aber, 1997

3 Ladd, Birch, & Buhs, 1999; McClelland, Morrison, & Holmes, 2000; Raver, Garner, & Smith-Donald, 2006

4 Consortium on the School-Based Promotion of Social Competence, 1994

5 Cooper & Farran, 1991

6 Zill & Peterson, 1986

7 Pianta, 2001

8 Gresham & Elliott, 1990

9 Perry & Meisels, 1996

10 Bierman, Greenberg, & CPPRG, 1996

11 For a discussion of these methods, see DeVellis, R. F. 1991. Scale Development: Theory and Applications. Newbury Park, California: Sage Publications, Inc.

12 See Gennetian, L., and C. Miller. 2000. Reforming Welfare and Rewarding Work: Final Report on the Minnesota Family Investment Program, Volume 2: Effects on Children. New York: MDRC.

13 If the three-level models require excessive computational time (which is not expected), we will aggregate student outcomes to their classroom means and compute impact estimates using the resulting two-level model (for classroom means within centers). This is a valid simplification of the analysis.



