National Title I Study of Implementation and Outcomes: Early Childhood Language Development

Part A: Justification for the Study

May 5, 2011





Contract Number:

ED-04-CO-0112/0011

Mathematica Reference Number:

06692.602

Submitted to:

National Center for Education Evaluation and Regional Assistance

555 New Jersey Ave., Capital Place, NW

Washington, DC 20208

Project Officer: Tracy Rimdzius

Contract Officer: Brenda Jefferson

Submitted by:

Mathematica Policy Research

600 Maryland Avenue, S.W.

Suite 550

Washington, DC 20024-2512

Telephone: (202) 484-9220

Facsimile: (202) 863-1763

Project Director: Christine Ross









CONTENTS

Part A: Justification for the Study

A. Justification

A1. Circumstances Making the Collection of Information Necessary

A2. Purpose and Use of the Information Collection

A3. Use of Improved Technology and Burden Reduction

A4. Efforts to Avoid Duplication and Use of Similar Information

A5. Impact on Small Businesses or Other Small Entities

A6. Consequences of Collecting the Information Less Frequently

A7. Special Circumstances Relating to the Guidelines of 5 CFR 1320.5

A8. Comments in Response to the Federal Register Notice and Efforts to Consult Outside the Agency

A9. Explanations of Any Payments or Gifts to Respondents

A10. Assurance of Privacy Provided to Respondents

A11. Justification for Sensitive Questions

A12. Estimates of Burden Hours and Costs

A13. Estimates of Total Annual Cost Burden to Respondents or Record Keepers

A14. Annualized Cost to Federal Government

A15. Explanations for Program Changes or Adjustments

A16. Plans for Tabulation and Publication of Results and Project Time Schedule

A17. Display of Expiration Date for OMB Approval

A18. Exceptions to Certification for Paperwork Reduction Act Submissions

References

APPENDICES

Appendix A: Legislation Authorizing the Study

Appendix B: Letter to Teachers and Frequently Asked Questions

Appendix C: Letter, Fact Sheet, and Consent to Parents

Appendix D: Principal Survey

Appendix E: Prekindergarten Director Survey

Appendix F: Teacher Survey

Appendix G: Teacher-Student Report

Appendix H: Parent Interview

Appendix I: Student Record Form

Appendix J: OMB Submission History

Appendix K: Confidentiality Pledge

TABLES

Table 1. Timing of Data Collection for Title I ECLD

Table 2. ECLD Language Routing Assessment Paths

Table 3. Research Questions and Data Collection Methods

Table 4. Expert Panel Members

Table 5. Estimated Response Time and Burden

Table 6. Estimated Cost of Study Components

Table 7. Association Between Factor Scores and Student’s Language Development, Grade X


Part A: JUSTIFICATION FOR THE STUDY

This submission is a request for approval of data collection activities that will be used to support the National Title I Study of Implementation and Outcomes: Early Childhood Language Development (Title I ECLD). This study is being funded by the Institute of Education Sciences (IES), U.S. Department of Education (ED); it is being implemented by Mathematica Policy Research, in partnership with Decision Information Resources, Inc. (DIR) and Dr. Timothy Shanahan of the University of Illinois-Chicago Center for Literacy. The study is designed to identify school programs and instructional practices associated with improved language development, background knowledge, and comprehension outcomes for children in prekindergarten through third grade.

This is the second submission of a two-stage clearance request. The first submission (approved on August 2, 2010, under OMB control number 1850-0871) requested approval of the study’s sampling plan, the approach to collecting the information needed to select the sample, and district and school recruitment. In this package, IES is requesting approval for all data collection activities that will support the full-scale study.

A. Justification

Reading is a critical foundational skill that enables children to learn in school and over their lifetimes. Many children, however, do not progress at the expected rate toward skilled, fluent reading that enables them to learn. The National Assessment of Educational Progress (NAEP) found that 33 percent of fourth-grade students did not achieve a basic level of proficiency in reading in 2009 (U.S. Department of Education 2010). Children who fail to learn to read by third or fourth grade are at high risk for dropping out of school, which has negative implications for the trajectory of employment, income, and productivity as an adult (Crissey 2009; Rutter 1989).

Since the mid-1990s, efforts to improve reading instruction in schools and preschools serving high proportions of children at risk for reading difficulties have centered on the use of scientifically based reading instruction. Studies of these efforts show some positive effects on letter knowledge and decoding skills; fewer effects on language development in prekindergarten through first grade (Jackson et al. 2007; Judkins et al. 2008; Preschool Curriculum Evaluation Research [PCER] Consortium 2008); and no effects on reading comprehension into third grade (Gamse et al. 2008a, 2008b). Although letter knowledge and decoding are precursor skills for reading, decoding alone does not lead to comprehension (Snow et al. 1998; National Institute of Child Health and Human Development 2000; National Early Literacy Panel 2008). To increase comprehension, language development is critical, and few of the curricula and teaching strategies tested over the past decade have had a positive effect on language development.

The lack of known strategies to boost language development is important because many children from low-income or dual-language homes arrive at preschool and kindergarten with language and literacy scores well below those of the average 4- or 5-year-old (Tarullo et al. 2008; Jackson et al. 2007; Chernoff et al. 2007). Moreover, supporting growth in young children’s language development and background knowledge is critical if students are to comprehend text, given the theoretical importance of background knowledge in extracting meaning from print (Hirsch 2003, 2006; Hoover and Gough 1990) and the research evidence linking these areas of development (National Early Literacy Panel 2008).

To identify school programs and instructional practices associated with better language development, background knowledge, and comprehension outcomes for young children, the U.S. Department of Education has requested a national study. The study will focus on Title I schools because they serve substantial proportions of educationally at-risk children who enter school with language development and early literacy achievement that is below the average for children their age. The study will focus on children from prekindergarten through third grade to measure how these outcomes may be influenced from the earliest years of formal schooling until the point at which children are first assessed in reading comprehension for school accountability purposes. To ensure that the study measures programs and instructional practices in schools with widely varying reading achievement outcomes for demographically similar children, the study will include 50 schools whose students are consistently high-performing in reading achievement outcomes and 50 schools whose students are consistently low-performing. Analyses will estimate the associations between instructional programs and practices and student outcomes to identify promising strategies to improve language and comprehension outcomes for educationally at-risk children in these early years of school. The promising strategies can be rigorously evaluated in future studies.

A1. Circumstances Making the Collection of Information Necessary

The study is being conducted as a component of the National Assessment of Title I, mandated by Title I, Part E, Section 1501 of the Elementary and Secondary Education Act (see Appendix A).

Overview of the Study. In October 2009, ED began working with Mathematica and its subcontractors on a national study of 100 Title I schools to identify school programs and instructional practices associated with improved language development, background knowledge, and comprehension outcomes for children in prekindergarten through third grade. The sample will be evenly divided between schools with consistently high and consistently low average reading achievement scores. The study will include five grade cohorts (prekindergarten, kindergarten, and first through third grades), with classroom and student samples selected from each. Data collection will include direct student assessments, classroom observations, parent interviews, teacher and administrator questionnaires, and student record reviews. Analyses will estimate the associations between school programs, instructional practices, and changes in student outcomes, to inform future rigorous evaluations of strategies to improve language and reading comprehension outcomes for at-risk children.

Given the modest outcomes of rigorous evaluations of research-based interventions to improve the reading comprehension achievement of young children, more ideas are needed about school programs and policies and instructional practices that may best support the development of reading achievement among economically disadvantaged students from prekindergarten through grade 3. This study will use an observational and descriptive approach to identify instructional practices and programs that are associated with greater student progress in reading comprehension, as well as in the related areas of language development, background knowledge, and listening comprehension, from prekindergarten through third grade. The instructional practices and programs identified in this study can inform frameworks for interventions in schools and classrooms that can then be replicated and evaluated systematically in future research.

Observational studies of classroom instructional practices have been used in the past to inform the design of reading interventions (Taylor et al. 2002, 2005). Education studies have taken two basic approaches to systematic observations of instruction for these purposes:

  1. Conduct observations in numerous classrooms selected for their diversity, then regress academic outcomes on differences in the occurrence of observed practices (Mashburn et al. 2008; Zill 2003).

  2. Increase the likelihood of identifying distinguishing instructional practices by observing classrooms in “outlier” or “beat-the-odds” schools—schools that are performing better than would be expected given their demographic composition (for example, Langer 2001; Taylor et al. 2000, 2002, 2003).

The Title I ECLD study builds upon this beat-the-odds paradigm, as it will identify 50 relatively high-achieving Title I schools and 50 Title I schools whose achievement levels are low and more characteristic of the Title I population of schools. Analysis of data on school programs and instructional practices and on students’ background knowledge, oral language development, and reading comprehension will identify what is being done differently in classrooms where students are making greater achievement gains; these practices are the targets for future study.

Outlier studies first appeared in the reading research literature during the early 1970s (Weber 1971); such investigations led to the identification of correlates such as clarity of school mission; effective instructional leadership; safe, orderly environment; maximum use of instructional time; and frequent monitoring of student learning (Hoffman 1991). After a period of disuse, this approach reemerged about a decade ago with several new studies (for example, Taylor et al. 2000, 2002, 2003). Despite the face validity of this study design and the fact that past studies have identified some important instructional variables, design flaws have limited the value of these studies. The Title I study will significantly improve upon the methodology of previous studies in the following ways:

  • Identification of consistently high- and low-performing schools. Past studies have identified high-performing schools using a single year of student achievement data, which might be influenced by transient factors such as differential cohort performance or high student and teacher mobility. By selecting schools for the study based on three or more years of reading performance proficiency data, we will identify schools for the study that have more consistently demonstrated high and low performance.

  • Sufficient sample size and power to detect correlations between school programs/instructional practices and student language and comprehension outcomes. Previous studies have focused on intensive qualitative investigation with a small number of schools—generally, between 4 and 14 schools—which provides few degrees of freedom to explore multiple hypotheses about successful practices. This study will include 50 high-performing and 50 low-performing schools (each with three classes per grade) from 10 different locations across the country, providing sufficient power for a more thorough statistical analysis of the relationships between school programs/instructional practices and student outcomes.

  • Use of more reliable measures of the growth of reading comprehension achievement, oral language development, and listening comprehension skills during the school year. Student growth might not be discernable on measures with poor reliability at the low or high ends of the achievement distribution. Many economically disadvantaged students perform below grade level on the standardized achievement tests used in previous studies. The measures proposed for this study are adaptive, and are administered one-on-one to ensure that directions are understood, pacing is appropriate, and assessment items are tailored to the child’s achievement level. By minimizing the possibility of floor and ceiling effects, this study will obtain more reliable measures of growth across nearly all students in the sample. Moreover, by measuring achievement in areas related to reading comprehension—language development, background knowledge, and listening comprehension—the study can examine how programs and practices support growth in these areas and model the interrelationships among these outcomes at each grade level.

  • Measures of school programs and instructional practices that encompass all major theoretical perspectives. Past outlier studies have based their measurement protocols on a limited number of theories about the practices believed to be important. For instance, in the CIERA study (Taylor et al. 2000), researchers measured explicit teaching and scaffolding of comprehension instruction, but did not measure the nature of text discussion, the oral language environment, or other variables associated with alternative theories of reading comprehension growth. The Title I ECLD study will measure programs and practices associated with all major theoretical perspectives on instructional practices to support reading comprehension. This approach will enable us to consider multiple instructional and school program approaches to higher reading achievement in the analysis.

  • Multiple approaches to measuring teacher practices that capture even low-frequency practices that might be important. Observation studies typically observe teachers for between a half-day and a full day. A practice that occurs less often than once a day might not be observed in such studies. The ratio of the hours of observation to the likely frequency of a positive incident occurring determines the likelihood of seeing the focal behavior; variation in observed behavior across time is referred to as occasion variance (a simple numerical illustration follows this list). To reduce the problem of occasion variance, this study will increase the amount of observation time (two half-days in the fall and two half-days in the spring) and include questions about low-incidence practices on teacher surveys. In addition, the observation rubric will include qualitative items and tallies of behavior; the former have demonstrated reliability and links with student outcomes in previous studies (Baker et al. 2006; Hamre et al. 2010; Taylor et al. 2003).

  • An observational measure of instructional practices that has high reliability. Many previous studies have attempted to measure associations between instructional practices and student learning, both in the context of an experimental change in instruction and in large descriptive datasets (for example, Jackson, et al. 2007; Gamse et al. 2008a and b; and Mashburn, et al. 2008), but these studies have found weak or no relationships. This could result either from no true underlying relationship or from low reliability of the instructional practice measures, which can weaken the power of the study to detect relationships with student outcomes. The study team is developing a reliable observational measure of instructional practices by providing clear labels for rating each item, intensive observer training, and stringent certification requirements.

  • Measures of instructional practices and student outcomes across five grades. No previous study has measured instructional practices and student outcomes related to the growth of language development, comprehension, and reading achievement from prekindergarten through grade 3 on a large scale so that changes over time in the growth of these related student outcomes and any differences by grade level in the practices associated with student progress can be measured.
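To illustrate the occasion-variance point above, suppose (purely for illustration; the study makes no such distributional assumption) that occurrences of a classroom practice follow a Poisson process. The chance of observing the practice at least once then grows with total observation time:

    import math

    def p_observe_at_least_once(rate_per_day, days_observed):
        """Probability of seeing a practice at least once, assuming
        occurrences follow a Poisson process (illustrative only)."""
        return 1 - math.exp(-rate_per_day * days_observed)

    # A practice used on average once every two days (rate = 0.5 per day):
    print(p_observe_at_least_once(0.5, 1.0))  # ~0.39 after one day of observation
    print(p_observe_at_least_once(0.5, 2.0))  # ~0.63 after four half-days

Under this stylized assumption, doubling observation time from one day to two raises the chance of seeing such a practice from roughly 39 to 63 percent, which is why the study observes each classroom on four separate half-days and supplements observations with teacher survey items.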

Thus, the Title I ECLD study will bring the strongest measurement, sampling, and statistical analysis approaches to examine instructional programs and practices associated with greater growth in reading achievement and related language and comprehension development from prekindergarten through grade 3.

The study will seek to answer the following questions about the growth of children’s achievement from prekindergarten through grade 3 and its association with school programs and instructional practices:

  1. How do language development, background knowledge, and comprehension develop from prekindergarten through grade 3?

  2. What programs do the sample of schools use to support children’s language development, background knowledge, and reading comprehension?

  3. What teacher instructional practices are observed to support children’s language development, background knowledge, and reading comprehension?

  4. What school programs are associated with greater student progress in language development, background knowledge, and comprehension?

  5. What instructional practices are associated with greater student progress in language development, background knowledge, and comprehension?

In addition, the study will address the following questions about the methodology for identifying high- and low-performing schools and measuring instructional practices:

  1. Can we accurately identify high- and low-performing schools using readily available school-level performance data and demographic information? Do schools tend to have consistently high or low performance across grades and across classrooms? Are third grade assessment measures (typically the first year states collect standardized results) indicative of cumulative school effects in earlier grades?

  2. How can researchers measure instructional practices more reliably?

Study Timeline. The study began in October 2009 and is a five-year project. Activities planned for each year are as follows:

  • Year 1 (October 2009 to September 2010). Planning and design activities, including defining and measuring consistently high- and consistently low-performing Title I schools in selected districts, identifying student assessments, developing classroom observation measures, drafting other data collection forms, and finalizing the study design.

  • Year 2 (October 2010 to September 2011). Recruiting districts and schools, finalizing data collection instruments and training materials, and training data collection staff. We will also schedule and collect data in schools with August start dates.

  • Year 3 (October 2011 to September 2012). Fall and spring data collection.

  • Year 4 (October 2012 to September 2013). Analyze the data and write the report.

  • Year 5 (October 2013 to September 2014). Revise the report and prepare restricted-use data files.

Study Sample. The study sample will be composed of 100 Title I schools in 10 locations: 50 schools with consistently high third-grade reading achievement scores and 50 schools with consistently low third-grade reading achievement scores. High-performing schools are those with reading proficiency rates above the median for Title I schools in the state and with higher-than-expected reading proficiency rates conditional on the percentage of students eligible for free or reduced-price lunch. Low-performing schools are defined using analogous criteria, including reading proficiency rates below the 25th percentile for Title I schools in the state and lower-than-expected reading proficiency rates conditional on the percentage eligible for free or reduced-price lunch. Further details about the school performance criteria are discussed in Part B. Within each school, we will randomly sample three classrooms per grade (prekindergarten through grade 3), for a total of 1,500 classrooms. Within each classroom, we will randomly sample eight students for a total of 12,000 students (due to student mobility, we expect the spring student sample will decrease to 7,500).
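The nested structure of the sample (schools, then classrooms within grade, then students within classroom) can be summarized in a short sketch. This is a minimal illustration of the multistage selection logic only; the data structures and names are hypothetical, and the actual sampling will follow the procedures described in Part B:

    import random

    GRADES = ["PK", "K", "1", "2", "3"]

    def draw_study_sample(schools, n_classes=3, n_students=8, seed=12345):
        """Multistage sample: 3 classrooms per grade in each school, then 8
        students per sampled classroom. With 100 schools this reproduces the
        design counts of 1,500 classrooms and 12,000 students."""
        rng = random.Random(seed)
        students = []
        for school in schools:  # school: dict mapping grade -> list of classrooms
            for grade in GRADES:
                for classroom in rng.sample(school[grade], n_classes):
                    # classroom: list of student identifiers
                    students.extend(rng.sample(classroom, n_students))
        return students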

Data Collection Plan. The study includes several complementary data collection efforts that will support answers to the study’s research questions. Table 1 lists the timing of the data collection activities, and a brief description of each is provided below.

Table 1. Timing of Data Collection for Title I ECLD

Data Collection Activity                    Fall 2011   Spring 2012

Principal survey                                X
Prekindergarten program director survey         X
Teacher survey                                               X
Parent interview                                             X
Teacher-student report                                       X
School records                                               X
Student assessments                             X            X
Classroom observations                          X            X

  • Principal survey. Hard-copy surveys will be administered to principals in fall 2011. Questions will address reading instructional programs and practices used from prekindergarten through grade 3; the extent to which curriculum and instructional practices are coordinated from prekindergarten through grade 3 in the school; supports for struggling readers; and professional development (related to reading and general teaching practices) available to teachers.

  • Prekindergarten program director survey. Where needed, hard-copy surveys will be administered in fall 2011 to the prekindergarten program director to cover questions particular to these programs that are outside the school principal’s purview, such as the prekindergarten curriculum and the professional development offered. This survey will include a subset of the items on the principal survey that focus on the prekindergarten program and will be given to the program director associated with the school’s prekindergarten program. In some districts, a prekindergarten program director may be responsible for prekindergarten classrooms in more than one school; in other districts, the prekindergarten classrooms may be fully integrated with the school. We assume that the number of prekindergarten program directors is 20, reflecting some directors responding for all 10 schools in a district and others responding for just two or three schools. In several districts, we do not expect to need a prekindergarten program director survey at all because the principal will have responsibility for the prekindergarten classes in the school.

  • Teacher survey. Web-based surveys will be completed by teachers in spring 2012. Items include teacher background, credentials, professional development, reading programs used, books/readers used in the classroom, reading instructional activities and teaching strategies, and support for struggling readers and dual language learners (DLLs).

  • Parent interview. Telephone surveys will be conducted with parents in spring 2012. Items will address family resources and risk factors, including parent education, employment status, income level, marital status, race/ethnicity, and language spoken in the home; home literacy environment, including reading to the child and availability of literacy materials; and parental and family involvement with students’ education, including help with homework and providing children with out-of-home enrichment activities.

  • Teacher-student report. The study will use a web-based report to collect student-level data from teachers on individual children’s engagement/attention, special placement and receipt of services, special support for reading, and disruptive behavior. These data will be collected in spring 2012.

  • School records. The study will collect school records data for all children in the study in spring 2012. The data will be collected electronically and will include the date each child enrolled in the school and the grades attended, receipt of special education services, grade repetition, standardized test scores, and attendance.

  • Student assessments. The study will assess the language development, background knowledge, and comprehension of students in the study. A computer-assisted, one-on-one assessment will be administered to the sample of prekindergarten through third grade students in fall 2011 and again in spring 2012 to measure these outcomes. The actual measures used will vary by grade and children’s English language skills. Table 2 lists the components of the child assessment.

Table 2. ECLD Language Routing Assessment Paths

Home Language:                          English        Spanish                        Other
Path:                                   English Path   English+ Path  Spanish+ Path   English Path   Other Path
Grade:                                  PK/K  1  2/3   PK/K  1  2/3   PK/K  1  2/3    PK/K  1  2/3   PK/K  1  2/3

Language Screener (a)
  preLAS 2000: Simon Says and Art Show  X     X  X     X     X  X     X     X  X      X     X  X     X     X  X

Language Development (b)
  CELF P–2 English                      X              X              X               X              X
  CELF P–2 Spanish                                     F              F
  CELF–4 English                              X  X           X  X           X  X            X  X           X  X
  CELF–4 Spanish                                             F  F            F  F

Background Knowledge
  ECLS–K General Knowledge              X     X        X     X                        X     X

Reading Fluency
  W-J III Reading Fluency                     S  S           S  S                           S  S

Listening Comprehension
  W-J III Oral Comprehension            X     X  X     X     X  X                     X     X  X

Reading Comprehension
  ECLS–K Third Grade Reading                     X              X               X               X               X

Note: X = measure administered in both the fall and spring data collection waves; F = fall only; S = spring only. CELF–4 = Clinical Evaluation of Language Fundamentals–Fourth Edition; CELF P–2 = Clinical Evaluation of Language Fundamentals Preschool–Second Edition; ECLS–K = Early Childhood Longitudinal Study–Kindergarten Class of 1998–99; preLAS 2000 = Preschool Language Assessment Survey 2000; W-J III = Woodcock-Johnson III, Tests of Achievement.

(a) The language screener will be administered in the fall to all students as a warm-up activity and as a language screener for students whose home language is not English. In the spring, the Simon Says task in the screener will be administered to all students as a warm-up activity; however, only those students who did not pass the language screener in the fall will receive Art Show to reassess those students’ English language skills.

(b) The CELF subtests include: Concepts and Following Directions, Expressive Vocabulary, Word Classes, and Sentence Structure (ages 4 through 8).
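The routing in Table 2 can be read as a simple decision rule based on home language and fall screener results. The sketch below is one plausible rendering of that logic for illustration only; the function name and the pass/fail branching are assumptions, not the study’s operational routing specification:

    def assessment_path(home_language, passed_english_screener):
        """Assign a student to an assessment path per Table 2 (simplified,
        illustrative reading of the table; not the study's actual rules)."""
        if home_language == "English":
            return "English Path"
        if home_language == "Spanish":
            return "English+ Path" if passed_english_screener else "Spanish+ Path"
        # All other home languages
        return "English Path" if passed_english_screener else "Other Path"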

  • Classroom observations. Each of the 1,500 classrooms in the study will be observed twice in fall 2011 and twice in spring 2012. Two different trained observers will each observe each classroom for one half-day in the fall and again in the spring, using a measure developed for the study. In addition, one of the observers will also record information on books in the classroom and collect audio samples of teacher language use during one of the spring visits. Measures will assess the emotional supportiveness or positive climate of the classroom; teacher language modeling and support for learning; and approaches to supporting children’s language development, comprehension of oral and written information (that is, listening and reading comprehension, respectively), and expansion of background knowledge.

A2. Purpose and Use of the Information Collection

Table 3 lists the study’s research questions and the data collection that will be used to investigate each question.

Table 3. Research Questions and Data Collection Methods

1. How do language development, background knowledge, and comprehension develop from prekindergarten through grade 3?
   Data collection: student assessments.

2. What programs do the sample of schools use to support children’s language development, background knowledge, and reading comprehension?
   Data collection: principal and prekindergarten director surveys; classroom observations; teacher survey.

3. What instructional practices are observed to support children’s language development, background knowledge, and reading comprehension?
   Data collection: classroom observations; teacher survey.

4. What school programs are associated with greater student progress in language development, background knowledge, and comprehension?
   Data collection: principal and prekindergarten director surveys; student assessments; parent interview; teacher-student report; school records.

5. What instructional practices are associated with greater student progress in language development, background knowledge, and comprehension?
   Data collection: classroom observations; teacher survey; student assessments; parent interview; teacher-student report; school records.

6. Can we accurately identify high- and low-performing schools using readily available school-level performance data and demographic information? Do schools tend to have consistently high or low performance across grades and across classrooms? Are third grade assessment measures (typically the first year states collect standardized results) indicative of cumulative school effects in earlier grades?
   Data collection: student assessments; student records.

7. How can researchers measure teaching practices more reliably?
   Data collection: classroom observations.


The study team will use the data to identify programs and practices that show promise for improving student reading outcomes. Future studies can evaluate the impacts of promising programs and practices that emerge from this study on language and comprehension outcomes for at-risk children in the early years of school. In addition, the study will provide important information about how to (1) accurately identify high- and low-performing schools and (2) measure teaching practices reliably. The contact letters and fact sheets describing the study to teachers and parents are located in Appendices B and C. The draft survey instruments are included in Appendices D-I. Final instruments will be included with the final OMB package.

A3. Use of Improved Technology and Burden Reduction

The data collection plan was designed to obtain accurate and reliable information efficiently while minimizing the burden on respondents. Consistent with that goal, information will be gathered from existing data sources, where feasible. To reduce the burden on school districts and school administrators, the Common Core of Data (CCD) has been used in identifying schools for the sample; however, the data are not current enough to provide sample characteristics or covariates for the analyses.

Additional existing data sources will include students’ school records and scores for school-administered tests. This information will be obtained in the form of computer files, if a school prefers this method. If it is too burdensome or not possible for a school to provide this information as a computer file, schools will be asked to provide the information either by using the school record form or by providing copies of report cards, which will be coded by the study team.

The teacher survey and teacher-student report are both web-based data collections, and the school records will be collected electronically to reduce burden on teachers and school staff. The parent interview is a computer-assisted telephone interview (CATI). The use of web-based and CATI data collection instruments reduces respondent burden by facilitating routing and skip patterns. The principal and prekindergarten director surveys are hard copy; however, with only 100 principals and 20 prekindergarten directors participating in the study, the cost of developing a CATI or web-based survey outweighs the benefits.

All of the individual student assessments will be conducted using computer-assisted personal interviewing (CAPI). This approach has many advantages. It modestly shortens the assessment, because the assessor does not have to interrupt its flow to calculate stopping points, and assessors can move more quickly through the assessment because complicated rules about which item or set of items comes next are controlled by the instrument software. Both of these features reduce burden and errors and improve the quality of the data and the accuracy of the child’s scores.
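For example, one kind of rule the CAPI software automates is a discontinue (ceiling) rule. The sketch below is illustrative only; the ceiling value and rule structure are assumptions rather than the actual specifications of the study’s instruments:

    def should_discontinue(item_results, ceiling=4):
        """Stop a subtest once `ceiling` consecutive items are missed.
        CAPI applies such rules instantly, so the assessor never pauses
        to count errors by hand (illustrative rule only)."""
        consecutive_misses = 0
        for correct in item_results:
            consecutive_misses = 0 if correct else consecutive_misses + 1
            if consecutive_misses >= ceiling:
                return True
        return False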

A4. Efforts to Avoid Duplication and Use of Similar Information

No equivalent sources of data exist for the study. Several ongoing studies collect data on classroom practices and young children’s language development, background knowledge, and comprehension (for example, the Head Start Family and Child Experiences Survey [FACES], the Early Childhood Longitudinal Study—Kindergarten Class of 1998-99 [ECLS-K], and the Early Childhood Longitudinal Study—Birth Cohort [ECLS-B]), but they do not provide sufficient information to address the questions that are central to this study:

  • FACES focuses only on Head Start program participants, and follows them from prekindergarten through kindergarten, missing grades 1 through 3 and missing many prekindergarten programs operating in schools.

  • FACES measures general instructional practices and broad student achievement, does not focus in detail on instructional practices that might support language development and reading achievement, and does not include a comprehensive language development measure.

  • ECLS-B and ECLS-K include a nationally representative sample of children in the United States, and therefore have too few children in Title I schools for separate analysis. Moreover, the Title I ECLD study is purposively selecting high-performing and low-performing Title I schools in order to examine more divergent instructional practices and student outcomes.

  • ECLS-B and ECLS-K measure instructional programs and practices through surveys, but do not include measures of detailed instructional practices, teacher language, and the classroom text environment that may identify promising programs and practices to support the growth of language and reading comprehension among children from low-income families. Moreover, the ECLS-B and ECLS-K do not include a measure of broad language development, which the CELF will provide in this study.

A5. Impact on Small Businesses or Other Small Entities

The primary entities for the study are school districts and schools, principals, prekindergarten directors, teachers, parents, and children. Burden is minimized for all respondents by requesting only the minimum data required to meet the study’s objectives. The burden on districts and schools will also be minimized through careful specification of information needs, restriction of questions to information readily available to the respondent, and the design of the data collection strategy. The sample sizes and data requirements were determined by careful consideration of the information needed to meet the study’s objectives and have been reviewed by the study’s Expert Panel, listed below in Section A8—Consultations Outside of the Agency.

A6. Consequences of Collecting the Information Less Frequently

The data collection plan described in this submission is necessary for conducting ED’s National Title I Study of Implementation and Outcomes: Early Childhood Language Development and, consistent with the goal of Title I legislation, may help identify programs and practices to improve reading comprehension outcomes for at-risk children. The study represents an important next step in developing a systematic and rigorous evaluation agenda in the areas of early childhood and early reading.

With the exception of students, all individuals (principals, prekindergarten directors, teachers, parents) will be asked to participate only once. Students will be assessed twice (at the start and end of the school year), so that we can measure gains in language and comprehension.

A7. Special Circumstances Relating to the Guidelines of 5 CFR 1320.5

There are no special circumstances associated with this data collection.

A8. Comments in Response to the Federal Register Notice and Efforts to Consult Outside the Agency

Federal Register Announcement. As required by 5 CFR 1320.8(d), a 60-day notice was published in the Federal Register on February 3, 2011, on page 6122 (see Appendix J). A 30-day notice will also be published in the Federal Register, and IES will respond to any public comments received by the end of the 30-day comment period.

Consultations Outside of the Agency. The study team has contacted members of its Expert Panel for advice on the study design and data collection plan. The Expert Panel includes a number of leading experts in student assessment, language and reading development, observational assessment of instructional practices, sampling and evaluation design, and other areas relevant to this study. Their feedback was obtained through in-person and telephone meetings. Members of the Expert Panel for this study are listed in Table 4.

Throughout the study, the team will consult with the panel on additional issues that would benefit from their input.

Table 4. Expert Panel Members

Expert Panel Member          Organizational Affiliation

Thomas Cook                  Professor of Sociology, Psychology, Education and Social Policy, Northwestern University
David Dickinson              Professor of Education, Vanderbilt University
Barbara Foorman              Francis Eppes Professor of Education, Florida State University
Christopher Lonigan          Professor, Florida State University
Charles Perfetti             Distinguished University Professor of Psychology, University of Pittsburgh
Ray Reutzel                  Emma Eccles Jones Endowed Chair and Distinguished Professor of Early Childhood Education, Utah State University
Don Rock                     Senior Research Scientist, Educational Testing Service
Christopher Schatschneider   Associate Professor, Florida State University
Catherine Snow               Henry Lee Shattuck Professor of Education, Harvard University

A9. Explanations of Any Payments or Gifts to Respondents

We propose offering incentives to participants. Payments will be similar to those offered to respondents for completing comparable instruments in other studies. Teachers will be paid $20 for completing the teacher survey and will also receive $5 for each Teacher-Student Report they complete (up to a maximum of seven). Parents will be paid $15 for completing the parent interview, and their children will receive stickers as a thank-you for participating in the student assessment component of the study.

The proposed amounts are within the incentive guidelines outlined in the March 22, 2005, memo, “Guidelines for Incentives for NCEE Evaluation Studies,” prepared for OMB. Teachers are reported to be the targets of numerous requests to complete surveys on a wide variety of topics from state and district offices, independent researchers, and the Department of Education (Policy and Program Studies Service and IES). The collective bargaining agreements in many districts do not allow teachers to complete surveys during school time. Therefore, we propose the incentives as an efficient way to obtain response rates of at least 80 percent. The proposed incentives are consistent with incentives approved for similar ED studies and with incentives used in other large-scale studies such as FACES. For example, in the FACES 2006 study, teachers are paid $25 for completing the teacher survey and $5-$7 for completing each teacher-student report, and parents are paid $35 for completing the parent survey. In the evaluation of Highly Selective Alternative Certification of Teachers, teachers are paid $30 for completing the teacher survey.

A10. Assurance of Privacy Provided to Respondents

The data collection efforts that are the focus of this clearance package will be conducted in accordance with all relevant regulations and requirements to protect the privacy of respondents. None of the information collected will be reported or published in a manner that would identify individual respondents. Responses to this data collection will be used only for statistical purposes. The reports prepared for this study will summarize findings across the sample and will not associate responses with a specific district, school, or individual.

Mathematica and its subcontractors follow the confidentiality and data protection requirements of IES (The Education Sciences Reform Act of 2002, Title I, Part E, Section 183), which requires “All collection, maintenance, use, and wide dissemination of data by the Institute” to “conform with the requirements of section 552 of title 5, United States Code, the confidentiality standards of subsection (c) of this section, and sections 444 and 445 of the General Education Provisions Act (20 U.S.C. 1232g, 1232h).” These citations refer to the Privacy Act, the Family Educational Rights and Privacy Act, and the Protection of Pupil Rights Amendment. In addition, for student information, “The Director shall ensure that all individually identifiable information about students, their academic achievements, their families, and information with respect to individual schools, shall remain confidential in accordance with section 552a of title 5, United States Code, the confidentiality standards of subsection (c) of this section, and sections 444 and 445 of the General Education Provisions Act.” Subsection (c) of section 183 referenced above requires the Director of IES to “develop and enforce standards designed to protect the confidentiality of persons in the collection, reporting, and publication of data.” Subsection (d) of section 183 prohibits disclosure of individually identifiable information and makes any publishing or communicating of individually identifiable information by employees or staff a felony.

Every data collector will be required to sign a pledge to protect the privacy of respondent data and to ensure the security of the assessment materials (see Appendix K for a copy of Mathematica’s Confidentiality Pledge). The pledge indicates that any violation or unauthorized disclosure may result in legal action or other sanctions by Mathematica, including the termination of employment. The interviewer/assessor training will include a discussion of human subject protection. A copy of the signed pledges will be kept on file and will, upon request, be submitted to ED.

When reporting the results, data will be presented only in aggregate form, so that individuals and institutions will not be identified. Information from the study will be used for research purposes only, and individually identifiable information will not be disclosed. A statement to this effect will be included on the parental consent forms. The teacher survey, principal survey, prekindergarten director survey, and parent interview will include a reminder about privacy in compliance with the legislation. When data are collected through in-person or telephone interviews, respondents will be reminded about the privacy protections and their right to refuse to answer questions. Consent for children to participate in the study will be obtained from the student’s parent or guardian. While minors who enroll in a research study would ideally be asked for their written agreement to participate at the same time parental consent is obtained, the children enrolling in this study are too young to understand a formal written process. Therefore, before beginning the assessment (in both fall and spring), we will inform children that they may refuse to do any activities they do not want to do and ask whether they agree to participate. Thus, children will provide their agreement orally at the time of the assessment. As with the principal, prekindergarten director, teacher, and parent surveys, all data collected on children will be held in the strictest confidence. Names will be stripped from the data file, and all findings will be reported only in aggregate.

The following safeguards will be employed routinely by Mathematica to carry out confidentiality assurances during the study:

  • As noted above, all employees at Mathematica will sign a confidentiality pledge (Appendix K) emphasizing its importance and describing their obligation.

  • Access to sample selection and locating information will be limited to those who have direct responsibility for providing and maintaining it. At the conclusion of the research, these data will be destroyed.

  • Identifying information will be maintained on separate forms and files, linked only by a sample identification number.

  • Access to the file linking sample identification numbers with the respondents’ ID and contact information will be limited to a small number of individuals who have a need to know this information.

  • Access to the hard copy documents will be strictly limited. Documents will be stored in locked files and cabinets. Discarded materials will be shredded.

  • Computer data files will be protected with passwords, and access will be limited to specific users. Especially sensitive data will be maintained on removable storage devices kept physically secure when not in use.

A11. Justification for Sensitive Questions

The parent interview will include questions about household income, home language, family composition, and parent education, which some may view as sensitive items. We will use these data and data from other parent interview questions as covariates in the analyses to adjust for factors related to students’ language development, background knowledge, and reading comprehension or to their self-selection into a prekindergarten program or school. Interviewers will remind parents about the privacy of the information and the voluntary nature of the interview before asking any potentially sensitive questions. A majority of these questions have been used in other federally sponsored surveys cleared by OMB.

A12. Estimates of Burden Hours and Costs

The total reporting burden associated with this data collection request is 9,385 hours. Table 5 presents the burden hours, broken down by instrument and respondent.

The study will include 100 principals, who will complete the 30-minute principal survey, and 20 prekindergarten directors, who will complete the 15-minute prekindergarten director survey, for a total of 120 responses from this group. A total of 1,500 teachers will complete a teacher survey, and 7,500 parents with a child participating in the spring 2012 data collection will be interviewed. The 1,500 teachers will each complete 5 teacher-student reports, for a total of 7,500 teacher-student reports in the spring. Each of the 100 schools in the sample will provide school record information for 75 children in the spring, for a total of 7,500 school records for the students in the sample.

Table 5. Estimated Response Time and Burden

Instrument                          Number of     Responses per   Average Burden       Total
                                    Respondents   Respondent      Hours per Response   Burden Hours

Fall 2011 Data Collection
Principal Survey                    100           1               .50                  50
Prekindergarten Director Survey     20            1               .25                  5

Spring 2012 Data Collection
Teacher Survey                      1,500         1               .42                  630
Teacher-Student Report              1,500         5               .16                  1,200
School Records Data                 100           75              .50                  3,750
Parent Interview                    7,500         1               .50                  3,750

Estimated Total Burden Hours:                                                          9,385
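Each total burden figure in Table 5 is the product of the number of respondents, the responses per respondent, and the hours per response; the following quick check reproduces the 9,385-hour total:

    rows = [
        # (instrument, respondents, responses per respondent, hours per response)
        ("Principal Survey",                  100,  1, 0.50),
        ("Prekindergarten Director Survey",    20,  1, 0.25),
        ("Teacher Survey",                   1500,  1, 0.42),
        ("Teacher-Student Report",           1500,  5, 0.16),
        ("School Records Data",               100, 75, 0.50),
        ("Parent Interview",                 7500,  1, 0.50),
    ]
    total = sum(n * k * h for _, n, k, h in rows)
    print(total)  # 9385.0 hours, matching the table total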


A13. Estimates of Total Annual Cost Burden to Respondents or Record Keepers

There are no direct monetary costs to participants; they spend only their time to participate in the study.

A14. Annualized Cost to Federal Government

The total estimated cost of the study (base contract plus options) is $13,001,340, an annualized cost of $2,600,268 over five years.

The total estimated costs of the different components of the study are summarized in Table 6.

The estimated cost of the observational measures, including instrument development, observer training, and data collection for this component, is $3,469,854.

Table 6. Estimated Cost of Study Components

Study Component                        Estimated Cost

Study Design                           $242,593
Expert Panel, Management               $462,486
Instrument Development/OMB Package     $708,988
Site Selection and Recruitment         $374,557
Data Collection                        $10,208,467
Analysis and Reporting                 $1,004,249

Total                                  $13,001,340


A15. Explanations for Program Changes or Adjustments

This is a program change and is the second submission of a two-stage clearance request. The first submission (approved on August 2, 2010, under OMB control number 1850-0871) requested approval of the study’s sampling plan, the approach to collecting the information needed to select the sample, and district and school recruitment. In this package, IES is requesting approval for all data collection activities that will support the full-scale study. Specifically, the burden is increased for principals, prekindergarten directors, teachers, and children’s parents to complete the survey instruments, and for schools to provide information from students’ school records.

A16. Plans for Tabulation and Publication of Results and Project Time Schedule

a. Analysis Plans

The analytic strategies will be aligned with the study’s research questions (Section A1, Overview of the Study). Specifically, the analyses are designed to (1) describe how language development, background knowledge, and comprehension develop during the school year from prekindergarten through grade 3 (research question 1); (2) describe the school programs and instructional practices used to support children’s language development, background knowledge, and comprehension outcomes (research questions 2 and 3); and (3) analyze the relationships between school programs and instructional practices and children’s progress in language development, background knowledge, and comprehension (research questions 4 and 5). In addition, we will address methodological questions about (1) identifying high- and low-performing schools based on readily available data on school-level performance and student demographics (research question 6); and (2) how to measure instructional practices more reliably (research question 7).

Direct child assessments will provide data on children’s language development, background knowledge, and comprehension at the beginning and end of the school year. Information on school programs and instructional practices will draw on principal, prekindergarten director, and teacher surveys and structured observations of the classrooms. Parent interviews will provide information about the home literacy environment and other family background information.

Analyses will employ a variety of methods, including descriptive statistics (means, percentages), tests of differences across subgroups and over time (t-tests, chi-square tests), and multivariate analysis (hierarchical linear modeling [HLM]). The first three research questions can be answered by calculating averages and percentages of children, classrooms, or programs falling into various categories; comparisons of these averages across subgroups; and changes in children’s outcomes over time. More complex analyses of the relationships among school programs and teaching practices and children’s development that address questions 4 and 5 can be done through HLM.

For questions about the characteristics of teachers and children, the development of children’s language, background knowledge, and comprehension over the school year, and the types of school programs and teacher practices found in the sample of schools, we will calculate descriptive statistics such as averages and percentages. For example, we will calculate the average scores on the language development assessment for prekindergarten-age children in the fall and spring and the average gain score between fall and spring. Similarly, we will calculate the percentage of schools using particular reading curricula in prekindergarten (for example, Opening the World of Learning). For all descriptive analyses, we will calculate standard errors, taking into account multilevel sampling and clustering at the appropriate level (school, classroom, and child). We will use analysis weights that take into account complex multilevel sampling and nonresponse at each level.
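As a minimal sketch of a design-consistent descriptive estimate, the following computes a weighted mean with a school-clustered standard error, treating the mean as an intercept-only weighted regression. The variable names are hypothetical, and the production analysis would account for the full multilevel design and nonresponse adjustments described above:

    import numpy as np

    def weighted_mean_clustered_se(y, w, school_id):
        """Weighted mean with a cluster-robust (school-level) standard
        error (illustrative; real analyses would use survey software)."""
        y = np.asarray(y, dtype=float)
        w = np.asarray(w, dtype=float)
        school_id = np.asarray(school_id)
        mean = np.sum(w * y) / np.sum(w)
        resid = w * (y - mean)  # weighted residuals
        cluster_totals = np.array([resid[school_id == s].sum()
                                   for s in np.unique(school_id)])
        g = len(cluster_totals)  # number of schools (clusters)
        se = np.sqrt(g / (g - 1) * np.sum(cluster_totals ** 2)) / np.sum(w)
        return mean, se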

Analyses of the relationships between school programs, instructional practices, and children’s language development, background knowledge, and comprehension outcomes will use a value-added HLM approach that links student achievement with practices and programs in the study schools and classrooms, while properly accounting for the nested structure of the data, prior student achievement, and other confounding factors. The study’s main analytic models will be estimated by grade, using the scores obtained from fall and spring study-administered assessments of language development, background knowledge, and listening or reading comprehension. The models will include controls for the students’ performance at the beginning of the school year, teacher demographics, and other potential confounds.
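A sketch of how one such grade-level value-added model might be specified is shown below, using the statsmodels mixed-model interface. The file name, column names, and the two practice factors are placeholders rather than the study’s actual variables:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical analysis file: one row per student, with fall and spring
    # scores, classroom practice factors, and school/classroom identifiers.
    df = pd.read_csv("grade1_analysis_file.csv")

    model = smf.mixedlm(
        "spring_score ~ fall_score + practice_factor1 + practice_factor2",
        data=df,
        groups="school_id",                           # random intercept for schools
        vc_formula={"classroom": "0 + C(class_id)"},  # classrooms nested in schools
    )
    result = model.fit()
    print(result.summary())

In the study’s actual models, additional teacher and student covariates would enter the fixed-effects portion, as described above.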

The independent (explanatory) variables of interest to be included in the analytic models are (1) school programs and policies regarding allocation of resources to support language development and comprehension outcomes and support for time and quality of instruction, and (2) observed teaching practices, such as the quality and complexity of the teacher’s language use, instruction on comprehension strategies, and vocabulary instruction. Because we will include a number of school program and instructional practice measures, we will use factor analysis to reduce the number of variables to a smaller number of factors. The factor analysis will generate factor scores—that is, estimates of the scores that would have been received on each of the identified factors had they been measured directly—for classrooms and schools. These estimated factor scores are typically more reliable than the scores of the individual observed variables.

In addition, by reducing the number of variables to be included in the models, we expect to estimate the relationship between student learning and school programs and instructional practices more precisely. We will assess the predictive validity of the factor scores by examining the extent to which they are statistically significantly associated with differences in student learning across study schools and classrooms (having controlled for confounding factors). Significant factors will signal which programs and practices best help predict which schools and classes generate higher student achievement, and thus may deserve further study.
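A minimal sketch of the factor-score step, using scikit-learn, appears below; the two-factor solution, varimax rotation, and input file are illustrative choices, not the study’s specification:

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    # Hypothetical input: one row per classroom, one column per observed
    # instructional-practice item.
    X = np.loadtxt("practice_items.csv", delimiter=",")  # placeholder file

    fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0)
    factor_scores = fa.fit_transform(X)  # classroom-level factor scores
    loadings = fa.components_.T          # item loadings on each factor

The estimated factor scores would then enter the value-added models sketched earlier in place of the many individual practice items.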

Table 7 illustrates how the association between programs and practices and students’ outcomes may be displayed. The estimates and statistical significance of the factor scores, which represent the associations of school programs and instructional practices with student outcomes, are the focus of this study.

Table 7. Association Between Factor Scores and Student’s Language Development, Grade X

                      Student Language Development    Student Language Development
                      Outcome 1                       Outcome 2
                      Coefficient (s.e.)   p-value    Coefficient (s.e.)   p-value

Classroom Practices
  Factor Score 1
  Factor Score 2
School Practices
  Factor Score 3
  Factor Score 4

Notes: In addition to the results included in this table, the models controlled for the relevant pretest, student, classroom, and school covariates.

(s.e.) = Standard errors of estimated coefficients will be presented in parentheses.

p < 0.10; * p < 0.05; ** p < 0.01.

The data from the study will also be used to address the following methodological questions:

  • How to accurately identify high- and low-performing schools. The study’s measures of student outcomes and instructional practices will support analyses of how consistently low or high performing the sample schools are. Using the student-level measures, which are comparable across schools and more sensitive than the state assessment measures used to select high- and low-performing schools for the study, we will examine how consistently each school performs, both on average across prekindergarten through grade 3 and within each of those grades. We will measure the distribution of student growth associated with teachers within each school and grade level and analyze the extent to which student growth in each grade predicts whether the school was identified as high or low performing on the basis of readily available state reading proficiency data. We will also analyze whether the instructional practices that the study’s analyses associate with greater student growth predict that classification. Together, these analyses will indicate the extent to which the state grade 3 reading proficiency data were consistent with the growth achieved by students in prekindergarten through grade 3 a year or two later (a minimal sketch of this classification analysis follows this list).

  • How to reliably measure instructional practices. We will describe the observational protocol and compare its reliability with that of other commonly used classroom observation measures. We will report the levels of inter-rater reliability as well as the variation across observations of a single teacher during the year (one common agreement statistic is also sketched after this list). We will discuss how the measure, training materials, and training procedures were designed to improve reliability, highlighting differences from measures used in previous studies, and we will include recommendations for improving the reliability of instructional practice measures in future studies.
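
A minimal sketch of the first methodological analysis, assuming hypothetical school-level growth measures and a simulated high/low designation: a logistic regression tests whether within-study student growth predicts the original state-based classification.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n_schools = 120
    school_df = pd.DataFrame({
        "mean_growth_prek": rng.normal(6, 2, n_schools),  # hypothetical growth measures
        "mean_growth_g3": rng.normal(8, 2, n_schools),
    })
    # Simulate a state designation loosely tied to true grade 3 growth
    p = 1 / (1 + np.exp(-(school_df["mean_growth_g3"] - 8)))
    school_df["high_performing"] = rng.binomial(1, p)

    logit = smf.logit("high_performing ~ mean_growth_prek + mean_growth_g3",
                      data=school_df).fit(disp=False)
    print(logit.summary())   # significant growth terms indicate consistency between
                             # the state designation and students' observed growth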

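For the second methodological question, the sketch below computes one common inter-rater agreement statistic (Cohen's kappa) for two observers coding the same classroom segments; the observation codes shown are invented, and the study's protocol may rely on different statistics.

    from sklearn.metrics import cohen_kappa_score

    # Invented codes from two observers rating the same six classroom segments
    rater_a = ["vocab", "comp", "comp", "other", "vocab", "comp"]
    rater_b = ["vocab", "comp", "other", "other", "vocab", "comp"]
    print(f"Cohen's kappa: {cohen_kappa_score(rater_a, rater_b):.2f}")
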
b. Publication Plans and Time Schedule

The study team plans to prepare one report summarizing the analyses and findings. The report will present the descriptive findings on the growth of children’s language development, background knowledge, and comprehension outcomes during the school year from prekindergarten through grade 3, as well as the school programs and instructional practices found across the schools in the sample. It will also present the multivariate analyses and findings on the associations of teacher practices and school programs with the growth of children’s language development, background knowledge, and comprehension outcomes. Information on the methodological studies will be summarized in appendices to the report. The draft of the final report is due to ED in September 2013, and the report is projected for release in the spring of 2014.

A17. Display of Expiration Date for OMB Approval

The study will display the OMB expiration date on all respondent materials and study instruments.

A18. Exceptions to Certification for Paperwork Reduction Act Submissions

No exceptions to the certification statement are being sought.

References

Baker, S. K., R. Gersten, D. Haager, and M. Dingle. “Teaching Practice and the Reading Growth of First-Grade English Learners: Validation of an Observation Instrument.” The Elementary School Journal, vol. 107, 2006, pp. 199-220.

Chernoff, J. Jacobson, K.D. Flanagan, C. McPhee, and J. Park. Preschool: First Findings From the Preschool Follow-up of the Early Childhood Longitudinal Study, Birth Cohort (ECLS-B). (NCES 2008-025). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics, 2007.

Crissey, Sarah R. “Educational Attainment in the United States: 2007.” Current Population Reports, P20-560. Washington, DC: U.S. Department of Commerce, January 2009.

Gamse, B.C., H.S. Bloom, J.J. Kemple, and R.T. Jacob. Reading First Impact Study: Interim Report. (NCEE 2008-4016). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, 2008a. [http://ies.ed.gov/ncee/pubs/20084016/].

Gamse, B.C., R.T. Jacob, M. Horst, B. Boulay, and F. Unlu. Reading First Impact Study Final Report. (NCEE 2009-4038). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, 2008b. [http://ies.ed.gov/ncee/pubs/20094038].

Hamre, Bridget K., Laura M. Justice, Robert C. Pianta, Carolyn Kilday, Beverly Sweeney, Jason T. Downer, and Allison Leach. “Implementation Fidelity of MyTeachingPartner Literacy and Language Activities: Associations with Preschoolers’ Language and Literacy Growth.” Early Childhood Research Quarterly, vol. 25 (3), 2010, pp. 329-347.

Hirsch, E.D., Jr. The Knowledge Deficit: Closing the Shocking Education Gap for American Children. New York: Houghton Mifflin, 2006.

Hirsch, E.D., Jr. “Reading Comprehension Requires Knowledge—of Words and the World.” American Educator, vol. 27, no. 1, 2003, pp. 10-13, 16-22, 28-29, 48.

Hoffman, J.V. “Teacher and School Effects in Learning to Read.” In R. Barr, M.L. Kamil, P.B. Mosenthal, and P.D. Pearson (Eds.), Handbook of Reading Research, Vol. II (pp. 911-950). New York: Longman, 1991.

Hoover, Wesley A., and Philip B. Gough. “The Simple View of Reading.” Reading and Writing: An Interdisciplinary Journal, vol. 2, 1990, pp. 127-160.

Jackson, Russell, Ann McCoy, Carol Pistorino, Anna Wilkinson, John Burghardt, Melissa Clark, Christine Ross, Peter Schochet, and Paul Swank. National Evaluation of Early Reading First: Final Report. U.S. Department of Education, Institute of Education Sciences. Washington, DC: U.S. Government Printing Office, 2007.

Judkins, David, Robert St. Pierre, Babette Gutmann, Barbara Goodson, Adrienne von Glatz, Jennifer Hamilton, Ann Webber, Patricia Troppe, and Tracy Rimdzius. A Study of Classroom Literacy Interventions and Outcomes in Even Start. (NCEE 2008-4028). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education, 2008. [http://ies.ed.gov/ncee/pubs/20084028].

Langer, J. “Beating the Odds: Teaching Middle and High School Students to Read and Write Well.” American Educational Research Journal, vol. 38, 2001, pp. 837-880.

Mashburn, Andrew J., Robert C. Pianta, Bridget K. Hamre, Jason T. Downer, Oscar A. Barbarin, Donna Bryant, Margaret Burchinal, Diane M. Early, and Carollee Howes. “Measures of Classroom Quality in Prekindergarten and Children’s Development of Academic, Language, and Social Skills.” Child Development, vol. 79, no. 3, May/June 2008, pp. 732-749.

National Early Literacy Panel. Developing Early Literacy: Report of the National Early Literacy Panel. Washington, DC: National Institute for Literacy, 2008.

National Institute of Child Health and Human Development. Teaching Children to Read: An Evidence-Based Assessment of the Scientific Research Literature on Reading and Its Implications for Reading Instruction. Washington, DC: U.S. Government Printing Office, 2000.

Preschool Curriculum Evaluation Research (PCER) Consortium. Effects of Preschool Curriculum Programs on School Readiness. Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Research, 2008.

Pressley, Michael, Ruth Wharton-McDonald, Lisa M. Raphael, Kristen Bogner, and Alysia Roehrig. “Exemplary First Grade Teaching.” In B. M. Taylor and P. D. Pearson, Teaching Reading: Effective Schools, Accomplished Teachers. Mahwah, NJ: Lawrence Erlbaum Associates, Inc., 2002.

Rueda, R., and M.P. Windmueller. “English Language Learners, LD, and Overrepresentation: A Multiple Level Analysis.” Journal of Learning Disabilities, vol. 39, no. 2, 2006, pp. 99-107.

Rutter, Michael. “Pathways from Childhood to Adult Life.” Journal of Child Psychology and Psychiatry, vol. 30, 1989, pp. 23-51.

Snow, Catherine E., M. Susan Burns, and Peg Griffin. Preventing Reading Difficulties in Young Children. Washington, DC: The National Academies, 1998.

Tarullo, Louisa, Jerry West, Nikki Aikens, and Lara Hulsey. “Beginning Head Start: Children, Families, and Programs in Fall 2006.” Washington, DC: Mathematica Policy Research, 2008.

Taylor, Barbara M., P. David Pearson, Kathleen Clark, and Sharon Walpole. “Effective Schools and Accomplished Teachers: Lessons about Primary-grade Reading Instruction in Low-income Schools.” The Elementary School Journal, vol. 101, no. 2, 2000, pp. 121-165.

Taylor, Barbara M., P. David Pearson, Debra S. Peterson, and Michael C. Rodriguez. “Reading Growth in High-Poverty Classrooms: The Influence of Teacher Practices That Encourage Cognitive Engagement in Literacy Learning.” The Elementary School Journal, vol. 104, no. 1, 2003.

Taylor, Barbara M., Debra S. Peterson, P. David Pearson, and Michael C. Rodriguez. “Looking Inside Classrooms: Reflecting on the ‘How’ as Well as the ‘What’ in Effective Reading Instruction.” The Reading Teacher, vol. 56, no. 3, 2002, pp. 270-279.

Taylor, Barbara M., P. David Pearson, Debra S. Peterson, and Michael C. Rodriguez. “The CIERA School Change Framework: An Evidence-Based Approach to Professional Development and School Reading Improvement.” Reading Research Quarterly, vol. 40, no. 1, 2005, pp. 40-69.

U.S. Department of Education. The Nation’s Report Card: Reading 2007. (NCES 2007-496). Washington, DC: National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education, 2007. [http://nces.ed.gov/nationsreportcard/pdf/main2007/2007496.pdf].

U.S. Department of Education. The Nation’s Report Card: Reading 2009. (NCES 2010-459). Washington, DC: National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education, 2010. [http://nces.ed.gov/nationsreportcard/pdf/dst2009/2010459.pdf].

Weber, George. “Inner-City Children Can Be Taught to Read: Four Successful Schools.” Washington, DC: Council for Basic Education, 1971.

Zill, Nicholas, Gary Resnick, Kwang Kim, Kevin O’Donnell, Alberto Sorongon, Ruth Hubbell McKey, Shefali Pai-Samant, Cheryl Clark, Robert O’Brien, and Mary Ann D’Elio. “Head Start FACES 2000: A Whole-Child Perspective on Program Performance.” Washington, DC: U.S. Department of Health and Human Services, Administration for Children and Families, May 2003.






1 The prekindergarten director survey includes a subset of the items in the principal survey that focus specifically on prekindergarten.

