Implementation of Title I/II Program Initiatives

OMB: 1850-0902



Supporting Statement for Paperwork Reduction Act Submission

PART A: Justification

Contract ED-IES-11-C-0063







September 2013

(Updated January 2014)


Prepared for:

Institute of Education Sciences

U.S. Department of Education




Prepared by:

Westat

Mathematica Policy Research

edCount




Appendix A State Survey and Extant Data Form

Appendix B District Survey

Appendix C Principal Survey

Appendix D Teacher Survey

Appendix E Teacher Roster

Appendix F Notification Letters

Appendix G Study Brochure




Tables


A - 1. Research questions and data sources

A - 2. Estimates of respondent burden


Part A. Justification


This package is the first of two for the Implementation of Title I/II Program Initiatives study. This package requests approval for an initial round of recruitment and data collection that will include surveys of all states, nationally representative samples of school districts and schools, and a nationally representative sample of Kindergarten through 12th grade core academic and special education teachers. We anticipate that the state education agency (SEA), school district, principal, and teacher surveys will begin in February 2014. The second package will request approval for the follow-up survey, which will survey the same states, districts, and schools as well as a new nationally representative sample of teachers within the sampled schools.


Introduction


Title I is one of the U.S. Department of Education’s (ED) largest elementary and secondary education programs. Historically, Title I has provided financial assistance to schools and districts with high percentages of students from low-income families to help raise these students’ achievement. Title I also requires that states hold schools and districts accountable for improvements in student achievement. During the 2009-2010 school year, more than 56,000 public schools used Title I funds, and the program served 21 million children (U.S. Department of Education, 2012a). Title II provides funds to increase academic achievement by improving teacher and principal quality, including educator preparation and professional development, as well as funds for class size reduction. An estimated 95 percent of districts nationally received Title II, Part A funding for the 2011-12 school year (U.S. Department of Education, 2012b).


The last National Assessment of the Title I program concluded in 2006. Since that time, there have been changes in Title I provisions, such as allowing states to incorporate proficiency improvement (alongside proficiency levels) into school accountability measures and providing more resources to the lowest performing schools through the expansion of eligibility and funding for School Improvement Grants (SIG). Title II guidance allows more flexibility for certain teachers (e.g., special education teachers) to meet the standards to become highly qualified teachers (HQT).


The most recent change related to Title I and Title II was the introduction of the Elementary and Secondary Education Act (ESEA) Flexibility requests in 2011, allowing states to waive a number of provisions in exchange for a commitment to key reform principles. The Title I/II study will provide policy makers with detailed information on how these initiatives are playing out in states, districts, schools, and classrooms.


Overview of the Study


The study of the Implementation of Title I/II Program Initiatives is commissioned by the Institute of Education Sciences (IES), ED’s independent research and evaluation arm. The Title I/II study will examine the implementation of policies promoted through ESEA at the state, district, school, and classroom levels. Through surveys at each level and selected extant data and documents, the study will provide information on activities in four core areas: state content standards, assessments, school accountability, and teacher and principal evaluation.


The study will reflect changes in Title I and Title II provisions since the last National Assessment of the Title I program concluded in 2006 (including ESEA Flexibility provided to states with approved requests). In addition, this study will supplement findings from ED’s annual survey of a nationally representative sample of districts on their uses of Title II, Part A funds. This study will provide more detail on district-implemented workforce development policies and practices for educators as well as teachers’ and principals’ access to and receipt of professional development.


State content standards


State content standards have been a foundational element of ESEA since the 1994 reauthorization of the Act, which required states to establish statewide standards in reading and mathematics for the grade 3-5, 6-8, and high school grade ranges and to implement statewide accountability systems for evaluating school-level performance. The No Child Left Behind Act of 2001 (NCLB), the subsequent reauthorization of ESEA, extended and strengthened these concepts, incorporating science alongside reading and mathematics and establishing more prescriptive rules for state accountability systems. More recently, states approved for ESEA Flexibility have agreed to adopt especially rigorous standards; more than 40 states have adopted the Common Core State Standards (CCSS).


Assessments


The 1994 reauthorization of ESEA required states to administer annual academic assessments aligned to content standards in reading and mathematics in at least one elementary grade, one middle grade, and one high-school grade. In 2001, NCLB extended this requirement from three grades to seven (grades 3-8 and at least once in high school) and added a requirement for science assessments in each of the 3-5, 6-8, and high school grade ranges.


These assessments are meant to provide information on whether students have acquired the knowledge and skills identified by the standards and, if not, the areas where students fall short and may require additional instruction and assistance. Annual assessment data are also used to hold schools and districts, and increasingly teachers, accountable for student performance. The content and rigor of these assessments vary across states.


School accountability


NCLB standardized state accountability policies by requiring states to establish “adequate yearly progress” (AYP) goals and objectives for student proficiency on the mandated assessments, with targets rising over time so that by 2014, all students would be proficient. Schools and school districts are held accountable for achieving rates of student proficiency in reading and mathematics that meet the annual objectives, as well as for the proficiency rates of specific subgroups of students. Schools that failed to meet AYP for either all students or a subgroup of students over successive years were subject to an increasingly aggressive set of interventions spelled out in the law, beginning with parental choice and culminating with school closing or state takeover.


The federal government has adjusted accountability requirements since the last National Assessment. The Growth Model Pilot Project, initiated in 2005, permitted nine states to incorporate proficiency growth (rather than only proficiency levels) into their accountability measures during the 2007-08 pilot; when the approach was written into regulations in 2008, 15 states participated. The ESEA Flexibility regulations introduced in 2011 allow states to develop more flexible systems of school accountability that incorporate measures of the achievement growth of individual students, broaden achievement measures beyond mathematics and reading, and include other student outcomes such as graduation rates. In addition, recent policy changes have led to a more intensive focus on the lowest-performing schools and differentiated support for schools depending on their performance levels.


Teacher and principal evaluation


Improving and supporting the effectiveness of teachers and school leaders is a key part of any strategy for improving student achievement and college and career readiness. Research has shown that teachers (e.g., Rowan, Correnti, & Miller, 2002; Rockoff, 2004; Rivkin, Hanushek, & Kain, 2005) and school administrators (e.g., Witziers, Bosker, & Kruger, 2003) have measurable effects on student achievement.


Title I and Title II incorporate a number of requirements and incentives for states and districts to implement policies that improve certification, support educators during all phases of their careers, hold them accountable for contributing to student achievement, and ensure their equitable distribution—all based on the premise that improvement in student achievement requires improving educator quality. However, there has been substantial development in the national discourse about teacher quality since the passage of NCLB and the initiation of the last National Assessment of Title I.


Most importantly, thinking about educator quality has shifted away from educator inputs (e.g., degrees and certifications) toward educator performance, defined in terms of practice and especially in terms of contribution to growth in student achievement. This shift has been facilitated by improvements in the assessment of educator practice (Porter, Youngs, & Odden, 2001; Milanowski, Heneman, & Kimball, 2011) and in “value-added” methods to measure school and teacher contributions to student achievement (Meyer, 1997; Harris, 2009; Glazerman et al., 2010).


A.1. Explanation of Circumstances That Make Collection of Data Necessary


The purpose of this new data collection is to provide information on the progress being made on the core policies promoted by Title I and Title II, and the recent granting of ESEA Flexibility requests to states. Historically, Congress has mandated a national study of the Title I program. Title I, Part E, Section 1501 of the ESEA, as amended, mandated the most recent National Assessment of the implementation and impact of Title I, which concluded in 2006 (see http://www2.ed.gov/policy/elsec/leg/esea02/pg12.html). This Title I study is planned in anticipation of the next reauthorization of ESEA. The Title II study is authorized under Part F, Section 9601 of ESEA, which permits program funds to be used to evaluate activities authorized under the act (see http://www2.ed.gov/policy/elsec/leg/esea02/pg113.html). The timing of the study’s data collection is critical to provide information prior to the reauthorization of ESEA, and provide policy makers with information on the implementation of ESEA Flexibility requests.



Although there are research studies that cover similar topics of recent federal education policy, this proposed Title I/II study is set apart by the breadth of research questions and the depth of responses from all SEAs and three levels (nationally representative samples of districts, principals, and core academic and special education teachers). The research questions for the Title I/II study are as follows:


  1. What content standards and high school graduation requirements are states adopting, and what materials and resources do states, districts, and schools provide to help teachers implement the state content standards?


  2. What assessments do states and districts use (in terms of assessment format and coverage of grade levels and content areas), and what materials and resources do states, districts, and schools provide to support the implementation of assessments and use of assessment data?


  3. How has student achievement changed over time?


  4. What elements are included in states’ accountability systems? How do states and districts identify and reward their highest-performing schools, identify and support their lowest-performing schools, and offer differentiated support for schools that are neither highest-performing nor lowest-performing?


  5. How do states and districts evaluate teacher and principal effectiveness and assess equitable distribution of teachers and principals, and what supports do states, districts, and schools provide to improve teacher and principal effectiveness?


A.2. How the Information Will Be Collected, by Whom, and For What Purpose


In order to address the research questions described above, this study will rely on a new set of surveys and information collected from existing sources, for which there are no additional respondents or burden. See Table A - 1 for the linkages between the research questions and the sources of information to answer the questions.


Table A - 1. Research questions and data sources

Research Question

Data Source(s)

1.  What content standards and high school graduation requirements are states adopting, and what materials and resources do states, districts, and schools provide to help teachers implement the state content standards?

State surveys

District surveys

Principal surveys

Teacher surveys

2.  What assessments do states and districts use (in terms of assessment format and coverage of grade levels and content areas), and what materials and resources do states, districts, and schools provide to support the implementation of assessments and use of assessment data?

State surveys

District surveys

Principal surveys

Teacher surveys

3.  How has student achievement changed over time?

NAEP (National Assessment of Educational Progress) data

EDFacts data

4.  What elements are included in states’ accountability systems? How do states and districts identify and reward their highest-performing schools, identify and support their lowest-performing schools, and offer differentiated support for schools that are neither highest-performing nor lowest-performing?

State surveys

District surveys

Principal surveys

Teacher surveys

State documents

5. How do states and districts evaluate teacher and principal effectiveness and assess equitable distribution of teachers and principals, and what supports do states, districts, and schools provide to improve teacher and principal effectiveness?

State surveys

District surveys

Principal surveys

Teacher surveys

State documents


A.2.1. New Data Collections


There currently is no uniform source of current, detailed information on the study’s core areas. We will administer surveys to obtain this information. Each survey is described below.


State survey. The state survey, administered on paper or electronic PDF form, will focus on state policies, and in particular, the adoption of state content standards, assessments, accountability and low-performing schools, teacher and principal evaluation, and ESEA Flexibility provisions. The survey will be sent to the chief school officer in each of the 50 states and the District of Columbia beginning in February 2014, with the expectation that different sections of the survey may be filled out by different staff in the SEA, as determined by their particular expertise. The state survey contains a different set of questions in Section 3 (school accountability) for states that have an approved ESEA Flexibility request and states without ESEA Flexibility.1 The state survey with the two versions for Section 3 is in Appendix A.


District survey. This web-based survey will focus on the implementation of state policies, adoption of district policies such as supplemental district content standards and assessments, and supports provided to schools. The survey will be administered to superintendents or their designees from a nationally representative sample of 570 districts beginning in February 2014. The district survey is in Appendix B.


Principal survey. This web-based survey will focus on the implementation of state and district policies within schools, the usefulness of supports received from the state and district, and supports provided to teachers. These surveys will also address how principals’ performance is evaluated, and how principals evaluate teachers’ performance. The survey will be administered to principals from a nationally representative sample of 1,300 schools (nested within the sampled districts) beginning in February 2014. The principal survey is in Appendix C.


Teacher survey. This web-based survey will focus on the implementation of state and district policies in classrooms, and the usefulness of supports received from the state, district, and school. These surveys will also address how teachers’ performance is evaluated. The survey will be administered to a nationally representative sample of 9,100 core academic and special education teachers from the sampled schools beginning in February 2014.2 In limiting the sample to these types of teachers, the sample will include teachers most likely to be affected by the various initiatives and programs promoted by Title I and the ESEA Flexibility waivers. The teacher survey is in Appendix D.

Teacher roster. The principals of sampled schools will be asked to provide comprehensive lists of core academic and special education teachers in their schools, which will be used as the sampling frame for the teacher sample. The principal or his/her designee will be asked to verify their school’s current grade spans and list all current teachers whose subject most often taught is reading/English/language arts, mathematics, science, social studies, general elementary, or special education. For each of these teachers, the school will identify the subject most often taught, the main grade for the subject most often taught, and whether the teacher teaches any class whose students are tested for accountability requirements under ESEA. The collection of teacher roster information will begin in February 2014. A mockup of the web-based teacher roster is in Appendix E.


A.2.2. Extant Data Sources


Extant data. The data for the student achievement analyses will come from the National Assessment of Educational Progress (NAEP) and EDFacts. NAEP is a nationally representative assessment of students in mathematics, reading, and other subjects that provides a common measure of achievement across states. EDFacts data are consolidated from annual state reports and include school-level performance on state assessments and adequate yearly progress (AYP) status for all schools in the state. The outcomes of interest for this study are state-level standardized test scores (using NAEP and state assessments). The analyses will focus on state-level trends and cross-state trends in student proficiency levels over time.


ED also collects data from states on elements of state teacher preparation programs and state requirements for initial teacher certification or licensure, including information about teacher preparation and certification programs and policies, as well as the number of people entering and completing programs. The current data provide information on changes in state policies, such as increases in the rigor of teacher preparation or reductions in the barriers to alternative certification routes, from 2008 through 2010. (Information through 2012 is expected to be available in 2013.)


State documents. State documents in the public domain will be collected and reviewed, particularly to learn the details of state accountability policies and teacher evaluation requirements. For example, documents may include state website information on policies. The study team will complete an extant data form (see Appendix A) for each state, then ask SEA staff to verify that the information in the form is correct.


A.3. Use of Improved Information Technology to Reduce Burden


The data collection plan is designed to obtain information efficiently while minimizing respondent burden. When feasible, we will gather information from existing data sources. The state survey will be divided into modules, which will allow the appropriate staff with expertise in each area to respond separately. This approach will reduce burden on respondents because (a) each individual will have fewer questions to answer and (b) respondents will be asked questions on topics in which they are well versed, so answers should be readily available. Each state will be given access to a secure SharePoint site to facilitate access for multiple respondents to the survey and to allow respondents to forward any documents they feel would help us better understand their state policies.


Web-based surveys will be the primary mode of data collection for the district, principal, and teacher surveys. We have found web-based surveys to be a preferred method for survey completion among many respondents. Moreover, web-based administration decreases the costs of postage, coding, keying, and cleaning of the survey data. Burden will be reduced through the use of skip patterns and information prefilled from responses to previous items when appropriate. The web-based survey will allow respondents to complete the survey at a location and time of their choice, and built-in edits will reduce response errors. At the district level, the web-based surveys will facilitate completion of the surveys by multiple respondents, so that the most appropriate individual can access and provide the data in his or her area of expertise.


For respondents who choose not to use the web-based survey, we will offer the option of completing an electronic version of the survey or a paper-and-pencil instrument. A phone survey option will be offered to respondents as part of the nonresponse follow-up effort.


Principals can also use the web to enter data for the teacher rosters. Online edits in the web roster will ensure that all data items required for teacher sampling are entered. We will also make available at the school’s request an Excel template should the school prefer to enter teacher information in Excel. Roster output from the school’s database, either electronically or on hardcopy, also will be acceptable. We will work with the principal or his/her designee to be sure all fields required for the teacher sampling are captured.


A.4. Efforts to Identify and Avoid Duplication


To avoid duplication, we will use extant state data when available rather than ask extensive questions of the state respondents.


A.5. Efforts to Minimize Burden on Small Business or Other Entities


No small businesses will be involved as respondents. Every effort will be made to minimize the burden on respondents.


A.6. Consequences of Less-Frequent Data Collection


Title I has not been studied with a nationally representative sample of states, school districts, schools, and teachers since the National Assessment of the Implementation of Title I was completed in 2006. Since that time, a number of policy initiatives have been implemented. Similarly, Title II expanded the number of policy initiatives applicable to most school districts and schools and has not been studied using a nationally representative sample. These data will provide policy makers with detailed information on how these initiatives are playing out in states, school districts, schools and classrooms. Collecting the data less frequently would leave policymakers and the public poorly informed about the progress of current federal and state education reform initiatives.


A.7. Special Circumstances Requiring Collection of Information in a Manner Inconsistent with Section 1320.5(d)(2) of the Code of Federal Regulations


There are no special circumstances involved with this data collection.


A.8. Federal Register Comments and Persons Consulted Outside the Agency


The 60-day Federal Register notice was published on July 11, 2013 (Vol. 78, page 41785), and the 30-day Federal Register notice was published on October 1, 2013 (Vol. 78, pages 60266-60267). There were no public comments on the package.


A Technical Working Group (TWG) has been assembled for this study. The current TWG members are listed below. Additional consultation may be sought during later phases of the study (e.g., data analysis).


  • Christopher Cross, Chairman, Cross & Joftus

  • Daria Hall, Director of K-12 Policy Development, The Education Trust

  • David Heistad, Executive Director of Research, Evaluation, and Assessment, Minneapolis Public Schools

  • Eugenia Kemble, Executive Director, Albert Shanker Institute

  • Sharon Lewis, Director of Research, Council of the Great City Schools

  • Thomas Payzant, Professor of Practice, Harvard Graduate School of Education

  • Sean Reardon, Associate Professor, Administration & Policy Studies, Stanford University

  • Judy Wurtzel, Consultant, Education Policy

  • Grover Russ Whitehurst, Chair and Director of the Brown Center on Education Policy, Brookings Institution


A.9. Payments to Respondents


We propose to provide teachers with a $20 incentive (e.g., gift card, check, or money order) and principals with a $25 incentive for completing the surveys. Some of the schools in the study sample will not receive Title I or Title II funds directly. We recognize that teachers and principals have many demands on their time, and we expect that the incentives will reduce the non-response follow-up (and associated costs) necessary to achieve the desired response rate of at least 85%. The non-response follow-up costs are non-trivial since the study sample includes teachers and principals in 1,300 schools from all 50 states and DC.  The recent Integrated Evaluation of the American Recovery and Reinvestment Act (ARRA) NCEE study did not include principal incentives, and the response rate for the first administration of the principal survey was lower than 80%. 


We expect the incentive to: increase teacher and principal interest and encourage teachers and principals to prioritize participation over other activities that do not provide a monetary reward; improve the quality of the data; and partially compensate teachers and principals for their time and effort, in acknowledgment of the time required to complete the surveys (a 30-minute survey for teachers, and a 30-minute survey as well as a teacher roster for principals). The proposed $20 and $25 incentives are within the incentive guidelines outlined in the March 22, 2005 memo, “Guidelines for Incentives for NCEE Evaluation Studies,” prepared for OMB.


Recent research indicates positive effects of cash incentives on response rates. For example, Cantor, O’Hare, and O’Connor (2007) found that promised incentives of $15 to $35 increased response rates. In another study, researchers found that providing incentives to early non-respondents boosted response rates (Zagorsky & Rhoton, 2008). Finally, in a recent review of incentive use in longitudinal surveys, Laurie and Lynn (2009) concluded that respondent incentives are an important element in minimizing attrition in longitudinal surveys.


A.10. Assurance of Confidentiality


Other than the names and contact information for survey respondents and for the teachers who will make up the teacher sampling frame (information typically already available in the public domain, e.g., on state, district, and school websites), no data collected through the surveys will contain personally identifiable information. No names or contact information will be released.


Responses will be used for research or statistical purposes. States and districts receiving Title I and Title II funds have an obligation to participate in Department evaluations (Education Department General Administrative Regulations (EDGAR) (34 C.F.R. § 76.591)). Participation is voluntary for principals and teachers.


The following language will be included on the cover sheet of district, school, and teacher surveys, and the teacher roster under the Notice of Confidentiality: Information collected for this study comes under the confidentiality and data protection requirements of the Institute of Education Sciences (The Education Sciences Reform Act of 2002, Title I, Part E, Section 183). Responses to this data collection will be used only for statistical purposes. The reports prepared for the study will summarize findings across the sample and will not associate responses with a specific district or individual. We will not provide information that identifies you or your district to anyone outside the study team, except as required by law.


On the state survey, we will modify the Notice of Confidentiality statement to replace references to “district” with “state.” However, while individual states may be identified in reporting, individual respondents will not be identified.


The Education Sciences Reform Act of 2002, Title I, Part E, Section 183 of this Act requires, “All collection, maintenance, use, and wide dissemination of data by the Institute” to “conform with the requirements of section 552 of title 5, United States Code, the confidentiality standards of subsection (c) of this section, and sections 444 and 445 of the General Education Provision Act (20 U.S.C. 1232g, 1232h).” Respondents will be assured that confidentiality will be maintained, except as required by law.


Specific steps to guarantee confidentiality include the following:

  • Identifying information about respondents (e.g., respondent name, address, and telephone number) will not be entered into the analysis data file, but will be kept separate from other data and will be password protected. A unique identification number for each respondent will be used for building raw data and analysis files.

  • A fax machine used to send or receive documents that contain confidential information will be kept in a locked field room, accessible only to study team members.

  • Confidential materials will be printed on a printer located in a limited access field room. When printing documents that contain confidential information from shared network printers, authorized study staff will be present and retrieve the documents as soon as printing is complete.

  • In public reports, findings will be presented in aggregate by type of respondent or for subgroups of interest. No reports will identify individual respondents or local agencies.

  • Access to the sample files will be limited to authorized study staff only; no others will be authorized such access.

  • All members of the study team will be briefed regarding confidentiality of the data.

  • Most data will be entered via the web systems. However, a control system will be established to monitor the status and whereabouts of any hard copy data collection instruments during data entry.

  • All data will be stored in secure areas accessible only to authorized staff members. Computer-generated output containing identifiable information will be maintained under the same conditions.

  • Hard copies containing confidential information that is no longer needed will be shredded.



A.11. Questions of a Sensitive Nature


There are no questions of a sensitive nature asked in any of the surveys.


A.12. Estimates of Respondent Burden


Beginning in February 2014, surveys will be administered to respondents in:


  • The 50 states and the District of Columbia,

  • 570 sampled school districts,

  • 1,300 sampled schools (within the sampled school districts), and

  • 9,100 sampled core academic and special education teachers (within the sampled districts and schools).


School principals in 1,300 sampled schools will be asked to complete a teacher roster.


In all, responses will be required from 12,321 respondents (51 state officials for the state survey; 570 district officials for the district survey; 1,300 principals; 1,300 school officials for the teacher roster; and 9,100 teachers). Although we expect that at the state and district levels, there may be more than one respondent completing the survey, we are estimating the burden to complete the total survey as one respondent per state/district times the number of minutes for the total survey.


Based on the survey piloting, we estimate the following average completion times: (1) 180 minutes for state respondents for the survey and follow-up (in total, summed across multiple respondents in each state working on separate survey sections); (2) 60 minutes for district respondents; (3) 30 minutes for principals; (4) 30 minutes for school respondents completing the teacher roster; and (5) 30 minutes for teachers completing the teacher survey. The total burden for the 2013-2014 data collection is therefore 394,380 minutes, or 6,573 hours (see Table A - 2 below).



Table A - 2. Estimates of respondent burden

Informant/Data Collection Activity | # of Respondents | Minutes per completion | Number of administrations | Burden in minutes | Total Burden Hours | Total Costs
State: SEA survey and follow-up | 51 | 180 | 1 | 9,180 | 153 | $6,646.32
District: District survey | 570 | 60 | 1 | 34,200 | 570 | $24,760.80
School: Principal survey | 1,300 | 30 | 1 | 39,000 | 650 | $28,236.00
School: Teacher roster | 1,300 | 30 | 1 | 39,000 | 650 | $28,236.00
School: Teacher survey | 9,100 | 30 | 1 | 273,000 | 4,550 | $122,395.00
Total | 12,321 | 330 | 1 | 394,380 | 6,573 | $210,274.12

NOTE: Assumes an hourly rate of $43.44 per hour for educational administrators and an hourly rate of $26.90 for teachers (derived from the Bureau of Labor Statistics’ Occupational Employment and Wages for educational administrators and teachers, May 2011).
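The totals in Table A - 2 follow directly from the per-respondent estimates and the hourly rates in the note. As a quick arithmetic check (an illustrative sketch, not part of the submission), the figures can be reproduced:

```python
# Sketch: reproduce the burden and cost totals in Table A-2.
# Hourly rates from the table NOTE: $43.44 (educational administrators),
# $26.90 (teachers).
ADMIN_RATE, TEACHER_RATE = 43.44, 26.90

rows = [
    # (activity, respondents, minutes per completion, hourly rate)
    ("SEA survey and follow-up", 51, 180, ADMIN_RATE),
    ("District survey", 570, 60, ADMIN_RATE),
    ("Principal survey", 1_300, 30, ADMIN_RATE),
    ("Teacher roster", 1_300, 30, ADMIN_RATE),
    ("Teacher survey", 9_100, 30, TEACHER_RATE),
]

total_minutes = sum(n * mins for _, n, mins, _ in rows)
total_hours = total_minutes / 60
total_cost = sum((n * mins / 60) * rate for _, n, mins, rate in rows)

print(total_minutes)         # 394380
print(total_hours)           # 6573.0
print(round(total_cost, 2))  # 210274.12
```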


A.13. Estimates of the Cost Burden to Respondents


There are no annualized capital/startup costs or ongoing operation and maintenance costs associated with collecting the information.


A.14. Estimates of Annualized Government Costs


The base contract amount for the design of this study, the conduct of two rounds of surveys and analysis, and reporting is $9,341,945. The annualized cost over five years is $1,868,389.


A.15. Changes in Hour Burden


This is a new collection. The program change results in an increase of 6,573 burden hours and 12,321 responses.


A.16. Time Schedule, Publication, and Analysis Plan


The report will follow the principles of the Federal Plain Language Action and Information Network and adhere to the requirements of the NCES Statistical Standards (2002), the IES Style Guide (2005), and other IES guidance and requirements for public reporting.

The focus of the report will be on the progress made on the core policies promoted by Titles I and II. The analyses in the report will not try to separate the influence of Titles I and II from that of other Federal and state reform initiatives that have arisen since the previous National Assessment of Title I.


This first study report will answer a clearly established set of questions using information from the state, district, school, and teacher surveys and extant sources of data. The report will start with an outline of highlights. The body of the report will contain five chapters, each addressing one of the major themes of the study and the corresponding research question. Each chapter will have a brief context section summarizing the provisions of Titles I and II that embody policy-relevant issues in each area, and how these policies have evolved since the No Child Left Behind Act of 2001 (NCLB). The report will include information at each level—state, district, school, and teacher.


We anticipate beginning to field the SEA, district, principal, teacher roster, and teacher surveys in February 2014. We expect that the baseline report based on these data will be available during summer 2015.


The report described above will be supported by analyses that will have three main objectives: (1) describing the extent to which policy and program initiatives related to the objectives of Title I and Title II are being implemented at the state, district, and school levels, including how implementation varies by selected state, district, school, and teacher characteristics; (2) describing patterns of cross-level implementation; and (3) describing trends in student achievement since the last Title I report. Each set of planned analyses is described below.


A.16.1. State, District, and School Level Implementation


The primary goal of the study is to describe the implementation of policy and program initiatives related to the objectives of Title I and Title II. To achieve this goal, extensive descriptive analyses will be conducted using survey data and data derived from the review of state documents. We anticipate that relatively straightforward descriptive statistics (e.g., means, frequencies, percentages) and simple statistical tests (e.g., tests for differences of proportions) will typically be used to answer the research questions detailed in section A.1 above.


While simple descriptive statistics such as means and percentages will provide answers to many of our questions, cross-tabulations will be important to answer questions about variation across state, district, school, and teacher characteristics. The primary characteristics of interest for the cross-tabulations are:


  • Whether a state received an approved ESEA Flexibility request from the U.S. Department of Education.3 This is of interest because ESEA Flexibility is expected to influence the design of state accountability systems and approaches to educator quality measurement, and policy makers will be interested in what states do with ESEA Flexibility.


  • District poverty level, size, urbanicity, and concentration of English learners. Poverty is included because Title I is specifically intended to ameliorate the effects of poverty on local funding constraints and educational opportunity. Urbanicity is included because of the relationships between educational opportunity and rural isolation and the concentration of poverty in urban schools. Size is included because it may be related to district capacity to develop and implement programs. Concentration of English learners is included because of the increased emphasis on ensuring that this group of students also meets state content standards and recognition that modifications in testing as well as instruction will be needed to facilitate progress of these students.


  • School Title I status, grade span (high school, middle, and elementary grades), poverty level, and cross classifications of Title I status and poverty level. Grade span is included because the implementation of state content standards and aligned assessments as well as responses to accountability systems likely differs by grade level. Title I status is included because the focus of Title I funds and requirements is to influence state and district policy and school effectiveness. We plan to cross-classify Title I status with school poverty because there may be substantial differences between what happens in high-poverty Title I schools in comparison with Title I schools with low/medium-poverty levels, which may be more similar to schools which do not receive Title I funding.


  • Teacher grade span taught and whether the teacher teaches a grade or subject in which state testing is required by ESEA. These characteristics are included because teachers’ responses to state content standards, assessments, accountability systems, and teacher evaluations based on student growth may vary by grade span, and by whether they teach tested subjects or grades.


Because of the use of a statistical sample, survey data presented for districts, schools, and core academic and special education teachers will be weighted to national totals (tabulations will provide standard errors for the reported estimated statistics). In addition, the descriptive tables will indicate where differences between subgroups are statistically significant. We will use Chi-Square tests to test for significant differences among distributions and t-tests for differences in means. Tabulations will be included in the reports where appropriate.
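The subgroup comparisons above can be illustrated with a minimal sketch of a Pearson chi-square test on a hypothetical cross-tabulation. The counts below are invented for illustration only; the actual analyses will apply survey weights and design-based standard errors rather than this unweighted hand computation.

```python
# Sketch: Pearson chi-square test on a hypothetical (unweighted) 2x2
# cross-tabulation -- e.g., whether districts report implementing a
# practice (rows) by district poverty level (columns). The real
# analyses will use survey weights and design-based standard errors.
observed = [[120, 80],   # districts reporting implementation
            [60, 90]]    # districts not reporting implementation

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

# Pearson chi-square statistic (no continuity correction):
# sum over cells of (observed - expected)^2 / expected.
chi2 = 0.0
for i, row in enumerate(observed):
    for j, o in enumerate(row):
        e = row_totals[i] * col_totals[j] / n
        chi2 += (o - e) ** 2 / e

dof = (len(observed) - 1) * (len(observed[0]) - 1)
critical_value = 3.841  # chi-square critical value for dof=1, alpha=0.05
print(f"chi2={chi2:.2f}, dof={dof}, significant={chi2 > critical_value}")
```

Differences in subgroup means would be tested analogously with t-tests, again using the survey weights.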


A.16.2. Cross-Level Analysis


The planned cross-level analyses involve examining responses of districts, schools, or teachers by categories of responses from units at the next level. Examining data across levels has two purposes. The first is to examine the presence of key requirements in state and district policies that influence educators. For example:


  • Whether in states that adopted new or revised content standards, core academic and special education teachers received professional development on the state content standards and supporting materials to facilitate teaching to the state content standards; and


  • Whether in districts that use student growth to evaluate teachers, core academic and special education teachers experienced professional development opportunities that helped them improve student growth.


The second goal of cross-level analysis is to examine the relationship between policies and programs originating at the state or district level and implementation “on the ground” in schools and classrooms. Though the planned analyses cannot support causal conclusions about the effects of state and district actions on school and classroom implementation, they can provide evidence on the extent to which school and classroom practices are consistent with higher-level policies.


Conceptually, these analyses posit that certain state and district policy choices influence what happens at the school and classroom level. Examples of potential cross-level analyses include:


  • Whether core academic and special education teachers in states that provide more extensive support for teachers to use state content standards are more likely to report using the state content standards in their classrooms than teachers in other states;


  • Whether districts in states that have been granted ESEA Flexibility are less likely to require after school academic services to students in low-performing schools than districts in other states; and


  • Whether principals in districts that report using student growth to evaluate teachers consider growth results when planning professional development for teachers.


These cross-level analyses will be based on cross-tabulations. For example, a straightforward set of descriptive tables will show the relationships between survey responses at one level and the mean responses at the lower level or the percentages of lower-level units (e.g., principals or teachers) responding in a certain way. Breakdowns by the previously discussed state, district, or school characteristics will further sharpen the interpretation of these relationships.


A.16.3. Student Achievement Trends


We will conduct a descriptive analysis of cross-state trends in student proficiency levels, using state-level NAEP results and statewide results on state assessments. This analysis will address the research questions of whether students are making progress on meeting state academic achievement standards within states and how this progress varies across states. The analysis will be entirely descriptive and will not attempt to derive any causal inferences. The analysis will utilize two data sources:


  • State-by-state NAEP proficiency rates since 2007, math, reading, and science, grades 4, 8, and 12; and


  • State-by-state proficiency rates on each state’s own assessments since 2007, math and reading, grades 4, 8, and 9-12 (from EDFacts and state web sites).


The results of this analysis will consist of a comparison of changes in student proficiency levels across states. For each state and for the nationwide average we will present the following information:


  • Percent proficient on NAEP, in 2007 and most recent year, and the difference between the two years;


  • Percent proficient on state assessments, in 2007 and the most recent year, and the difference between the two years; and


  • The difference between the percent proficient on NAEP and the percent proficient on the state assessment, in 2007 and the most recent year, and the difference between the two years.


For each of these items we will report results separately for math and reading, and for grades 4, 8, and 12, as well as aggregated across both subjects and grades. NAEP assessments are given in grade 12, but the tested high-school grade(s) on state assessments vary across states, and we will report results for the tested high-school grade(s) for each state. Additionally, we will report the percentage of states that are improving, declining, or remaining the same in their proficiency rates since 2007, for both NAEP and the state assessments.
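The per-state comparison described above reduces to simple differences. The following sketch uses hypothetical proficiency rates for two illustrative states; the actual analysis will use state-by-state NAEP results and EDFacts/state-reported rates.

```python
# Sketch of the planned trend comparison, with hypothetical proficiency
# rates for two illustrative states (percent proficient). The real
# analysis will draw on NAEP and EDFacts/state-reported data.

# (state, NAEP 2007, NAEP most recent, state test 2007, state test most recent)
data = [
    ("State A", 32.0, 36.0, 68.0, 74.0),
    ("State B", 28.0, 27.0, 55.0, 61.0),
]

for state, naep_07, naep_now, st_07, st_now in data:
    naep_change = naep_now - naep_07   # change in percent proficient on NAEP
    state_change = st_now - st_07      # change on the state assessment
    gap_07 = st_07 - naep_07           # state-test vs. NAEP gap in 2007
    gap_now = st_now - naep_now        # gap in the most recent year
    gap_change = gap_now - gap_07      # how the gap has moved since 2007
    print(f"{state}: NAEP {naep_change:+.1f}, state {state_change:+.1f}, "
          f"gap change {gap_change:+.1f}")
```

A state would then be classified as improving, declining, or remaining the same on each measure according to the sign of these changes.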


A.17. Display of Expiration Date for OMB Approval


The Institute of Education Sciences is not requesting a waiver for the display of the OMB approval number and expiration date. The surveys (Appendixes A through D), teacher roster (Appendix E), and notification letters (Appendix F) will display the expiration date for OMB approval.


A.18. Exceptions to Certification Statement


This submission does not require an exception to the Certificate for Paperwork Reduction Act (5 CFR 1320.9).



References


Cantor, D., O’Hare, B., & O’Connor, K. (2007). The use of monetary incentives to reduce non-response in random digit dial telephone surveys. In J. M. Lepkowski, C. Tucker, J. M. Brick, E. De Leeuw, L. Japec, P. J. Lavrakas, M. W. Link, & R. L. Sangster (Eds.), Advances in telephone survey methodology (pp. 471-498). New York: John Wiley & Sons.

Glazerman, S., Loeb, S., Goldhaber, D., Staiger, D., Raudenbush, S., & Whitehurst, G. (2010). Evaluating teachers: The important role of value-added. Washington, DC: Brown Center on Education Policy, The Brookings Institution.

Harris, D. N. (2009). Would accountability based on teacher value added be smart policy? An examination of the statistical properties and policy alternatives. Education Finance and Policy, 4(4), 319-350.

Laurie, H., & Lynn, P. (2009). The use of respondent incentives on longitudinal surveys. In P. Lynn (Ed.), Methodology of longitudinal surveys (pp. 205-231). Chichester, UK: John Wiley & Sons.

Meyer, R. (1996). Value-added indicators of school performance. In E. A. Hanushek & D. W. Jorgenson. (Eds.), Improving the performance of America’s schools: The role of incentives (pp. 197-223). Washington, DC: National Academy Press.

Milanowski, A. T., Heneman, H., & Kimball, S. M. (2011). Teaching assessment for teacher human capital management: Learning from the current state of the art (WCER Working Paper No. 2011-2). Madison, WI: Wisconsin Center For Education Research.

Porter, A. C., Youngs, P., & Odden, A. (2001). Advances in teacher assessments and their uses. In V. Richardson (Ed.), Handbook of research on teaching (4th edition) (pp. 259-297). Washington, DC: American Educational Research Association.

Rivkin, S. G., Hanushek, E. A., & Kain, J. F. (2005). Teachers, schools, and academic achievement. Econometrica, 73(2), 417-458.

Rockoff, J. (2004). The impacts of individual teachers on student achievement: Evidence from panel data. American Economic Review, Papers and Proceedings, 94(2), 247-252.

Rowan, B., Correnti, R., & Miller, R.J. (2002). What large-scale, survey research tells us about teacher effects on student achievement: Insights from the "Prospects" study of elementary schools. Teachers College Record, 104(8), 1525-67.

U.S. Department of Education. (2012a). Improving basic programs operated by local educational agencies (Title I, Part A). Retrieved from: http://www2.ed.gov/programs/titleiparta/index.html

U.S. Department of Education. (2012b). Findings from the 2011-2012 Survey on the Use of Funds Under Title II, Part A. Retrieved from: http://www2.ed.gov/programs/teacherqual/resources.html

Witziers, B., Bosker, R., & Kruger, M. (2003). Educational leadership and student achievement: The elusive search for an association. Educational Administration Quarterly, 39(3), 398-425.

Zagorsky, J. & Rhoton, P. (2008). The effects of promised monetary incentives on attrition in a long-term panel survey. Public Opinion Quarterly, 72(3), 502-513.

1 For the purposes of the baseline survey, we consider ESEA Flexibility states to be those states that have an approved ESEA Flexibility request and are expected to implement the Flexibility policies during the 2013-14 school year.

2 Core academic and special education teachers are those teachers whose subject most often taught is reading/English/language arts, mathematics, science, social studies, general elementary, or special education.

3 For the purposes of the baseline survey, we consider ESEA Flexibility states to be those states that have an approved ESEA Flexibility request and are expected to implement the Flexibility policies during the 2013-14 school year.

