
American Institutes for Research®




Evaluation of State and Local
Implementation of Title III Standards, Assessments, and Accountability Systems





OMB Clearance Request

For Data Collection Instruments





October 20, 2009




Prepared for:

United States Department of Education

Contract No. ED-04-CO-0025/0017





Prepared by:

American Institutes for Research

Windwalker Corporation

edCount, llc

Table of Contents





List of Appendices

Appendix A: Construct Matrix

Appendix B: Data Use and Confidentiality Agreement

Appendix C: State Title III Director Interview Materials

Appendix D: Title III Subgrantee Survey Materials

Appendix E: Case Study District Interview Materials

Appendix F: Case Study Focus Group Materials





List of Exhibits





Introduction

The Policy and Program Studies Service (PPSS), Office of the Under Secretary, U.S. Department of Education (ED) requests clearance for the data collection for the Evaluation of State and Local Implementation of Title III Standards, Assessments, and Accountability Systems. The purpose of the study is to provide an up-to-date, in-depth picture of implementation of No Child Left Behind’s (NCLB) Title III provisions across the nation as of 2009-10. To this end, the evaluation will employ multiple lenses through which to view the patterns and complexities of implementation at both the state and district levels. Clearance is requested for the study’s design, sampling strategy, data collection, and analytic approach. This submission also includes the clearance request for the data collection instruments.

This document contains three major sections with multiple subsections:

  • Evaluation of State and Local Implementation of Title III Standards, Assessments, and Accountability Systems

    • Overview

    • Conceptual framework

    • Evaluation questions

    • Sampling design

    • Data collection procedures

    • Analytic approach

  • Supporting Statement for Paperwork Reduction Act Submission

    • Justification (Part A)

    • Description of Statistical Methods (Part B)

  • Appendices containing the construct matrix, data use and confidentiality agreement, state interview protocol, subgrantee (school district) survey, and case studies protocols.



Evaluation of State and Local Implementation of Title III Standards, Assessments, and Accountability Systems

Overview

The Evaluation of State and Local Implementation of Title III Standards, Assessments, and Accountability Systems will provide a comprehensive, in‑depth picture of implementation of No Child Left Behind’s (NCLB) Title III provisions across the nation as of 2009–10. While prior research on Title III has focused almost exclusively on initial state-level implementation of the law’s provisions, this study will extend data collection and analysis to include district and school implementation as well.

To collect data of adequate breadth, depth, and accuracy to fully reflect the implementation of Title III provisions, this study will employ a mixed-methods approach, collecting quantitative and qualitative data on implementation, as well as extant documents and student achievement data. Our technical approach begins with a state-by-state review of relevant extant documents, including state standards documents, Title III and Title I accountability workbooks, approved plans, Consolidated State Performance Results, Title III Biennial Reports, and other publicly available state documentation. An especially important aspect of this document review will be an in-depth analysis of each state’s English language proficiency (ELP) standards, noting patterns across states with respect to the content, structure, and specificity of the standards and accompanying documentation.

A second component of state-level data collections will be a set of interviews with state Title III directors from all 50 states and the District of Columbia. The purpose of the state-level interviews will be to generate national data on the ways in which states are implementing ELP standards, assessing limited English proficient (LEP)1 students, setting Annual Measurable Achievement Objectives (AMAOs), holding districts accountable for AMAOs, monitoring Title III implementation, and providing support to Title III districts. Interviews will pay particular attention to the ways in which ELP standards and assessments have been linked to content area standards and assessments, and the assistance that states provide to districts to make these linkages instructionally. On most topics, these interviews will provide state-level longitudinal data to build upon the two waves of Title III data collection conducted for the Study of State Implementation of NCLB (SSI-NCLB) (LeFloch, et al., 2007).

As part of the data collection process, the evaluation team will also administer a Web-based survey of a nationally representative sample of 1,300 sub-grantees (districts) that receive Title III funds. The purpose of the survey is to examine how local districts use state ELP standards (e.g., for curriculum development); assess LEP students for identification, placement, and academic progress; monitor student participation and progress in language instruction programs; respond to Title III accountability designations; ensure teacher quality and continued professional learning regarding LEP student instruction in language development and content areas; and assist schools in differentiating instruction for diverse LEP learners.

A fourth component of the evaluation will be a series of district case studies nested in five states. The purpose of these qualitative data collections is to enrich our understanding of quantitative analyses of survey data and to provide critical insight into the ways in which the Title III provisions are interpreted and acted on within states, districts, and schools. To provide an understanding of the state policy context and its influence on local implementation, the district data collections will be complemented with additional telephone interviews of state officials in the states in which the districts are located.

Finally, we will collect and analyze student achievement data in two forms. From six states and two large districts we will collect longitudinally linked student-level achievement results on the state ELP, English language arts (ELA), and mathematics assessments in order to analyze the relationship between LEP students’ performance on state ELP assessments and their performance on state content area assessments. We will also collect cross-sectional trend data on the performance of LEP students on state assessments for all 50 states and the District of Columbia.

Conceptual framework

The technical design for this project embraces four interrelated objectives that will enable the study to deepen understanding of the extent to which Title III is achieving its underlying goals.

  • Objective 1: To describe the progress in implementation of Title III provisions, and variation in implementation across states.

  • Objective 2: To examine how localities are implementing their programs for LEP students and how these relate to state policies and contexts.

  • Objective 3: To determine how LEP students are faring in the development of their English language proficiency and mastery of academic content.

  • Objective 4: To maintain a focus, in all project data collection and analysis activities, on the diversity among LEP students—for example, in their concentrations, languages, ages, length of residence in the US—and the educational implications of this diversity.

The study design is also guided by a conceptual framework that reflects the relationships among levels of the educational system and key constructs expected to influence implementation (see Exhibit 1). The conceptual framework is based on a set of assumptions about the intended implementation of the law. At the federal level, Title III and other components of NCLB are intended to provide standards-based parameters, requirements, and inducements to support LEP students’ acquisition of English and subject matter content. States are expected to respond by establishing policies and practices to specify instructional goals (through standards), provide information on progress (through assessments and AMAOs) and motivate, support, and monitor local improvement (through accountability measures and technical assistance). Local districts, in turn, are expected to implement curriculum, instruction, and capacity building policies and practices that reflect the standards and result in greater school and classroom opportunity for LEP students to learn English and academic content. The intended result of this multi-level implementation process is higher proportions of LEP students achieving proficiency on ELP and state content assessments.

How implementation plays out in practice, of course, is influenced by a variety of contextual factors and by the specific policies and practices that the states and localities put in place. Each level of implementation introduces important sources of variation. A primary purpose of this study is to describe and better understand this variation at the state and district levels. Findings from earlier studies of NCLB implementation (e.g., LeFloch et al. 2007; Birman, et al., 2007; CEP, 2006), as well as the substantial theoretical and empirical literature on policy implementation more generally, suggest the parameters of and contributing influences on this variation within and across levels of the system. These include differences in the nature of the student populations served, in the political and fiscal environment in each state or locality, in organizational and human capacity to address LEP student needs, and in the specific policy decisions about the multiple and interacting components of the law itself. Understanding how Title III provisions filter through the various layers of the system and illuminating the factors that influence implementation should help to inform improvement efforts at all levels.

Exhibit 1 depicts the general implementation process across system levels and suggests many of the domains, topics, and constructs that will be measured in one or more components of this study.

Exhibit 1. Conceptual Framework

Evaluation Questions

This study addresses five sets of evaluation questions, whose development was guided by our conceptual framework; Exhibit 2 presents the questions and the data sources used to address each. Evaluation questions 1-4 focus on Objectives 1 and 2 listed in the Overview – that is, on state and local implementation of Title III provisions, including variation in implementation across states and localities. Evaluation Question 5 focuses on Objective 3 – that is, on achievement patterns for LEP students on state content and English language proficiency (ELP) assessments. Throughout all questions, as relevant and feasible, the study will attend to Objective 4 regarding the diversity of the LEP population and the relationship of this diversity with policies, practices, and student performance.

Exhibit 2. Title III Study Evaluation Questions and Data Sources

Evaluation Question

Data Sources

(EQ1) State standards for English language proficiency.


    1. How have states addressed the requirement to establish English language proficiency (ELP) standards?

    2. How do these ELP standards vary across states in terms of breadth, specificity, and topics covered?

State Title III Director Interview

Documentation

Standards Review

Case Studies

Subgrantee Survey

(EQ2) Assessment of limited English proficient students.


    1. How do states and districts assess LEP students for identification and placement, for Title I accountability, for Title III accountability, and for instructional improvement at the local level?

    2. How have states sought to ensure alignment of ELP standards and assessments?

    3. How do states include the four domains of reading, writing, speaking, and listening in their ELP assessments?

    4. How do states assess student proficiency in the additional domain of comprehension?

    5. Are states including all LEP students in state ELP assessments?

    6. What testing accommodations are available to LEP students on state content assessments? To what extent are these accommodations used?

    7. Do districts use additional tools or tests to monitor LEP students’ progress in acquiring English proficiency?

State Title III Director Interview

Case Studies

Subgrantee Survey

(EQ3) Annual measurable achievement objectives (AMAOs).


    1. How are states setting their AMAO targets and making AMAO determinations?

      1. What criteria do states use to set their AMAOs? How have these changed over time?

      2. How do states implement the requirement to factor into their AMAO targets the amount of time students have been enrolled in language instruction programs?

      3. To what extent do states use additional criteria, other than the state ELP assessment, to make AMAO determinations?

      4. How do state AMAOs relate to state criteria for determining when students exit from the LEP subgroup?

      5. To what extent do states apply minimum subgroup size policies in making AMAO calculations?

      6. How do states handle accountability for consortia of small LEAs that have been formed for Title III purposes?

    2. How are AMAOs used to foster improvement?

      1. How aware are districts of their AMAO targets and status?

      2. How and when do states inform districts that they have not met their AMAO targets?

      3. What improvement actions are taken in districts that do not meet their AMAO targets, do not meet AMAOs for two consecutive years, and do not meet AMAOs for four consecutive years?

      4. How are parents of LEP students informed about the failure to meet AMAO targets?

      5. Are there any promising state practices or policies related to AMAOs and accountability for ensuring that LEP students learn English?

State Title III Director Interview

Case Studies

Subgrantee Survey

(EQ4) Capacity to promote LEP language acquisition and achievement


Organizational capacity

    1. What capacity (expertise, staff availability, and organizational supports/infrastructure) exists in the SEAs and LEAs to support schools in meeting the instructional needs of LEP students?

    2. What is the nature of the state and local data systems with regard to LEP students? (What data are collected at the state and local levels, how and to whom are they reported, and how are these data used for educational decisions?)

Technical assistance (TA)

    1. What TA do states and districts provide to local educators to help them meet the instructional needs of LEP students?

    2. What professional development is available to teachers of LEP students in Title III districts?

Teacher Quality

    1. What policies have states and districts put in place to ensure that LEP students are taught by teachers who are highly qualified in their content area and are also knowledgeable about instruction of LEP students?

    2. How are states and districts implementing the teacher fluency requirements under Title III?

State Title III Director Interview

Case Studies

Subgrantee Survey

(EQ5) LEP student progress on ELP and content assessments.


    1. Are states and Title III subgrantees meeting the student achievement targets for LEP students established in states' Title III AMAOs? 

    2. What are the characteristics of students who are classified as LEP for the purpose of Title III accountability (e.g., language group, length of time in the United States, level of proficiency)? 

    3. Are LEP students making progress in learning English, achieving English language proficiency on ELP assessments, and meeting achievement targets on state content assessments?  Does the amount of progress vary by student characteristics such as grade level, language group, length of time in the United States, length of time in LEP services, or level of proficiency? 

    4. How do current LEP students, former LEP students, and non-LEP students compare in their performance and progress on state content assessments?

    5. How long does it take for LEP students to attain proficiency on the state ELP assessments and the state content assessments?  Does this vary by student characteristics?

    6. What is the relationship between LEP students’ performance on state ELP assessments and their performance on state content assessments in mathematics and English/language arts?  Does this relationship differ by content area or by student characteristics? 

State AMAO Data

Student Assessment Data



The detailed construct matrix in Appendix A depicts the intersection of the relevant evaluation question(s) and data source(s), with the key domains, topics, and constructs measured on our draft data collection instruments. The two-, three-, or four-digit construct identification numbers (e.g., 3.2.3.1) contained in the ‘Construct’ column of the construct matrix also appear in parentheses beside each item in the draft versions of the data collection instruments provided in Appendices C, D, E, and F. In this way each item can be mapped back to the evaluation question it helps to answer.

Sampling design

The main components of this study are presented below in Exhibit 3 along with the proposed sample. A detailed discussion of our sampling design is provided in the Supporting Statement for Paperwork Reduction Act Submission, Part B section of this package.

Exhibit 3. Main Study Components and Proposed Sample

Study Component

Sample

Standards Review

All 50 states and the District of Columbia

State Interviews

Full population of Title III Directors in all 50 states and the District of Columbia

Subgrantee (District) Survey

A nationally-representative sample of 1,530 subgrantees (with the goal of receiving 1,300 completed survey responses)

Case Studies

A purposive sample of 12 districts within 5 states

Student Assessment Data

A purposive sample of six states and two districts

State Performance Trends

All 50 states and the District of Columbia, as data are available





Data Collection Procedures

The data collection for this study includes a standards review, state interviews, subgrantee (district) survey, case studies, student assessment data, and state performance trends. We have included all of the study’s data collection instruments in this submission; however, only the state interviews, subgrantee survey, and case studies require clearance. Exhibit 4 below presents a summary of our data collection procedures. The instruments for which we are requesting OMB clearance appear with an asterisk. A more detailed discussion of these procedures is provided in the Supporting Statement for Paperwork Reduction Act Submission, Part B section of this package. Copies of the state data confirmation document and interview protocol, the subgrantee survey, the case study district interview protocol, and the case study focus group protocols are included in Appendices C, D, E, and F respectively.



Exhibit 4. Summary of Data Collection Procedures

Study Component

Data Sources

Timeline

Standards Review

  • State ELP standards, as available on state websites

  • Data collected through the interviews with State Directors of Title III (listed below)

  • Supporting documents intended for use in conjunction with the ELP standards, as available

Spring-Winter 2009

State Interviews*

  • 60-minute phone interviews with State Directors of Title III conducted by trained staff

October 2009-January 2010

Subgrantee (District) Survey*

  • 45-minute web-based survey administered to subgrantees

October 2009-January 2010

Case Studies*

  • Site visits to 12 districts in five states, including 60-minute interviews with district personnel, 60-minute focus groups with school-level personnel, and (where necessary) additional 20-30 minute interviews with SEA administrators.

October 2009-January 2010

Student Assessment Data

  • Student-level, longitudinally linked achievement data from state data systems, from a purposive sample of six states and two districts that have such data available

March 2009-September 2009

State Performance Trends

  • Biennial Report (2004-05 and 2005-06)

  • Consolidated State Performance Reports (CSPRs) (2004-05, 2005-06, and 2006-07)

  • EDFacts (2006-07 school year and all of 2007-08) http://www.ed.gov/about/inits/ed/edfacts/index.html

  • Supplemental information about state assessments through state websites

March 2009-September 2009


Analytic approach

Standards Review

The first of the study’s evaluation questions addresses the implementation of state ELP standards, including both the ways in which states are addressing the requirement to establish ELP standards and the variation of these standards across states. The study will collect and analyze state actions to address the requirement through a combination of document analysis, state interviews, and case studies. The variation in the standards themselves will be addressed primarily through analysis of state documents, supplemented with information from the state interviews about the rationale for specific state approaches. Information about use of the standards and about how that use is influenced by variation in the standards themselves as well as by other aspects of the state context will derive primarily from case studies. In this section, we discuss only the document analysis aspects of these questions, focusing on the content, organization, and format of the standards.

The first step in the analysis of the state ELP standards will be to refine the content analysis categories and criteria. To date, no set of commonly agreed upon criteria for analyzing state ELP standards exists, but there are relevant frameworks from which this study might draw to develop such a set for the purposes of this review.

Evaluation questions 1.1 and 1.2 focus on state approaches to the development of ELP standards and on the variation across states with respect to the breadth, specificity, and topics covered. The standards documents produced by each state provide a valuable source of data to address these questions, particularly with regard to standards content. This section describes the plan for our review and analysis of the standards documents. Data from this review will then be combined with data from other sources (case studies, surveys, and state interviews) to describe the states’ overall approaches to ELP standards design, development, and use.

To focus the analysis of ELP standards content, the research team – in consultation with ED, our consultants, and the Technical Work Group (TWG) – has identified seven questions to guide our review. These guiding questions and accompanying indicators incorporate findings from our examination of the literature on second language acquisition/learning as well as the requirements of Title III. The review will address the following questions by completing state-by-state documentation, a summary matrix of data from all states, and a comprehensive analysis and report of observed patterns across states:

  1. What principles of second language development and acquisition are reflected in the ELP standards and how do states vary in their incorporation of such principles into their standards documents?

  2. What specific linguistic skills and language functions are incorporated into the ELP standards? How and to what extent do these vary across language domains, proficiency levels, and states?

  3. How and to what extent do the ELP standards reflect the four domains of reading, writing, speaking, and listening?

  4. How and to what extent do the ELP standards reflect the academic language needed for success in the academic content areas? What dimensions of academic language are emphasized and in what content areas?

  5. Are there explicit linkages between the ELP standards and the state’s content standards in core subjects? What form do these linkages take and how do they differ across states?

  6. Do the ELP standards include specific exemplars or tools to help guide and support curriculum and instruction for ELLs in English language development and access to core content? What forms do these supports take (e.g. exemplars of student performances, lesson guides, curricular suggestions, etc.)? Are the standards measurable and stated in a way that supports their assessment?

  7. How are the ELP standards organized and formatted? How do states vary in the levels of specificity and clarity of their standards? What are the indicators of this variation?

To address the research questions above and facilitate a systematic review of the standards documents, the research team developed Data Capture Forms and a Data Summary Matrix. The review will employ the Data Capture Forms for each state’s ELP standards and the Data Summary Matrix to analyze the 2008-09 ELP standards documents for all 50 states and the District of Columbia, for a total of 51 sets of state standards. The report from the ELP Standards Review will include state-by-state information comprising the responses to the research questions listed above, a coded matrix showing features of the standards at a glance across the 51 sets of ELP standards, and a final comprehensive analysis and summary document examining variations and patterns in the features of states’ ELP standards.

The ELP standards review has six phases. Three reviewers will complete the review, all of whom have classroom experience teaching ELL students, administrative experience in ESL, and extensive academic experience studying the education of ELLs. Two of the reviewers will conduct the majority of the review; a third reviewer will prepare materials for the review and reviewer training activities, reconcile differences during the review, and assist with the final comprehensive analysis. This model is similar to the review process used by Achieve, an organization that reviews content standards across the nation. Exhibit 5 outlines the phases of the review process.

Exhibit 5: Phases of Standards Review Process

Phase #

Activity

Product

Phase 1

Collecting Documents

ELP Standards and Supplemental Documents Collection

Phase 2

Collecting and Reviewing Examples of ELP Standards; preparation of coding guide

Training Materials (anchor standards, coding guide)

Phase 3

Reviewer training

Training activity for reviewers, inter-rater reliability checks

Phase 4

Reviewing Individual States’ ELP Standards and Supplemental Documents

Data Capture Forms for Each set of Standards Completed

Phase 5

Compiling Information from State-by-state Information Sheets and Critical Questions/Responses Tables

Data Summary Matrix with all 51 Sets of ELP Standards Information in Spreadsheet

Phase 6

Analyzing and Summarizing patterns and variations from Data Capture Forms and the Data Summary Matrix

Comprehensive Analysis and Summary Document



After completion of the standards review process, the evaluation team will develop a section of the first study report on the standards review outcomes based on the evaluation questions and other criteria defined by ED. This section of the report will summarize findings on ELP standards across states and include state-by-state descriptive tables.
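
The inter-rater reliability checks noted in Phase 3 are not specified in detail here; the sketch below shows one way agreement between the two primary reviewers could be quantified, using Cohen's kappa. The state abbreviations, coding category, and code labels are hypothetical illustrations, not the study's actual coding guide.

```python
# Hypothetical illustration: computing inter-rater agreement (Cohen's kappa)
# for two reviewers' codes on a shared set of standards documents.
import pandas as pd
from sklearn.metrics import cohen_kappa_score

ratings = pd.DataFrame({
    "state": ["AL", "AK", "AZ", "AR", "CA"],
    "reviewer_1": ["explicit_link", "no_link", "explicit_link", "implicit_link", "explicit_link"],
    "reviewer_2": ["explicit_link", "no_link", "implicit_link", "implicit_link", "explicit_link"],
})

kappa = cohen_kappa_score(ratings["reviewer_1"], ratings["reviewer_2"])
print(f"Cohen's kappa for this coding category: {kappa:.2f}")

# Items on which the two reviewers disagree would be flagged for the
# third reviewer to reconcile.
disagreements = ratings[ratings["reviewer_1"] != ratings["reviewer_2"]]
print(disagreements)
```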

State Interviews and Documentation

Data on state implementation of standards, assessments, and AMAOs will come from two primary sources: state reports (such as the CSPRs or reports on state websites) and interviews of state Title III directors. Prior to the state interviews, we will review a series of existing documents that states have submitted to the federal government and that are publicly available. By reviewing this information, we can streamline the interview process and avoid asking the state Title III director questions that have already been answered.

The majority of the evaluation sub-questions on assessment of LEP students and AMAOs will be answered by collecting and analyzing the state interview data. The first phase of state interview analysis will consist of coding of text data, an iterative process that includes reading, reviewing, and filtering data to locate important descriptions and identify prevalent themes relating to each evaluation question. Once meaningful categories of state policies and practices are identified, in the second stage we will create counts of the number of states that fall in each category. Finally, we will identify exemplar cases or narratives that provide detailed contextual information.
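
As an illustration of the second-stage tallying step, the short sketch below counts the number of states falling into each coded category. The category labels and state codes are invented for the example and do not reflect actual interview data.

```python
# Illustrative sketch: once interview text has been coded into categories,
# count how many (and what share of) states fall into each category.
import pandas as pd

coded = pd.DataFrame({
    "state": ["AL", "AK", "AZ", "AR", "CA", "CO"],
    "amao_criteria": ["elp_only", "elp_plus_other", "elp_only",
                      "elp_only", "elp_plus_other", "elp_only"],
})

counts = coded["amao_criteria"].value_counts()
print(counts)                # number of states in each category
print(counts / len(coded))   # share of states in each category
```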

The analytic tools that we have developed for analysis of interview data on state policy implementation have five critical features: (1) a format that is amenable to both quantified and text data; (2) a flexible interface, in which new variables can be inserted or in which data can be updated easily; (3) fields to indicate when data were updated; (4) flags to indicate when data are uncertain and need to be verified; and (5) mechanisms to facilitate basic counts, tabulations, coding, and charts. In the past we have successfully and efficiently customized Excel spreadsheets to meet our data analysis needs.

Subgrantee Survey

The sampling population for this survey will be Title III subgrantees (N=~5000). Our achieved sample is planned to be 1,300 subgrantees. Sample selection is a compromise between unit weighting and probability-proportional-to-size. (Details of sample selection are provided in the ‘Subgrantee Survey’ subsection of the ‘Sampling Design’ section of Part B - Description of Statistical Methods.)

Given that there are different selection probabilities and nonresponse rates for different strata, we will create weights that are a function of both selection probabilities and nonresponse rates (except for the certainty stratum of large districts, where there would be no sampling error and the precision of the estimate for that stratum would be based only on the nonresponse rate). We will calculate such analytic weights for each of the sampling strata, such that the sample can be weighted to represent the national population of Title III subgrantees.
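
As a rough illustration of this weighting approach, the sketch below computes stratum-level analytic weights as the inverse of the product of the selection probability and the response rate. The strata and counts are hypothetical, chosen only so the frame, sample, and completed surveys total roughly 5,000, 1,530, and 1,300.

```python
# Minimal sketch of stratum-level analytic weights combining selection
# probability and response rate. All stratum definitions and counts are
# hypothetical illustrations, not the study's actual sampling frame.
import pandas as pd

strata = pd.DataFrame({
    "stratum": ["certainty_large", "high_lep", "medium_lep", "low_lep"],
    "population_n": [60, 900, 1800, 2240],   # subgrantees in the frame
    "sampled_n":    [60, 400, 550, 520],     # subgrantees selected
    "responded_n":  [55, 340, 470, 435],     # completed surveys
})

strata["p_selection"] = strata["sampled_n"] / strata["population_n"]
strata["response_rate"] = strata["responded_n"] / strata["sampled_n"]

# Base weight is the inverse of the selection probability; the nonresponse
# adjustment inflates weights so respondents also represent nonrespondents.
# In the certainty stratum the selection probability is 1, so the weight
# reflects only the nonresponse adjustment.
strata["weight"] = 1 / (strata["p_selection"] * strata["response_rate"])
print(strata[["stratum", "weight"]])
```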

Our nationally representative sample of subgrantees (districts) receiving Title III funds will permit us to examine differences in implementation of Title III provisions throughout the country and across several classifications of districts. An achieved sample of 1,300 subgrantees will provide excellent precision (approximately ±3 percentage points) for full-sample national estimates and the statistical power to detect an 8 percentage point difference (e.g., 46% vs. 54%) between two subgrantee subgroups.
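
The precision and power figures cited above can be approximated with standard formulas. The sketch below assumes a simple random sample, a 50 percent proportion (the most conservative case), a two-sided test at alpha = .05 with 80 percent power, and two equal subgroups of 650 completes each; these assumptions are ours and are not spelled out in the study design.

```python
# Rough check of the cited precision (+/-3%) and minimum detectable
# difference (~8 percentage points), under simplifying assumptions.
from math import sqrt

n_total, p = 1300, 0.5
z_alpha, z_beta = 1.96, 0.84          # two-sided alpha = .05, power = .80

# Margin of error for a full-sample national estimate.
moe = z_alpha * sqrt(p * (1 - p) / n_total)
print(f"Margin of error: +/-{moe:.1%}")             # about +/-2.7%, i.e., ~3%

# Minimum detectable difference between two subgroups of 650 each.
n_group = n_total / 2
mdd = (z_alpha + z_beta) * sqrt(2 * p * (1 - p) / n_group)
print(f"Minimum detectable difference: {mdd:.1%}")  # about 8 percentage points
```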

Given our previous research experience, we believe that the number of LEP students in the district will be a key analytic variable in differentiating results among school districts. Differences in urban, suburban, and rural districts and NCLB accountability status will also be relevant to a wide range of Title III implementation issues. It will be important to examine differences based on LEP student demographics such as the proportion of students in the various districts who live in poverty and who have differing language backgrounds. We anticipate that most of the analyses will involve univariate (means, frequencies, etc.) and bivariate (comparisons of means, crosstabulations, etc.) approaches. We will also cross‑tabulate district survey data by categorizations of states derived from the state interview data.

Previous research clearly shows that districts with smaller numbers of LEP students have different policies and practices from those of districts with larger numbers of LEP students. Survey results can therefore be reported in two ways that are likely to yield quite different statistics: the first approach gives each district equal weight, while the second approach weights districts by the number of LEP students they serve. The policy relevance of the first approach is that it emphasizes the large number of districts with small numbers of LEP students. If the focus of future assistance to districts is to be on improving the practices of districts with few LEP students, the first approach is preferable. The first approach may also be somewhat easier for readers to understand. The policy relevance of the second approach is that it emphasizes the policies and practices relating to the broader population of LEP students. If the goal is to describe what is happening to LEP students nationwide, the second approach is preferable. Given the advantages of both of these approaches, both will be used in this evaluation.
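
To make the distinction between the two reporting approaches concrete, the sketch below computes the same indicator both ways for a handful of invented districts: once with each district counted equally and once with districts weighted by their LEP enrollment.

```python
# Hypothetical illustration of the two reporting approaches: district-weighted
# versus LEP-enrollment-weighted estimates. All values below are invented.
import numpy as np
import pandas as pd

districts = pd.DataFrame({
    "district": ["A", "B", "C", "D"],
    "lep_enrollment": [40, 60, 5000, 12000],
    "offers_newcomer_program": [0, 0, 1, 1],
})

district_weighted = districts["offers_newcomer_program"].mean()
student_weighted = np.average(districts["offers_newcomer_program"],
                              weights=districts["lep_enrollment"])

print(f"Share of districts with the practice: {district_weighted:.0%}")    # 50%
print(f"Share of LEP students in such districts: {student_weighted:.0%}")  # ~99%
```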

Case Studies

It is especially important that analyses of qualitative data be performed with rigor, so that the credibility of findings is not compromised. Having the researchers work as a team during the case study visits will enhance the consistency and reliability of the data gathered. Further, by audiotaping interviews and completing data captures promptly, researchers ensure that the information is still fresh in their minds, increasing the level of detail and accuracy. Following each state visit activity, researchers will write a brief summary of the key points captured and compare it with a teammate’s summary.

While still in the field (or immediately following phone interviews), researchers will complete “data capture forms” compiled in Excel, a strategy refined through the SSI-NCLB. These data capture forms will be developed prior to data collection and revised through the pilot-testing process. They are designed to impose a low burden on staff (requiring no more than 1 hour to complete) while generating a concise and accurate data file that facilitates responses to quick-turnaround data requests from ED. At the close of each state visit, researchers will thoroughly read all of the captured data and code for any additional variables.

The qualitative data analysis process involves three steps:

  1. Coding data,

  2. Conducting cross-case analyses, and

  3. Identifying emergent themes and patterns.

In the coding process, we will identify segments of interview text that relate to a specific topic, such as “accountability for AMAOs,” and compare responses across states and districts. When there is variation in responses to that topic, we will consider various demographic and background variables to identify any patterns with regard to respondent types. Finally, we will determine the extent to which cross-site analysis of codes produces identifiable themes and associations.

This analysis will include a comparison of topics across states, across districts within a state, and across districts in the case study sample in order to identify trends and themes as they emerge and to highlight promising practices. These findings will add depth and richness to the quantitative findings from other aspects of the evaluation. The qualitative results will be integrated with the results of the quantitative analyses, verifying some findings, permitting elaboration of others, and suggesting cautions in the specific interpretations or conclusions. This complementary nature of data can provide subtle nuances to interpretations, reveal unanticipated findings, and suggest the reasons for the observed patterns.

Student Assessment Data

State-level data. To provide an overview of LEP student performance at the state and district levels, we will analyze extant data available for all states from the EDFacts database administered by the Department of Education. To begin, we will present a summary of the variation in AMAO definitions and targets across the 50 states. We then plan to report the 2008-09 numbers accompanied by substantial notations and caveats regarding the differences across states.

To describe the variation in AMAO targets across years, we will summarize the main changes to standards, assessments, or associated AMAO definitions during the period from 2005-06 to 2008-09. We will illustrate these changes using two or three state examples. In addition, we will note the extent to which the variables requested by ED through the Consolidated State Performance Reports (CSPRs) have changed over time and possible variations in the ways in which state officials interpreted specific requests. AIR staff have experience interpreting CSPR data and possible errors in state-reported data and understand the extent to which such issues can complicate analyses (see, for example, Chapter II of State and Local Implementation of the No Child Left Behind Act Volume V—Implementation of the 1 Percent Rule and 2 Percent Interim Policy Options).

Having acknowledged the above issues, we will conduct the following three analyses:

  • State-level performance on ELP assessments. To provide an overview of student performance on ELP assessments, we will summarize the state-level performance on ELP assessments by presenting the number and percentage of Title III-served LEP students making progress in learning English and attaining English proficiency in each state, as well as performance relative to AMAO 1 and 2.

  • State-level performance of LEP student subgroups on state reading/English Language Arts and math content assessments. To examine student achievement in academic content areas, we will report the number and percentage of LEP students at proficient or advanced levels on state content assessments. In doing so, we will take into consideration whether states are using the flexibility to include former LEP students in the LEP subgroup for two additional years and, if so, in what year they began doing so, as well as performance relative to AMAO 3.

  • Title III subgrantees’ performance relative to their AMAOs. Shifting to the district/consortium level, we will report the number and percentage of Title III subgrantees missing AMAOs and missing AMAOs in consecutive years in each state. We will interpret these results in consideration of the unique assessments and different AMAO targets used in each state.

Our analyses will produce a set of tables, each of which will consist of a list of states in the rows and a list of 2008-09 AMAO performance variables in the columns. Each table will include a series of table notes to indicate differences in standards, assessments, or associated AMAO definitions to inform and potentially caveat the interpretation of any apparent differences in actual performance.
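
A table of the kind described above might be assembled as in the short sketch below; the states, variable names, and values are placeholders, and real tables would carry footnotes on each state's standards, assessments, and AMAO definitions.

```python
# Sketch of a state-by-variable summary table for 2008-09 AMAO performance.
# All names and numbers are invented placeholders.
import pandas as pd

summary = pd.DataFrame(
    {
        "lep_making_progress_pct": [58.2, 63.5],
        "lep_attaining_proficiency_pct": [12.4, 17.9],
        "subgrantees_missing_amaos_pct": [31.0, 24.5],
    },
    index=["State A", "State B"],
)
summary.index.name = "State (2008-09)"
print(summary)
```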

Student-level data. To answer the remaining questions, we will analyze student-level longitudinally-linked assessment data gathered from the sample of six states and two districts. Once permission has been received to access student-level state and district data, the first analytic task will be to perform consistency checks (“cleaning”) of the data, analyzing the number of missing observations, and merging the datasets at the student-level across years. Once data have been cleaned, we will analyze (1) LEP students’ characteristics, (2) LEP students’ average growth and the length of time required for LEP students to attain proficiency, (3) comparisons of the performance of former LEP students to other groups, and (4) the relationship between LEP students’ performance on ELP and academic content assessments.
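
The cleaning and merging step might look something like the sketch below, which checks for duplicate identifiers and missing scores and then links two years of records at the student level. The records and column names are assumptions for illustration; actual state and district extracts will differ.

```python
# Minimal sketch of the cleaning and merging step: check for duplicate IDs
# and missing scores, then link two years of records at the student level.
import numpy as np
import pandas as pd

y1 = pd.DataFrame({"student_id": [101, 102, 103, 104],
                   "elp_scale_0708": [310.0, 355.0, np.nan, 402.0]})
y2 = pd.DataFrame({"student_id": [101, 103, 104, 105],
                   "elp_scale_0809": [348.0, 371.0, 455.0, 389.0]})

# Consistency checks: duplicated IDs and missing scores in each year's file.
for name, df, col in [("2007-08", y1, "elp_scale_0708"),
                      ("2008-09", y2, "elp_scale_0809")]:
    print(name, "duplicate IDs:", df["student_id"].duplicated().sum(),
          "missing scores:", df[col].isna().sum())

# Merge the two years at the student level; an outer join keeps students who
# appear in only one year so attrition and new arrivals can be examined.
linked = y1.merge(y2, on="student_id", how="outer")
both_years = linked[["elp_scale_0708", "elp_scale_0809"]].notna().all(axis=1)
print("students linked with scores in both years:", int(both_years.sum()))
```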

  • LEP students’ characteristics. The first analysis we will conduct will be a simple description of the LEP student population within each of the six states and two districts. The population will be described in terms of language group, length of time in the United States, and level of proficiency, among other characteristics.



  • LEP students’ average growth. We will calculate the average annual gain (or growth, if three or more years of longitudinal data are available) in LEP students’ ELP assessment scale scores and in their academic content assessment scale scores within each state. Analyses will be conducted separately in each state and separately for each outcome measure (e.g., the four subdomains of the ELP assessments, reading, mathematics). Analyses will also account for any lack of vertical equating of test scores by conducting analyses within grade span or standardizing scores within grade spans. For these analyses, we will estimate a series of multi-level repeated-observations growth models with time-varying covariates (Raudenbush and Bryk, 2002); an illustrative model specification appears at the end of this subsection.



  • Comparisons of former or “monitored” LEP students to other groups. Within each state, we will compare the gain or growth of LEP students at each ELP test level, former LEP students (achieved ELP and no longer in services), and students who have not been identified as an LEP student within the time span of available state data. We would look at the rate of growth in state content scale scores for each of these groups following the general modeling approach just above.



  • Length of time required for LEP students to attain proficiency. This analysis utilizes multiple years of longitudinally-linked student‑level data to examine LEP students’ probabilities of attaining higher levels of language proficiency using survival analysis. The first analysis concerns how much time it takes a LEP student to move from one level of English proficiency to the next. In other words, we will focus not only on the likelihood of being at a certain proficiency level but on how quickly (or slowly) LEP students move to proficient levels. In a similar analysis, we will examine how much time it takes LEP students who attain proficiency on the state ELP assessment to reach a proficient level on the state content assessments. Survival analysis will also be used to address the question of how long it takes LEP students to be redesignated. Parrish and colleagues (2006) showed that, after 10 years in California schools, less than 40 percent of LEP students had been redesignated as English proficient. This study will determine whether this result holds for other states and/or school districts.



  • Relationships between performance on ELP and state academic content assessments. We will begin by studying the observed relationship between performance on ELP tests and state academic content assessments using cross‑sectional data. That is, we will evaluate whether students who perform at a high level on one type of assessment also do so on the other assessment.

Given the possibility of a nonlinear relationship between ELP and state content test scores, we plan to use piece‑wise regression to determine the regression slopes for each group defined by English proficiency levels as well as by ELP test score quartiles. In the first case, a slope coefficient is estimated for each proficiency level. We plan to complement this analysis with a quartile analysis because, in some states and/or school districts, LEP students may cluster in certain proficiency levels, leaving other levels with relatively few observations. Generating groups of students of equal sample size would allow us to observe the relationship between ELP and state content test scores in a more uniform manner. As a more refined alternative, we will conduct an analysis using LOWESS regression in STATA to fit the nonlinear relationship.
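
As an illustration of the piecewise approach described above, the sketch below estimates a separate slope for each ELP proficiency level (and, alternatively, for each ELP score quartile) by interacting the grouping variable with the ELP scale score. The data are simulated and the variable names are hypothetical; a production analysis would use the actual state files and add covariates and weights as appropriate.

```python
# Illustrative sketch of piecewise (group-specific slope) regression:
# interacting ELP proficiency level with the ELP scale score yields one
# intercept and one slope per level.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600
elp_scale = rng.uniform(200, 500, n)
elp_level = pd.cut(elp_scale, bins=[200, 300, 400, 500],
                   labels=["beginning", "intermediate", "advanced"],
                   include_lowest=True)
# Simulated content scores whose relationship to ELP flattens at higher levels.
math_scale = 150 + 0.8 * np.minimum(elp_scale, 350) + rng.normal(0, 20, n)

df = pd.DataFrame({"elp_scale": elp_scale, "elp_level": elp_level,
                   "math_scale": math_scale})

# One slope per proficiency level.
by_level = smf.ols("math_scale ~ C(elp_level) * elp_scale", data=df).fit()
print(by_level.params)

# The same idea with groups defined by ELP score quartiles.
df["elp_quartile"] = pd.qcut(df["elp_scale"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
by_quartile = smf.ols("math_scale ~ C(elp_quartile) * elp_scale", data=df).fit()
print(by_quartile.params)
```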

To the extent that we are able to follow LEP students over time, we can expand our analyses dramatically. We will analyze at the student level how progress made in English proficiency and progress made in content areas evolve over time. If a student moves to higher ELP levels, at what rate is he or she improving in math or English language arts achievement?

As in the cross‑sectional analyses, these individual‑level longitudinal trajectories can be studied by content area, primary language, initial level of ELP, eligibility for free or reduced‑price lunch, and grade. We include grade as a stratifier because of how redesignation affects the group of students still classified as LEP students in upper grades. The pool of LEP students is likely to be very different in lower and upper grades. We would expect the pool to be more diverse in the lower grades and to become relatively more uniform in the upper grades as, over time, the top performers are reclassified as English proficient.
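
For the multi-level repeated-observations growth models referenced in the bullet on LEP students’ average growth above, one possible specification is sketched below: annual observations nested within students, a random intercept and growth rate for each student, and a time-varying covariate for LEP service status. The data are simulated and the column names are hypothetical; the final specification would follow the study’s analysis plan and each state’s data structure.

```python
# One possible multi-level growth model specification (students as groups,
# random intercept and slope on time, time-varying service-status covariate).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_students, n_years = 200, 4
student_id = np.repeat(np.arange(n_students), n_years)
year = np.tile(np.arange(n_years), n_students)              # time in years
intercepts = rng.normal(300, 25, n_students)[student_id]    # student-level starting points
slopes = rng.normal(20, 5, n_students)[student_id]          # student-level growth rates
in_lep_services = (rng.uniform(size=n_students * n_years) < 0.8).astype(int)
elp_scale = (intercepts + slopes * year + 5 * in_lep_services
             + rng.normal(0, 10, n_students * n_years))

long_df = pd.DataFrame({"student_id": student_id, "year": year,
                        "in_lep_services": in_lep_services,
                        "elp_scale": elp_scale})

growth = smf.mixedlm("elp_scale ~ year + in_lep_services", data=long_df,
                     groups=long_df["student_id"],
                     re_formula="~year").fit()
print(growth.summary())
```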



Supporting Statement for Paperwork Reduction Act Submission

Justification (Part A)

  1. Circumstances Making Collection of Information Necessary

Over 5 million students in U.S. schools come from a non-English-speaking background and have not yet developed the level of English proficiency needed to achieve academically and compete for jobs in an increasingly global and knowledge-based economy. Proficiency in English opens doors to opportunities—to learn the academic curriculum, to graduate from high school and pursue postsecondary education, and to obtain high-paying and rewarding work. Policymakers and educators at all levels are working to ensure that these opportunities exist for the nation’s limited English proficient (LEP) students.

The federal government has had a long‑standing commitment to ensuring access of LEP students to a meaningful education. As early as 1968, the Elementary and Secondary Education Act contained provisions for supporting the education of LEP students, and in its 1974 landmark decision, Lau v. Nichols, the U.S. Supreme Court declared, “There is no equality of treatment merely by providing students with the same facilities, textbooks, teachers, and curriculum; for students who do not understand English are effectively foreclosed from any meaningful education” (Lau v. Nichols, 1974). In 2001, the No Child Left Behind Act (NCLB) substantially strengthened federal requirements for serving LEP students, focusing on the relationship between English language proficiency and academic success. In particular, Title III added provisions focused on “promoting English acquisition and helping English language learners meet challenging content standards” (National Clearinghouse for English Language Acquisition & Language Instruction Educational Programs [NCELA], 2008). Under Title III, for the first time, districts were held accountable for the progress of LEP students in learning English and in achieving the state’s challenging academic standards.2

The proposed evaluation will provide a current picture of how—and how well—states, districts, and schools are implementing these provisions by documenting the variation across states regarding standards for English language proficiency (ELP), assessments to measure ELP, targets for the achievement of districts’ LEP students, and consequences for districts that do not meet their targets. However, this evaluation will go much deeper. It will examine how state policies translate into district practices, how states and districts are making connections between English language proficiency and academic learning, and how well LEP students are acquiring English and the subject matter competence they need to succeed in school and beyond.

  1. Purposes and Uses of Data

This data collection will serve to update state-level information about Title III implementation and will also provide an important opportunity to go beyond the mechanics of implementation at the state level to understand whether and how states and districts are making the necessary connections between ELP and academic learning; how the law’s standards, assessment, and accountability mechanisms are being translated at the local level into instructional decisions and improvement strategies for LEP students; whether Title III implementation takes into account the many layers of diversity in the LEP population; and how LEP students are faring in both ELP and subject matter learning. Our approach embraces mixed-methods data collection and analyses that will enable the study to answer a series of key evaluation questions and to deepen understanding of the extent to which Title III is achieving its underlying goals.

  1. Use of Technology to Reduce Burden

We will use a variety of information technologies to maximize the efficiency and completeness of the information gathered for this evaluation and to minimize the burden the evaluation places on respondents at the state, district, and school levels:

  • When possible, data will be collected through states’ websites and through sources such as EDFacts, the Biennial Report, and Consolidated State Performance Reports (CSPRs).

  • To further streamline the interview process and reduce burden on state officials, Title III directors will be asked to complete an electronic pre-fill data confirmation form via email.

  • We will use a web-based format and administration process for the sub-grantee (district) survey to alleviate burden on the respondents.

  • A toll-free number and email address will be available during the data collection process to permit respondents to contact interview staff with questions or requests for assistance. The toll-free number and email address will be included in all communication with respondents.

  1. Efforts to Identify Duplication

Where possible, we will use existing data sources including EDFacts, the Biennial Report, CSPRs, the Common Core of Data, and NCELA to inform our analyses, which will greatly reduce the number of questions asked in the state interview, thus reducing respondent burden and minimizing duplication of previous data collection efforts. In those cases where extant data will not be available within the required timeline, we may need to request more recent information from study participants; however, we have kept such questions to a minimum and will ask them only on an as-needed basis.

  1. Methods to Minimize Burden on Small Entities

No small businesses or entities will be involved as respondents.

  1. Consequences of Not Collecting Data

Existing research on Title III has focused almost exclusively on initial state-level implementation of the law’s provisions. Failure to collect the data proposed through this study would prevent the development of a current picture of how, and how well, states are implementing the Title III provisions of NCLB. For example, it would limit our understanding of the variation across states in standards for English language proficiency, assessments to measure it, targets that states are setting for districts regarding the achievement of their LEP students, and consequences states are imposing for districts that do not meet their objectives. Additionally, not collecting the data would mean an inability to examine how state policies translate into district practices, how states and districts are making connections between English language proficiency and academic learning, and how well LEP students are acquiring English and the subject matter competence they need to succeed in school and beyond. The information gained through this study can thus inform national, state, and local efforts to improve outcomes for this growing and diverse student population.

  1. Special Circumstances

None of the special circumstances listed apply to this data collection.

  1. Federal Register Comments and Persons Consulted Outside the Agency

A 60-day notice about the study was published in the Federal Register (Volume 74, page 12323) on March 24, 2009 to provide the opportunity for public comment. No public comments have been received.

Throughout the duration of this study, we will draw on the experience and expertise of a technical working group that provides a diverse range of experiences and perspectives, including researchers with expertise in relevant methodological and content areas as well as representatives from the state, district, and school levels. The members of this group, their affiliation, and areas of expertise are listed in Exhibit 6.



Exhibit 6. Members of the Evaluation of State and Local Implementation of Title III Standards, Assessments, and Accountability Systems Technical Working Group

Proposed TWG Member

Professional Affiliation

Area(s) of Expertise

Supreet Anand

Maryland Title III Director

State implementation of Title III, Title I, and other federal programs; standards and assessments, especially for English Language Proficiency

Theodora ‘Teddi’ Predaris

Director, ESOL Office

Fairfax County Schools, Virginia

District implementation of Title III, Title I, and other federal programs; issues surrounding English Language Acquisition and LEP students

Guillermo Gomez

Teacher, Los Angeles Unified, Member of California Commission on Teacher Credentialing

District and school implementation of Title III, Title I, and other federal programs; issues surrounding English Language Acquisition and LEP students

Dr. Jamal Abedi

Professor,
University of California, Davis

Standards and assessments, especially for English Language Proficiency

Dr. Gary Cook

Researcher, Wisconsin Center for Education Research, University of Wisconsin

Standards and assessments, especially for English Language Proficiency

Dr. Nonie Lesaux

Professor, Harvard University

Issues surrounding English Language Acquisition and LEP students

Dr. David Kaplan

Professor, University of Wisconsin

Analysis of longitudinal student achievement data and sampling and survey methodology



  1. Payment or Gifts

If allowed by the U.S. Department of Education Contracts and Management Service, refreshments will be provided to participants in the case study focus groups as a small token of our appreciation. If refreshments are not an option, the study will provide $20 to each of the up to 480 focus group participants, for a maximum total cost of $9,600.

  1. Assurances of Confidentiality

As a research contractor, the research team is concerned with maintaining the confidentiality and security of its records. The team will ensure the confidentiality of the data to the extent possible through a variety of measures. The contractor’s project staff has extensive experience collecting information and maintaining confidentiality, security, and integrity of interview and survey data. The team has worked with the Institutional Review Board at American Institutes for Research to seek and receive approval of this study. The following confidentiality and data protection procedures will be in place:

Project team members will be educated about the confidentiality assurances given to respondents and to the sensitive nature of materials and data to be handled. Each person assigned to the study will be cautioned not to discuss confidential data.

Data from the case studies, state interviews, and subgrantee surveys will be treated as follows: respondents’ names and addresses will be disassociated from the data as they are entered into the database and will be used for data collection purposes only. As information is gathered from respondents or from sites, each will be assigned a unique identification number, which will be used on printout listings displaying the data and in analysis files. The unique identification number will also be used for data linkage. Data analysts will not be aware of any individual’s identity.

We will shred all interview protocols, forms, and other hardcopy documents containing identifiable data as soon as the need for this hard copy no longer exists. We will also destroy any data tapes or disks containing sensitive data.

Participants will be informed of the purposes of the data collection and the uses that may be made of the data collected. All case study respondents will be asked to sign an informed consent form (see drafts in Appendices E and F). Consent forms will be collected by site visitors and stored in secure file cabinets at the contractor’s office in Washington, DC.

We will protect the confidentiality of district survey respondents and all district- and school-level respondents who provide data for the study and will assure them of confidentiality to the extent possible. We will ensure that no district- and school-level respondent names, schools, or districts are identified in reports or findings, and if necessary, we will mask distinguishing characteristics. Responses to this data collection will primarily be used to summarize findings in an aggregate manner (e.g., across types of districts) and secondarily to provide examples of program implementation in a manner that does not associate responses with a specific individual or site. We will not provide information that associates responses or findings with a district-level or school-level subject, or to a school or district to anyone outside of the study team except if required by law.

The case of state-level respondents is somewhat different. Our state-level data collections, by their very nature, focus on policy topics that are in the public domain. Moreover, it would not be difficult to identify Title III and assessment directors in each state and thus determine the identity of our state-level respondents. Having acknowledged that, we will endeavor to protect the privacy of our state-level interviewees, and as with district- and school-level respondents, we will avoid using their names in reports and attributing any quotes to specific individuals. We will primarily report on the numbers of states that engage in specific practices, thus avoiding reference to specific states.

While most of the information in the final report will be presented in aggregate form, as noted above, there may be instances where specific examples from the case study data are used to illustrate “best practices.” In these instances, the case study site will be identified by a pseudonym, and efforts will be made to mask distinguishing characteristics.

All electronic data will be protected using several methods. We will provide secure FTP services that allow encrypted transfer of large data files with clients. This service eliminates the need to break large files into many smaller pieces while providing a secure connection over the Internet. Our internal network is protected from unauthorized access using defense-in-depth best practices, which incorporate firewalls and intrusion detection and prevention systems. The network is configured so that each user has a tailored set of rights, granted by the network administrator, to files approved for access and stored on the LAN. Access to our computer systems is password protected, and network passwords must be changed on a regular basis and conform to our strong password policy. All project staff assigned to tasks involving sensitive data will be required to provide specific assurance of confidentiality and obtain any clearances that may be necessary. All staff will sign a statement attesting that they have read and understood the security plan and ED’s security directives. A copy of this statement is provided in Appendix B.
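
As a purely hypothetical illustration of the kind of rule a strong password policy enforces (the specific parameters, such as the rotation interval and minimum length, are set by the network administrator and are not specified in this plan):

    import re
    from datetime import date, timedelta

    MAX_PASSWORD_AGE = timedelta(days=90)  # hypothetical rotation interval

    def password_is_acceptable(password: str, last_changed: date) -> bool:
        """Illustrative strong-password check: minimum length, mixed character
        classes, and regular rotation."""
        rotated_recently = (date.today() - last_changed) <= MAX_PASSWORD_AGE
        long_enough = len(password) >= 12
        mixed_classes = all(
            re.search(pattern, password)
            for pattern in (r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]")
        )
        return rotated_recently and long_enough and mixed_classes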

  11. Justification of Sensitive Questions

No questions of a sensitive nature will be included in this study.

  12. Estimates of Hour Burden

The estimated hour burden for the study’s data collections is 1,600 hours. Based on average hourly wages for participants, this amounts to an estimated monetary cost of $66,528. Exhibit 7 summarizes the estimates of respondent burden for study activities, and a worked illustration of the arithmetic behind these totals follows the itemized estimates below.

The burden estimate associated with the state interviews is 77 hours. This figure includes:

  • Time associated with preparing for the interview, including gaining cooperation, providing information about the study, scheduling an interview, and having the Title III contact for the 50 states and the District of Columbia review the Data Confirmation Document (0.5 hour/respondent); and

  • Time for the Title III contact for the 50 states and the District of Columbia to participate in a 1-hour telephone interview.

The burden estimate for responses to the subgrantee (school district) survey is 975 hours. This burden estimate includes:

  • Time for 85 percent of the 1,530 district contacts in the sample to respond to a 45-minute district survey.

The burden estimate associated with the case studies is 548 hours. This burden estimate includes:

  • Time for an SEA contact in each of the five case study states to participate in a 30-minute interview;

  • Time associated with identifying districts for the case studies—i.e., the time it would take state officials to help select three key districts to visit in each of the five selected states (1 hour/state);

  • Time associated with gaining cooperation from districts, which will include time for districts to review study information, request additional information or clarification as needed, identify and recruit participants for interviews and focus groups, and coordinate meeting logistics (e.g., locating a meeting room) (2 hours/district);

  • Time for 36 district-level staff in 12 districts to participate in a 1-hour interview; and

  • Time for school-level staff to participate in 1-hour focus groups. The maximum number of staff included in each focus group will be eight; however, we anticipate that the average focus group will comprise closer to four to six staff members. Using the maximum of eight as a base, the burden estimate for the focus groups includes:

    • Time for 96 secondary school principals and resource staff in 12 districts to participate in a 1-hour focus group;

    • Time for 96 elementary school principals and resource staff in 12 districts to participate in a 1-hour focus group;

    • Time for 96 secondary school teachers in 12 districts to participate in a 1-hour focus group;

    • Time for 96 elementary school teachers in 12 districts to participate in a 1-hour focus group; and

    • Time for 96 parent liaisons in 12 districts to participate in a 1-hour focus group.
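
The counts, time estimates, and hourly rates above determine the totals shown in Exhibit 7. The sketch below simply reproduces that arithmetic; the rounding convention it uses (fractional respondent counts rounded down, fractional burden hours rounded up) is inferred from the reported figures rather than stated elsewhere in this submission.

    import math

    # Each task: (sample size, response rate, hours per respondent, hourly rate),
    # taken directly from Exhibit 7.
    tasks = {
        "State interviews - gaining cooperation":       (51,   1.00, 0.50, 45),
        "State interviews - Title III contact":         (51,   1.00, 1.00, 45),
        "Subgrantee survey - district contact":         (1530, 0.85, 0.75, 45),
        "Case studies - SEA contact interview":         (5,    1.00, 0.50, 45),
        "Case studies - identifying districts":         (5,    1.00, 1.00, 45),
        "Case studies - gaining cooperation":           (12,   1.00, 2.00, 45),
        "Case studies - district contact interviews":   (36,   1.00, 1.00, 45),
        "Focus groups - secondary principals/staff":    (96,   1.00, 1.00, 45),
        "Focus groups - elementary principals/staff":   (96,   1.00, 1.00, 45),
        "Focus groups - secondary teachers":            (96,   1.00, 1.00, 29),
        "Focus groups - elementary teachers":           (96,   1.00, 1.00, 29),
        "Focus groups - parent liaisons":               (96,   1.00, 1.00, 20),
    }

    total_hours = 0
    total_cost = 0
    for sample, rate, hours_each, wage in tasks.values():
        respondents = math.floor(sample * rate)      # e.g., 1,530 x 0.85 -> 1,300
        hours = math.ceil(respondents * hours_each)  # e.g., 51 x 0.5 -> 26
        total_hours += hours
        total_cost += hours * wage

    print(total_hours, total_cost)  # 1600 hours, $66,528

Running the sketch yields 1,600 total burden hours and an estimated monetary cost of $66,528, matching the exhibit.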

Exhibit 7. Summary of Estimates of Hour Burden


Task | Total Sample Size | Estimated Response Rate | Number of Respondents | Time Estimate (in hours) | Total Hour Burden | Hourly Rate | Estimated Monetary Cost of Burden

State Interviews
Gaining cooperation | 51 | 100% | 51 | 0.5 | 26 | $45 | $1,170
Conducting interviews — Title III contact | 51 | 100% | 51 | 1 | 51 | $45 | $2,295
Total for State Interviews | -- | -- | -- | -- | 77 | -- | $3,465

Subgrantee (School District) Survey
Administering surveys — District contact | 1,530 | 85% | 1,300 | 0.75 | 975 | $45 | $43,875
Total for Subgrantee Survey | -- | -- | -- | -- | 975 | -- | $43,875

Case Studies
Conducting interviews — SEA contact | 5 | 100% | 5 | 0.5 | 3 | $45 | $135
Identifying districts | 5 | 100% | 5 | 1 | 5 | $45 | $225
Gaining cooperation | 12 | 100% | 12 | 2 | 24 | $45 | $1,080
Conducting interviews — District contacts | 36 | 100% | 36 | 1 | 36 | $45 | $1,620
Conducting focus groups — Secondary school principals and resource staff | 96 | 100% | 96 | 1 | 96 | $45 | $4,320
Conducting focus groups — Elementary school principals and resource staff | 96 | 100% | 96 | 1 | 96 | $45 | $4,320
Conducting focus groups — Teachers (secondary) | 96 | 100% | 96 | 1 | 96 | $29 | $2,784
Conducting focus groups — Teachers (elementary) | 96 | 100% | 96 | 1 | 96 | $29 | $2,784
Conducting focus groups — Parent liaisons | 96 | 100% | 96 | 1 | 96 | $20 | $1,920
Total for Case Studies | -- | -- | -- | -- | 548 | -- | $19,188

TOTAL | -- | -- | 1,940 | -- | 1,600 | -- | $66,528



  13. Estimate of Cost Burden to Respondents

There are no additional respondent costs associated with this data collection beyond the hour burden estimated in item A12.

  14. Estimate of Annual Cost to the Federal Government

The estimated cost for this study, including development of a detailed study design, data collection instruments, justification package, data collection, data analysis, and report preparation, is $2,456,560 for the three years, or approximately $818,853 per year.

  15. Program Changes or Adjustments

This request is for a new information collection.

  16. Plans for Tabulation and Publication of Results

We have designed our data collection, data management, and analysis procedures to accommodate the short data collection period of October 1, 2009, to February 15, 2010. The research team will develop coding materials for entering and preparing the data for analysis as they are received and will enter all data into an electronic database. Our team will ensure the accuracy of the data and will analyze them as described in our analytic approach. The research team will submit preliminary data tabulations to the Contracting Officer’s Representative (COR) no later than June 15, 2010. The general approach to the analyses is described in Part B of this submission.

We will submit a detailed outline for the final report to the COR no later than July 5, 2010. This outline will include a summary of initial findings that integrates information from the subgrantee survey, state interviews, case studies, and analyses of student achievement data and extant documents. Following a review by the COR, we will submit a revised outline no later than August 9, 2010.

The first draft of the final report will be submitted to the COR no later than October 4, 2010, and after two rounds of subsequent reviews, we will submit the final version of the report no later than May 30, 2011.

For this study, the research team will also communicate and disseminate information to ED and other stakeholders through the following:

  • In-person briefings for ED staff each year of the contract (three briefings total)

  • A user-friendly policy brief and fact sheet in both Years 2 and 3 of the study, targeting policymakers, educators, media, and the public

  • Dissemination of the fact sheet and nontechnical executive summary for each report completed to the study participants

  • Dissemination of the reports, nontechnical executive summaries, policy briefs, and fact sheets to a number of audiences through organizations that focus on the instructional needs of LEP students

  • Submission of proposals for several staff members to conduct presentations at two professional (e.g., AERA) and/or practitioner (e.g., National Association for Bilingual Education [NABE]) conferences during Years 2 and 3 of the study

The team will also seek to attend ED-sponsored events at the National Clearinghouse for English Language Acquisition (NCELA) and the Office of English Language Acquisition (OELA).

Finally, we will prepare public use data files and submit them to ED no later than the end of the contract, September 30, 2011. Specifically, the research team will produce a CD-ROM that can be formatted to the National Center for Education Statistics’ (NCES) Electronic Codebook (ECB). We will ensure that all public use data files are in compliance with all privacy protection laws, maintaining the strictest confidentiality of all individual data collected in this study. We will also submit codebooks, technical reports, and other study materials to ED.

  17. Approval to Not Display OMB Expiration Date

All data collection instruments will include the OMB expiration date.

  18. Explanation of Exceptions

No exceptions are requested.

References

Birman, B., LeFloch, K. C., Klekotka, A., Ludwig, M., Taylor, J., Walters, K., Wayne, A., & Yoon, K. S. (2007). State and local implementation of the No Child Left Behind Act, Volume II – Teacher Quality under NCLB: Interim Report. Washington, DC: U.S. Department of Education, Office of Planning, Evaluation and Policy Development, Policy and Program Studies Service.

Center on Education Policy. (2006). From the Capital to the Classroom: Year 4 of the No Child Left Behind Act. Washington, DC. Retrieved May 31, 2006, from http://www.cep-dc.org/NCLB/Year4/CEP-NCLB-Report-4.pdf

Lau v. Nichols, 414 U.S. 563 (1974).

LeFloch, K., Martinez, F., O’Day, J., Stecher, B., & Taylor, J. (2007). State and local implementation of the No Child Left Behind Act, Volume III – Accountability under NCLB: Interim Report. Washington, DC: U.S. Department of Education, Office of Planning, Evaluation and Policy Development, Policy and Program Studies Service.

National Clearinghouse for English Language Acquisition and Language Instruction Educational Programs (NCELA). (2008). History. Retrieved June 23, 2008, from http://www.ncela.gwu.edu/policy/1_history.htm

Parrish, T. B., Merickel, A., Pérez, M., Linquanti, R., Socias, M., Spain, A., Speroni, C., Esra, P., Brock, L., & Delancey, D. (2006). Effects of the implementation of Proposition 227 on the education of English learners, K–12: Findings from a five-year evaluation. Palo Alto, CA: American Institutes for Research.

Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods (2nd ed.). Newbury Park, CA: Sage.

U.S. Department of Education. (2007, October). Framework for high‑quality English language proficiency standards and assessments: Draft. Washington, DC: U.S. Department of Education, LEP Partnership, Office of the Deputy Secretary of Education, prepared by the Assessment and Accountability Comprehensive Center.



Appendix A

Construct Matrix



Exhibit A.1. Construct Matrix



Domain | Topic (and level of system) | Construct | Evaluation Question | State Interview or Data Confirmation Document | District Survey | Case Study (S = state; D = district) | Extant/Other

1. Standards | Development (state) | 1.1.1 Timing: When were ELP standards first developed? | 1.1 | X |  | S | X
1. Standards | Development (state) | 1.1.2 Stability: When have ELP standards been revised? | 1.1 | X |  | S | X
1. Standards | Development (state) | 1.1.3 Source: Consortium, state process, other | 1.1 | X |  | S | X
1. Standards | Development (state) | 1.1.4 Linkage: From ELP standards to academic content standards | 1.1 | X |  | S | X
1. Standards | Development (state) | 1.1.5 Supports and challenges: What factors have been supports or challenges? | 1.1 | X |  | S |  
1. Standards | Content (state) | 1.2.1 Breadth | 1.2 |  |  |  |  
1. Standards | Content (state) | 1.2.2 Specificity | 1.2 |  |  |  |  
1. Standards | Content (state) | 1.2.3 Topics Covered | 1.2 |  |  |  |  
1. Standards | Use (state & district) | 1.1.6 Purpose of use (text selection, curriculum development, formal and informal assessment, etc.) | 1.1 | X | X | S,D |  
1. Standards | Use (state & district) | 1.1.7 Users (who for what purposes) | 1.1 |  |  | S,D |  
1. Standards | Use (state & district) | 1.1.8 Accessibility and support for learning | 1.1 |  |  | S,D |  
1. Standards | Use (state & district) | 1.1.9 Awareness/depth of understanding of standards’ contents | 1.1 |  |  | S,D |  




Appendix B

Data Use and Confidentiality Agreement



Title III Evaluation

Data Use and Confidentiality Procedures

Employee Agreement



I acknowledge that I have been granted access to confidential data, which include sensitive student-level education records, to facilitate the performance of my duties on the Evaluation of State and Local Implementation of Title III Standards, Assessments, and Accountability Systems (or “Title III Evaluation”). This Agreement confirms that I recognize and understand that my use of these data is restricted to the fulfillment of my duties on the Title III Evaluation and that it is my responsibility to safeguard and maintain the confidentiality of these data. Safeguarding consists of following all protections specified in the Title III Evaluation’s Information Security Plan. Maintaining confidentiality means not releasing or disclosing the data to any unauthorized individuals.

I have received a copy of the Title III Evaluation Data Use and Confidentiality Procedures, Information Security Plan, and Institutional Review Board (IRB) submission. I certify that I have reviewed these documents and agree to abide by the standards set forth therein for the duration of my employment on the Title III Evaluation. I understand that my e-mail and computer usage may be monitored by the company to ensure compliance with these standards.

I am aware that any violations of the Data Use and Confidentiality Procedures may subject me to disciplinary action, up to and including discharge from employment.







___________________________________           _______________________

Employee’s Signature                                        Date





___________________________________

Employee’s Printed Name





1 We use the terms limited English proficient and LEP students consistently in this OMB package because these are the terms used in the law. However, the field uses several different terms and has largely shifted to terms such as English language learners to stress a learning rather than a deficit model. In the study we will exercise care and sensitivity regarding practitioners’ preferred and commonly used terminology.

2 Title III requires specified accountability reporting and actions (see below) only for districts receiving Title III funding. In 2004-05, 33 states, the District of Columbia, and Puerto Rico reported Title III accountability results only for Title III districts; another 3 states reported for both Title I and Title III districts; and 13 states reported for all districts with LEP students (LeFloch et al., 2007).

