Evaluation of State and Local Implementation of Title III Standards, Assessments, and Accountability Systems

OMB: 1875-0254

American Institutes for Research®




Evaluation of State and Local
Implementation of Title III Standards, Assessments, and Accountability Systems

Part B (Description of Statistical Methods)



OMB Clearance Request

For Data Collection Instruments





October 20, 2009




Prepared for:

United States Department of Education

Contract No. ED-04-CO-0025/0017





Prepared by:

American Institutes for Research

Windwalker Corporation

edCount, llc

Table of Contents





List of Appendices

Appendix A: Construct Matrix

Appendix B: Data Use and Confidentiality Agreement

Appendix C: State Title III Director Interview Materials

Appendix D: Title III Subgrantee Survey Materials

Appendix E: Case Study District Interview Materials

Appendix F: Case Study Focus Group Materials



List of Exhibits





Supporting Statement for Paperwork Reduction Act Submission

Description of Statistical Methods (Part B)

  1. Sampling Design

Our study includes six components; the sampling design for each is described below:

  1. Standards Review

  2. State Interviews

  3. Subgrantee (District) Survey

  4. Case Studies

  5. Student Assessment Data

  6. State Performance Data



Standards Review

We will review the full population of English language proficiency (ELP) standards of all 50 states and the District of Columbia. One key step in ensuring that standards can be compared across states is to determine which school year will serve as the target. This study will use the standards each state is implementing in the 2008-09 school year for the standards review.

State Interviews

We will interview the full population of state Title III and state assessment directors from all 50 states and the District of Columbia. We anticipate a 100 percent response rate from the states, as we have achieved in previous studies (e.g., LeFloch et al., 2007).



Subgrantee Survey

Sampling Criteria. The sampling population for this survey will be Title III subgrantees (N ≈ 5,000). In most cases, subgrantees are school districts, but in a minority of cases, consortia of school districts have been assembled because individual districts have very small numbers of English language learner (ELL) students. Thus, the reporting unit will be the subgrantee, and we will use the term “subgrantee survey” to describe the data collection.

The sampling frame will be constructed using data from EDFacts and other sources. All school districts that are reported to serve any students under Title III will be included. For states that have missing or questionable EDFacts data, we will work with ED to generate a more accurate list. The key variables needed for the sampling frame are the NCES district ID number and the number of students served under Title III. With the district ID number, we should be able to generate other useful information for sampling and analytic purposes.

We will work with the EDFacts data to consolidate data from school districts that are members of Title III consortia. Some consortia can be identified through the EDFacts data, but we may also work with Office of English Language Acquisition (OELA) to create a final list of Title III subgrantees.

We have calculated that a sample of 1,300 subgrantees will provide excellent precision (approximately ±3 percentage points) for full-sample national estimates and the statistical power to detect an 8 percentage point difference (e.g., 46 percent vs. 54 percent) between two subgrantee subgroups. This calculation assumes a power level of .80 and a two-tailed test of statistical significance (p < .05), both common standards, and an observed percentage of 50 percent; the estimate would be more precise if the observed percentage were closer to 0 or 100 percent. The calculation also assumes equally weighted districts; because districts will actually have unequal weights, precision will be somewhat lower, depending on the standard deviation of the weights.
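These figures can be verified with a standard normal-approximation calculation. The short sketch below (Python with scipy; an illustrative check, not part of the study's analysis plan) reproduces the roughly ±3 percentage point margin of error for 1,300 completed surveys at an observed percentage of 50, and the approximate .80 power to detect a 46 percent versus 54 percent difference between two equally sized subgroups, ignoring unequal weights.

```python
from math import sqrt
from scipy.stats import norm

n = 1300        # completed subgrantee surveys (full sample)
p = 0.50        # assumed observed proportion (most conservative case)
alpha = 0.05    # two-tailed significance level

# Margin of error for a full-sample national estimate, assuming equal weights;
# unequal weights would widen this somewhat.
z = norm.ppf(1 - alpha / 2)
moe = z * sqrt(p * (1 - p) / n)
print(f"Margin of error: +/-{moe:.3f}")   # about +/-0.027, i.e., roughly 3 points

# Power to detect an 8 percentage point difference (46% vs. 54%)
# between two subgroups of 650 completed surveys each.
p1, p2, n_group = 0.46, 0.54, n // 2
se_diff = sqrt(p1 * (1 - p1) / n_group + p2 * (1 - p2) / n_group)
power = norm.cdf(abs(p2 - p1) / se_diff - z)
print(f"Power for an 8-point difference: {power:.2f}")   # about 0.82
```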

Proposed Sample. The goal of the survey is to receive 1,300 completed survey responses. In order to generate this number, we will sample approximately 1,530 subgrantees. This will allow for errors in the sampling frame (e.g., districts no longer receiving Title III subgrants) and for 15 percent non-response.

Sample selection will be a compromise between equal-probability selection and probability-proportional-to-size selection, involving three steps:

  1. Creating five strata of subgrantees based on number of students served under Title III. The number of subgrantees in a stratum will be inversely related to the number of students per subgrantee, so that each stratum contains roughly the same numbers of ELL students.

  2. Determining the target number of sampled cases per stratum. The stratum with the largest subgrantees will likely involve sampling with certainty. Other strata will have lower sampling probabilities. The goal will be to produce sampling probabilities that are a compromise between equal probabilities and probabilities-proportional-to-size (PPS). For example, the stratum with the smallest subgrantees may have an average sampling probability of 1 in 20. The actual numbers of subgrantees in each stratum and sampling probabilities within strata will need to be determined after the sampling population list has been assembled.

  3. Sorting the sampling list within each stratum by State and number of Title III students, and selecting subgrantees using systematic PPS procedures. Thus, within a stratum, subgrantees with more Title III students will have a higher probability of selection, and the sampling procedure will assure a good representation of subgrantees across States.

We will calculate analytic weights such that the sample can be weighted to represent the national population of Title III subgrantees. We will analyze non-response and adjust for non-response in the weighting of subgrantees.
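To illustrate the selection and weighting steps described above, the sketch below (Python; the stratum records and field names are hypothetical, not the actual sampling frame) draws a systematic probability-proportional-to-size sample within a single stratum, computes base weights as the inverse of each unit's selection probability, and applies a simple nonresponse adjustment of the kind referred to in the preceding paragraph.

```python
import random

def systematic_pps(frame, n_sample, size_key="title3_students", seed=1):
    """Systematic PPS selection over a frame already sorted by state and size (sketch only)."""
    random.seed(seed)
    total = sum(u[size_key] for u in frame)
    interval = total / n_sample
    start = random.uniform(0, interval)                  # single random start
    hits = [start + k * interval for k in range(n_sample)]
    sample, cum, i = [], 0.0, 0
    for unit in frame:
        cum += unit[size_key]
        while i < n_sample and hits[i] < cum:
            prob = min(1.0, n_sample * unit[size_key] / total)  # selection probability
            sample.append({**unit, "base_weight": 1.0 / prob})
            i += 1
    return sample

# Hypothetical stratum of small subgrantees, sorted by state and Title III enrollment.
stratum = sorted(
    [{"id": i, "state": s, "title3_students": n}
     for i, (s, n) in enumerate([("AL", 12), ("AL", 30), ("AZ", 25),
                                 ("CA", 55), ("CA", 80), ("TX", 40)])],
    key=lambda u: (u["state"], u["title3_students"]))

sample = systematic_pps(stratum, n_sample=3)

# Simple nonresponse adjustment: inflate respondents' base weights so they
# represent the full selected sample within the stratum.
respondents = sample[:-1]   # suppose the last sampled subgrantee did not respond
adjustment = sum(u["base_weight"] for u in sample) / sum(u["base_weight"] for u in respondents)
for u in respondents:
    u["final_weight"] = u["base_weight"] * adjustment
```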

Case Studies

Sampling Criteria. We will conduct case studies in a purposive sample of five states. States will be chosen for their own unique context, thereby capturing some of the considerable variation in how Title III is implemented across states and providing the basis for an in-depth analysis of how implementation plays out in each of the selected states.

In consultation with the Contracting Officer’s Representative (COR) and ED staff, and drawing on input from our technical working group (TWG), we have identified a tentative sample of five states based on the following criteria:

  • Types of limited English proficient (LEP) populations.

  • Approaches to and stage of implementation of Title III provisions.

  • Approaches to curriculum, assessment, and instruction for LEP students.

  • Overlap with state assessment analysis.

  • Regional location.


Proposed Sample. Our preliminary set of five states based on these criteria includes states with the characteristics listed in Exhibit 8. The final sample selection will be made in consultation with the COR and other ED staff.

Exhibit 8. Proposed Case Study Sample Characteristics

Region | State Size | LEP Population | ELP Assessment
West | Large | Large | State-designed
Midwest | Medium | Diverse | State-designed
Southwest | Small | Diverse | State-designed
Northeast | Large | Large, diverse | State-designed
Southeast | Medium | Fast-growing | WIDA-ACCESS1



Once the selection of states has been finalized, the research team will ask state staff to suggest candidate districts along several key sampling dimensions. We will then select two districts to visit in each of the selected states, and three in the larger states. The purposive case study district sample is not intended to generalize to a larger population of districts. Decisions about which districts to visit will be based on state recommendations, information learned in state interviews (and, if possible, district surveys), a review of extant data, and the desired variation across the full sample of 12 districts. District selections will then be submitted to ED for final approval. The visits to districts will be a central focus of the case studies. Across all districts visited, we will aim to obtain variation along the following dimensions:

  • Size and percentage of LEP population;

  • Diversity of LEP population (i.e., some districts with a range of language groups represented and others with one group that predominates);

  • Nature of LEP population (i.e., newcomers, long‑term LEP students, migrants, etc.);

  • Curriculum and instructional approaches;

  • A mix of districts that have met Annual Measurable Achievement Objectives (AMAO) targets and those that have not; and

  • At least one district that participates in a consortium in order to qualify for Title III funds.

The research team will ensure that a district survey is collected from each of the districts selected for case study visits, thereby gathering data from multiple sources on each district and eliminating the need to spend time during case study interviews on easily reported data elements. Within each district, the team will work with district officials to identify two schools to be visited (one elementary and one secondary, where possible). Researchers will review extant data in order to make strategic decisions about which schools to target in each district. Given that states and subgrantees are required under this grant program to provide data for this and other evaluation purposes, and that case study respondents will be volunteers, we anticipate a 100 percent response rate.

Student Assessment Data

Sampling Criteria. We will conduct analyses of student-level assessment data in a purposive sample of six states and two districts from another key state that contains a large proportion of the nation’s LEP students but lacks a student-level, longitudinally linked data system. We will need student-level data on both ELP and academic content assessments for several consecutive years. Therefore, we will select states with data systems that have a unique student identifier, have administered their current English language arts and mathematics tests for several years (ideally, vertically equated tests), and are able to link ELP test data to academic content test results. The ultimate selection of states will be contingent on the availability and quality of data and may result in a smaller sample of states. To the extent possible, we will also seek to maximize variation in the sample of states by including: states using each of the major ELP consortium assessments as well as several off-the-shelf or state-specific tests; states with academic content tests and accommodations policies of varying types; states with large and dense LEP populations and fast-growing LEP populations; states with primarily a Spanish-language population and states with diverse language groups; and states across a fair geographic distribution. We expect that the sample of states for the student achievement analyses will overlap somewhat with the states selected for case studies. We propose to conduct state interviews with a pool of up to nine states that are likely candidates for the sample before finalizing the selection of states to be included in the student-level assessment data collection.

Proposed Sample. Our proposed sample for the analysis of student assessment data is provided in Exhibit 9. We will select the final sample in consultation with the COR and other ED staff.



Exhibit 9. Proposed Student Assessment Data Sample

Region | State Size | Quality of Data System | LEP Population | ELP Assessment
South | Medium | Unknown | Fast-growing | ELDA2
West | Large (two large districts) | Strong | Large | State-developed
Southeast | Large | Strong | Large | State-specific
Mid-Atlantic | Medium | Strong | Moderate | LAS Links3
Northeast | Large | Strong | Large, diverse | State-developed
Southeast | Medium | Strong | Fast-growing | WIDA-ACCESS
Southwest | Large | Strong | Large | State-developed



Several other potential states that could serve as alternates include:

  • Western medium-sized state with a large number and density of LEP students, and a reportedly strong data system;

  • Southeastern large state using WIDA-ACCESS, with a fast-growing LEP population, but unlikely to release data;

  • Midwestern large state using WIDA-ACCESS with a large number of LEP students, and a reportedly strong data system;

  • Midwestern medium-sized state using LAS Links, with a fast-growing LEP population, and a reportedly strong data system;

  • Midwestern large state using ELDA, but unlikely to release data;

  • Midwestern medium-sized state using a state-specific test;

  • Southeastern medium-sized state that has used ELDA for a longer period than the others, with a fast-growing LEP population and a distinctive accommodations policy, but an unknown data system.

State Performance Data

We will summarize state-level performance data on percentage proficient and AMAO performance for all 50 states and the District of Columbia to the extent that EDFacts and CSPR data are available for each state.

  2. Procedures for Data Collection

The data collection procedures of each of the six main components of our study (standards review, state interviews, subgrantee (district) survey, case studies, student assessment data, and state performance trends) are discussed in detail below. (Analytic and estimation procedures as well as estimates of precision were discussed above in the overview of the study under the section titled ‘Analytic Approach.’)

Standards Review

The research team will work together to collect and organize relevant extant documentation. Chief among these will be state ELP standards, which are the subject of specific evaluation questions. The standards review will involve the collection of ELP standards in effect for school year 2008-09 for each state and consortium from states’ websites or from other sources such as the ELP consortia. We will also collect other supporting documents that are intended to be used in conjunction with the ELP standards (e.g., exemplars of student work/performance reflecting the standards; supporting documentation about the rationale and evidentiary background to the standards, teaching guides or other implementation documents) as they are publicly available.

In addition, we will collect (primarily from websites) state Title III plans, Consolidated State Performance Reports (CSPRs), descriptions of state technical assistance to Title III districts, state AMAO documentation, Biennial Evaluation data, and technical manuals for state ELP assessments. We will work to obtain these documents for all 50 states and the District of Columbia (acknowledging that some types of documents are more easily accessible than others). We will collect some extant data in the fall of 2008 to inform the development of the study design, but most extant data collection will take place in spring 2009, in conjunction with other data collection activities.

State Interviews

Our interview protocols for state Title III directors and assessment directors are designed to communicate questions in a clear, conversational manner; generate systematic quantifiable data across all states; allow respondents to provide adequate contextual information on their state’s approaches; and include appropriate questions from the SSI-NCLB interviews in 2004 and 2006 to examine change over time. Drafts of the state data confirmation document and state interview protocol are provided in Appendix C, along with materials to notify states about the study and schedule a time for the state interview. Some of the key topics to be covered in the interviews include the following:

  • Updated information about ELP standards and assessment development or revision, building on rather than duplicating what was learned in the SSI-NCLB study;

  • State criteria used to set AMAOs and state policies for implementing AMAOs (including policies for minimum subgroup size and criteria used to determine when students exit from the LEP subgroup);

  • Implementation of AMAOs, including data systems in place to monitor AMAO data and progress, and technical assistance and support provided to districts;

  • Accountability and improvement actions for districts failing to meet AMAO targets for 2 and 4 years, including any technical assistance provided by the state;

  • Ways in which information is communicated from state to district and from school to parents, particularly with regard to districts failing to meet AMAO targets;

  • Ways in which small LEAs form consortia for Title III purposes and how accountability is established within those consortia;

  • Extent to which characteristics of the state’s LEP population play a role in the implementation of state policies;

  • Ways in which results of ELP assessments and content area assessments are being considered together; and

  • Promising practices with regard to state policies and practices for Title III implementation.



All interviewing staff will have prior experience conducting interviews, will have reviewed results from piloting of interviews during instrument development, and will be trained to ensure consistency across interviewers prior to data collection. In preparation for these interviews, the research staff will use their prior knowledge of each state and a thorough document review to annotate sections of the protocol. The purpose of this annotation is to identify areas for clarification and further elaboration, reduce duplication, and maximize the productivity of each interview. Prior to scheduling interviews, we will develop consent forms and review them through our Institutional Review Board (IRB).

The research team will conduct telephone interviews with state officials. The interview format will allow for some standardization across questions asked as well as an opportunity for respondents to elaborate on their responses in order to shed light on the reasons behind them. Further, in cases where the state interviews will be used to inform case studies, this interview format will allow us to get the detail needed to prepare for the case studies effectively.

Where necessary (i.e., when the Title III director does not know the answers to the assessment questions on the protocol), the research team will conduct a short interview with the assessment director focused only on the questions relevant to state ELP assessments. This will avoid unnecessary duplication of information from the assessment and Title III directors while also enabling the team to obtain data on all questions of the protocol.

Primary research staff will conduct the interviews, which will last approximately one hour. Each interview will be recorded digitally. Notes will be summarized following each interview. In the event that exact quotes or verification is needed, the audio file will be available as a backup. Research staff will commence interviews with states no later than October 1, 2009, and will complete them no later than January 15, 2010.

Subgrantee Survey

We are employing a well-tested process for developing the district questionnaire. The process involves:

  1. Defining the content areas to be addressed by the questionnaire in a construct matrix;

  2. Developing a detailed instrument outline mapped to the evaluation questions and the matrix;

  3. Identifying questionnaire items from previous research studies that can be used or adapted and drafting new items as needed;

  4. Creating a first draft of the questionnaire;

  5. Having the questionnaire reviewed by senior project staff not associated with questionnaire development;

  6. Having the questionnaire reviewed by the TWG and ED staff;

  7. Conducting a pilot test of the questionnaire and cognitive interviews with fewer than nine potential survey respondents; and

  8. Developing a final version of the questionnaire.

We have completed steps 1-6 of this process. After the final version of the questionnaire has been developed, it will be passed to our Web design specialist, who will create a Web-based version. This version will be thoroughly tested by our staff before it is released to district respondents. Drafts of the subgrantee survey, survey instructions, and respondent notification materials are provided in Appendix D.

The purpose of the subgrantee (district) survey is to determine how state policies relating to LEP students are being implemented at the local level. The subgrantee survey will give us a better understanding of how state policies are implemented in varying local contexts as well as insights into more locally developed policies relevant to the Title III focus on ELL student achievement. Topics likely to be addressed in the district survey include the following:

  • Measures and methods used to assess students for identification as LEP, for placement in LEP programs, and to monitor progress in such programs;

  • Criteria used for defining LEP status, for determining placement in LEP programs, and for determining exit from LEP status and LEP services;

  • Typical lengths of time that students remain in LEP status and LEP services;

  • Whether accountability issues influence judgments about how long students remain in LEP status and LEP services;

  • Procedures used by districts to report accountability information to states;

  • Title III accountability supports and consequences the district has experienced;

  • Title III communications and technical assistance received from the state;

  • District policies on curriculum, instruction, and programs for LEP students; and

  • District professional development and staffing activities to develop capacity to improve instruction to LEP students.

One basic issue for survey design relates to how to adapt the questionnaire for use by consortia of school districts (assuming that the consortia, rather than the districts within them, are the sampling unit). To address this issue, we will ask for information about the district in which the consortium director is housed. Our rationale for doing so is that the consortium directors will be the first point of contact during survey administration, and they are most likely to be able to respond accurately about their own districts. Also, the consortium director’s district is likely to be a fair proxy for the consortium as a whole.

We also plan to consider the applicability of each survey question to elementary and secondary grade levels and adapt the questions and response sets as needed. Likewise, given the complex relationship between LEP status and LEP services in individual school districts, the design of the questionnaire will allow districts to respond to separate questions about LEP students and students receiving LEP services.

Once data collection has been approved by OMB, we will send the sampled districts a package containing: (1) a cover letter; (2) promotional material describing the evaluation; (3) a copy of the district questionnaire for review; and (4) instructions for completing the online, web-based version of the questionnaire. The instructions will contain a unique ID number for each school district, to protect the security and confidentiality of survey responses. Survey data will be stored in files containing only ID numbers, without any identifying information, as a further measure of confidentiality.

Case Studies

The purpose of the case studies is to illuminate the ways in which states and districts are approaching the implementation of Title III standards, assessments, and accountability systems (and to complement the analyses in other aspects of the evaluation). Specifically, the case studies will take a closer look at how states and districts assess LEP students and at accountability measures for districts that are not meeting AMAO targets.

The research team has developed protocols to guide interviews at the state, district, and school levels for case studies to be conducted in five states. Draft versions of the case study district interview protocols are provided in Appendix E, along with copies of the case study informed consent forms, recruitment letters, and promotional materials. Draft versions of the case study focus group protocols, informed consent forms, and promotional materials are provided in Appendix F. The visits to districts will be a central focus of the case studies and will target the following key areas:

  • Implementation environment and context

  • Implementation of state policies

  • Accountability measures for districts not making AMAOs

  • Curricular, programmatic, and assessment decisions at the district level with regard to LEP students and how these decisions are made

  • Professional development, staffing actions, and other assistance provided to principals and teachers (mainstream and LEP), particularly in districts that have missed AMAOs

The nature of the state-level interviews for case studies will be informed by the initial interviews that take place with state Title III directors and state assessment directors across the evaluation, and will focus on promising practices or policies related to Title III implementation.

The case study protocols have been designed to explore the evaluation questions without undue burden to the respondents. They have been designed to allow the interviewer to probe the key constructs and variables from our conceptual framework. In particular, we recognize the importance of developing broad questions to initiate discussion and allow the participants to express themselves, followed by probes that elicit insights in crucial areas. The questions avoid language that may be loaded or leading. A thorough preparation and review of data collection instruments is being conducted and consists of three essential steps (the first two are complete):

  1. Review of relevant literature and the evaluation questions, and identification of key variables at the state, district, and school levels.

  2. Review of existing instruments to seek valid and reliable means of collecting the required information and appropriate adaptation to the questions that drive this study. Additionally, interview questions will be tailored to each district’s policies based on our background research and previously gathered information about each site.

  3. Piloting of the instruments to determine if the protocols are an appropriate length, if the questions are understood appropriately by respondents, and if we are inadvertently omitting some important topics.

In preparation for the case studies, the research team will create and disseminate a packet of information to send to all the potential participants. This packet will include both an introductory letter and an informational brochure designed in a nontechnical manner. The introductory letter is extremely important for informing the respondents about the study and encouraging states, districts, and schools to participate. The letter will introduce the study, its purpose, the importance of the data collection, the anticipated products, the confidentiality of the data, aggregate reporting, provisions for maintaining the anonymity of the data, benefits of the study, and the names of the contractors. We will include all necessary contact information so that potential participants can contact the research team if they have questions. In addition, we will impress on the respondents the importance of participation, and will provide assurance that we have appropriate OMB clearance. All site visitors will participate in a common training before conducting site visits.

Two researchers will conduct visits to two to three districts in the five case study states (for a total of 12 districts). Each visit will take place over the course of approximately 4 days and will include interviews with district staff, school visits, and focus group interviews with school-level staff. Interviews with state officials will also be included in the case studies, although these may occur over the phone rather than during the visit itself, depending on scheduling considerations.

The site visit schedule for each state will be developed in concert with the appropriate stakeholders within that state. The same pair of researchers will be responsible for scheduling and conducting the visits. They will use those early contacts to develop a relationship with staff at each site and to gain an understanding of contextual and geographic considerations to inform the planning of the visit.

Throughout the process of data collection and reporting, the research team will make all efforts to protect the privacy of respondents participating in the case studies. We will not identify by name any of the interviewees; nor will we attribute quotes that could be construed in a negative manner. Although we will identify the names of states in the final reporting of case studies, districts and schools will be identified by pseudonyms.

Student Assessment Data

The collection of state assessment data begins with the identification of states that are most likely to maintain the necessary types of student-level longitudinally linked achievement data. One of the major goals of this study is to analyze general achievement trends of LEP, former or “monitored” LEP, and non-LEP students over time. In order to estimate these general trends, it is necessary to have access to various student-level variables from state and district databases over several years.

We will limit our request to states to the key variables needed to perform our analyses, given sensitivity to state burden and state concerns related to potentially individually identifiable data. In particular, we would like to request access to the following student-level variables of the state and district databases:

  • Unique pseudo student identifiers (consistent over time to replace unique student ID)

  • Unique School/District I.D. or unique pseudo identifiers (consistent over time)

  • Student-level scale scores in state content assessments in reading/English language arts (R/ELA) and mathematics

  • Student-level proficiency levels in state content assessments in reading/English language arts (R/ELA) and mathematics

  • Student-level scale scores in ELP assessments (for each domain and composite score)

  • Student-level proficiency levels in ELP assessments (for each domain and composite score)

  • ELP assessment grade cluster form that was administered

  • Limited English proficient status

  • Information on student’s years-in-program & type of program (if available)

  • If redesignated, date or year of redesignation

  • Student with disability status and special education services received (if available)

  • Test accommodations used

  • Grade

  • Birth Month and Year

  • Gender

  • Primary ethnicity

  • Primary language

  • Eligibility for free or reduced price lunch

  • Years living in the U.S.

  • Years a student has been attending a school in that district or state



States or districts do not need to provide actual unique student identifiers. They will be instructed to replace actual names and unique student identifiers with unique pseudo identifiers (random numbers that are consistently associated with the same single student over time). Alternatively, they may format the database such that each row in the data file includes all of the above information across all years for a single student and then strip all unique student identifiers. This formatting approach will enable us to link student characteristics (i.e., gender, ethnicity, etc.) to longitudinal student academic achievement, which is critical for our analysis.
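A minimal sketch of the first option is shown below (Python; the file names and the student_id column are hypothetical). It assigns each real student identifier a random pseudo identifier that stays the same across years, so records can still be linked longitudinally after the real identifiers are removed.

```python
import csv
import secrets

pseudo_ids = {}  # real student ID -> stable random pseudo ID

def pseudo_id(real_id):
    """Return the same random pseudo identifier every time a given real ID is seen."""
    if real_id not in pseudo_ids:
        pseudo_ids[real_id] = secrets.token_hex(8)
    return pseudo_ids[real_id]

# Hypothetical yearly extract files, each with a 'student_id' column.
for year in ("2005-06", "2006-07", "2007-08"):
    with open(f"assessment_{year}.csv", newline="") as src, \
         open(f"assessment_{year}_deidentified.csv", "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            row["student_id"] = pseudo_id(row["student_id"])  # replace the real ID
            writer.writerow(row)

# The pseudo_ids crosswalk remains with the state or district and is never shared.
```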

The longitudinal analyses in this study require that we obtain several years of data. We will seek at least three, and preferably five, years of data for each state and district, through 2007-08. If states or districts included in the original sample do not have a long enough history of high-quality data, an alternative state or district may be used in this analysis component.

The data collection will begin with a telephone contact with the state’s Title III and assessment directors or their offices’ designees to confirm the existence of the necessary data and to secure the state’s cooperation. To reduce state and district burden and promote efficient provision of data files for the study, we will provide substantial technical assistance to states and districts in their delivery of data. We will provide clear written specifications of the data elements required, conduct a brief telephone interview to clarify data availability, provide follow-up telephone consultation, and accept files in the format in which the state maintains them.

State departments of education, in particular their assessment divisions, are typically familiar with output data layouts, which are mechanisms that clarify what data should be extracted from the warehouse and in what format. In each state where data are to be collected, the research team will provide the assessment/accountability divisions with the data layout and any technical assistance necessary so that data can easily be extracted from the warehouse in a format that is consistent with the data layout and sent to us in a clear and understandable format. This procedure will greatly simplify and make transparent the data extraction and delivery process.

In previous interactions with state and district data staff, student assessment data have been extracted from data warehousing and management systems and sent to the research team as a flat file using common delimiters (e.g., commas or tabs). We anticipate the same method of data transfer from the states and districts during the course of this study, which would permit the research team to store and manage these data using common data warehousing systems. Because these data will be subject to statistical analyses, Stata is our preferred system for storing and managing the student-level data. We propose Stata based on considerations that include the projected number of users, the probable size of the database, transaction logging, data integrity and security, and administration and performance.
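As a small illustration of handling such deliveries, the sketch below (Python; the file name and its contents are hypothetical, and this is separate from the Stata-based storage described above) detects whether a delivered flat file is comma- or tab-delimited and loads the records for inspection.

```python
import csv

def load_flat_file(path):
    """Load a state-delivered flat file whose delimiter may be a comma or a tab."""
    with open(path, newline="") as f:
        dialect = csv.Sniffer().sniff(f.read(4096), delimiters=",\t")
        f.seek(0)
        return list(csv.DictReader(f, dialect=dialect))

# Hypothetical delivery file containing ELP and content assessment records.
records = load_flat_file("state_X_assessment_extract_2007-08.txt")
print(f"Loaded {len(records)} student records with fields: {list(records[0]) if records else []}")
```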

State Performance Data

State performance data will be analyzed across 50 states and the District of Columbia for school years 2004-05 through 2007-08. Our data sources will include the Biennial Report, which covers the school years of 2004-05 and 2005-06; the Consolidated State Performance Reports (CSPRs), which cover the school years of 2004-05, 2005-06, and 2006-07; and EDFacts, which covers part of the 2006-07 school year and all of 2007-08. In addition, we will collect supplemental information about state assessments through state education departments’ websites. The specific variables we will use include:

  • Percentage of Title III served LEP students making progress in learning English

  • Percentage of Title III served LEP students attaining English language proficiency

  • Percentage of students proficient or advanced in reading or Language Arts among grade 3-12 LEP subgroup identified for Title I services

  • Percentage of students proficient or advanced in mathematics among grade 3-12 LEP subgroup identified for Title I services

  • Whether states have met AMAOs

  • Number and percentage of subgrantees meeting all AMAOs

  • Number and percentage of subgrantees meeting AMAO1 (making progress)

  • Number and percentage of subgrantees meeting AMAO2 (attaining proficiency)

  • Number and percentage of subgrantees meeting AMAO3 (meeting AYP)

  • Number and percentage of subgrantees not meeting AMAOs for two consecutive years

  • Number and percentage of subgrantees not meeting AMAOs for four consecutive years
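To illustrate how these variables could be tabulated once subgrantee-level results are assembled, the sketch below (Python; the records shown are hypothetical, not EDFacts or CSPR data) counts the number and percentage of subgrantees meeting each AMAO, and all AMAOs, for one state and year.

```python
from collections import Counter

# Hypothetical subgrantee-level AMAO results for a single state and year:
# AMAO 1 = making progress, AMAO 2 = attaining proficiency, AMAO 3 = meeting AYP.
subgrantees = [
    {"id": "district_A",   "amao1": True,  "amao2": True,  "amao3": False},
    {"id": "district_B",   "amao1": True,  "amao2": False, "amao3": False},
    {"id": "consortium_C", "amao1": True,  "amao2": True,  "amao3": True},
]

counts = Counter()
for sg in subgrantees:
    for key in ("amao1", "amao2", "amao3"):
        counts[key] += sg[key]                              # True counts as 1
    counts["all"] += all(sg[k] for k in ("amao1", "amao2", "amao3"))

n = len(subgrantees)
for key in ("amao1", "amao2", "amao3", "all"):
    print(f"{key}: {counts[key]} of {n} subgrantees ({100 * counts[key] / n:.0f}%)")
```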



  3. Methods to Maximize Response Rate

Data collection is a complicated process that requires careful planning. The research team has developed interview, survey, and data collection protocols that are streamlined and designed to place as little burden on respondents as possible. The team will also pilot and subsequently refine all instruments to ensure they are user-friendly and easily understandable, which increases respondents’ willingness to participate in the data collection activities and thus increases response rates.

To further ensure a high response rate on the district survey, we will not rely entirely on Web-based administration. We will supplement our follow-up activities with telephone prompts as necessary, and a hard-copy pencil-and-paper questionnaire will be mailed to any respondent who requests one. Further, after mailing the packages to school districts, we will prepare a twice-weekly log of all responses received online and by mail. Approximately two weeks after the initial mailing, we will begin the process of survey follow-up. We will send a letter reminding respondents about the survey. After two more weeks, we will implement a series of three follow-up calls at approximately ten-day intervals. During the third call, we will offer to complete the questionnaire as a telephone interview. The research team has extensive experience administering Web- and e-mail-based surveys with high response rates using these procedures.

Because of these practices, and given that under this grant program the states and subgrantees are required to provide data for this and other evaluation purposes, we anticipate 100 percent response rates from the states and an 85 percent or greater response rate from the districts.

  4. Expert Review and Piloting Procedures

For the current evaluation, we anticipate a pilot test of the district questionnaire with a total of six to eight districts located in different states. We will use a paper version of the questionnaire for the pilot test to limit Web site development costs. We will ask respondents to complete the questionnaire in a natural manner, noting only the length of time required for completion. After the paper form is mailed or faxed to us, we will conduct a telephone interview with the respondents, asking about (1) the clarity and usefulness of the questionnaire instructions; (2) the overall organization and flow; (3) the clarity of questionnaire wording and language; (4) specific items that were unclear or difficult to answer; (5) any recommended changes; and (6) any other comments on the questionnaire. Responses from these interviews will be used in making any final changes to the questionnaire.

The state interview and case study protocols will also be pilot tested. The research team will work closely with ED to finalize the protocols to be piloted and identify states and districts where they can be tested. We propose to identify two states and two districts, each with varying characteristics, to conduct the initial pilot through phone interviews. Lessons learned during the piloting process will inform refinements to protocols, as well as procedures for scheduling and conducting interviews. Prior to conducting state visits, we will develop consent forms for interviews and have them and the protocols reviewed by IRB.

  5. Individuals and Organizations Involved in Project

AIR is the contractor for the Evaluation of State and Local Implementation of Title III Standards, Assessments, and Accountability Systems and, in collaboration with edCount and Windwalker, will carry out the study activities. Dr. Jennifer O’Day will serve as principal investigator and Dr. James Taylor as project director. Drs. Kenji Hakuta and David Francis, and Ms. Andrea Cook Ramsey will serve as consultants to the study, with Dr. Ellen Forte (edCount), and Mr. Howard Fleischman (Windwalker) serving as senior advisors and team leaders to the project for the standards and student assessment review and subgrantee survey, respectively. Drs. Kerstin LeFloch and Miguel Socias of AIR will be the analysis and reporting team co-leaders.

During data collection and particularly during the initial phase of analysis, we will draw on the cross-staffing of some key members of the project, including the project director and team leaders, to combine findings across these data sources to create the synergies that are at the heart of our mixed methods design. Contact information for these individuals and organizations is presented in Exhibit 10.

Exhibit 10. Organizations, Individuals Involved in Project

Responsibility | Organization | Contact Name | Telephone Number
Principal Investigator | AIR | Dr. Jennifer O’Day | 650-843-8166
Project Director | AIR | Dr. James Taylor | 202-403-5607
Case Study Team Leader | AIR | Dr. Kerstin LeFloch | 202-403-5649
Student Assessment Data Team Leader | AIR | Dr. Miguel Socias | 650-843-8271
Consultants:
Consultant | Stanford University | Dr. Kenji Hakuta | 650-723-5620
Consultant | University of Houston | Dr. David Francis | 832-842-7018
State Interview Team Leader and Consultant | NA | Ms. Andrea Cook Ramsey | 571-216-0187
Subcontractors:
Standards and Assessments Review Team Leader | edCount | Dr. Ellen Forte | 202-302-5652
Subgrantee Survey Team Leader | Windwalker | Mr. Howard Fleischman | 703-970-3527



References

Birman, B., LeFloch, K. C., Klekotka, A., Ludwig, M., Taylor, J., Walters, K., Wayne, A., & Yoon, K. S. (2007). State and local implementation of the No Child Left Behind Act, Volume II – Teacher Quality under NCLB: Interim Report. Washington, DC: U.S. Department of Education, Office of Planning, Evaluation and Policy Development, Policy and Program Studies Service.

Center on Education Policy. (2006). From the Capital to the Classroom: Year 4 of the No Child Left Behind Act. Washington, DC. Retrieved May 31, 2006, from http://www.cep-dc.org/NCLB/Year4/CEP-NCLB-Report-4.pdf

Lau v. Nichols, 414 U.S. 563 (1974).

LeFloch, K., Martinez, F., O’Day, J., Stecher, B., & Taylor, J. (2007). State and local implementation of the No Child Left Behind Act, Volume III – Accountability under NCLB: Interim Report. Washington, DC: U.S. Department of Education, Office of Planning, Evaluation and Policy Development, Policy and Program Studies Service.

National Clearinghouse for English Language Acquisition and Language Instruction Educational Programs (NCELA). (2008). History. Retrieved June 23, 2008, from http://www.ncela.gwu.edu/policy/1_history.htm

Parrish, T. B., Merickel, A., Pérez, M., Linquanti, R., Socias, M., Spain, A., Speroni, C., Esra, P., Brock, L., & Delancey, D. (2006). Effects of the implementation of Proposition 227 on the education of English learners, K–12: Findings from a five-year evaluation. Palo Alto, CA: American Institutes for Research.

Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods (2nd ed.). Newbury Park, CA: Sage.

U.S. Department of Education. (2007, October). Framework for high-quality English language proficiency standards and assessments: Draft. Washington, DC: U.S. Department of Education, LEP Partnership, Office of the Deputy Secretary of Education, by the Assessment and Accountability Comprehensive Center.

1 World-Class Instructional Design and Assessment

2 English Language Development Assessment

3 Language Assessment System Links

