OMB: 1850-0832

Conversion Magnet Schools Evaluation


OMB Clearance Request






April 2007




Prepared for:

Institute of Education Sciences

United States Department of Education

Contract No. ED-04-CO-0025/0009



Prepared By:

American Institutes for Research®

Table of Contents





Appendices

Appendix A: Authorizing Legislation for Evaluation of Magnet Schools Assistance Grants

Appendix B: Grantee Screening Protocol-Module A: MSAP Project Director Interview

Appendix C: Grantee Screening Protocol-Module B: District Assessment Representative Interview

Appendix D: Grantee Screening Protocol-Module C: District School Choice Coordinator Interview

Appendix E: Grantee Screening Protocol-Module D: District Data Management System Representative Interview

Appendix F: Grantee Screening Notification Letter to MSAP Project Director

Appendix G: Feasibility Phase Brochure

Appendix H: Purposes of Variables in Grantee Screening Protocol for Modules A through D

Appendix I: Student Records Data Collection Plan

Appendix J: Evaluation Phase Brochure

Appendix K: Purposes of Variables in Student Data Collection Plan

Appendix L: Principal Survey 2006-2007 School Year

Appendix M: Purposes of Variables in Principal Survey

Appendix N: MSAP Project/School Choice Coordinator Interview Guide

Appendix O: Notification Letter to Coordinator for Follow-Up Interview

Appendix P: Purposes of Variables in Coordinator’s Follow-Up Interview

List of Exhibits





Introduction


Since the mid-1970s, magnet schools have been critical to school districts’ efforts to implement voluntary desegregation plans and, in some cases, court desegregation orders. More recently, they have become an important component of public school choice options available to parents, and admissions policies of such programs have evolved to include considerations other than ethnic balance—e.g., promoting socio-economic diversity, and providing options to families who want to move their children out of underperforming schools. It is estimated that, in 1999-2000, the U.S. had about 3,000 magnet schools enrolling 2.5 million students.1


The federal government has supported the development of magnet schools through the Magnet Schools Assistance Program (MSAP) since the mid-1980s. Legislative authority for MSAP is found in the Elementary and Secondary Education Act of 1965, as amended, Title V, Part C; 20 U.S.C. 7231-7231j.2 The program was most recently amended by sections 5301-5311 of the No Child Left Behind Act of 2001 (NCLB). The program awards 3-year grants to school districts for use in implementing new or revised magnet programs in elementary and secondary schools. In a given year, about 10 percent of the nation’s public magnet schools receive support from MSAP. During the current funding cycle (grants awarded in 2004), MSAP supported programs in over 200 schools located in 50 districts. The annual appropriation for MSAP between 2004 and 2006 averaged $107.7 million.3


Section 5310 of the NCLB statute authorizes the Secretary of Education to use MSAP monies to evaluate the program. The National Center for Education Evaluation (NCEE) of the Institute of Education Sciences (IES), in collaboration with the Office of Innovation and Improvement (OII), has awarded an 18-month contract to the American Institutes for Research (AIR) and its subcontractor, Berkeley Policy Associates (BPA), to assess the feasibility of evaluating the relationship between magnet school conversion, minority group isolation, and student achievement at the elementary school level. Depending on the results of the feasibility phase of the study, NCEE may exercise the option to extend the contract to a total of 60 months by authorizing work on additional tasks to implement the actual evaluation.


The full Conversion Magnet Schools Evaluation would investigate the relationship between the introduction of elementary magnet programs using funds from MSAP grants awarded in 2004 or 2007 and student outcomes that include student achievement and minority group isolation. This is a request for OMB clearance to carry out all the data collections that may take place during the two phases of the evaluation. Although the full evaluation will only go forward if the feasibility (and recruitment) phase is successful, the short timeframe of the feasibility period makes it necessary to request clearance now for the evaluation activities as well. Clearance is requested for the following data collection materials:


  • Grantee screening protocols that will be used to identify schools and districts in the 2004 and 2007 MSAP cohorts that are eligible to participate in the study;

  • Plan for collecting student records from participating school districts;

  • Survey of principals of MSAP-funded magnet schools and non-magnet comparison schools participating in the study; and

  • Protocol for conducting an end-of-grant interview with MSAP project director or school choice coordinator in each participating district.


This document contains two sections. The first, introductory section provides background for the data collection instruments for which clearance is sought by describing the policy context within which magnet schools operate, the characteristics of the federally funded magnet schools that are the focus of the Conversion Magnet Schools Evaluation, and the major features of the evaluation study. The second section contains Part A (Justification) of the supporting statement for the Paperwork Reduction Act Submission. A set of appendices contains the instruments for which we are requesting clearance, and a companion document contains Part B (Statistical Methods) of the supporting statement for the Paperwork Reduction Act Submission.

A. Conversion Magnet Schools and Their Students

Federal statute defines a magnet school as a public school or education center that offers a special curriculum capable of attracting substantial numbers of students of different racial backgrounds.4 Most elementary magnet schools begin as “regular” schools that serve the population that lives within the boundaries of a residentially defined service area—the school’s attendance area (also often referred to as an attendance zone). Students in this area who attend the school are its “resident” students. Typically, these schools serve attendance areas with high concentrations of low-income, minority group students who historically have had low levels of academic achievement.


Schools become conversion magnet schools when they introduce a special curriculum or instructional approach (the magnet program) and seek to attract and enroll non-resident students (i.e., students living outside the school’s regular attendance area). The expectation is that by increasing the diversity of students attending the school and involving students in engaging and rigorous academic programs, the conversion will improve the resident students’ academic performance and reduce minority group isolation of students in schools with substantial proportions of minority students.5 Non-resident students are expected also to benefit academically from the quality of the magnet program and the diversity of their classmates. In addition, non-resident students may benefit because the magnet school that their family has chosen for them may be better able to provide them with opportunities that match their particular needs and/or interests.

B. The Conversion Magnet Schools Evaluation

Despite the popularity and persistence of magnet programs, there have been only a few quantitative studies of their relationship to important student outcomes. Results have been mixed, and no definitive conclusions can yet be drawn. Drawing broad conclusions is particularly challenging because the structures and target populations of magnet school programs are so varied, but the studies conducted have treated magnet schools as a single type of intervention. For instance, elementary and secondary programs differ in the degree to which they can capitalize on students’ career and college plans and the pattern of standardized testing that provides evidence of student achievement over time. Some schools operate “programs-within-a-school” (PWSs) in which the magnet program serves only a fraction of their enrollment, while others operate whole-school programs that are designed to affect all of their students. While most magnet schools (particularly at the elementary level) serve both the residents of an attendance area and non-residents who must apply for admission, some magnets have no attendance area and require all students to apply for admission. Magnet programs also vary in maturity—they may be new, well-established, or “stale” and in need of revision to strengthen the curriculum and generate new interest among potential applicants.


The study for which OMB clearance is being requested is an investigation of the relationships between some MSAP-funded magnet schools and the academic achievement and minority group isolation of the students who attend them. The study will avoid some of the limitations of earlier studies by focusing on a single, relatively large group of these schools that have several characteristics in common: they are elementary schools that converted into whole-school magnets during the 2004 or 2007 MSAP grant cycle, and serve a mixed enrollment of resident and non-resident students.


The differing circumstances of the two groups of students necessitate the use of different research designs to study the relationship between conversion magnet schools and student outcomes. Because resident students are not randomly selected for admission to the magnet school, resident student outcomes cannot be examined using an experimental design. Rather, we propose a quasi-experimental, interrupted time series design in which outcomes for resident students in MSAP-funded conversion elementary magnet schools will be compared with those for resident students in matched non-magnet comparison schools in the same district. Preliminary estimates indicate that longitudinal records data for the resident students who attend approximately 50 MSAP-funded elementary conversion magnet schools and approximately 100 comparison schools would be needed.6
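The interrupted time series logic described above can be sketched in a few lines of Python. All of the score values below, and the use of school-level yearly means, are invented for illustration; the actual analysis would be estimated on individual student records with covariates.

```python
import numpy as np

# Hypothetical yearly mean test scores for resident students, three years
# before and three years after magnet conversion (year 0 = first post-grant
# year). All values are invented for illustration.
years = np.array([-3, -2, -1, 0, 1, 2], dtype=float)
post = (years >= 0).astype(float)
magnet_scores = np.array([420.0, 422.0, 424.0, 430.0, 435.0, 441.0])
comparison_scores = np.array([421.0, 423.0, 425.0, 427.0, 429.0, 431.0])

def its_shifts(scores):
    """OLS fit of scores on [1, year, post, post*year]; returns the
    post-conversion level shift and slope shift relative to the pre-trend."""
    X = np.column_stack([np.ones_like(years), years, post, post * years])
    beta, *_ = np.linalg.lstsq(X, scores, rcond=None)
    return beta[2], beta[3]  # (level shift, slope shift)

m_level, m_slope = its_shifts(magnet_scores)
c_level, c_slope = its_shifts(comparison_scores)

# Comparative ITS estimate: the magnet school's post-conversion shifts
# net of the shifts observed in the matched comparison schools.
net_level_shift = m_level - c_level
net_slope_shift = m_slope - c_slope
print(net_level_shift, net_slope_shift)
```

Netting out the comparison schools' shifts is what distinguishes the comparative design from a simple before/after contrast: district-wide changes (new tests, demographic shifts) affect both groups and drop out of the estimate.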


For non-resident students we propose an experimental research design comparing the achievement of students who are randomly assigned to the magnet schools with the achievement of students who apply for admission, but are not assigned to them. This component of the study will capitalize on the randomization provided by the lotteries that districts use to select among magnet school applicants when schools are oversubscribed. The sample size needed for this research design depends on the availability of test data for students prior to their being randomly assigned to a magnet or non-magnet school. Preliminary estimates indicate that if no pre-test is available, a sample of 5,300 lottery applicants to oversubscribed schools would be needed.
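The lottery-based comparison can be sketched as follows (Python with pandas; the lottery identifiers and scores are invented). Because win rates differ across lotteries, winners are compared with losers within each lottery before averaging:

```python
import pandas as pd

# Hypothetical applicant records: each row is one applicant to one
# oversubscribed magnet lottery. All identifiers and scores are invented.
applicants = pd.DataFrame({
    "lottery_id": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "won":        [1,   1,   0,   0,   1,   0,   0,   0],
    "math_score": [455., 448., 441., 439., 470., 462., 466., 458.],
})

def lottery_gap(g):
    """Winner-minus-loser mean score within a single lottery."""
    return g.loc[g["won"] == 1, "math_score"].mean() - \
           g.loc[g["won"] == 0, "math_score"].mean()

# Intent-to-treat contrast: average the per-lottery gaps, weighting each
# lottery by its number of applicants.
gaps = applicants.groupby("lottery_id")[["won", "math_score"]].apply(lottery_gap)
weights = applicants.groupby("lottery_id").size()
itt_estimate = (gaps * weights).sum() / weights.sum()
print(itt_estimate)
```

Comparing winners to losers within each lottery is what preserves the randomization: pooling all winners against all losers across lotteries with different odds would reintroduce selection differences.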


Both research designs require individual student records data (students’ standardized test scores in English language arts and mathematics as well as demographic characteristics and residence information). Additionally, the experimental design for non-resident students requires information on the lotteries to which each student applied, the outcome of the lottery for each applicant, and the school(s) in which the student actually enrolled.


Feasibility Phase


The data requirements of the two parts of the proposed study may not be met by all the schools and districts in the 2004 and 2007 cohorts of MSAP grantees. Accordingly, the first 18 months of the Conversion Magnet Schools Evaluation will be a feasibility phase to determine the numbers of schools and lottery participants that (1) meet study criteria; and (2) are located in districts with the capacity and willingness to provide multiple years of longitudinal student records data to the evaluation. This determination will be made using existing data about magnet and non-magnet schools available from the Common Core of Data (CCD) Non-fiscal Survey of the National Center for Education Statistics (NCES) and from the MSAP Office, together with information collected through a series of semi-structured interviews with officials in grantee districts, guided by the Grantee Screening Protocol. Information about data quality and availability will be used to decide which analysis approaches for studying outcomes for resident and non-resident students would be feasible.7


Evaluation Phase


If one or both of the proposed research designs are found to be feasible, an evaluation of student achievement and minority group isolation outcomes will be conducted during a 46-month evaluation phase.8,9 Beginning in December 2007 and continuing through March 2011, AIR/BPA staff will work with data managers in each participating district to obtain student records data for the analyses of achievement, as described in the Plan for Student Records Data Collection. In addition, enrollment data for the analysis of desegregation outcomes will be extracted from the CCD.


In 2007 (for 2004 grantees) and again in 2010 (for both 2004 and 2007 grantees), a principal survey will be administered to the principals of the conversion magnet schools and the comparison schools. The principal survey will collect contextual data needed to interpret study findings, allow comparisons to national samples of schools from the Schools and Staffing Survey (SASS) and the Early Childhood Longitudinal Study (ECLS-K), and describe the evolution of the magnet programs over the course of the grants. During the last year of the study, a brief, semi-structured interview will be conducted with the MSAP project director or school choice director in each district. This interview will provide the study with an update on the evolution of the district's magnet programs between 2007 and 2010. At that time, magnet schools from the 2007 cohort will be completing their 3 years of federal support, while magnet schools from the 2004 cohort will be able to report on 3 years of experience beyond the end of federal funding. The schedules for data collection and study deliverables are provided in Exhibits 4 and 5 of Part A of the Supporting Statement.

Supporting Statement for Paperwork Reduction Act Submission


A. Justification

1. Circumstances Making Collection of Information Necessary

A central message of the No Child Left Behind (NCLB) Act of 2001 is that the federal government seeks to improve the quality of schooling in the United States for all students. The federal government's support of magnet schools through the Magnet Schools Assistance Program (MSAP) aims to make a significant contribution to that objective by promoting an intervention strategy that


  • offers a distinctive educational curriculum or teaching methods in a manner that increases parental choice options by making programs available to students both inside and outside of neighborhood attendance areas; and

  • prevents, reduces, or eliminates minority group isolation by attracting students of diverse backgrounds.


The research literature identifies each of these mechanisms—curricular focus or teaching method, parental choice, and diversity in student composition—as possible avenues for enhancing the academic achievement of students. Moreover, the explicit objective of reducing racial and ethnic minority group isolation makes MSAP unique among the federally funded programs of NCLB.


Section 5310 of NCLB authorizes the Secretary of Education to carry out evaluations that address, among other things, how and to what extent magnet school programs lead to educational quality and improvement, and the extent to which they lead to elimination, reduction, or prevention of minority group isolation. The legislation also directs the Secretary to collect and disseminate information on successful magnet schools to the general public (see Appendix A).


Through its aim of improving schooling for all students, NCLB has brought a sharp focus on addressing inter-group differences in student achievement. Under this federal law, schools are accountable not only for overall student proficiency, but also for proficiency rates of numerically significant subgroups at each school (where subgroups are defined by socioeconomic status, race and ethnicity, English Language Learner status, etc.).


Moreover, the literature on the determinants of student achievement provides pertinent examples in which the overall relation between a given school program and achievement was quite weak, but a far stronger relation emerged between the same characteristic and the achievement of one or more subgroups of students. In the area of school choice, Howell and Peterson's study of the impact of the New York voucher program found that winning a voucher lottery was associated with no gain in achievement in many cases, but with positive gains for certain races and grades, while other races and grade levels exhibited no effect.10 A student fixed-effect analysis of achievement gains of students in San Diego by Betts, Zau, and Rice showed that variations in class size had twice as large an effect on English language learners as on other students.11 Krueger and Whitmore reported from their re-analysis of the Tennessee STAR experiment that low-income students responded far more strongly to reductions in class size than did other students.12 Thus it is reasonable that the NCLB legislation reflects the need to assess the performance of subgroups of students and not simply the performance of students overall. Accordingly, the proposed study seeks to examine the relationship between magnet schools and student achievement not just for students overall, but for subgroups of interest to policymakers as reflected in NCLB, such as low-income students, racial and ethnic minority students, and English language learners.


Despite the popularity and durability of the magnet school concept, scientifically rigorous research on the effectiveness of magnet school programs is limited. A review of the research literature on the effects of magnet programs on student achievement identifies 3 random assignment studies using lotteries and 12 quasi-experimental studies using non-randomized comparison groups with pre- and post-test controls.


Collectively, the lottery-based studies of the effects of magnet schools on student achievement are inconclusive. A study of programs-within-a-school (PWSs) and whole-school "career" magnet programs in New York City high schools in the late 1980s and early 1990s determined that the programs not only failed to have an effect on reading scores, absenteeism, or the likelihood of students taking advanced graduation/college admissions tests, but also appeared to have a negative effect on high school graduation rates and mathematics test scores.13 In a recent study of elementary and secondary schools in San Diego, the authors generally found no differences in mathematics or reading test scores between lottery winners and losers in any of the district's school choice programs, including magnet programs. As an important exception, the authors did report that winners of lotteries to attend magnet high schools performed better on mathematics achievement tests 2 and 3 years later.14 In the third study, of lottery winners and losers to middle school magnet programs in a mid-sized Southern school district, the positive effect of magnet programs on mathematics achievement tests disappeared when the authors controlled for student demographics and prior achievement. The authors suggest that the most likely explanation for this is a differential pattern of attrition among lottery winners and losers.15, 16


The 12 quasi-experimental studies of the relationship between magnet programs and student achievement date largely from student cohorts of the 1980s and early 1990s and include analyses of the relationship of magnet programs to test scores in reading, mathematics, and other subjects.17 Seven studies were conducted on elementary school magnet programs, three on middle school programs, and two on high school magnets. Some of the studies examine whole-school programs, while others focus on PWS magnets, and a few consider both. The studies of PWS magnets tend to be more consistent in showing positive outcomes than studies of whole-school programs. The PWS magnets, however, are often highly selective of students, and the studies of those programs may be particularly subject to selection bias. Although whole-school magnets provide programs that are generally more widely available to students, the results from studies of those programs tend to be mixed.


Each of the prior studies has limitations, and the mixed findings indicate that no definitive conclusions can yet be drawn about the effects of magnet schools and programs on important student outcomes. Drawing broad conclusions is particularly challenging because the structure and target population of magnet school programs are so varied. Conclusions based on evaluations that aggregate across programmatic approaches and education levels may not be meaningful. A more targeted evaluation of magnet schools would focus on a single category of schools receiving funding through MSAP. The proposed study will focus on elementary schools that convert to a whole-school magnet, which is one of the more common types of magnet schools.18,19


Another important limitation of existing studies is that most focus on students who have actively applied for a program, thereby overlooking the effect of magnet programs on resident students who may have been admitted to the program because they live in the school’s attendance area. The evaluation of conversion magnet schools is explicitly designed to study the relationship between participation in magnet programs and important outcomes for both resident and non-resident students.


Research on the relationship of magnet schools to desegregation is even more limited than research on their relationship to student achievement. Two descriptive studies reported that over half of the desegregation-targeted schools in MSAP-funded districts succeeded in preventing, eliminating, or reducing minority group isolation.20,21 The San Diego study cited previously used the outcomes of lotteries to examine the effect of school choice options, including magnet schools, on racial, socioeconomic, and other forms of integration district-wide.22 The results indicated that magnet schools increased the exposure of White students to non-White students, and vice versa. The effect of magnet schools on socioeconomic integration was inconclusive.23 While the San Diego study makes an important contribution to examining the relationship of magnet schools to integration, it is restricted to a single district. The earlier descriptive studies of the relationship of MSAP-funded magnet schools to minority group isolation do not provide for a controlled comparison between magnet and non-magnet schools. The proposed evaluation of conversion magnet schools is designed to include control school comparisons and to include more than a single district.
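For readers unfamiliar with exposure measures of the kind used in such integration analyses, the sketch below computes the standard exposure (interaction) index of White students to non-White students across a district's schools. The enrollment counts are invented, and this is one common index, not necessarily the exact measure used in the cited study:

```python
# Exposure (interaction) index of White students to non-White students:
# the non-White share of enrollment at the school attended by the average
# White student in the district. Enrollment counts are purely illustrative.
schools = [
    # (white_enrollment, nonwhite_enrollment), one tuple per school
    (120, 380),
    (300, 200),
    (60, 440),
]

total_white = sum(w for w, n in schools)
exposure = sum((w / total_white) * (n / (w + n)) for w, n in schools)
print(exposure)  # ranges from 0 (complete isolation) toward the district share
```

A rising index over time would indicate declining minority group isolation, which is why such measures are natural outcomes for the desegregation analyses described above.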

In brief, this evaluation will make a significant contribution to informing policymakers, practitioners, and parents about the relationship of magnet schools to student outcomes by


  • focusing on conversion, whole school magnet programs;

  • distinguishing the relationship of magnet schools to achievement of resident and non-resident students; and

  • examining the relationship of magnet schools to minority group isolation through a controlled comparison of magnet and non-magnet schools.


This submission requests approval for collecting data to estimate the relationship between federally funded magnet schools and student outcomes that include the student achievement of resident students and non-resident students and minority group isolation of students. The study will be based on a combination of research designs, including quasi-experimental designs and randomized controlled trials using lotteries. The data collection efforts will begin in the feasibility phase with screening interviews of district officials to determine the number of qualified districts and schools from which to draw participants for an evaluation. If an evaluation is deemed feasible, the data collection efforts in the evaluation phase will include acquisition of student record data, surveying of principals at selected magnet and comparison schools, and a follow-up interview with a representative knowledgeable about the school choice program in the district.


In the case of resident students, preliminary estimates, which are discussed in Part B, indicate 50 conversion magnets and 100 comparison schools with a total of 15,000 students are needed. In the case of non-resident students, preliminary estimates indicate that a sample of 5,300 students is needed.24 Samples of these sizes would permit estimating the relationship between magnet schools and student outcomes for resident students in general, non-resident students in general, and for subgroups that constitute 20 percent or more of the resident or non-resident students in the study.
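The logic behind such sample size estimates can be sketched with a textbook minimum detectable effect calculation for a two-group comparison of means. This simplified version ignores clustering of students within schools, covariate adjustment, and attrition, all of which the study's power analysis in Part B would need to address, so the result is illustrative only:

```python
from statistics import NormalDist

def min_detectable_effect(n_per_group, alpha=0.05, power=0.80):
    """Minimum detectable effect in standard deviation units for a
    two-group comparison of means (normal approximation, equal group
    sizes, no clustering or covariate adjustment)."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return z * (2.0 / n_per_group) ** 0.5

# 5,300 lottery applicants split roughly evenly between winners and losers:
# under these simplifying assumptions, effects well under one-tenth of a
# standard deviation would be detectable.
print(min_detectable_effect(2650))
```

Clustering and unequal lottery win rates both inflate the detectable effect in practice, which is part of why the feasibility phase must verify actual school and applicant counts before committing to the evaluation.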

2. Purposes and Uses of the Data

The data to be collected for the Conversion Magnet Schools Evaluation are needed to address the following main research questions:


  1. How does the conversion of a neighborhood school to a magnet school affect the educational achievement of resident (neighborhood) students?

  2. To what extent does the conversion of a neighborhood school to a magnet school reduce minority group isolation in the school?

  3. If sufficient data are available from lotteries of applicants to magnet school programs, what do these data indicate is the relationship between these magnet schools and the educational achievement of students applying to over-subscribed magnet programs?

  4. To what extent do the new magnet schools funded through the 2004 and 2007 MSAP grant cycles evolve over time in terms of their program structure and content?


Before these research questions are addressed, data will be collected to assess the feasibility of conducting an evaluation addressing those questions with magnet schools funded by the 2004 and 2007 cycles of MSAP grants. The feasibility assessment will address the following issues:


  1. How many of the schools funded through 2004 or 2007 MSAP grants are elementary conversion magnet schools that are appropriate for an evaluation? (That is, they operate whole-school magnet programs, have the well-defined neighborhood attendance area needed to distinguish resident from non-resident students, and can be paired with at least one non-magnet comparison school in the same district.)

  2. How many of these conversion elementary magnet schools are in districts that can provide consistent longitudinal student test data for conversion magnet and comparison school students in at least one of the grades served by the magnets?

  3. Is the number of schools in districts that are willing and able to provide student records data sufficient to permit the detection of a meaningful relationship between conversion magnet schools and student achievement or minority group isolation?

  4. How many of the 2004 or 2007 MSAP grantee districts that are willing to provide data on conversion magnet schools are also able to provide data on students applying to magnet schools through lotteries that would permit researchers to obtain experimental estimates of the relationship between such programs and achievement of non-resident students?


The study will use a grantee screening protocol to address the research questions in the feasibility phase, and rely on student record data, a principal survey, and a brief end-of-study interview with MSAP project directors or school choice coordinators for the district in the evaluation phase. The following is a brief description of each data instrument.

Grantee Screening Protocol (Feasibility Phase)

The purpose of the district screening protocol is to gather information about the 2004 and 2007 MSAP grantee districts and schools needed to determine whether a rigorous analysis of the relationship between attending a magnet school and student achievement is feasible, and whether the districts have the capacity to provide the necessary data for such a study. The protocol is organized by topic into four modules directed at a district official who is knowledgeable about the subject. These include:


  • Module A: MSAP Project Director Interview, which covers the number and characteristics of both elementary magnet schools funded by MSAP and potential comparison schools (see Appendix B);

  • Module B: District Assessment Representative Interview, which covers the assessments used by the district since 2001-2002 (see Appendix C);

  • Module C: District School Choice Coordinator Interview, which covers the operation of the district's magnet assignment procedures, including the numbers of students winning and losing in lotteries to attend the magnet elementary schools, and record-keeping of these data (see Appendix D); and

  • Module D: District Data Management System Representative Interview, which covers the content, format, and linkability of record data in the district's student data management system(s) (see Appendix E).


A notification letter will be sent to the 2004 and 2007 MSAP grantees that are potentially eligible to be in the study. A sample notification letter to the MSAP Project Directors is presented in Appendix F. A brochure describing the overall study will accompany the notification letter and be distributed to all interviewees in the screening process. This brochure is shown in Appendix G. The variables from the modules in the protocol and their purposes are detailed in Appendix H.


The screening of grantee districts will, of necessity, be a semi-structured process as districts vary considerably in size, data management capacity and sophistication, and the details of their choice systems. While the protocol indicates specific information needed by the study, the interviewers will ask for clarification if initial responses are ambiguous, incomplete, or unanticipated. While the information needed to assess the feasibility of including each district is fairly extensive, the screening process is designed to reduce burden in three ways. First, it is organized by topic into four modules (each of which will require approximately 30 minutes to complete) for use with officials who are particularly knowledgeable about specific subjects. The two modules that will be administered first (those pertaining to the district's magnet/comparison schools and student achievement tests) will allow researchers to eliminate some districts from consideration without administering the two other modules (pertaining to the school choice system/lotteries and the data management system). Thus, while the estimated average time to complete all four modules is 2 hours (distributed among three or four different individuals), the burden for some districts will be an hour or less. We estimate that all four modules will be administered to about two-thirds of the districts. Second, to the extent possible, existing data about schools and assessments (e.g., enrollment data from the CCD and assessment information from federal, state, and district websites) will be collected and pre-coded into each district's protocol prior to the interviews. Pre-coding will enable district staff to verify some information rather than search for it themselves. In addition, it will help to highlight areas in which a district may not meet study criteria so that interviewers can focus their initial questions on the factors most likely to eliminate the district from consideration. Finally, advance copies of the pre-coded protocol will be sent to the district so that officials know what information will be requested during the interview.

Student Data Collection Plan (Evaluation Phase)

Student test scores (the outcome measure) and data on student background characteristics (covariate measures) are the core data needed by the Conversion Magnet Schools Evaluation to answer the research questions pertaining to magnet schools’ relationship to student achievement and minority group isolation. Districts from both the 2004 and 2007 cohorts that agree to participate in the study will be asked to provide individual student record data for each of the 3 years prior to receiving an MSAP grant and for at least 3 years after the grant award.25 The 2004 grantees will be asked to provide data for the 6 years following the award of their grants, that is, from 2004-2005 through 2009-2010. The 2007 grantees will be asked to provide data for the 3 years following the award of their grants, from 2007-2008 through 2009-2010. Data for all years prior to 2007-2008 will be requested in 2008; the remaining data will be obtained annually in 2009, 2010, and 2011. However, the study will accommodate districts that prefer an alternative delivery schedule.


Data required by the studies (regardless of which analytic approach is used) include individual student test scores in English language arts and mathematics; information that will allow each student to be identified as a resident or non-resident of the school he or she attends; and demographic variables that will be used as covariates to reduce variance in the analysis. Data required to estimate the relationship between magnet schools and the achievement of non-resident students also include information on each student’s participation in lotteries: which schools the student requested, which schools accepted the student, and in which school he or she enrolled. Data for each student stored in different data systems must either be linked together in the files provided by the district or transmitted in a form that will allow the study team to make the linkages themselves.


In as many districts as possible, student records data will be collected for all elementary school students in the district rather than for selected schools. This will be done for three reasons. First, it will enable the study to examine outcomes for resident students in the magnet and comparison schools, lottery winners, and lottery losers. (While resident students and lottery winners will be enrolled in the magnet and comparison schools selected for the study, lottery losers could conceivably attend any of the district’s elementary schools.) Second, having data for all elementary students in the district will permit supplementary analyses that will either strengthen the interrupted time series and lottery analysis, or allow alternative analyses should those approaches not be feasible for a particular district. Third, extracting data for all students may be easier than extracting data for selected schools and/or individual students, and thus would reduce the burden of data collection for the districts’ data managers.


Most of the data required to investigate the reduction of minority group isolation in conversion magnet schools will come from the CCD. In addition to examining the composition of entire schools, the minority group isolation study will also investigate, to the extent feasible, the composition of classrooms within schools. Where available, the study will request information on individual students’ classroom assignments as part of the student data request. Where student-level data are not available, the study will determine whether summary data on classroom composition (by such factors as gender, race-ethnic group, income, and English language status) are maintained and can be readily provided.


The student data collection plan describes the longitudinal student records data and the classroom- and school-level enrollment summaries that will be requested from each district. The Student Data Collection Plan appears in Appendix I. It is accompanied by a brochure describing the evaluation (Appendix J) that will be given to data managers for districts in the study. The variables to be collected with the student record data and their purposes are detailed in Appendix K.


The study will take several steps to minimize the burden on districts of providing these data. First, in the feasibility phase, districts whose data would be extremely difficult to extract and process will be screened out of the study. Second, AIR has systems programmers with experience in dealing with complex data formats and a variety of data platforms. The experience of these programmers can be used to provide assistance, for example, in linking data files. Third, an individual AIR or BPA staff member will be assigned to work with each district to compile detailed information on the data being provided. For some districts, the compilation of disparate sets of documentation may, in fact, be a side benefit of participating in this evaluation. A fourth method of assisting districts may be to obtain student-level achievement scores from the district’s State Education Agency (SEA). However, states would need to be able to provide student identification codes that could be linked to background data that are not likely to be included in the SEA files, such as indicators of a student’s attendance zone and the lotteries to which he or she may have applied.

Principal Survey (Evaluation Phase)

The principal survey, administered to the 2004 cohort principals in 2007 and to principals in both cohorts in 2010, provides key information needed to (1) interpret the results of the achievement and desegregation analyses; (2) document the differences between the magnet and comparison schools; (3) place these schools in the larger context of the nation’s schools (through comparisons on key variables between the study schools and the schools surveyed in SASS and ECLS); and (4) describe the nature and evolution of the magnet schools in the years following the grant award. Questions relating to the first three of these purposes will be answered by both magnet and comparison school principals; questions relating to the evolution of the magnet program will be answered only by the magnet school principals. The Principal Survey appears in Appendix L. The variables from the survey and their purposes are detailed in Appendix M.26

Final MSAP Project/School Choice Coordinator Interview Guide (Evaluation Phase)

Given the length of time over which student outcomes will be tracked, it is possible that lottery processes, recruitment strategies, or other district conditions might change in ways that could affect the validity or interpretation of the Conversion Magnet Schools Evaluation’s analyses. A brief (15-20 minutes on average), semi-structured interview will be held with participating MSAP project directors or school choice coordinators at the end of the study period (2011) to inform the analyses of achievement and minority group isolation outcomes and to provide descriptive data on the evolution of the magnet programs. Some of these retrospective questions will also be discussed with project directors of the 2004 MSAP grants during the screening interviews in 2007. The interview guide appears in Appendix N. A letter of notification will be sent to the MSAP Project or School Choice Coordinator reminding them of the study and letting them know that they will be contacted for a brief follow-up interview (Appendix O). The variables to be recorded through the interview and their purposes are detailed in Appendix P.


The results of this study will be of immediate interest and importance for policymakers, researchers, and practitioners by demonstrating the degree to which magnet schools can affect student achievement and minority group isolation. The results of such a study will also be of general interest to parents who may be considering the value of sending their child to a magnet school.


The schedules for data collection and dissemination of results are reported in Exhibits 4 and 5, respectively, in response to item 16.

3. Use of Information Technology to Reduce Burden

During the feasibility phase, the initial screening of the grantees will involve semi-structured telephone interviews with district officials. The participating officials will be sent copies of the interview protocols via email in advance of the interviews so they can prepare themselves for the discussion, but they will not be required to produce written responses. Where possible, the study team will reduce burden on district officials by obtaining enrollment and accountability data from the CCD and other electronic sources (e.g., from district websites and state testing agencies).


During the evaluation phase, the major data collections involve extraction of student demographic and achievement data from pre-existing electronic data files maintained by districts. All data and documentation will be submitted to the study team via secure electronic transmissions.


Only one paper-and-pencil survey will be administered during this study. During the evaluation phase, a survey of magnet and comparison school principals will be administered at the end of each grant period. This survey will not be web-based, as developing web-based methods would not be cost-effective given the relatively modest sample size of the study. Information technology will nonetheless be used to reduce burden in other ways: telephone calls will be used to remind respondents to complete the surveys, and project staff will offer to help non-respondents complete the survey over the phone. The survey is designed to reduce burden on respondents: most of its questions are answered by checking a response option, and the form is short enough to be answered in 35 minutes or less.

4. Efforts to Identify Duplication

The Conversion Magnet Schools Evaluation is the only large-scale study currently underway that will apply rigorous experimental and quasi-experimental methods to examining the relationship between magnet school programs and the achievement and minority group isolation of the students who attend them. Magnet schools are identified in NCES’s CCD and SASS surveys, and this information is also included in the National Longitudinal School-Level Standardized Score Database. However, data available through these surveys do not allow analysts to differentiate between resident and non-resident students and thus would not permit rigorous analyses based on disaggregated data.


States and districts nationwide do collect student assessment and background data. The Conversion Magnet Evaluation will assemble these existing data rather than administering its own achievement tests, thereby reducing burden on the participating districts and schools and avoiding duplication of data collections.

5. Methods to Minimize Burden on Small Entities

No small business will be impacted by this project. At present it is not known whether any other type of small entity will be impacted. However, it is possible that school districts with enrollments under 50,000 (OMB’s threshold for classifying a school district as a small entity) could be included in the study. Using data from the CCD, a preliminary review of the 2004 MSAP grantees indicated that of the 27 districts with one or more magnet schools that are potentially eligible for the study, 6 districts had student populations greater than 50,000, and 21 districts had student populations under 50,000. The data collection effort minimizes the burden on school districts by conducting a feasibility assessment to determine the suitability of each district for the evaluation before requesting any data for the evaluation. One of the considerations in determining the feasibility of conducting an evaluation will be that it not have a significant economic impact on a substantial number of small entities. During the evaluation phase, the burden on school districts is minimized by relying on existing student records and limiting other data collection efforts to a survey of a small number of each district’s principals and a brief interview of their MSAP project director or school choice official. In addition, technical assistance can be provided to districts that need help in making student record data available.

6. Consequences of Not Collecting the Information

As required by NCLB, school districts must offer families whose children attend underperforming schools additional schools in which to enroll their children. The Conversion Magnet Schools Evaluation represents one of the first efforts by the Department of Education to conduct a rigorous study of the relationship between magnet school programs and student outcomes. Failure to collect the information proposed in this request will prevent ED from evaluating this form of special educational programming that is supported by federal Magnet Schools Assistance Program grants. More generally, without this study, policymakers will have a limited basis on which to judge the value of investing in magnet schools, and parents will lack information for deciding whether to send their children to magnet schools.

7. Special Circumstances Justifying Inconsistencies with Guidelines in 5 CFR 1320.5

No special circumstances apply to this study with respect to any requirements of respondents, pledges of confidentiality, or use of statistical data classification. While the results from the study are expected to be valid and reliable, the generalizability of the results may be limited by the selection of conversion magnet schools from districts that have the capacity to provide the student records data needed for the study rather than by random sampling.

8. Consultation Outside the Agency

The study will employ a technical work group (TWG) to advise AIR/BPA on data collection instruments, study feasibility, research designs, and methodological issues including combining results across sites that use diverse types of student assessments. The consultants, all of whom have agreed to serve on the panel, bring expertise in magnet schools, school choice, studies of racial segregation and achievement gaps, experimental and interrupted time series designs, and the analysis of complex student assessment data. The consultants and their affiliations are listed in Exhibit 2.



Exhibit 2. Technical Working Group Members for the Evaluation of Conversion Magnet Schools Study


Name


Position and Institution

Expertise

Adam Gamoran

Professor of Sociology and Educational Policy Studies (University of Wisconsin—Madison)

Empirical studies of magnet schools and school choice

Dale Ballou

Associate Professor of Public Policy and Education (Vanderbilt University)

Empirical studies of magnet schools and school choice

Ellen Goldring

Professor of Education Policy and Leadership (Vanderbilt University)

Studies of magnet program characteristics

Ronald Ferguson

Lecturer in Public Policy and Senior Research Associate (Kennedy School of Government, Harvard University)

Studies of racial segregation and achievement gaps

Steven Rivkin

Associate Professor of Economics (Amherst College)

Studies of racial segregation and achievement gaps

Thomas Dee

Associate Professor of Economics (Swarthmore College—on leave at Stanford University School of Education, 2006-7)

Studies of racial segregation and achievement gaps

Jason Snipes

Director of Research (Council of Great City Schools)

Experimental and interrupted time series designs

Larry Hedges

Professor of Sociology and Psychology (University of Chicago)

Combining measurements from different state tests



During the feasibility phase of the project, the TWG members are available for consultation on an as-needed basis to comment on data collection instruments and to review drafts of the memorandum on evaluation feasibility and study options. TWG members will meet twice in the evaluation phase. The first meeting will be to review and comment on the analysis plan for the evaluation. The second meeting will be to review and comment on preliminary results from the analyses. TWG members may also review drafts of the final report.

9. Payments or Gifts to Respondents

Principals will be compensated $25 for completing the principal survey, which will require about 35 minutes of their time. Principals have many demands on their time during the school day, have limited time to respond to surveys, and typically complete them outside normal work hours. We will therefore pay principals to offset the time spent completing the survey, with the payment proportional to their estimated hourly wage: since the survey is expected to take 35 minutes, about 60% of an hour, the $25 payment corresponds to roughly 60% of the median elementary school principal’s hourly rate. We believe that this is a medium-burden activity.27

10. Assurances of Confidentiality

None of the information collected will be reported or published in a manner that would identify individual respondents.


To ensure that the data collected are not available to anyone other than authorized project staff of the contractor and subcontractor, a set of standard confidentiality procedures will be followed during the data collection process:


  • All project staff will agree in writing to an assurance of confidentiality.

  • All project staff will keep completely confidential the names of all respondents, all information or opinions collected during the course of the study, and any information about respondents learned incidentally.

  • Reasonable caution will be exercised in limiting access to data during transfer, storage, and analysis to persons working on the project who have been instructed in the applicable confidentiality requirements for the project. In particular, electronic data files will be encrypted for transfer between the districts and the contractor, and stored on secure servers at AIR and BPA. Respondents will send paper surveys directly to AIR/BPA via FedEx (which provides rapid transmission and tracking if problems arise). Paper surveys will be stored in locked files. After the project is completed, the contractors will destroy all identifying information.

  • To allow the linking of student data across files (e.g., connecting demographic information and test scores, or test scores from successive years) without using student names or the identification codes used by their districts, participating districts will be asked to create a randomly assigned pseudo-ID number for each student. Pseudo-ID numbers will replace actual ID codes in data files used in analyses in order to protect the identities of individual students. The cross-walk between actual identifiers and the pseudo-IDs will be stored separately from the analysis files during the study and destroyed after the study is completed. As acquisition of the student record data will need to be authorized by the district, we will not be requesting consent from individual students.

  • The Project Director will be responsible for ensuring that all contractor personnel involved in handling data on the project are instructed in these procedures and will comply with these procedures throughout the study.

  • The Project Director will ensure that the data collection process adheres to provisions of the U.S. Privacy Act of 1974 with regard to surveys of individuals for the Federal government.
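The pseudo-ID procedure described above can be sketched in code. The following is a minimal illustration, not the study’s actual implementation: the function and field names are assumptions, and a real district would adapt this to its own student information system.

```python
import secrets

def build_crosswalk(district_ids):
    """Assign a randomly generated pseudo-ID to each actual student ID.

    Returns a dict mapping actual ID -> pseudo-ID. The cross-walk would be
    stored separately from the analysis files and destroyed after the study.
    (Illustrative sketch only; names and formats are assumptions.)
    """
    crosswalk = {}
    used = set()
    for sid in district_ids:
        # Draw random hex tokens until one is unused, so pseudo-IDs are
        # unique and carry no information about the actual ID codes.
        pseudo = secrets.token_hex(6)
        while pseudo in used:
            pseudo = secrets.token_hex(6)
        used.add(pseudo)
        crosswalk[sid] = pseudo
    return crosswalk

def anonymize_records(records, crosswalk):
    """Replace the actual ID in each student record with its pseudo-ID,
    leaving all other fields (scores, demographics) untouched."""
    return [{**r, "student_id": crosswalk[r["student_id"]]} for r in records]
```

Because the same cross-walk is applied to every file (demographics, test scores from successive years), records for a given student can still be linked across files and years without exposing the district’s actual ID codes.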


Finally, the following explicit statement regarding confidentiality will be included in notification letters, study descriptions, and instructions to survey respondents: “Responses to this data collection will be used only for statistical purposes. The reports prepared for this study will summarize findings across the sample and will not associate responses with a specific district or individual. We will not provide information that identifies you or your district to anyone outside the study team, except as required by law.”

11. Questions of a Sensitive Nature

The data collection instruments do not request any data of a sensitive nature about individual persons.

12. Estimates of Response Burden

Exhibit 3a presents estimates of total respondent burden for the data collections during the 3-year period for which OMB approval of instruments is sought. Estimates are based on the assumptions that 40 MSAP grantee districts will be screened, that 50 conversion magnet schools and 100 comparison schools from 20 districts will participate in the study, and 10 of the participating districts will come from each of the two grantee cohorts. The total estimated hour burden for these 3 years is 3,186 hours: 64 hours for district officials; 3,085 hours for district data managers; and 37 hours for principals. Based on median hourly wages for respondents in the relevant professions, this amounts to an annual average monetary burden of $53,687. Across these 3 years, the average annual number of responses is 94 and the average response hours are 1,062.28


This burden estimate includes the following data collection activities:


  • District officials from 40 districts will complete up to four modules of the grantee screening protocol during the feasibility component of the study. Each of the four modules involves an interview averaging 30 minutes. The modules cover different topics and will be discussed with different members of the district staff. In some districts, responses to Module A or B will eliminate a district from consideration, and the remaining modules will not be administered. Accordingly, it is estimated that Module A will be administered in 40 districts, Module B in 36 districts, and Modules C and D in 26 districts.

  • District data management staff from 20 MSAP grantee districts will respond to three requests for student data—one covering past school years through 2006-2007 and the others covering 2007-2008 and 2008-2009. They will (1) extract, format, and submit student records data along with a data dictionary; and (2) create and maintain a file of pseudo-ID numbers cross-walked with students’ actual identifiers.29,30

  • District data management staff from 10 MSAP grantee districts (the number expected to be able to provide classroom composition data) will extract and submit such data for 2006-2007 and prior years.31

  • District data management staff from the same 10 MSAP grantee districts will extract and submit classroom composition data for 2007-2008 and 2008-2009.

  • Principals of 64 magnet and comparison schools in the 2004 grantee districts will spend about 35 minutes completing a principal survey in 2007 (assumes 75 schools and an 85 percent response rate).


Exhibit 3b summarizes estimated respondent burden for the data collections planned for the final year of the study, which is beyond the 3-year period covered by this review.



Exhibit 3a. Time Burden for Respondents During the First Three Years of the Evaluation



Task

Total Sample Size

Estimated Response Rate

Projected Number of Respondents

Time Estimate (in person-hours)

Respondent Compensation

Number of Administrations or Collections

Total Hours

Hourly Rate5

Estimated Monetary Burden

District Officials










Grantee Screening Interview (Module A)1

40

100%

40

0.5

none

1

20

$70

$1,400

Grantee Screening Interview (Module B)

36

100%

36

0.5

none

1

18

$70

$1,260

Grantee Screening Interview (Module C)

26

100%

26

0.5

none

1

13

$70

$910

Grantee Screening Interview(Module D)

26

100%

26

0.5

none

1

13

$70

$910

District Data Managers










First Student Data Request for 2004 Grantee Cohort (2001-2002 through 2006-2007)2

10

100%

10

121

none

1

1,210

$50

$60,500

First Student Data Request for 2007 Grantee Cohort (2004-2005 through 2006-2007) 2

10

100%

10

76

none

1

760

$50

$38,000

Second Student Data Requests for 2004 and 2007 Grantee Cohorts (2007-2008) 2

20

100%

20

23

none

1

460

$50

$23,000

Third Student Data Requests for 2004 and 2007 Grantee Cohorts (2008-2009) 2

20

100%

20

23

none

1

460

$50

$23,000

First Classroom Data Request for 2004 Grantee Cohort (2001-2002 through 2006-2007)3

5

100%

5

18

none

1

90

$50

$4,500

First Classroom Data Request for 2007 Grantee Cohort (2004-2005 through 2006-2007)3

5

100%

5

9

none

1

45

$50

$2,250

Second Classroom Data Requests for 2004 and 2007 Grantee Cohorts (2007-2008 )3

10

100%

10

3

none

1

30

$50

$1,500

Third Classroom Data Requests for 2004 and 2007 Grantee Cohorts (2008-2009)3

10

100%

10

3

none

1

30

$50

$1,500

(continued on next page)

Exhibit 3a. Continued



Task

Total Sample Size

Estimated Response Rate

Projected Number of Respondents

Time Estimate (in person-hours)

Respondent Compensation

Number of Administrations or Collections

Total Hours

Hourly Rate5

Estimated Monetary Burden

Principals










2007 Principal Survey (2004 Grantee Cohort)4

75

85%

64

0.6

$25

1

37

$63

$2,331

Totals



282




3,186


$161,061


1 The grantee screening protocol consists of four modules, each of which will require an average of 30 minutes to complete. Information gathered in Modules A and B will eliminate some districts from the sample, making administration of the other modules unnecessary.

2 Hours estimated for providing student records data include time in consultation with AIR/BPA staff about content and formatting needs, identifying lottery applicants, retrieving and extracting data, linking data across files, and documenting and transmitting files. Estimates assume that providing the three earliest years of data (2001-2002 through 2003-2004) will require 20 hours per year of data while providing more recent years of data will require 15 hours per year of data. Estimates also include 16 hours to create a file of student pseudo-IDs during the first wave of student data collection and 8 hours for each of the three subsequent waves to update the file.

3 Assumes that half of the districts in each grantee cohort supply classroom data, and that file extraction requires 3 hours per year of data.

4 The principal survey is estimated to take 35 minutes (0.58 hours) to complete.

5 Hourly rates were derived from median annual salaries reported in the Department of Labor’s Occupational Outlook Handbook, 2006-2007 edition. Data are for 2004 or 2005, and thus slightly underestimate current pay rates. The median annual salary for elementary school principals in 2004-2005 was $74,062 and for high school principals (a proxy for mid-level district staff) was $82,225. Median annual earnings for network and computer systems operators were $58,190 in May 2004. Assuming 220 workdays (or 1,760 work hours) per year and a 50% fringe benefit rate yields estimated hourly labor + fringe benefit rates of $63 for elementary school principals, $70 for high school principals, and $50 for network and computer systems operators. These amounts are an estimate of labor costs to districts. The hourly rate for elementary principals without fringe benefits, used to establish compensation for completing the principal survey, is $42.
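The hourly rates in footnote 5 can be reproduced from the stated assumptions (1,760 work hours per year and a 50% fringe benefit rate):

```python
def hourly_rate(annual_salary, fringe=0.50, hours_per_year=1760):
    """Hourly labor + fringe rate assumed in the burden estimates."""
    return annual_salary * (1 + fringe) / hours_per_year

# $74,062 / 1,760 x 1.5 -> ~$63 (elementary school principals)
# $82,225 / 1,760 x 1.5 -> ~$70 (high school principals, district-staff proxy)
# $58,190 / 1,760 x 1.5 -> ~$50 (network and computer systems operators)
# Without fringe, $74,062 / 1,760 -> ~$42, the basis for the $25 survey
# payment (about 60% of the hourly rate, matching the 35-minute survey).
```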

Exhibit 3b. Time Burden for Respondents During the Fourth Year of the Evaluation



Task

Total Sample Size

Estimated Response Rate

Projected Number of Respondents

Time Estimate (in person-hours)

Respondent Compensation

Number of Administrations or Collections

Total Hours

Hourly Rate4

Estimated Monetary Burden

District Officials










Coordinator Interview in 2010

20

85%

17

0.33

none

1

5.6

$70

$393

District Data Managers










Fourth Student Data Request for 2004 and 2007 Grantee Cohorts (2009-2010)1

20

100%

20

23

none

1

460

$50

$23,000

Fourth Classroom Data Request for 2004 and 2007 Grantee Cohorts (2009-2010)2

10

100%

10

3

none

1

30

$50

$1,500

Principals










2010 Principal Survey (2004 and 2007 Grantee Cohorts)3

150

85%

128

0.6

$25

1

75

$63

$4,725

Totals



175




570.6


$29,618


1 Hours estimated for providing student records data include time in consultation with AIR/BPA staff about content and formatting needs, identifying lottery applicants, retrieving and extracting data, linking data across files, and documenting and transmitting files.

2 Assumes that half of the districts in each grantee cohort supply classroom data, and that file extraction requires 3 hours per year of data.

3 The principal survey is estimated to take 35 minutes (0.58 hours) to complete.

4 Hourly rates were derived from median annual salaries reported in the Department of Labor’s Occupational Outlook Handbook, 2006-2007 edition. Data are for 2004 or 2005, and thus slightly underestimate current pay rates. The median annual salary for elementary school principals in 2004-2005 was $74,062 and for high school principals (a proxy for mid-level district staff) was $82,225. Median annual earnings for network and computer systems operators were $58,190 in May 2004. Assuming 220 workdays (or 1,760 work hours) per year and a 50% fringe benefit rate yields estimated hourly labor + fringe benefit rates of $63 for elementary school principals, $70 for high school principals, and $50 for network and computer systems operators. These amounts are an estimate of labor costs to districts. The hourly rate for elementary principals without fringe benefits, used to establish compensation for completing the principal survey, is $42.

13. Estimate of Total Capital and Startup Costs/Operation and Maintenance Costs to Respondents or Record-Keepers

There are no capital or start up costs. Item 12 accounts for the costs to respondents and record-keepers.

14. Estimates of Costs to the Federal Government

Cost of the evaluation contract: The estimated cost of the 5-year study contracted to AIR/BPA, including conducting a feasibility investigation; developing a detailed study design, data collection instruments, and justification package; and completing data collection, data analysis, and report preparation, is $3,420,130 overall, averaging $684,026 per year. The total cost of the feasibility phase is $495,279, and the total cost of the evaluation phase is $2,924,851.


Additional costs to the federal government: Staff in the Magnet Schools Assistance Program Office will provide the study with copies of approximately 50 grant applications and annual performance reports from the 2004 and 2007 funding cycles as well as a summary of the characteristics of the 2007 grantees. The total cost of this assistance from ED, estimated at $3,800 ($760 per year), includes:


  • Approximately 40 hours of ED staff time to photocopy and pack MSAP grant applications and annual performance reports of 50 grantees from the 2004 and 2007 cohorts ($1,200, assuming an hourly rate of $20 labor + 50% fringe benefits);

  • Cost of photocopying ($1,050 = 300 pages x 50 documents x $0.07 per page) plus $200 in postage, for a total of $1,250; and

  • Approximately 30 hours of ED staff time to direct the photocopying of grantee documents and to construct a spreadsheet of information about the 2007 grantees based on information from MSAP applications ($1,350, assuming an hourly rate of $30 labor + 50% fringe benefits).

15. Changes in Burden

Since this is a new collection, there is a program change for the requested hours.

16. Plans and Schedule for Analysis, Tabulation and Publication of Results


A. Description of Study Reports

The schedules for data collections and the publication of results for the two phases of the Conversion Magnet Schools Evaluation are shown at the end of this section in Exhibits 4 and 5, respectively.

Feasibility Phase Reports

Two documents will be submitted at the end of the feasibility phase. The first will be a memorandum documenting the results of the feasibility investigation and presenting recommendations about whether and how to proceed with a full evaluation of conversion magnet schools. The second will be a descriptive report that focuses on school characteristics during the years leading up to the conversion of the magnet schools funded through either the 2004 or 2007 MSAP grants. Tabulations will show levels and trends in characteristics such as total enrollment, the proportion of students from minority groups, and the proportion of students eligible for free and reduced-price meals, comparing the conversion magnet schools with other schools in their districts.

Evaluation Phase Report

A final report will be submitted at the end of the evaluation phase. It will present the results of analyses estimating relationships between conversion magnet schools and key desired outcomes of the Magnet Schools Assistance Program—improvement of students’ academic achievement and reduction of minority group isolation. These analyses will be based on data for the school years 2001-2002 through 2009-2010. The report will include the following tabulations and analyses:

  • Descriptive information

      • Description of the characteristics of conversion magnet schools in the 2004 and 2007 MSAP grantee cohorts (e.g., location, size, and themes)

      • Comparison of study schools (conversion magnets and non-magnet comparison schools) and schools nationwide to provide context for understanding how the study schools differ (if they do) from other magnet and non-magnet schools

      • Description of dependent variables (student achievement scores, minority group isolation in the schools)

      • Description of changes in enrollment composition of the districts, conversion magnets, and comparison schools between 2001-2002 and 2009-2010 (from 3 years prior to the MSAP grants to 6 years after the 2004 MSAP grants and 3 years after the 2007 MSAP grants)

      • Description of changes in the content and structure of the magnet programs during the 3 years following the award of their MSAP grants

  • Estimation of the relationship between conversion magnets and key outcomes: student achievement and minority group isolation

      • Results of analyses of the achievement of resident students in conversion magnet schools in the 2004 and 2007 MSAP grantee cohorts before and after the schools converted

      • Results of analyses of the relationship between magnet schooling and achievement for non-resident students selected by lottery to enroll in the magnet schools

      • Results of analyses of trends in minority group isolation in the conversion magnet schools in the 2004 and 2007 MSAP cohorts

B. Complex Techniques for Analyses of the Relationship Between Conversion Magnet Schools, Student Achievement, and Minority Group Isolation

As has been explained earlier, differences in the manner in which resident and non-resident students are assigned to conversion magnet schools require different methods for analyzing the relationship between conversion magnet schools and student achievement. This section provides additional detail on the estimation methods that will be used for the core and supplemental achievement analyses that will be conducted for each group of students. The section also discusses models for analyzing the relationship between magnet conversions and minority group isolation.

Core Analysis for Resident Students: Comparative Interrupted Time Series

The primary estimation method for analysis of the relationship between conversion magnet schools and the achievement of resident students will be an interrupted time series analysis that uses a three-level hierarchical linear model (HLM), with students at the first level, years at the second level, and school pairs at the highest level. The analysis will be conducted separately by grade level (2nd, 3rd, 4th, or 5th grade) and by subject area (English language arts/reading or mathematics). Each grade level of resident students will be compared before and after the conversion of the school. A grade-level sample comprises the students in a particular grade level, 3 years before and at least 3 years after their school converts; for example, the third-grade classes of 2001 through 2006 constitute one sample for analysis.32, 33

The HLM is specified as follows:


Level 1 – Individual Student Level

    Yijks = π0jks + Σq πqjks Xqijks + eijks                                    (1)

where

    Yijks = the outcome for student i, in grade level j, from school pair k and school s.

    Xqijks = mean-centered background characteristic q for student i in grade level j from school pair k and school s.

    eijks = a random error term for student i in grade level j from school pair k and school s (assumed independently and identically distributed (iid) across students in a grade level).


Level 2 – Year/Grade Level (e.g., third-grade students over 6 years or more)

    π0jks = π0k + π1k POST1j + π2k POST2j + π3k POST3j + π4k MAGs
            + π5k (MAGs x POST1j) + π6k (MAGs x POST2j) + π7k (MAGs x POST3j)
            + rjk + ujks                                                       (2)

where

    π0jks = the mean outcome for grade level j from school pair k and school s for students with mean background characteristics for grade level j and school s.

    POST1j = dummy variable equal to 1 for the first year after conversion, 0 otherwise.

    POST2j = dummy variable equal to 1 for the second year after conversion, 0 otherwise.

    POST3j = dummy variable equal to 1 for the third year after conversion, 0 otherwise.

    MAGs = dummy variable equal to 1 if school s is a magnet school, pre- or post-conversion; 0 otherwise.

    rjk = random error term for grade level j, school pair k, iid across school pairs.

    ujks = random error term for grade level j, school pair k, school s, iid across schools.


Level 3 – School Pair Level

    πmk = γm + vmk,  m = 0, 1, ..., 7                                          (3)

where the vmk are random error terms, iid across school pairs.34


The first level is a simple regression of outcomes for individual students in a given grade j as a function of their background characteristics. The equation is included in the model to control statistically for any compositional shifts that might occur over time in the measured background characteristics of students at a given school.


The second level of the model is the comparative interrupted time-series analysis of regression-adjusted mean outcomes for a given grade-level group from a single school within a single magnet/pooled-comparison school pair. π0k is the regression-adjusted baseline mean student outcome for the two comparison schools combined. π1k, π2k, and π3k are the deviations from that baseline mean for the comparison schools in the three follow-up years. π4k is the difference between the regression-adjusted baseline mean outcome for the magnet school (in the years before conversion) and that of its comparison schools. π5k, π6k, and π7k are the differences between the deviations from the baseline mean for the magnet school and those for its comparison school counterpart, that is, the estimated effects of conversion to magnet status in the first, second, and third years, respectively. For the 2004 cohort, it may be possible to include as many as six years after conversion in the analysis.


The third level represents the distribution of these parameters across magnet/comparison school pairs, providing a way to summarize the findings. The means of π5k, π6k, and π7k across pairs are the best available estimates of the typical effects of a magnet school, and the standard deviations of these parameters provide measures of the consistency of those effects.
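To make the mechanics of the level-2 comparison concrete, the following sketch estimates the magnet-by-follow-up-year deviations (the π5k, π6k, π7k analogues) for a single magnet/pooled-comparison pair by ordinary least squares on grade-level mean outcomes. It is a deliberately simplified stand-in for the full three-level HLM: the function name, data layout, and use of OLS in place of hierarchical estimation are illustrative assumptions, not the study's actual code.

```python
import numpy as np

def comparative_its_effect(years, means_magnet, means_comparison, conversion_year):
    """Comparative interrupted time series for one school pair.

    Fits: mean = b0 + post-year dummies + magnet dummy + magnet x post
    interactions, where post years are the first three years after
    `conversion_year`. Returns the three interaction estimates, i.e.,
    the magnet school's deviations from the comparison-school trend.
    """
    rows, y = [], []
    for school_means, is_magnet in ((means_comparison, 0), (means_magnet, 1)):
        for t, m in zip(years, school_means):
            p = t - conversion_year  # 1, 2, 3 in the follow-up years
            post = [1.0 if p == k else 0.0 for k in (1, 2, 3)]
            interact = [is_magnet * d for d in post]
            rows.append([1.0, *post, float(is_magnet), *interact])
            y.append(float(m))
    beta, *_ = np.linalg.lstsq(np.array(rows), np.array(y), rcond=None)
    return beta[5:8]  # magnet deviations in follow-up years 1-3
```

With a flat comparison school and a magnet school that gains 2, 3, and 4 points in the three years after conversion, the function recovers exactly those deviations.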

Core Analysis for Non-resident Magnet Students: Experimental Analysis of the Effect of
Winning a Lottery

The following outlines the core experimental method that will be used to estimate the effect of winning a lottery on magnet applicants.


First, the study will identify non-resident students who participated in lotteries that were "true" in the sense that the lotteries were oversubscribed, and some of the applicants won admission while others did not. Typically, magnet schools admit students by grade, and a school may be oversubscribed in just a few grades. Further, the district may divide applicants into smaller groups based on criteria such as students' residential location, socioeconomic status, and having siblings already enrolled in the school. Each group-within-grade combination of students comprises a lottery, and only some of these may be oversubscribed and thus eligible for inclusion in the experimental study. Once the "true" lotteries have been identified, tests will be run to verify that the observable characteristics of winners and losers in each lottery are statistically indistinguishable. A finding that the winners and losers of a lottery differ significantly on observable characteristics would raise the possibility that the lottery was not conducted randomly. If a lottery is determined to be non-random, its students will be excluded from the study.35
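The screening steps described above can be sketched in a few lines. The record layout (a 'won' flag and a 'prior_score' per applicant) and the function name are illustrative assumptions, and a production version would replace the raw winner-loser mean gap with formal statistical tests of balance.

```python
from statistics import mean

def check_lottery_balance(applicants, min_losers=1):
    """Screen one group-within-grade lottery.

    `applicants` is a list of dicts with keys 'won' (bool) and
    'prior_score' (float). Returns (is_true_lottery, gap), where a
    "true" lottery has both winners and losers, and `gap` is the
    winner-minus-loser difference in mean prior scores (a crude
    balance check; a large gap flags possible non-randomness).
    """
    winners = [a for a in applicants if a['won']]
    losers = [a for a in applicants if not a['won']]
    if not winners or len(losers) < min_losers:
        return False, None  # not oversubscribed: no usable lottery
    gap = (mean(w['prior_score'] for w in winners)
           - mean(l['prior_score'] for l in losers))
    return True, gap
```

A lottery with no losers is rejected outright; a balanced lottery returns a gap near zero.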


Using data from the lotteries that have been identified as "fair," the analysis will then model the test score for student i in year t, where t is one of the post-lottery years. Unlike the analyses of resident students' achievement, two types of time must be tracked here: (1) the school year t in which the achievement outcome is measured and (2) the number of years p by which year t follows the student's winning or losing of the lottery. Student i applies to lottery j, and in year t attends school s, which is p years after entering the lottery, so the student's score on the achievement test in the model is Sijstp. This test score is modeled as a function of a set of fixed effects αj for the lottery applied to, a dummy variable WINijtp and corresponding coefficient βp indicating whether student i, whose test score is modeled in year t, won lottery j, and a composite error term in parentheses consisting of an error component for school s in year t, ηst, and a normally distributed random error term, εijstp.

    Sijstp = αj + βp WINijtp + (ηst + εijstp)                                  (4)

The lottery fixed effects αj account for any differences in achievement between students who apply to different lotteries j. The analysis will allow the effects of magnets on non-resident students to vary by the number of years after winning the lottery, explaining the subscript p on the coefficient(s) designated by βp.36, 37
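A minimal way to estimate the win coefficient in a model of this form is to absorb the lottery fixed effects by demeaning scores and the win indicator within each lottery, then regressing the demeaned score on the demeaned indicator. The sketch below does this for a single post-lottery year (a common β rather than separate βp by year, and omitting the school-year error component); all names are illustrative.

```python
import numpy as np

def estimate_win_effect(scores, lottery_ids, won):
    """Estimate beta in S = alpha_j + beta * WIN + e.

    Demeaning within each lottery j absorbs the fixed effects alpha_j;
    the remaining one-variable regression yields beta.
    """
    scores = np.asarray(scores, dtype=float)
    won = np.asarray(won, dtype=float)
    lotteries = np.asarray(lottery_ids)
    s_dm, w_dm = scores.copy(), won.copy()
    for j in np.unique(lotteries):
        idx = lotteries == j
        s_dm[idx] -= scores[idx].mean()  # remove lottery mean score
        w_dm[idx] -= won[idx].mean()     # remove lottery win rate
    return float(w_dm @ s_dm / (w_dm @ w_dm))
```

For two balanced lotteries whose winners each outscore losers by 10 points, the estimate is 10 regardless of the level difference between lotteries.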


If pre-lottery test scores are available, they will be included in model (4). Pre-test scores can often improve precision of the estimates because, although students have been randomly assigned by the lottery, the initial test scores of the lottery winners and losers may not be identical.38,39 For the same reason, the model will incorporate any available demographic characteristics of students.

Refined Analyses of Non-resident Students to Take Account of Self-selection and Substitution Biases

In estimating the relationship between magnet schools and the achievement of non-resident students, it is important to distinguish between two closely related quantities. What the randomization allows analysts to do convincingly is to estimate the effect of winning a lottery, which is known as the effect of the "offer to treat" or of the "intent to treat." But winning a lottery to attend a certain school is not the same as winning a lottery, accepting the offer, and actually attending. The overall effect on winners who accept and attend is known as the effect of "treatment on the treated." While the core analysis will focus on the effect of winning a lottery, the study will attempt to obtain refined estimates of the effect of treatment on the treated using techniques described below.


The literature on experimental evaluation of training programs suggests that obtaining accurate estimates of the effect of treatment on the treated is difficult but sometimes possible.40,41 For instance, under certain assumptions, one way to estimate the effect of treatment on the treated is simply to scale up the coefficient on winning a lottery in model (4) by the reciprocal of the fraction of lottery winners who actually enroll in the conversion magnets.42


The mirror image to the problem of lottery winners not enrolling in the magnet is lottery losers reacting by enrolling in another magnet or school of choice. This problem, known as substitution bias, will also bias the estimated effect of winning a lottery toward zero. Just as for the earlier problem, under certain assumptions it will still be possible to estimate the effect of treatment on the treated by adjusting the estimate from equation (4). But this correction for substitution bias assumes that those lottery losers who do leave their local schools are identical to those who stay. These assumptions will be tested and, if possible, appropriate corrections for selectivity bias will be made.43
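Under the stated assumptions, the two adjustments described above reduce to a single Bloom-style rescaling: divide the intent-to-treat estimate by the difference between winners' and losers' magnet enrollment rates. The helper below is an illustrative sketch of that arithmetic, not part of the study's specification.

```python
def treatment_on_treated(itt_effect, enroll_rate_winners, enroll_rate_losers=0.0):
    """Rescale an intent-to-treat estimate to treatment on the treated.

    With no substitution (losers never enroll in a magnet), this is
    the simple reciprocal scaling by winners' take-up; substitution by
    lottery losers shrinks the denominator and enlarges the estimate
    further. Valid only under the comparability assumptions discussed
    in the text.
    """
    takeup_gap = enroll_rate_winners - enroll_rate_losers
    if takeup_gap <= 0:
        raise ValueError("winners must enroll at a higher rate than losers")
    return itt_effect / takeup_gap
```

For example, an intent-to-treat effect of 2.0 points with 80 percent of winners enrolling implies an effect of 2.5 points on the treated; if 10 percent of losers also find their way into a magnet, the implied effect rises to 2.0/0.7.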

Supplementary Estimation Techniques for Student Achievement Studies

Pending data availability, the core estimation techniques discussed above will provide quasi-experimental and experimental measures of the effect of attending a conversion magnet school for resident and non-resident students, respectively. This section briefly addresses supplementary methods. In the case of resident students, it may be possible to strengthen the analysis if data for individual students can be linked across years. For non-resident students, should a sufficient number of lottery participants be unavailable, two alternative quasi-experimental methods will be considered.

Student Fixed-Effect Model on the Effects of Staying in a School After Conversion. The HLM analysis proposed for resident students is a quasi-experimental method that requires individual student data but does not require those data to be linked over time. If the feasibility phase investigation indicates that student data linked over time are available, then the study team will also consider estimating student fixed-effect models of the effect of converting schools to magnet status on student achievement. These models can improve the internal validity of conclusions about the estimated effect of magnet schools by controlling for all fixed characteristics of students, both observed and unobserved, thereby reducing the chance that unobserved differences between students are confounded with the magnet status of the school.


One approach to this fixed-effect model that is highly analogous to the comparative interrupted time series HLM model is to estimate a difference-in-difference model that compares individual students’ gains in achievement before and after a school converts to magnet status, while using gains of individual students at comparison schools to control for district-wide trends.
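As a minimal illustration of that difference-in-difference on gains, the helper below contrasts mean pre- and post-conversion achievement gains at the magnet school with the corresponding contrast at comparison schools; the function and its inputs are illustrative assumptions about how linked-year gains would be organized.

```python
from statistics import mean

def gain_difference_in_differences(magnet_gains_pre, magnet_gains_post,
                                   comparison_gains_pre, comparison_gains_post):
    """Difference-in-differences on individual students' year-to-year
    achievement gains: (magnet post - magnet pre) minus
    (comparison post - comparison pre). Subtracting the comparison
    contrast nets out district-wide trends in gains."""
    return ((mean(magnet_gains_post) - mean(magnet_gains_pre))
            - (mean(comparison_gains_post) - mean(comparison_gains_pre)))
```

If magnet students' gains rise from 2 to 5 points while comparison students' gains rise from 2 to 3, the estimated conversion effect on gains is 2 points.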

Alternatives for Non-resident Students Should Lottery Data Not Be Widely Available. Should large lottery samples not be available, two quasi-experimental approaches are proposed to assess the effect of switching to a conversion magnet school for non-resident students. First, if districts use some non-random cutoff criterion to admit students to conversion magnets, the study team may implement a regression discontinuity design. For example, if districts admitted students on a first-come, first-served basis, then it would be possible to compare achievement gains for students who applied just before and just after the cutoff date. Presumably these two sets of students would have similar motivation, in that they applied to the same school at nearly the same time, so a comparison of their achievement gains would be relatively free of omitted variable bias. A second option is a student fixed-effect model in which the achievement gains of individual students would be compared before and after the students switched between conversion magnet schools and traditional public schools. This approach avoids making inter-student comparisons and removes the influence of unobserved characteristics such as students' level of motivation, to the extent that these characteristics are fixed during the period under study.

Core Analysis of the Estimated Effect of Magnet Conversion on Minority Group Isolation

The study will also evaluate the relationship between converting elementary schools into magnets and various measures of minority group isolation, such as the percentage of students at the school who are from minority groups. Let Rjks = one of the measures of minority group isolation for grade level j from school pair k and school s. This variable can then be modeled using equations (2) and (3) from the HLM proposed earlier for test scores. (The student-level model (1) is dispensed with because minority group isolation is measured at the school level or higher.) In this version of the HLM, the parameters π5k, π6k, and π7k indicate whether the conversion magnet school changed its trend in the given measure of minority group isolation relative to its comparison schools.
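As a small worked example of one such measure, the sketch below computes a school's percent-minority enrollment and applies the 50-percent threshold from the regulatory definition cited in footnote 5; the function names and data layout are illustrative.

```python
def percent_minority(enrollment_by_group, minority_groups):
    """Percentage of a school's enrollment belonging to the listed
    minority groups. Under 34 CFR 280.4(b), a school exhibits minority
    group isolation when this exceeds 50 percent."""
    total = sum(enrollment_by_group.values())
    minority = sum(enrollment_by_group.get(g, 0) for g in minority_groups)
    return 100.0 * minority / total

def is_minority_isolated(enrollment_by_group, minority_groups):
    """Apply the 50-percent regulatory threshold."""
    return percent_minority(enrollment_by_group, minority_groups) > 50.0
```

For a school enrolling 300 Hispanic, 150 Black, 100 Asian, and 250 White students, minority enrollment is 550 of 800 students, or 68.75 percent, so the school would count as minority-group isolated.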


It is also possible to test whether district-wide measures of minority group isolation have changed from their pre-existing trends in the years after one or more elementary schools are converted to magnet status. However, such an approach does not allow for a comparison set of schools to be used, and so this corresponds to an interrupted time series research design rather than a comparative interrupted time series design.

C. Schedules of Data Collections and Reports

Exhibit 4 presents the schedule for the data collection activities during the feasibility and evaluation phases of the Conversion Magnet Schools Evaluation, and Exhibit 5 presents the schedule of deliverables for the two phases.



Exhibit 4. Schedule for Data Collections

Feasibility Phase                                      Beginning and End Dates
  Screen 2004 MSAP Cohort                              May 1, 2007-August 1, 2007
  Screen 2007 MSAP Cohort                              July 2, 2007-September 28, 2007

Evaluation Phase
  Survey principals in 2004 cohort                     December 3, 2007-March 31, 2008
  Student records data collection for 2001-2002
    through 2006-2007                                  December 3, 2007-March 31, 2008
  Student records data collection for 2007-2008        December 1, 2008-March 31, 2009
  Student records data collection for 2008-2009        December 1, 2009-March 31, 2010
  Student records data collection for 2009-2010        December 1, 2010-March 31, 2011
  Survey principals in both cohorts                    December 1, 2010-March 31, 2011
  Final interview of MSAP project directors or
    district choice representatives                    December 1, 2010-March 31, 2011



Exhibit 5. Schedule for Dissemination of Study Results

Feasibility Phase                                      Deliverable Dates
  Feasibility Memorandum
    First draft of memo                                October 30, 2007
    Final draft of memo                                December 31, 2007
  Descriptive Report
    First draft of report                              December 31, 2007
    Second draft of report                             January 31, 2008
    Final report                                       March 31, 2008

Evaluation Phase Deliverables
  Evaluation Report
    First draft of report                              May 31, 2011
    Second draft of report                             July 31, 2011
    Final report                                       September 30, 2011


17. Approval to Not Display Expiration Date

All data collection instruments will include the OMB expiration date.

18. Exceptions to Item 19 of OMB Form 83-1

No exceptions are requested.

1 See Christenson, B., Eaton, M., Garet, M., & Doolittle, F. (2004). Review of literature on magnet schools (Report submitted to U.S. Department of Education). Washington, DC: American Institutes for Research and MDRC, p. 3. See also U.S. Department of Education. (2005). Magnet schools assistance. Retrieved January 8, 2007, from http://www.ed.gov/legislation/ESEA02/pg65.html; and Rossell, C. H. (2005, Spring). What ever happened to magnet schools? Education Next, 2. Retrieved January 8, 2007, from http://www.educationnext.org/20052/44.html

2 Program regulations are in 34 CFR 280.

3 Information downloaded from www.ed.gov/programs/magnet/funding.html

4 The statute authorizes funding of grants to districts to carry out magnet programs that are part of an approved desegregation plan and are designed to bring students from different social, economic, ethnic, and racial backgrounds together. (Sec. 5303 of NCLB)

5 The Code of Federal Regulations definition of terms for the federal government’s magnet assistance program identifies minority group to include American Indian or Alaskan Natives, Asian or Pacific Islanders, Hispanics, and Blacks (not of Hispanic origin). The code defines minority group isolation in reference to a school as, “the condition in which minority group children constitute more than 50 percent of the enrollment of the school.” (34 CFR 280.4(b))

6 The schools will be sought in two grantee cohorts because a single cohort is unlikely to yield enough eligible schools in districts that can provide data to make the evaluation feasible.

7 In addition to the interrupted time series and experimental studies mentioned above, the feasibility of a number of other approaches will also be explored. These studies would supplement the primary analyses or serve as “fallback” methods to address the research questions if the ITS or experimental studies proved infeasible. All of these approaches are discussed in detail in Section 16 of Part A of the Supporting Statement.

8 If the evaluation phase is conducted, some of its tasks will begin as early as the fifteenth month of the feasibility phase.

9 The study would also provide descriptive information about how the content and program structure of these new magnet schools change over time. As they mature, magnet school programs may change their organizational structure and content in response to a variety of circumstances. A description of these changes and the factors that motivate them will also be informative to policymakers.

10 Howell, William G., & Paul E. Peterson (2002) The education gap: Vouchers and urban schools. Washington, D.C.: Brookings Institution

11 Betts, Julian R., Zau, Andrew, & Rice, Lorien (2003). Determinants of student achievement: New evidence from San Diego. San Francisco: Public Policy Institute of California.

12 Krueger, Alan & Whitmore, Diane (2000). “The effect of attending a small class in the early grades on college test-taking and middle school test results: Evidence from project STAR.” National Bureau of Economic Research Working Paper 7656.

13 Crain, R., Allen, A., Thaler, R., Sullivan, D., Zellman, G., Little, J., & Quigley, D. (1999). The effects of academic career magnet education on high schools and their graduates. Berkeley, CA: University of California Berkeley, National Center for Research in Vocational Education.

14 Betts, J., Rice, L., Zau, A., Tang, Y., & Koedel, C. (2006). Does school choice work? Effects on student integration and achievement. San Francisco: Public Policy Institute of California.

15 Ballou, D., Goldring, E., & Liu, K. (2006). Magnet schools and student achievement. New York: Columbia University, Teachers College, National Center for the Study of Privatization in Education.

16 The lottery applicants in the study by Ballou are non-resident students applying to schools outside of the assigned school zone. Although not specifically discussed, the magnet lottery applicants in the San Diego study by Betts et al. are also likely to be non-resident students, particularly at the elementary school level. In Crain et al.’s New York City study, however, students had to apply to participate in the career magnet academies and were selected through a lottery process without apparent regard to residence.

17 See Christenson, B., Eaton, M., Garet, M., & Doolittle, F. (2004). Review of literature on magnet schools (report submitted to U.S. Department of Education). Washington, DC: American Institutes for Research and MDRC.

18 Of the 207 magnet schools funded through 2004 MSAP grantees, nearly two-thirds (134 schools) were new magnet schools, and of these, nearly three-fifths (76 schools) were located in elementary schools.

19 Nationally in 1999-2000, over three-fifths of all magnet schools were located in elementary schools (Christenson et al., 2004, p. 3).

20 A descriptive evaluation of the 1998 MSAP funded school districts reports that 57 percent of schools targeted for desegregation with MSAP funds succeeded in preventing, eliminating, or reducing minority group isolation. (U. S. Department of Education, Office of the Under Secretary. (2003). Evaluation of the Magnet Schools Assistance Program, 1998 grantees. Washington, DC: Author. See page xii. [Retrieved January 8, 2007, from http://www.ed.gov/rschstat/eval/choice/magneteval/finalreport.pdf]). An earlier descriptive study of the 1989 and 1991 MSAP funded school districts reported that 64 percent of desegregation targeted schools met or made progress in meeting their desegregation objectives, with 57 percent moving closer to the district-wide average in terms of minority enrollment. (U.S. Department of Education, Office of the Under Secretary. (1996). Reducing, eliminating, and preventing minority isolation in American schools: The impact of the Magnet Schools Assistance Program. Washington, DC: Author.)

21 It should be noted that not all of the magnet schools that MSAP supports are themselves the targets for desegregation. In a small proportion of cases, the magnet program in one school is intended to affect the enrollment composition of one or more other schools by drawing students from those schools into the magnet school. In such cases, these other schools are the targets for desegregation.

22 Betts, J., Rice, L., Zau, A., Tang, Y., & Koedel, C. (2006). Does school choice work? Effects on student integration and achievement. San Francisco: Public Policy Institute of California.

23 While the exposure of students of parents with a low education to students whose parents have a high education increased, so did the exposure of students whose parents’ education is unknown to students of parents with a high education. Since it is unclear what the unknown category represents, it is difficult to draw a conclusion.

24 To the extent that pre-tests with sufficient predictive power are available for elementary students, the sample size for the lottery study of non-resident students could be reduced by half, to approximately 2,600 students. This is discussed in section 2.3 of Part B of this OMB Clearance Request.

25 Minimally, the data being collected will be used to track cohorts of students across school years. To the extent that each district is able to link individual records between years, we will request this linkage, as it would strengthen the analyses that could be conducted.

26 Question #2 on the survey asks the principal to indicate the number of full- and part-time teaching staff in each of x race-ethnic categories. As the Department of Education is in the process of formulating guidelines on the collection of this type of information, this data collection effort will adopt the Department’s guidance once it is finalized.

27 According to the Department of Labor’s Occupational Outlook Handbook, 2006-2007 edition, the median annual salary for elementary school principals in 2004-2005 was $74,062. Assuming 220 workdays (or 1760 work hours) per year, the hourly rate for elementary principals without fringe benefits is $42.

28 Each row in Exhibits 3a and 3b represents a distinct data collection activity. The total projected number of respondents for the first 3 years of the project is obtained by summing the projected number of respondents shown in column 4 of Exhibit 3a. As shown in the bottom row of that exhibit, we project a total of 282 respondents during the first 3 years of the evaluation, or an average of 94 respondents per year. The total number of hours per data collection activity (column 8) is obtained by multiplying the projected number of respondents (column 4) by the estimated time it will take each respondent to complete that activity (column 5). As shown in the bottom row of Exhibit 3a, we project a total of 3,186 hours of respondent time during the first 3 years of the evaluation, or an average of 1,062 hours per year.

29 Actual student identifiers will be replaced by pseudo-ID codes in the files provided to AIR/BPA to ensure confidentiality of individual students’ records.

30 The burden estimates for the first data request differ for the districts in the 2004 and 2007 grantee cohorts because the districts in the earlier cohort will retrieve 6 years of data (2001-2002 through 2006-2007) while the 2007 grantees will only retrieve 3 years of data (2004-2005 through 2006-2007). (The estimates assume that 20 participating districts will be split evenly between the 2004 and 2007 grantees and that the response rate for the data collections will be 100 percent because districts unable or unwilling to participate will be screened out during the feasibility phase.)

31 The burden estimates for the first data request differ for the districts in the 2004 and 2007 grantee cohorts because the districts in the earlier cohort will retrieve 6 years of data (2001-2002 through 2006-2007) while the 2007 grantees will only retrieve 3 years of data (2004-2005 through 2006-2007). (The estimates assume that 10 participating districts will be split evenly between the 2004 and 2007 grantees and that the response rate for the data collections will be 100 percent because districts unwilling to participate will be screened out during the feasibility phase.)


32 For this analysis, students in the two comparison schools for each magnet will be pooled together as one pseudo-comparison school before being compared with the magnet school.

33 Schools using the same achievement tests will be grouped together for analysis, so it is likely that the analyses will be grouped by state.

34 The model as shown treats variation in estimated effects across magnet schools as random effects; it would also be possible to treat this variation as fixed effects.

35 Initial identification of lotteries will take place during the feasibility phase, but will continue into the evaluation phase.

36 The power analysis for the lottery discussed in Part B of this submission focuses on results for a single grade level, and assumes one lottery per magnet school at the selected grade. The incorporation of multiple lotteries per grade level for magnet schools that run separate lotteries for different subgroups of students should not appreciably affect the power.

37 For simplicity, the model shown in equation (4) assumes a common estimate of impact βp across magnet schools. The power estimates discussed in the overview assumed a separate impact estimate βp would be obtained for each magnet school, and the estimates would be pooled to obtain an overall average estimate.

38 Donner, Allan and Neil Klar (2000). Design and analysis of cluster randomization trials in health research. London: Arnold; New York: Oxford University Press.

39 Bloom, Howard S. (2003). “Sample design for an evaluation of the Reading First Program.” MDRC Working Papers on Research Methodology, MDRC.

40 Heckman, J.S. (1997). Instrumental variables: A study of implicit behavioral assumptions used in making program evaluations. Journal of Human Resources, 32(3), 441-462.

41 Heckman, J.S., Lalonde, R.J., & Smith, J.A. (1999). The economics and econometrics of active labor market programs. In O. Ashenfelter & D. Card (Eds.), Handbook of labor economics: Volume 3 (pp. 1865-2097). North-Holland, Netherlands: Elsevier.

42 Heckman, J.S. (1996). Randomization as an instrumental variable. Review of Economics and Statistics, 78(2), 336-341.

43 See Heckman, Lalonde, and Smith (1999) for a discussion.
