Conversion Magnet Schools Evaluation
ED Response to OMB Comments
OMB: 1850-0832


  1. What criteria will be used to determine if this evaluation will continue from the feasibility to the evaluation phase? For example, if there are not enough students involved in a lottery, will the quasi-experimental design automatically be used? If so, will the study still have a stronger research design than already completed studies of the effect of magnet schools on student achievement? What effect will limited access to data have on the proposed interrupted time series design? What are the minimum data elements needed to proceed?


  2. We believe a decision tree would be helpful to describe how AIR will decide which study design to use and when it is necessary to abandon the study after the feasibility phase.


Response to questions 1 and 2:


The determination of whether or not to implement the full evaluation will be made by, and is the sole responsibility of, the Department. The contractor, AIR, will develop information to inform ED's decision. The decision will be made by determining whether it is possible to conduct the analyses needed to answer the research questions with the data that are available from the school districts.


Magnet schools largely serve low-income and minority students. These schools typically implement a distinctive curriculum or instructional approach with an expectation that it will attract a more diverse population of students. It is thought that this increase in diversity, combined with a better academic program, will improve academic performance and reduce minority group isolation, particularly for resident students. Resident students attend the school because they reside in the neighborhood attendance zone and do not have to apply for admission; these students are typically disadvantaged. A second, smaller group of students, non-resident students, must actively apply for admission and tend to be more advantaged. ED's greatest policy interest concerns students who attend magnet schools because they reside in the school's attendance zone, because (1) resident students tend to be disadvantaged and (2) resident students comprise the largest group of students served by magnet programs.


Therefore, we will make the determination of whether or not to implement the evaluation based on the availability of data to support the interrupted time series analysis, with student fixed-effects analysis as a fallback, because that approach best answers questions about the effects of magnet schools on resident students. Students who attend the magnet schools through lotteries are of secondary concern. The availability of lottery data will not influence the decision about whether to proceed with the evaluation, although it will be a useful complement to the interrupted time series analysis.


The decision will be made by first determining if the data are available for the interrupted time series analysis and, if not, whether the data are available for the student fixed-effects analysis. A schematic of this decision logic appears below, followed by the two steps in detail.
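
As a summary of the decision tree requested in question 2, the following sketch captures the logic of Steps 1 and 2 below (a schematic only; the actual determination rests with ED, based on the data-availability checks listed under each step):

    # Schematic of the design-selection decision described in Steps 1 and 2.
    # The feasibility flags summarize the data-availability requirements below.
    def select_design(its_feasible: bool, fixed_effects_feasible: bool) -> str:
        if its_feasible:
            return "Conduct the full evaluation using interrupted time series"
        if fixed_effects_feasible:
            return "Conduct the full evaluation using student fixed effects"
        return "Do not carry out the full evaluation"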


STEP/QUESTION 1 – Is it possible to complete the interrupted time series analysis? If it is possible to complete the interrupted time series analysis, the full evaluation will be conducted. If it is not possible, we will move to Step/Question 2.

The interrupted time series analysis requires data on 50 elementary conversion magnet schools and 100 non-magnet comparison schools from the same set of districts, with the following requirements (an illustrative form of the model follows the list):

  • Each magnet school must be accompanied by one or more non-magnet comparison schools from the same district with similar demographic and achievement profiles.

  • The magnet and comparison schools must have existed and administered the same standardized tests to their students for at least 3 years prior to and 3 years after the magnet conversion date.

  • The districts must be able and willing to provide longitudinal individual student records data (including demographic information, residence information, and test scores).
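
To make these requirements concrete, a simplified comparative interrupted time series model of the kind that could be estimated is shown below; the notation is illustrative only, not the final specification:

    Y_{st} = \beta_0 + \beta_1 t + \beta_2 \mathrm{Magnet}_s + \beta_3 (\mathrm{Magnet}_s \times t) + \beta_4 \mathrm{Post}_t + \beta_5 (\mathrm{Magnet}_s \times \mathrm{Post}_t) + \varepsilon_{st}

Here Y_{st} is mean achievement in school s in year t, Post_t indicates years after the conversion date, and Magnet_s distinguishes magnet from comparison schools. The coefficient on the Magnet x Post interaction (\beta_5) captures the post-conversion deviation of magnet schools from the trend of their comparison schools; the requirement of at least 3 years of test data before and after conversion is what identifies the pre-conversion trends and the post-conversion break.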


STEP/QUESTION 2 – Is it possible to complete the student fixed-effects analysis as a backup? If it is possible to complete the student fixed-effects analysis, the full evaluation will be conducted. If it is not possible, we will not carry out the full evaluation.

The student fixed-effects analysis likewise requires data on 50 elementary conversion magnet schools and 100 non-magnet comparison schools from the same set of districts, with the following requirements (a minimal form of the model follows the list):

  • Magnet school student achievement data must be available that can be linked across years for individual students prior to and after the conversion of the school.

  • The districts must be able and willing to provide similar achievement data for non-magnet comparison students.
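
Similarly, a minimal form of the student fixed-effects model (again illustrative only) is:

    Y_{it} = \alpha_i + \lambda_t + \delta \mathrm{Magnet}_{it} + \varepsilon_{it}

where \alpha_i is a fixed effect for student i that absorbs all time-invariant student characteristics, \lambda_t is a year effect, and \delta estimates the effect of attending a converted magnet school. This is why the records must be linkable across years: each student must be observed both before and after the conversion for the student fixed effect to be estimable.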


  3. Given that there are a few possible study designs being contemplated, are there cost differentials between implementing one design or another?


The cost of obtaining and analyzing the data will be the same regardless of the design that is implemented. In either case, we will be collecting a similar amount of pre-existing student records data from districts (either for all of their students or only for students in the treatment and comparison schools), so the cost does not depend on which analytic approach is used.


  4. Will the study be limited to magnet schools that converted from schools that failed to make AYP for three consecutive years? Is the context for the conversion to a magnet school a factor in the study? For example, are the magnet schools in the study restricted to those that have the same background, i.e., converted after failure to achieve AYP or converted based on a specific curriculum?


While many schools become magnets because of performance issues, there is no particular tendency for schools to convert to magnet status at the specific point when they reach three consecutive years of failing to make AYP. We expect most, if not all, of the magnet schools in the sample to have had performance issues, as these issues often serve as the motivation to apply for an MSAP grant. Additionally, there has been a competitive priority in favor of these low-performing schools in both the 2004 and 2007 MSAP grant competitions.


While performance will not be used for screening candidates for the evaluation, the information will provide a context for interpreting the results. However, we do not believe that schools that convert to magnet status after two years of failing to make AYP will be substantially different from those that convert at the critical three-year mark, and thus we are willing to include both types in the evaluation.


  5. Will the study address the self-selected status of non-resident magnet school students, even those in the lottery?


All studies of school choice produce findings that are relevant only to the group of families who want to move to another school. We can describe the characteristics of students and families who choose to apply to magnet schools, but the results of the lottery-based analyses are not meant to be generalized to non-applying students.


  6. You indicate that a major limitation of earlier studies is their treatment of all magnet schools as homogeneous. How does this study avoid the same treatment of schools where lottery "losers" attend? Specifically, given that a lottery "loser" may attend another magnet school, a traditional public school, a private school, or be home-schooled, how will comparing this range of student outcomes as a single measure to magnet school outcomes be meaningful?


For the lottery-based analyses, the comparison group will attend a variety of schools. This is true within any school choice framework, and no attempt will be made to control for this heterogeneity in the screening of schools. However, these analyses are not the main focus of the study. The interrupted time series analysis, with student fixed-effects analysis as a backup, is the main focus. For that analysis, we will be selective in choosing comparison schools that are as similar as possible to the magnet schools.


  7. How will the study verify that the lotteries are in fact random?


Information that will provide an understanding of the randomization process will be collected through the Grantee Screening Protocol as well as the MSAP Project/School Choice Coordinator Interview. These instruments will assist us in determining if the lotteries are truly random.


There are several questions in the Grantee Screening Protocol that ask for information about the lotteries. For example, in Module A, questions 2c and 2d ask at what grades students can apply and are accepted for admission. In Module C, there are additional questions about the application process, including items 2 and 3, which ask about the lottery application and the date by which students must apply for admission. There is an entire section on the admissions process designed to attain a comprehensive understanding of that process and how decisions are made to admit students (i.e., before or after reviewing applications). These items are all in addition to questions 7 through 9 in Module C, which specifically ask about the randomization process (i.e., what method is used, who conducts the randomized ordering, how the admission order is kept, and how the information is maintained). The interviewee is also asked to describe the process through an open-ended question that can surface peculiarities that would make a lottery less than random.


There are also questions in the MSAP Project/School Choice Coordinator Interview Guide that request information on the admissions process of each school (section E). This information can be compared to information about the lottery previously collected.


Finally, in addition to collecting and comparing information about the randomization process, analytic tests will be run on the data to verify that the observable characteristics of winners and losers in each lottery are statistically indistinguishable. A finding that winners and losers differed significantly would raise the possibility that the lottery was not conducted randomly. If a lottery is determined not to have been random, its students will be excluded from the study. A sketch of such a balance test follows.
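
As an illustration only, the sketch below shows one way such a balance test could be run on district records; the variable names are hypothetical placeholders, not actual data elements from the study.

    # Illustrative balance check for a single lottery (hypothetical column
    # names; the actual covariates come from district student records).
    import pandas as pd
    from scipy import stats

    def balance_test(applicants: pd.DataFrame, covariates: list[str],
                     winner_col: str = "lottery_winner") -> pd.DataFrame:
        """Compare baseline covariates of lottery winners and losers.

        Returns group means and two-sided p-values from Welch's t-test for
        each covariate; small p-values flag characteristics on which winners
        and losers differ more than chance would predict.
        """
        winners = applicants[applicants[winner_col] == 1]
        losers = applicants[applicants[winner_col] == 0]
        rows = []
        for cov in covariates:
            t_stat, p_value = stats.ttest_ind(
                winners[cov].dropna(), losers[cov].dropna(),
                equal_var=False)  # Welch's t-test: unequal variances allowed
            rows.append({"covariate": cov,
                         "winner_mean": winners[cov].mean(),
                         "loser_mean": losers[cov].mean(),
                         "p_value": p_value})
        return pd.DataFrame(rows)

    # Example call for one lottery, using placeholder covariate names:
    # balance_test(applicants, ["prior_math_score", "prior_reading_score",
    #                           "female", "free_lunch_eligible"])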


  8. How will children in comparison schools be chosen if the number available exceeds the number needed to meet statistical requirements?


Our strategy for data collection was designed to impose as little burden as possible on school districts. Therefore, we will request information for the entire district, or for just the subset of treatment and comparison schools, so that district data managers do not need to spend time and computing resources subsetting the data for a sample of students (although districts will have that option). We plan to use all available data in the analyses, increasing statistical precision with no additional cost to the study or burden to districts.


  9. To what degree do MSAP grantees tend to apply and/or tend to be selected because they are schools failing to make AYP? Is there a typical profile of grantees, such as failing to make AYP or expanding from PWSs because of their popularity and long waiting lists?


As noted in the response to question 4, there has been a competitive priority in both the 2004 and 2007 MSAP grant competitions to build the capacity of low-performing schools by providing choice. Consequently, schools that have applied and schools that have been selected to participate in MSAP projects tend to (1) have a higher minority group student enrollment than the district-wide minority group student enrollment and (2) be Title I schools. This is particularly true of elementary schools.

  10. Given that the results will not be generalizable to all magnet schools, please talk further about the specific practical utility of these results.


We are focusing on a type of magnet school that is common. While it is true that any evaluation that does not draw a random sample of grantees will not be statistically generalizable, it can still provide important lessons for districts considering similar conversions and for districts implementing this type of program.



  11. Is there any way to increase the study's external validity at the student level?


Because the study requires a purposeful sample of schools, external validity is limited at the school level. At the student level, however, no sampling is conducted: data from the entire population of magnet school students are used in the analyses, so the analysis represents each school as a whole. The challenge to external validity therefore lies in the selection of schools, a necessity of the study, and should not limit external validity at the student level.


  12. How will this evaluation work with the invitational priority in the 2007 MSAP grant competition for grantees to complete a rigorous evaluation? Will there be a duplication of effort at some sites?


A small fraction of 2004 grantees (10-12 percent) included a rigorous evaluation, and many of these grantees do not operate conversion elementary schools. We expect the number of grantees conducting rigorous evaluations to be similar in 2007. Therefore, the overlap between grantees conducting rigorous evaluations and grantees participating in this study will be minimal.


Additionally, while some grantees may have conducted a rigorous evaluation of their individual program, we expect that these evaluations are likely underpowered. We also believe that the grantee evaluators will be precluded from sharing data with us, a third party, without complicated legal agreements to ensure adherence to the Privacy Act; nor can an ED contractor share data with another contractor without violating the same rules. However, if a district gives us permission and the grantee evaluator has the data we need, we will request student data from the grantee evaluator to reduce burden on the district.


We will use information reported by the individual evaluations to add detail to our study findings.


  13. Will you be providing confidentiality under the Privacy Act or the Education Sciences Reform Act of 2002? There are inconsistencies between Part A and the Principal Survey.

American Institutes for Research will follow procedures for ensuring and maintaining participant privacy, consistent with the Education Sciences Reform Act of 2002. Title I, Part E, Section 183 of this Act requires "[a]ll collection, maintenance, use, and wide dissemination of data by the Institute" to "conform with the requirements of section 552 of title 5, United States Code, the confidentiality standards of subsection (c) of this section, and sections 444 and 445 of the General Education Provisions Act (20 U.S.C. 1232g, 1232h)." These citations refer to the Privacy Act, the Family Educational Rights and Privacy Act, and the Protection of Pupil Rights Amendment. While this is specified in the appendix, it should also be reflected in Part A, in the section Assurances of Confidentiality.


  14. What response rate do you anticipate to the Principal Survey absent the incentive?


Section 9, Payments or Gifts to Respondents, specifies that principals will be compensated $25 to complete the principal survey, with an estimated response rate of 85%.


