Impact Study of Federally-Funded Magnet Schools

OMB: 1850-0943

Part A

Impact Study of Federally-Funded Magnet Schools: OMB Data Collection Package

January 10, 2018



Submitted to:

Institute of Education Sciences
550 12th Street, SW

Room 4104
Washington, DC 20004

Project Officer: Lauren Angelo
Contract Number: ED-IES-17-C-0066

Submitted by:

Mathematica Policy Research

P.O. Box 2393
Princeton, NJ 08543-2393
Telephone: (609) 799-3535
Facsimile: (609) 799-0005

Project Director: Christina Tuttle
Reference Number: 50526.01.026.220.000






CONTENTS

PART A. SUPPORTING STATEMENT FOR PAPERWORK REDUCTION ACT SUBMISSION

A. Justification

1. Circumstances necessitating the collection of information

2. Purposes and uses of the data

3. Use of technology to reduce burden

4. Efforts to avoid duplication of effort

5. Methods to minimize burden on small entities

6. Consequences of not collecting data

7. Special circumstances

8. Federal Register announcement and consultation

9. Payments or gifts

10. Assurances of confidentiality

11. Additional justification for sensitive questions

12. Estimates of hours burden

13. Estimates of cost burden to respondents

14. Estimates of annual costs to the federal government

15. Reasons for program changes or adjustments

16. Plan for tabulation and publication of results

17. Approval not to display the OMB expiration date

18. Explanation of exceptions

References

APPENDIX A: DISTRICT INTERVIEW PROTOCOL

APPENDIX B: SCHOOL RECRUITMENT AND ADMISSIONS SURVEY

APPENDIX C: STUDY BROCHURE

APPENDIX D: CONFIDENTIALITY PLEDGE

TABLES

A.1. Research questions and data sources

A.2. Source, mode, and timing

A.3. Annual reporting and record-keeping hour burden

A.4. Timeline for project publications



PART A. SUPPORTING STATEMENT FOR PAPERWORK REDUCTION ACT SUBMISSION

This Office of Management and Budget (OMB) package requests clearance for data collection activities to support a rigorous Impact Study of Federally-Funded Magnet Schools. The Institute of Education Sciences (IES) at the U.S. Department of Education (ED) has contracted with Mathematica Policy Research and its subcontractor, Social Policy Research Associates (SPR), to conduct this evaluation (ED-IES-17-C-0066). The evaluation includes an initial feasibility assessment, the focus of this clearance package, to determine whether an impact study can be conducted appropriately. First, the study team will gather information from fiscal year (FY) 2016 and 2017 Magnet Schools Assistance Program (MSAP) grantee districts and schools on student recruitment and admissions policies and practices, paying particular attention to the use of randomized lotteries for student admissions. The feasibility phase will result in a brief describing how MSAP-funded schools recruit and select students for admission, a topic of interest to the program office. Second, if a sufficient number of students are being admitted to these schools through lotteries, a revised clearance package will be submitted and the impact study will collect survey data from principals and district administrative records on admissions lotteries and student progress. The study would use these data to estimate the impacts of magnet schools on student achievement and diversity and to describe whether particular features of magnet schools are associated with greater success.

A. Justification

1. Circumstances necessitating the collection of information

a. Statement of need for a rigorous evaluation of magnet schools

Magnet schools are an important component of public school choice, as well as a strategy districts use to improve student achievement and school diversity. Magnet schools, or programs within schools, typically implement a distinctive theme or instructional method and recruit at least some students from outside the school's neighborhood, attracting students with a broader range of backgrounds. The theory behind magnets is that all students will benefit academically from the school's thematic focus and, potentially, from exposure to a diverse student body.

Magnet schools hold a prominent place in the history of education reforms in the United States and were the earliest form of school choice, arising around the turn of the twentieth century as competitive-admission options that provided a rigorous curriculum for districts' highest-achieving students (Steel and Levine 1994; Finn and Hockett 2012). Beginning in the 1960s and 1970s, magnet schools became critical to school districts' efforts to carry out desegregation plans by providing parents with the option of sending their children to schools with attractive offerings outside their own neighborhoods. Magnet schools have also been designed to encourage parents to keep their children in a district's public school system rather than placing them in private schools or relocating to suburban school districts (Blank, Levine, and Steel 1996). Along with other school choice options, magnet schools have grown in number in recent years, to more than 3,200 schools in more than 600 districts across the country (Glander 2016).

Growth in the magnet school sector has been aided by the Magnet Schools Assistance Program (MSAP) at the U.S. Department of Education (ED), which has provided grants to school districts to support magnet programs since 1985. In recent years, federal funding for MSAP has declined at the same time that funding for other forms of school choice (such as public charter schools and private school voucher programs) has increased. One hypothesis for these trends is the lack of conclusive evidence about the effectiveness of magnet schools in general (Ballou 2009) and of the MSAP program in particular.

Overall, evidence on magnet school effects is mixed. Few studies have used the types of rigorous methods that allow for drawing conclusions about impacts with confidence, and the studies that have used these methods (random assignment through school lotteries) have been conducted in a single state or a single city and have not focused on MSAP-funded schools (Bifulco et al. 2009; Crain et al. 1999; Berends and Waddington 2017; Wang et al. 2014; Christenson et al. 2003).

The most recent FY 2016 and FY 2017 MSAP grant competitions provide a potential opportunity to conduct a rigorous study of the MSAP program, drawing primarily on administrative records. The grant notices in those years included a competitive preference priority for grant applicants that proposed to select students to attend the magnet schools by methods such as lotteries. This incentive introduced by the program office at ED builds directly on findings from an earlier, descriptive study of magnet schools conducted by the Institute of Education Sciences, which raised concerns about MSAP schools’ success in recruiting and admitting students from outside the schools’ neighborhood to encourage educational and demographic diversity.

The potential impact evaluation would measure the effects of a wide range of MSAP schools on student achievement and school diversity, using a rigorous lottery-based random assignment design. The impact study data collection would make it possible to estimate the impact of admission to a magnet school on: (1) student achievement and other available academic outcomes; and (2) school environment, including exposure to a more diverse set of peers. The study would also examine whether particular features of MSAP schools are associated with greater success, which could inform future program improvement efforts.

This impact evaluation of MSAP schools is authorized by Part B section 8601 of the Elementary and Secondary Education Act of 1965 (ESEA) as amended by the Every Student Succeeds Act (ESSA), which allows the Department to pool resources across ESEA programs in order to fund rigorous evaluations of individual Federal education programs that currently lack sufficient evaluation dollars. The Magnet Schools Assistance Program (MSAP), itself, is authorized under Title IV, Part D of the ESEA, as amended by ESSA, and provides grants to local educational agencies (LEAs) and consortia of LEAs to support magnet schools under an approved, required or voluntary, desegregation plan.

b. Overview of study design and research questions

Because the study is a program evaluation, not just an assessment of a particular form of school choice, the analysis would focus on the districts currently receiving federal support through multi-year MSAP grants awarded in FY 2016 and FY 2017. We anticipate recruiting approximately 30 MSAP schools.1 To reach that target, the study team will first collect information on schools' admissions and lottery procedures, administrative data availability, and willingness to participate in an impact evaluation. Information about admissions and lottery procedures will provide the basis not only for the feasibility assessment but also for an evaluation brief on schools' student recruitment and admissions practices, an area of interest for the program office that oversees the MSAP.

Then, if enough schools meet the study's criteria, a revised clearance package would be submitted and the impact evaluation would address specific questions about the impact of being admitted to an MSAP school and explore other questions that may help ED and the magnet school community improve the program. The study design takes advantage of the lotteries that districts or schools conduct when more students apply to an MSAP school than it has available seats. Specifically, the analysis would estimate the impact of each magnet school in the study sample by comparing the outcomes of students who receive an admission offer through the lottery to the outcomes of students who do not. The lotteries randomly determine which students are in the study's treatment group ("lottery winners," who are offered admission to the study's magnet schools) and which are in the control group ("lottery losers," who are not offered admission). The study would examine the outcomes of two cohorts of students: the first study cohort (applying through a lottery to enter an MSAP school in 2018–2019) would be followed for four years, and the second study cohort (applying through a lottery to enter an MSAP school in 2019–2020) would be followed for three years. Our analytic approach takes into account that variation will exist in schools' lottery procedures, grade levels of applicants, and the ratio of applicants to available seats; including as many different magnet school lotteries as possible in the study will maximize the sample's size and policy relevance.

Table A.1 describes the study's research questions, along with the data sources and analysis methods for addressing them.

Table A.1. Research questions and data sources

Research questions

Data sources and analysis methods

Feasibility study

  1. How do the districts and schools funded through the 2016 and 2017 MSAP grants recruit and select students for admission?

  • Screener interviews with 2016 and 2017 MSAP districts and school surveys on student recruitment and admissions

  • Comparison of reported school/district practices

  2. How many schools hold eligible lotteries and are willing to participate in the study?

  • Screener interviews with 2016 and 2017 MSAP districts

  3. Is the number of lottery participants and eligible schools sufficient to conduct the study?

  • Screener interviews on lottery procedures and information available in district administrative data

  • Projections of the number of students participating in usable lotteries in spring 2018 and 2019

  • Analysis of statistical power to detect meaningful impacts

Impact evaluation

  4. What is the impact of admission to the magnet program on student academic outcomes (achievement and/or other relevant measures of student success, such as persistence in school or graduation)?

  • Data on student outcomes from district student-level records

  • Experimental impact analysis comparing outcomes of treatment group students (who win lotteries and are admitted to magnet schools) and control group students (who lose lotteries)

  5. What is the impact of admission to the magnet program on the type of school that students attend, including their school's educational programs and the students' exposure to a diverse range of peers?

  • Student- and school-level administrative records, supplemented by data on school characteristics from the principal survey

  • Experimental impact analysis comparing characteristics of schools attended by treatment group and control group students

  6. To what extent is there a relationship between school characteristics, including measures of diversity, and school impacts on student outcomes?

  • Subgroup analysis, in which impacts are calculated for subgroups of schools with particular characteristics

  • Correlational analysis, examining the relationship between school characteristics and measures of diversity (from the school survey and extant data) and impacts


By comparing the outcomes of treatment group and control group students, we would estimate the impact of admission to an MSAP school. Using primarily administrative records, we would measure these outcomes for four years.2 Following the sample for four years makes it possible to observe student test scores in grade 3 for students entering a magnet school in kindergarten, and for some outcomes it would also make it possible to assess whether any impacts grow or diminish over time (which could be particularly important in examining the effectiveness of newer magnet schools). By comparing the characteristics of the schools that treatment group and control group students attend, we also would assess the effect of admission to a magnet program on the type of schools that students attend, including the content of their schools' educational programs and the students' exposure to diverse groups of peers. We expect a broad set of schools to participate in the study, so our approach also includes comparisons of the impacts of admission across different groups of magnet schools, such as newer versus older magnet schools or magnets with different themes. This correlational analysis may yield policy-relevant findings on the conditions under which magnet schools are most likely to be successful.

2. Purposes and uses of the data

To address the study’s research questions, the evaluation would collect and analyze data from several sources (Table A.2). These data will be used to understand the impact of admission to a magnet program on student academic outcomes and type of school attended (including peer characteristics), providing evidence on the extent to which magnet schools are improving student outcomes and increasing diversity. Next, we describe how the study would use each data source.

Table A.2. Source, mode, and timing

District interview (feasibility study)

  Data source: District interview protocol

  Mode and timing: 60-minute interview, administered in fall 2018 (a)

  Respondent group: All nine FY 2016 and 32 FY 2017 MSAP grantee districts (40 unique MSAP districts) (b)

School survey (feasibility study)

  Data source: School recruitment and admissions survey

  Mode and timing: 10-minute survey, administered in fall 2018 (a)

  Respondent group: All 38 FY 2016 and 124 FY 2017 MSAP grantee schools (162 unique MSAP schools)

Principal survey (impact study)

  Data source: Survey on school organization and instruction (c)

  Mode and timing: 30-minute survey, administered electronically, with telephone option, in spring 2019, 2020, 2021, and 2022

  Respondent group: Principals of schools attended by impact study sample treatment group and control group students (including charter schools and private schools, as feasible)

District/school records (impact study)

  Data source: Lottery records extraction memo (c)

  Mode and timing: Electronic data and characteristics of lottery implementation for prior and current school years, collected from districts in spring through fall 2019, for the 2018–2019 school lottery (retrospectively) and the 2019–2020 school lottery

  Respondent group: All impact study sample districts (comprising schools attended by treatment group and control group students)

  Data source: District records data extraction memo (c)

  Mode and timing: Electronic student-level records data for prior school years, requested from districts in fall 2020 (for the 2017–2018 and 2019–2020 school years) and fall 2022 (for the 2020–2021 and 2021–2022 school years)

  Respondent group: All impact study sample districts (comprising schools attended by treatment group and control group students)

(a) We assume that the respondents will need an additional hour to locate and compile information in advance of the interview.

(b) One district (New Haven Public Schools) received a grant in both FY 2016 and FY 2017. This count also includes seven different 'community school district' grantees managed by the New York City Department of Education; the study plans to separately contact a representative of each of these community school districts.

(c) These data collection activities will occur only if the impact evaluation is feasible.


District interview protocol (feasibility study). District interviews will collect information on student recruitment activities, student selection, the structure of MSAP grantee admissions lotteries, the extent to which grantee schools are oversubscribed, and the availability of administrative data. This information will determine whether a school is eligible for the impact study.

To reduce burden on district staff, we will review grantees' district websites and other publicly available information relevant to our items of interest before the interview; we will ask respondents to verify or update that information rather than supply new answers. We developed the protocol so that the discussion with one staff member can be completed in 60 minutes. We will provide the protocol to respondents in advance of the phone discussion, along with a study brochure (Appendix C), and assume that they may spend up to an hour locating and compiling information for the interview.

School recruitment and admissions survey (feasibility study). Short school surveys will collect only essential information on student recruitment activities and student selection. This information will also help determine whether a school is eligible for the impact study and will provide information of interest to the program office.

Lottery records memo (impact study). For the impact study, the study team would share a memo with districts that defines the data that districts and schools will need to provide for each cohort of lottery applicants. Details about how the lottery was conducted are crucial for compiling the appropriate data to accurately define students’ treatment group or control group status. We would structure the memo to define lottery data in a manner that accounts for several factors, such as single-school versus districtwide common lotteries, processes for making admissions offers (for example, location or sibling preferences), processes for late offers from wait lists, and identifiers to match lottery records to enrollment records. Districts would provide the associated records in the format used by the district.
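To illustrate how these lottery details translate into analysis variables, the following is a minimal sketch applying one common convention, under which treatment status is fixed by the initial lottery draw and later waitlist offers are treated as crossover rather than reassignment. The column names and values are hypothetical, not the study's actual data specification.

```python
import pandas as pd

# Hypothetical lottery records; column names and values are illustrative only.
lottery = pd.DataFrame({
    "student_id":    [101, 102, 103, 104],
    "lottery_id":    ["schoolA-2018-K"] * 4,
    "initial_offer": [1, 0, 0, 1],  # offer made when the lottery was run
    "late_offer":    [0, 1, 0, 0],  # later offer from the waitlist
})

# Treatment status is fixed by the initial random draw: students offered a
# seat when the lottery was run form the treatment group. A late offer from
# the waitlist does not change a student's assigned group (it is a form of
# crossover handled in the analysis), which preserves the randomization.
lottery["treatment"] = lottery["initial_offer"]
```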

School survey on organization and instruction (impact study). Magnet schools offer specialized curricula or instructional methods to attract students from diverse backgrounds. A 30-minute web-based survey of school organization and instruction would provide a description of the magnet programs in MSAP schools, or the “treatment” in this evaluation. An important contribution of the impact study would be to gather information not only on the magnet schools but also on the schools that the study’s control group students attend. Research offers little insight into the relationship between the practices and characteristics of magnet schools and school and student outcomes. The survey of school organization and instruction would allow us to analyze which characteristics may be related to the effectiveness of magnet schools in improving student outcomes and increasing integration.

District student-level records memo (impact study). In the impact study, we would share a memo with districts that describes the data we will ask them to provide. These student-level administrative records include demographic characteristics, school enrollment, and test scores, as well as attendance, persistence, and graduation, where applicable and available. The data request memo would clearly and concisely summarize (1) the sample of students for whom we are requesting data, (2) the data elements, and (3) the school years for which we are requesting each data element.

We also would use publicly available school-level data from the Common Core of Data and district websites to measure school and staff characteristics. These characteristics will include school size, racial/ethnic and socioeconomic student composition, and teacher/pupil ratio.

3. Use of technology to reduce burden

To minimize respondent burden, the data collection plan is designed to obtain information efficiently. When feasible, we will gather information from existing data sources, using the most efficient methods available. This will be especially helpful in reducing burden for the district/school interviews, because we will collect publicly available data on the schools’ admissions and lottery processes before the interview. Table A.2 provides information on the source, mode, and timing for each data collection activity.

We would survey principals primarily electronically. This mode allows respondents to complete the data collection instrument where and when they choose, and its built-in editing checks and programmed skips will reduce response errors. We also would offer a telephone or paper mode for principals who prefer these modes.

The types of district records needed for the evaluation are almost always available as computer files and would be transferred to Mathematica electronically. We would specify the required data elements; however, to reduce burden on districts, we would accept the data in whatever format they are provided. We would convert these data to a consistent format so that they can be combined with data submitted by other districts and prepared for analysis.
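As an illustration of this conversion step, the sketch below maps one district's file layout onto a common schema. The column names and the mapping are hypothetical; each district would have its own mapping.

```python
import pandas as pd

# Hypothetical mapping from one district's column names to the study's
# common schema; a separate mapping would be maintained for each district.
DISTRICT_A_COLUMNS = {
    "StudentNumber":  "student_id",
    "SchoolCode":     "school_id",
    "MathScaleScore": "math_score",
    "ELAScaleScore":  "reading_score",
}

def harmonize(raw: pd.DataFrame, column_map: dict) -> pd.DataFrame:
    """Rename district columns to the common schema, keeping only mapped fields."""
    return raw.rename(columns=column_map)[list(column_map.values())]

# Usage (illustrative file name):
# harmonized = harmonize(pd.read_csv("district_a_records.csv"), DISTRICT_A_COLUMNS)
```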

4. Efforts to avoid duplication of effort

The study will rely extensively on pre-existing administrative records, which will help limit respondent burden. District records provide the basis of the study's key outcomes, including student achievement and school diversity, and we would collect admissions lottery information from schools and districts that already document these procedures in detail (rather than asking schools to conduct or document lotteries differently for the purposes of the study). The study will conduct screening interviews with district/school staff and school surveys on organization and instruction only because they are essential to determining school eligibility for study participation and to describing participating schools, and no pre-existing data on those topics are available.

5. Methods to minimize burden on small entities

No small businesses or entities will be involved as respondents.

6. Consequences of not collecting data

These data are needed to evaluate the MSAP and its magnet schools. Little is known about these schools' success in promoting diversity and boosting student achievement, the program's main goals. Failing to collect these data would make it impossible to inform policymakers and the field about the program's effectiveness and potential ways to improve it.

Consequences of not collecting data from specific sources. Each data source provides information needed for the evaluation.

Without information from the screener interviews with districts and the surveys of schools on recruitment and admissions, we could not document the MSAP grantee admissions lotteries, the extent to which grantee schools are oversubscribed, or the availability of administrative data, and therefore could not determine whether there is a sufficient number of schools and students for the impact evaluation. These data collections also provide the study's only information on student recruitment and admission practices, which will be valuable to education policymakers interested in understanding these practices; without it, the study could not highlight common and innovative approaches to attracting students and increasing diversity in magnet schools.

Without the information from school surveys on organization and instruction, we would be unable to understand the characteristics and organization of magnet schools, determine how they differ from the schools enrolling control group students, or analyze which characteristics may be related to the effectiveness of magnet schools in improving student outcomes and diversity.

Without lottery records, we cannot conduct the impact evaluation. We would be unable to (1) accurately define students’ treatment group or control group status, and (2) identify the largest group of schools and students appropriate for the analyses.

Without district records, we would not be able to analyze the impact of magnet schools on student outcomes, including test scores, persistence, graduation, and exposure to diverse groups of peers.

7. Special circumstances

There are no special circumstances involved with this data collection.

8. Federal Register announcement and consultation

a. Federal Register announcement

The 60-day notice to solicit public comments was published in the Federal Register (Volume 83, Number 11, page 2440) on January 17, 2018. No public comments were received. The 30-day notice to solicit public comments will be published.

b. Consultations outside the agency

We have not consulted any experts not directly involved in the study regarding the subject of this clearance. We expect to consult with outside experts on the evaluation design and impact evaluation.

9. Payments or gifts

As part of their funding requirements, MSAP grantees must participate in evaluation activities; however, the same is not true of the schools attended by control group students. Because a strength of the intended impact evaluation is an exploration of the relationship between school organization/instruction and magnet school effects, the survey of treatment and control group principals is a critical data collection activity. The survey is expected to take 30 minutes to complete. Providing data collection incentives is important in federal studies, given the recognized burden and the need for high response rates; the use of incentives has been shown to be effective in increasing response rates (Dillman 2007). In 2005, the National Center for Education Evaluation (NCEE) submitted a memorandum to OMB outlining guidelines for incentives in NCEE evaluation studies and tying recommended incentive levels to the level of burden (represented by the length of the survey). We believe respondent payments are necessary for the control group sample for the principal survey, because these principals are not required to participate in the evaluation and often have full schedules with multiple demands on their time. We do not propose respondent payments for principals in grant-funded schools because we believe grant requirements will be sufficient to obtain their responses. During each of the four rounds of data collection, control group principals completing the survey will receive a $30 payment, which is consistent with a high-burden survey.

10. Assurances of confidentiality

The study team has established procedures to protect the confidentiality and security of its data. This approach will be in accordance with all relevant regulations and requirements, in particular the Education Sciences Reform Act of 2002, Title I, Subsection (c) of Section 183, which requires the Director of IES to "develop and enforce standards designed to protect the confidentiality of persons in the collection, reporting, and publication of data." The study also will adhere to the requirements of Subsection (d) of Section 183 prohibiting disclosure of individually identifiable information, as well as making the publishing or inappropriate communication of individually identifiable information by employees or staff a felony.

The study team will protect the full privacy and confidentiality of all people who provide data. The study will not have data associated with personally identifiable information (PII), because study staff will be assigning random ID numbers to all data records and then stripping any PII from the data records. In addition to the data safeguards described here, the study team will ensure that no respondent names are identified in publicly available reports or findings, and, if necessary, the study team will mask distinguishing characteristics. The following statement to this effect will be included with all requests for data:

Mathematica Policy Research and its subcontractor SPR follow the confidentiality and data protection requirements of IES (The Education Sciences Reform Act of 2002, Title I, Part E, Section 183). Responses to this data collection will be used only for research purposes. The reports prepared for the study will summarize findings across the sample and will not associate responses with a specific district, school, or individual. We will not provide information that identifies respondents to anyone outside the study team, except as required by law.
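As a concrete illustration of the de-identification step described above, the following is a minimal sketch in which random study IDs replace personally identifiable information and the crosswalk linking IDs to PII is kept separate from the analysis file. The field names and helper function are hypothetical, not the study's actual system.

```python
import secrets
import pandas as pd

PII_COLUMNS = ["name", "date_of_birth"]  # illustrative PII fields

def deidentify(records: pd.DataFrame) -> tuple[pd.DataFrame, pd.DataFrame]:
    """Assign random study IDs and strip PII from the analysis file."""
    records = records.copy()
    # Random, non-sequential IDs so study IDs cannot be traced back to PII.
    records["study_id"] = [secrets.token_hex(8) for _ in range(len(records))]
    # The crosswalk is stored separately, under stricter access controls.
    crosswalk = records[["study_id"] + PII_COLUMNS]
    analysis_file = records.drop(columns=PII_COLUMNS)
    return analysis_file, crosswalk
```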

Mathematica uses the following safeguards to protect confidentiality:

  • All Mathematica employees sign a pledge that emphasizes the importance of confidentiality and describes their obligation (see Appendix D).

  • All internal networks are protected from unauthorized access by using defense-in-depth best practices, which incorporate firewalls and intrusion detection and prevention systems. The networks are configured so that each user has a tailored set of rights, granted by the network administrator, to files approved for access and stored on the network. Access to hard-copy documents is strictly limited. Documents are stored in locked files and cabinets. Discarded materials are shredded.

  • Computer data files are protected with passwords, and access is limited to specific users, who must change their passwords on a regular basis and conform to strong password policies.

  • Especially sensitive data are maintained on removable storage devices that are kept physically secure when not in use.

After the study concludes, the study data may be transmitted to the National Center for Education Statistics (NCES) for safekeeping as a restricted-use file. Before transmittal, the data will be stripped of any individual identifiers. Researchers wishing to access the data for secondary analysis must apply for an NCES license and agree to the rules and procedures guiding the use of restricted-use files.

11. Additional justification for sensitive questions

No questions of a sensitive nature will be included in this study.

12. Estimates of hours burden

Table A.3 provides an estimate of the time burden for the data collections, broken down by instrument and respondent. The total of 107 hours covers the feasibility study, the focus of this two-year clearance request. ED may authorize the full impact study, at which point a revised OMB package will be submitted. These estimates are based on our experience collecting administrative data from districts and administering interviews and surveys to district officials and school principals. The table also presents annualized estimates of indirect costs to all respondents for each data collection instrument. Below, we provide details on the time and cost burdens for each data collection activity.

District interview (feasibility study): This estimate includes up to 2 hours, in fall 2018, for all 40 FY 2016 and FY 2017 MSAP grantees to complete the screener interview: one hour for the interview itself and one hour to collect and assemble information on lottery procedures and student recruitment and admissions. The estimated cost burden for the district screener is based on an average wage of $44.46 per hour for 80 hours, for a total of $3,556.80.

School recruitment and admissions survey (feasibility study): This estimate includes 10 minutes, in fall 2018, for all 162 FY 2016 and FY 2017 MSAP grantee schools to complete the survey; it should not require respondents to collect or assemble information. The estimated cost burden for the school survey is based on an average wage of $44.46 per hour for 27 hours, for a total of $1,200.42.

Lottery records (impact study). We assume 8 hours for each of the 30 districts in the impact study, in fall 2019, for the district liaison to compile the data requested in the lottery records extraction memo (240 total hours). An average wage of $44.46 per hour was used to calculate the total burden cost of $10,670.40 for collecting these records.

School survey on organization and instruction (impact study). Based on power calculations, we anticipate recruiting 30 MSAP schools into the study sample; in addition to the principals of these schools, we anticipate surveying principals at approximately 220 schools attended by students who were not admitted to MSAP schools through the lottery (that is, schools attended by students in the study's control group). In total, 250 principals will be contacted to complete the school survey in 2019, 2020, and 2021. We assume 213 principals (85 percent of the anticipated 250) will complete the 30-minute survey (319.5 total hours). Assuming an average wage of $44.46 per hour, the total burden for the school surveys is $14,204.97.

Student records (impact study). As with the lottery records, we assume 8 hours for each of the 30 districts in the impact study at three points in time (fall 2019, fall 2020, and fall 2021) for district liaisons to compile the administrative records defined in the memo (720 total hours). An average wage of $44.46 per hour was used to calculate the total burden cost of $32,011.20 for collecting these records.

The total burden for this feasibility study collection is 107 hours. A total burden cost of $4,757.22 was calculated across all data collection activities, for an average yearly burden of $2,378.61.
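For transparency, these totals follow directly from the per-instrument figures above:

\[ \text{Total hours} = \underbrace{40 \times 2}_{\text{district interviews}} + \underbrace{162 \times 10/60}_{\text{school surveys}} = 80 + 27 = 107 \]

\[ \text{Total cost} = 107 \times \$44.46 = \$4{,}757.22, \qquad \text{average annual cost} = \$4{,}757.22 / 2 = \$2{,}378.61 \]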



Table A.3. Annual reporting and record-keeping hour burden

Feasibility study

  District interview (Appendix A): 40 respondents; 40 responses; 2 hours per response (a); 80 total hours; $44.46 average hourly wage; $3,556.80 respondent labor cost

  School recruitment and admissions survey (Appendix B): 162 respondents; 162 responses; 0.17 hours per response; 27 total hours; $44.46 average hourly wage; $1,200.42 respondent labor cost

Impact study

  Student records request: reporting burden to be included in a revised clearance package if the impact evaluation is feasible

  Lottery records request: reporting burden to be included in a revised clearance package if the impact evaluation is feasible

  School survey on organization and instruction: reporting burden to be included in a revised clearance package if the impact evaluation is feasible

Total: 202 respondents; 202 responses; 107 total hours; $4,757.22 respondent labor cost

Average annual (across 2 years): 101 respondents; 101 responses; 53.5 total hours; $2,378.61 respondent labor cost

(a) We assume that the telephone interview will take about an hour to complete and an hour for the respondent to gather information to prepare for the call.



13. Estimates of cost burden to respondents

There are no additional respondent costs associated with this data collection beyond the burden estimated in item A.12.

14. Estimates of annual costs to the federal government

The estimated cost for this six-and-a-half-year study, including establishing a technical working group, preparing the design and feasibility memo, the brief on grantee recruitment and admissions, and the data collection instruments, recruiting districts, and collecting and analyzing data and preparing reports, is $3,563,881, or approximately $548,289 per year.

15. Reasons for program changes or adjustments

This is a new information collection request; therefore, all burden is new. This program change results in an increase in annual burden of 53.5 hours and 101 responses.

16. Plan for tabulation and publication of results

a. Tabulation plans

Descriptive brief

The purpose of the descriptive brief will be to provide information about the recruitment and admissions practices of MSAP grantees, which may be useful to future MSAP grantees and to magnet schools outside the MSAP program. The brief will summarize information collected in the study's feasibility assessment (in particular, the initial screener interviews with district and school officials in fall 2018). The analyses for this brief will be purely descriptive; the brief will not contain any impact analyses or complex statistical modeling. The document will be limited to descriptive tables and figures reporting how often magnet schools in the sample report using specific types of recruitment and admissions practices.

Impact report (If the full impact study is feasible)

The impact report will summarize the results of the full impact study, including rigorous estimates of the impacts of magnet schools on student achievement and diversity. The impact analyses summarized in this report will examine the effects of being offered admission to a magnet school on student achievement and other outcomes. Key elements of the plan include the following five components:

Basic model. The basic model shows how we will use treatment/control comparisons to estimate the impact of being offered admission to a magnet school on student achievement and other outcomes, including characteristics of the schools that students attend. Impacts on the characteristics of schools that students attend will tell us whether magnet schools affect the diversity of students' peers and the characteristics of students' educational programs. This analysis also will shed light on possible mechanisms for magnet school impacts, by revealing how these schools differ in their organization, instruction, and other characteristics from the schools their students would otherwise attend. We will base our primary student achievement outcome on scores from state assessments in reading and math (grades 3 through 8), covering additional subjects and grades as available from the district (including high school assessments). These scores will be standardized using state- and grade-level means and standard deviations to facilitate comparisons across states and grades. To the extent that students are tested in other areas (such as science or social studies), we will examine those outcomes to assess whether magnet schools with related themes affect achievement in those subjects. We also will examine impacts on student progress and attainment, including grade promotion and graduation, where applicable.
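To illustrate the standardization step, here is a minimal sketch; the column names (state, grade, year, scale_score) are hypothetical, and in practice published statewide means and standard deviations could replace the sample moments computed here.

```python
import pandas as pd

def standardize_scores(df: pd.DataFrame) -> pd.DataFrame:
    """Convert raw scale scores to z-scores within each state, grade, and year."""
    df = df.copy()
    grouped = df.groupby(["state", "grade", "year"])["scale_score"]
    # z-score: subtract the group mean and divide by the group standard
    # deviation, making scores comparable across states and grades.
    df["z_score"] = (df["scale_score"] - grouped.transform("mean")) / grouped.transform("std")
    return df
```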

A simple version of the model for estimating magnet school impacts is:

(1) \( y_{ijt} = \beta T_{ij} + \gamma X_{ij} + \lambda_j + \varepsilon_{ijt} \),

where \( y_{ijt} \) is the outcome for student \( i \) in year \( t \) who participated in lottery \( j \); \( T_{ij} \) is an indicator of whether student \( i \) was assigned to the treatment group; \( X_{ij} \) represents student baseline characteristics, including prior test scores; \( \lambda_j \) is a set of lottery fixed effects; and \( \varepsilon_{ijt} \) is a random error term. The lottery fixed effects account for differences between characteristics of applicants to different schools (and applicants who participate in different lotteries in a given school), as well as differences in the proportion of students offered admission in different lotteries (Abdulkadiroglu et al. 2011). The coefficient \( \beta \) represents the estimated impact of being offered admission to a magnet school (an intent-to-treat estimate, because not all those offered admission will attend the school). We also will estimate the impact of attending a study magnet school, using a complier average causal effect (CACE) estimator (Schochet and Chiang 2009).
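To make the estimation strategy concrete, the following is a minimal sketch assuming a single pooled student-level file with hypothetical variable names (outcome, treatment, baseline_score, lottery_id, attended_magnet); it is an illustration, not the study's actual specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

def estimate_impacts(df: pd.DataFrame) -> dict:
    """Estimate the ITT impact (equation 1) and a simple CACE adjustment."""
    # Intent-to-treat: regress the outcome on the lottery offer, baseline
    # covariates, and lottery fixed effects (C(lottery_id) creates dummies).
    itt_fit = smf.ols(
        "outcome ~ treatment + baseline_score + C(lottery_id)", data=df
    ).fit()  # in practice, standard errors would account for clustering
    itt = itt_fit.params["treatment"]

    # CACE: divide the ITT by the treatment-control difference in magnet
    # attendance rates (a Bloom-style adjustment, equivalent to 2SLS with
    # the lottery offer as the instrument in the simplest case).
    attendance = df.groupby("treatment")["attended_magnet"].mean()
    cace = itt / (attendance[1] - attendance[0])
    return {"itt": itt, "cace": cace}
```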

Variation in lottery designs. We will use the information we collect on lottery processes and outcomes to carefully account for issues that might otherwise thwart efforts to define the treatment group of magnet school students and the proper counterfactual against which their outcomes should be compared. For example, we will be prepared to analyze data not only from a simple, single-school lottery (based on the model described above), but also from a districtwide unified lottery. In the latter case, students or their parents may express their preferences for a number of schools, and the district randomly assigns students to schools. To account for different types of lotteries, we will adjust the sample included in the model estimation, as well as the lottery fixed-effects variables included in the model, accordingly.

Other lottery features that our analysis approach will be prepared to handle include exemptions, preferences, and stratification. Lottery exemptions allow certain students (such as siblings of current students or those from a feeder school on a magnet school pathway) to be admitted to the school without having to participate in the lottery. Admissions preferences give certain applicants (for example, siblings or students from particular locations) a higher probability of receiving an offer. Stratification occurs when certain groups of students are randomized before others or when a set number of seats is reserved for a group regardless of the number of applicants. Finally, we also may encounter situations in which students apply to multiple magnet schools with single-school lotteries. We will adapt methods developed in prior work on schools of choice (Gleason et al. 2010) to handle these situations.

The counterfactual. The counterfactual condition tells us what would happen to treatment group students if they had not received an admission offer to a magnet school. The experiences of control group students provide evidence about this counterfactual. For example, a large proportion of control group students attending private schools would suggest that magnet schools attract students who would otherwise leave the district. The counterfactual condition affects the interpretation of impact estimates. In the simplest case, control group students would all attend neighborhood public schools, and we would interpret impacts as reflecting a comparison of magnets and these neighborhood schools. A more likely scenario is that the counterfactual is more varied, with control group students attending a mix of neighborhood, charter, and private schools. If many control group students attend private schools where administrative outcome data are not available (or attend charter schools in districts where charter school data are not included with district data on traditional and magnet schools), this could lead to differential attrition in the sample (a higher attrition rate in the control group than in the treatment group), which can pose a threat to the study's internal validity. In addition, control group students may attend other magnet schools, which could complicate interpretation of the study's results. We will minimize these risks by avoiding lottery situations in which such conditions are widespread. In cases in which a large proportion of control group students in our sample do attend magnet schools, we will focus on complier average causal effect (CACE) estimates.

We will estimate the counterfactual based on schools that control group students attend. To the extent possible, we will measure magnet school impacts separately for subgroups of students based on the type of school they would likely have attended if they did not attend the magnet school (for example, students who would otherwise attend a traditional public school versus a charter school). We will explore multiple ways of carrying out this type of subgroup analysis. For example, one approach would be to identify magnet schools where nearly all control group students attend a traditional public school, and compare these schools to other magnet schools where nearly all control group students attend a charter school. An alternative approach would be to examine unified district-wide lotteries where students rank multiple schools: in these lotteries, it may be possible to estimate impacts separately for subgroups of treatment students who chose either traditional public schools or charter schools as their second-ranked option.

Correlational analysis. Our impact analysis will not only provide rigorous evidence on the overall impacts of magnet schools for students, but will also explore differential effects among subgroups of students and schools. This involves defining subgroups of schools with particular characteristics, then examining correlations between these characteristics and impacts. One focus will be on magnet school themes, to shed light on whether some types of schools—for example, those with a Science, Technology, Engineering, and Mathematics (STEM) focus—may be more successful than others. We will examine school characteristics that the charter school literature suggests may correlate with more positive impacts, such as serving larger proportions of disadvantaged students (Gleason et al. 2010), providing intensive tutoring for students (Furgeson et al. 2012), or providing substantial coaching and feedback to teachers (Dobbie and Fryer 2013). We also will assess the estimated impacts of different types of magnets, such as older magnets versus newer ones. This estimate will be relevant to districts considering whether to expand their magnet school sector, either by opening new magnet programs or by expanding the number of seats available in existing programs. Finally, we will estimate differential impacts by school grade levels.
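One way to implement this correlational analysis, sketched below with hypothetical variable names, is a precision-weighted regression of school-level impact estimates on school characteristics; the coefficients describe associations, not causal effects of the characteristics themselves.

```python
import pandas as pd
import statsmodels.formula.api as smf

def correlate_impacts(school_df: pd.DataFrame):
    """Regress per-school impact estimates on school characteristics.

    Assumes one row per magnet school with a lottery-based impact estimate
    (`impact`), its standard error (`impact_se`), and characteristics such
    as `stem_theme` (0/1) and `years_as_magnet`; all names are illustrative.
    """
    # Precision weights down-weight schools with noisy impact estimates.
    weights = 1.0 / school_df["impact_se"] ** 2
    model = smf.wls(
        "impact ~ stem_theme + years_as_magnet", data=school_df, weights=weights
    ).fit()
    return model  # coefficients are descriptive associations
```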

Longitudinal data. Using student-level administrative data and school-level data from principal surveys in each of up to four follow-up years, we will track changes in key student- and school-level characteristics. Tracking students for four years will make it possible to observe administrative data on test scores in third grade (typically the earliest grade in which standardized tests are administered) for students who entered a magnet elementary school in kindergarten. One key student-level measure will be students' school enrollment. We will examine whether treatment group students who enter magnet schools stay there or move to nonmagnet schools, and whether control group members enter magnet schools at any point during the follow-up period. At the school level, we will describe how magnet schools' characteristics evolve during the study period. For example, schools that convert to magnet status may not have fully put the school's theme in place in the first year or two, so we may observe changes in key school characteristics such as student composition, curricular focus, or instructional approach. Finally, the data will allow us to track magnet school impacts after additional years of potential enrollment in the schools.

b. Publication plans

The study team will prepare two reports. As part of the feasibility assessment, the study team will prepare a descriptive brief documenting the recruitment and admissions practices used by FY 2016 and 2017 MSAP grantees. If the impact evaluation is feasible, the study team will prepare a single comprehensive report presenting a portrait of magnet schools and their impacts on student achievement and school diversity. We will draw on experience preparing IES reports that use figures and accompanying briefs to present clear, accessible findings that can guide district, school, and policymaker decisions. Table A.4 presents the timeline for these reports.

17. Approval not to display the OMB expiration date

All data collection instruments will include the OMB expiration date.

18. Explanation of exceptions

No exceptions are requested.

Table A.4. Timeline for project publications

Activity

Date

Descriptive brief

First draft

May 2019

Revised draft

August 2019

Final brief

October 2019

Study report

First draft

July 2023

Revised draft

September 2023

Final report

April 2024





References

Abdulkadiroglu, A., J. Angrist, S. Dynarski, T.J. Kane, and P. Pathak. “Accountability and Flexibility in Public Schools: Evidence from Boston’s Charters and Pilots.” Quarterly Journal of Economics, vol. 126, no. 2, 2011, pp. 699–748.

Ballou, Dale. “Magnet School Outcomes.” In Handbook of Research on School Choice, edited by Mark Berends, Matthew Springer, Dale Ballou, and Herbert Walberg. New York: Taylor and Francis, 2009.

Berends, Mark, and Joseph R. Waddington. “Impact of the Indiana Choice Scholarship Program: Achievement Effects for Students in Upper Elementary and Middle School.” Working paper. Available at http://creo.nd.edu/images/people/Waddington__Berends_Indiana_Voucher_Impacts_06.24.17.pdf. Accessed July 13, 2017.

Betts, J., S. Kitmitto, J. Levin, J. Bos, and M. Eaton. “What Happens When Schools Become Magnet Schools? A Longitudinal Study of Diversity and Achievement.” Washington, DC: American Institutes for Research, May 2015.

Bifulco, Robert, Casey Cobb, and Courtney Bell. “Can Interdistrict Choice Boost Student Achievement? The Case of Connecticut’s Interdistrict Magnet School Program.” Educational Evaluation and Policy Analysis, vol. 31, no. 4, December 2009, pp. 323–345.

Blank, R., R. Levine, and L. Steel. “After Fifteen Years: Magnet Schools in Urban Education.” In Who Chooses? Who Loses? Culture, Institutions, and the Unequal Effects of School Choice, edited by B. Fuller, R. Elmore, and G. Orfield, pp. 154–172. New York: Teachers College Press, 1996.

Christenson, B., M. Eaton, M. Garet, L. Miller, H. Hiwaka, and P. DuBois. “Evaluation of the Magnet Schools Assistance Program, 1998 Grantees.” Washington, DC: American Institutes for Research, 2003.

Crain, R.L., A. Allen, R. Thaler, D. Sullivan, G. Zellman, J. Warren Little, and D. Quigley. “The Effects of Academic Career Magnet Education on High Schools and Their Graduates.” Berkeley, CA: National Center for Research in Vocational Education, 1999.

Dillman, Don A. Mail and Internet Surveys: The Tailored Design Method. 2nd ed. New Jersey: John Wiley & Sons, 2007.

Dobbie, Will, and Roland G. Fryer Jr. "Getting beneath the Veil of Effective Schools: Evidence from New York City." American Economic Journal: Applied Economics, vol. 5, no. 4, October 2013, pp. 28–60.

Finn, Chester, and Jessica Hockett. “Exam Schools: Inside America’s Most Selective Public High Schools.” Princeton, NJ: Princeton University Press, 2012.

Furgeson, Joshua, Brian Gill, Joshua Haimson, Alexandra Killewald, Moira McCullough, Ira Nichols-Barrer, Bing-ru Teh, Natalya Verbitsky-Savitz, Melissa Bowen, Allison Demeritt, Paul Hill, and Robin Lake. “Charter-School Management Organizations: Diverse Strategies and Diverse Student Impacts.” Cambridge, MA: Mathematica Policy Research, January 2012.

Glander, Mark. “Selected Statistics from Public Elementary and Secondary Education Universe: School Year 2014-15.” Washington, DC: National Center for Education Statistics, September 2016.

Gleason, P., M. Clark, C. Tuttle, and E. Dwoyer. “The Evaluation of Charter School Impacts.” Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, June 2010.

National Center for Education Evaluation. “Guidelines for Incentives for NCEE Impact Evaluations.” Washington, DC: National Center for Education Evaluation, March 22, 2005.

Polikoff, Morgan, and Tenice Hardaway. “Don’t Forget Magnet Schools When Thinking About School Choice.” Washington, DC: Brookings Institution, March 2017.

Schochet, Peter, and Hanley Chiang. “Technical Methods Report: Estimation and Identification of the Complier Average Causal Effect Parameter in Education RCTs.” Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, April 2009.

Steel, L., and R. Levine. "Educational Innovation in Multiracial Contexts: The Growth of Magnet Schools in American Education." Prepared for the U.S. Department of Education under contract by American Institutes for Research, Palo Alto, California, 1994.

Wang, J., J. Schweig, and J. Herman. “Is There a Magnet School Effect? Using Meta-Analysis to Explore Variation in Magnet School Success.” Los Angeles, CA: National Center for Research on Evaluation, Standards, and Student Testing and Center for the Study of Evaluation, December 2014.





1 Though the focus is on schools funded with the new grants, we may consider including magnet schools that received MSAP grant funds between 2010 and 2014 in the same 2016–2017 grantee districts. These earlier-funded schools are likely to have more mature magnet programs and stronger demand for admission, making them potentially better candidates for a lottery-based impact study of the MSAP.

2 Data collection activities scheduled to occur after the first three years of the study are included as an option in the contract with ED for this project. If ED chooses to exercise the option for additional years of data collection beyond the first three-year period, at that time another forms clearance package will be submitted for approval.
