OMB: 1850-0965

Evaluation of Promise Neighborhoods

Part B: Supporting Statement for Paperwork Reduction Act Submission

September 2021



Submitted to:

U.S. Department of Education

Institute of Education Sciences

National Center for Education Evaluation and Regional Assistance

550 12th Street, S.W.

Washington, DC 20202

Project Officer: Erica Johnson
Contract Number: 91990020C0001

Submitted by:

Mathematica

P.O. Box 2393

Princeton, NJ 08543-2393

Telephone: (609) 799-3535

Fax: (609) 799-0005

Project Director: Lisa Dragoset

Reference Number: 50947



Tables

Table B.1. Minimum Detectable Difference (MDD) for Promise Neighborhoods outcomes analysis

Table B.2. Individuals consulted on statistical design

Appendices

Appendix A: Email Notifications

Appendix B: Current Grantee Survey

Appendix C: Excel Workbook for Current Grantees

Appendix D: Previous Grantee Survey

Appendix E: Data Request Memo

Part B. Supporting Statement for Paperwork Reduction Act Submission

The U.S. Department of Education’s (ED) Institute of Education Sciences (IES) requests clearance for data collection activities to support a study of the Promise Neighborhoods program. This program is funded through federal grants authorized by Title IV of the Elementary and Secondary Education Act (ESEA), most recently reauthorized as the Every Student Succeeds Act (ESSA). Congress has invested $506 million in Promise Neighborhoods grants and mandated an evaluation of the program.1 Modeled in part after the Harlem Children’s Zone, the program aims to build on existing community services and strengths to provide a comprehensive and coordinated pipeline of educational and developmental services from "cradle to career" to benefit children and families in the country’s most distressed neighborhoods.

This package requests approval to conduct a survey of Promise Neighborhoods grantees and to collect multiple years of administrative school records from districts. These data will be used to study the implementation and outcomes of the Promise Neighborhoods program. IES has contracted with Mathematica and its partners, Social Policy Research Associates and the Urban Institute, to conduct this study.

B1. Respondent universe and sampling methods

This study will have two components. The first is an implementation analysis, which will describe the implementation of the Promise Neighborhoods grants in terms of the services offered, the characteristics of service recipients, the degree to which services are coordinated, implementation challenges, and funding sources. The second component is an outcomes analysis, which will assess whether any changes in outcomes after the grant award were unique to Promise Neighborhoods schools or whether similar changes were observed in other similar schools. The unit of analysis for the outcomes analysis is the school: the study team will compare outcomes in Promise Neighborhoods schools before and after the grant award to the same outcomes for similar schools not served by a Promise Neighborhoods grant (called comparison schools).

For the implementation analysis, this study will collect and analyze data from Promise Neighborhoods grantees. For the outcomes analysis, this study will collect and analyze school-level data. For efficiency, the study team will collect these school-level data from the districts in which Promise Neighborhoods schools and comparison schools are located, rather than from individual schools.

Implementation Analysis

  • Promise Neighborhoods grantees. This sample includes current grantees who received five-year Promise Neighborhoods implementation grants in FY2016, FY2017, or FY2018 and previous grantees who received five-year implementation grants in FY2011 or FY2012. Respondents will include 12 current grantees and 10 previous grantees.2





Outcomes Analysis

  • Promise Neighborhoods school districts. The respondent universe will consist of districts in which FY2011 and FY2012 Promise Neighborhoods schools are located.

  • Comparison school districts. The respondent universe will consist of all districts in states that have Promise Neighborhoods grantees. From that universe, the study team will select a purposive sample of comparison districts and schools that are most similar to the Promise Neighborhoods districts and schools at the time of grant award. The study team will use a propensity score matching approach (sketched below) to identify schools that are similar to the Promise Neighborhoods schools associated with the FY2011 or FY2012 grantees, drawing comparison schools from the same state as each Promise Neighborhoods school. The number of comparison schools is likely to be larger than the number of Promise Neighborhoods schools, but the exact number depends on how many schools are good matches. The matching algorithm will select comparison schools with replacement, meaning that the same comparison school can be matched to more than one Promise Neighborhoods school, and it may select multiple matches for each Promise Neighborhoods school. For planning purposes, the study team assumes the algorithm will select at most 10 matching schools for each Promise Neighborhoods school. The propensity score model will use a comprehensive set of demographics and outcomes to identify well-matched comparison schools. To constrain costs and reduce burden on districts, the study team will request administrative data from at most 47 districts.
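To make the matching approach concrete, below is a minimal illustrative sketch of within-state propensity score matching with replacement, with up to 10 comparison schools per Promise Neighborhoods school. It is not the study team's implementation: the input file and column names (school_id, state, is_pn, and the covariates) are hypothetical, and the study's actual propensity model will use a more comprehensive set of demographics and outcomes.

```python
# Illustrative sketch only: propensity score matching with replacement,
# within state, up to 10 comparison schools per Promise Neighborhoods
# school. The file name and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

schools = pd.read_csv("school_baseline_data.csv")  # hypothetical input
covariates = ["baseline_math", "baseline_reading", "pct_frpl", "enrollment"]

# Estimate each school's probability of being a Promise Neighborhoods
# school, given baseline characteristics and outcomes.
model = LogisticRegression(max_iter=1000)
model.fit(schools[covariates], schools["is_pn"])
schools["pscore"] = model.predict_proba(schools[covariates])[:, 1]

# Match within state, with replacement: each PN school is paired with its
# (up to) 10 nearest comparison schools on the propensity score, and the
# same comparison school may be matched to more than one PN school.
matches = []
for state, group in schools.groupby("state"):
    pn_schools = group[group["is_pn"] == 1]
    pool = group[group["is_pn"] == 0]
    if pool.empty:
        continue
    for _, pn in pn_schools.iterrows():
        distance = (pool["pscore"] - pn["pscore"]).abs()
        nearest = distance.nsmallest(10).index
        matches.append(pool.loc[nearest].assign(matched_pn_id=pn["school_id"]))

# The unique comparison schools whose districts would receive a data request.
comparison_schools = pd.concat(matches).drop_duplicates(subset="school_id")
```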

B2. Statistical methods for sample selection and degree of accuracy needed

1. Sample selection

Below is additional information about the sample of Promise Neighborhoods grantees for the implementation analysis and the sample of schools for the outcomes analysis.

Implementation Analysis

  • Promise Neighborhoods grantees. All current grantees and previous grantees will be asked to complete a survey. No statistical methods for sample selection will be needed.

Outcomes Analysis

  • Promise Neighborhoods schools. The study team will ask school districts to provide school-level data for all schools located within FY2011 and FY2012 Promise Neighborhoods. Specifically, the study team will request school-level data on the background characteristics of the student body and on kindergarten readiness, achievement outcomes, attendance, high school graduation, college enrollment, and student mobility. No statistical methods for sample selection will be needed.

  • Comparison schools. The study team will ask school districts to provide school-level data for a selected sample of comparison schools matched with each FY2011 and FY2012 Promise Neighborhood school. The study team will use a propensity score matching approach to identify up to 1,050 comparison schools. For each matched school, the study team will request that districts provide school-level data on the background characteristics of the student body and on kindergarten readiness, achievement outcomes, attendance, high school graduation, college enrollment, and student mobility.

2. Data collection

This study includes two data collection efforts summarized here.

Implementation analysis

Promise Neighborhoods grantee survey. The study team will ask grantees to complete a survey in fall 2021. Respondents will include 12 current grantees who were awarded grants in FY2016, 2017, or 2018 and 10 previous grantees who were awarded grants in FY2011 or FY2012. This sample represents all Promise Neighborhoods grantees from FY2011 through FY2020, as grants were not awarded in FY2013, 2014, 2015, 2019, or 2020.

The survey will gather information about the Promise Neighborhoods services offered during the grant period, including the number and types of services by pipeline stage3, the recipients served, the needs each service focused on, and whether each service was added, improved, or expanded during the grant period. It will also ask how services changed during the grant period, the types of schools and students served by the Promise Neighborhood, implementation challenges, how services are coordinated and connected, and the funding sources and costs of the program. The surveys for previous and current grantees include similar sets of questions, with a few purposeful differences. For example, the survey for previous grantees does not ask about the proportion of recipients who received the intended dosage of services, because it would likely be challenging for respondents to provide this type of in-depth information for grants that have ended. In addition, the current grantee survey includes a few questions about how the COVID-19 pandemic may have influenced community needs, the services provided, and the allocation of Promise Neighborhoods grant funds.

One distinct component of the survey for current grantees is an Excel workbook that will be pre-populated with existing information on each Promise Neighborhood's services to minimize burden. The existing information will be drawn from annual performance reports the grantees previously provided to Urban Institute and from conversations Mathematica had with grantees when designing the evaluation. Respondents will be asked to confirm and supplement (if necessary) the existing data. A separate tab in the workbook will automatically calculate counts that the grantees can use to more quickly answer questions in the survey.

The study team estimates the survey will take both current and previous grantees 75 minutes to complete. For previous grantees, this estimate includes time to locate and review information, talk with others at their organization, work through the exercise of thinking back to their grant period, and answer the survey questions. For current grantees, it includes time to locate and review information, talk with others at their organization, review and complete the Excel workbook, and answer the survey questions. The study team expects current grantees will need less time than previous grantees to locate information and talk with others, because most of the requested information should be easily accessible or top of mind; however, current grantees will also need to complete the Excel workbook. As a result, the study team expects both groups to need 75 minutes to complete the materials.

The study team's subcontractor, Urban Institute, will help identify a primary respondent from each Promise Neighborhoods grantee based on its technical assistance work with grantees. The study team will send an advance email to the identified contact with an overview of the data collection effort, followed by an invitation email that contains specific instructions on how to complete and return the survey. The study team will also send a reminder email to grantees during the data collection period to encourage responses.

These data will be used to describe Promise Neighborhoods services and provide context for interpreting observed changes in student outcomes in Promise Neighborhoods schools.

Outcomes analysis

School-level administrative records obtained from districts. In fall 2021, to estimate the changes in school-level student outcomes associated with FY2011 and FY2012 Promise Neighborhoods, the study team will contact school districts to collect school-level administrative data for schools located in all FY2011 and FY2012 Promise Neighborhoods and for a group of similar schools, called comparison schools, that were not served by a Promise Neighborhoods grant. The outcomes analysis will focus on longer-term outcomes, consistent with the program's theory of action and with grantees' reports that the initial period after grant award is often focused on start-up activities. The outcomes analysis will not include the FY2016 Promise Neighborhoods because the coronavirus pandemic, which occurred in the middle of those grants, led to sparse and uneven administration of assessments for multiple years, so data for some outcomes would be missing in some years. The analysis will also exclude the FY2017, FY2018, and FY2021 Promise Neighborhoods because their grant periods are not yet complete; fewer years of outcome data would be available for schools in these neighborhoods, which would not allow for an analysis of longer-term outcomes.

The study team will obtain school-level electronic records from districts where Promise Neighborhoods and comparison schools are located to gather information on school enrollment, achievement, attendance, graduation, college enrollment, kindergarten readiness, student mobility, and background characteristics of the student body. The study team will collect these data for the three years before the grant was received and all five years of the grant period. For example, for a FY2011 Promise Neighborhood, the study team will collect data for school years 2008–2009 through 2015–2016. The study team will use school-level demographic, socioeconomic, and baseline outcome data to describe the students in Promise Neighborhoods schools and comparison schools and to increase the precision of estimates.

3. Estimation procedures

This study will conduct analyses for a final report that includes both implementation and outcomes analyses. This submission requests clearance for collecting data that will be used for the final report.

Implementation analysis. The implementation analysis has two important goals. The first is to provide useful, quantifiable information about how current and previous Promise Neighborhoods are implemented and how implementation may have changed over time. The implementation analyses will describe the number and types of services provided, the proportion of students who receive services, the extent to which services are connected and coordinated, implementation challenges, and the costs of implementing a Promise Neighborhood. The second goal of the implementation analysis is to help interpret the findings from the outcomes analysis. For example, if the study team finds no improvement in student outcomes in Promise Neighborhoods schools compared to similar schools, there may be particular findings from the implementation analysis that could help explain why.

Outcomes analysis. The study team will compare outcomes in Promise Neighborhoods schools before and after the grant award to the same outcomes for similar schools not served by a Promise Neighborhoods grant. In particular, the study team will first calculate the change in outcomes for FY2011 and FY2012 Promise Neighborhoods schools before and after the grant. The study team will calculate that same difference for the comparison schools not served by a Promise Neighborhoods grant. The study team will then subtract the change in outcomes for comparison schools from the change in outcomes for Promise Neighborhoods schools. Key outcomes include math and reading achievement, kindergarten readiness scores, attendance, high school graduation, and college enrollment. The study team will focus on these outcomes for two reasons. First, they are Government Performance and Results Act (GPRA) indicators that Promise Neighborhoods services aim to affect. Second, the Promise Neighborhoods grantees Mathematica staff spoke to during an earlier study that assessed the feasibility of evaluating Promise Neighborhoods listed these outcomes as high-priority objectives for their neighborhood.

To estimate differences, the study team will use the following regression equation:

$$y = \beta_0 + \beta_1 PN + \beta_2 Post + \beta_3 (PN \times Post) + \boldsymbol{\gamma}'\mathbf{X} + u + e$$

where $y$ is the outcome, $PN$ is an indicator variable that equals 1 for schools in the Promise Neighborhood and 0 for comparison schools, $Post$ is an indicator for observations after the grant award, $PN \times Post$ is the difference-in-differences indicator ($PN \times Post$ equals 1 for observations that are both after the grant award and in the Promise Neighborhood), $\mathbf{X}$ is a vector of school covariates, $u$ is a neighborhood-level random effect, and $e$ is a school-level error term. In this regression, the unit of analysis is the school.

The covariates will include baseline math and reading achievement and school-level demographic characteristics (such as gender, race, ethnicity, free or reduced-price lunch eligibility, special education status, and English Language Learners).
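As an illustration of how this model could be estimated, the sketch below fits the difference-in-differences regression as a mixed-effects model with a neighborhood-level random effect. This is a hedged sketch, not the study team's estimation code: the data file, column names, and the particular covariates shown are assumptions.

```python
# Illustrative sketch of the difference-in-differences regression above:
# pn equals 1 for Promise Neighborhoods schools, post equals 1 for school
# years after the grant award, and the pn:post interaction is the
# difference-in-differences term (beta_3). Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("school_year_panel.csv")  # hypothetical school-by-year panel

model = smf.mixedlm(
    "outcome ~ pn + post + pn:post + baseline_math + baseline_reading + pct_frpl",
    data=panel,
    groups=panel["neighborhood_id"],  # neighborhood-level random effect (u)
)
result = model.fit()
print(result.summary())  # the coefficient on pn:post is the estimate of interest
```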

To interpret the difference-in-differences estimates, the study team will use the BASIE approach, which stands for BAyeSian Interpretation of Estimates. In particular, the study team will calculate the probability that Promise Neighborhoods truly led to a positive difference in outcomes given the estimated outcome difference (Deke and Finucane 2019). This type of probability is called a Bayesian posterior probability. This approach supports making statements that are easier to interpret than p-values, which are often misinterpreted (Wasserstein and Lazar 2016; Greenland et al. 2016). The study team will complement this approach by presenting in an appendix the standard (frequentist) statistical interpretation, including the standard error, statistical significance, and p-value of the difference-in-differences estimate.
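Under a normal prior and a normal likelihood for the estimate, this posterior probability has a closed form. The sketch below illustrates the calculation using the prior reported in the note to Table B.1 (normal with mean 0.04 and standard deviation 0.23); the estimate and standard error are placeholder values, and this is not the study team's actual BASIE implementation.

```python
# Minimal sketch of the posterior probability that the true effect is
# positive, given a difference-in-differences estimate, under conjugate
# normal-normal updating. The prior comes from the note to Table B.1;
# the estimate and standard error below are placeholders.
from scipy.stats import norm

def posterior_prob_positive(estimate, se, prior_mean=0.04, prior_sd=0.23):
    """P(true effect > 0 | estimate) under a normal prior and likelihood."""
    post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / se**2)
    post_mean = post_var * (prior_mean / prior_sd**2 + estimate / se**2)
    return norm.sf(0.0, loc=post_mean, scale=post_var**0.5)

print(posterior_prob_positive(estimate=0.10, se=0.05))  # placeholder values
```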

The study team will also conduct a descriptive analysis of school-level student mobility rates, that is, the rates at which students move into and out of a school each year. In particular, the study team will compare mobility rates in Promise Neighborhoods schools to mobility rates in comparison schools. This comparison will provide context for interpreting observed changes in student outcomes in Promise Neighborhoods schools. For example, if mobility rates are similar across Promise Neighborhoods schools and comparison schools, the study team will have increased confidence that any observed changes in student outcomes are related to the Promise Neighborhoods program, as opposed to being the result of student mobility.

4. Degree of accuracy needed

The approach to calculating statistical power aligns with the BASIE (BAyeSian Interpretation of Estimates) approach the study team will use to interpret the significance of the difference-in-differences estimates. As described in the previous section, the study team will interpret difference-in-differences estimates by calculating the probability that Promise Neighborhoods truly led to a positive difference in outcomes given the estimated difference in outcomes. This type of probability is called a Bayesian posterior probability.

To present statistical power in a manner that is familiar to readers who are accustomed to the hypothesis testing framework, the study team will present minimum detectable differences. The study team regards a difference estimate as significant if there is a sufficiently large probability that the true difference in outcomes is greater than zero, given the difference estimate. The study team uses multiple levels of significance for this power analysis: 90 percent, 95 percent, and 97.5 percent. Loosely speaking, these levels of significance are analogous to reporting significance at the 0.20, 0.10, and 0.05 levels under the null hypothesis significance testing framework.

The power calculations are based on an expected sample size of 105 Promise Neighborhoods schools and 525 comparison schools, as described in Table B.1. The study team assumed a smaller number of comparison schools than the study sample size mentioned earlier (1,050 comparison schools) because some Promise Neighborhoods schools may have fewer than 10 well-matched comparison schools in the same state. Thus, the study team’s power calculations are conservative; if each Promise Neighborhood school has 10 well-matched comparison schools in the same state, statistical power will be higher than what is shown in Table B.1.

The study team believes the minimum detectable differences shown in Table B.1 are reasonable to expect because they are in line with impacts found for charter schools in the Harlem Children's Zone, the program upon which the Promise Neighborhoods program is modeled (Dobbie and Fryer 2011; Dobbie and Fryer 2015).

Table B.1. Minimum Detectable Difference (MDD) for Promise Neighborhoods outcomes analysis

Difference in outcomes is detected if the probability
of a positive difference in outcomes is at least        MDD
90 percent                                              0.10
95 percent                                              0.12
97.5 percent                                            0.14

Note: A positive difference in outcomes is detected if the probability of a truly positive difference (given the difference-in-differences estimate and prior distribution) is 90, 95, or 97.5 percent. MDDs are reported in effect size units. The prior distribution of intervention effect sizes used in these calculations is normal with mean 0.04 and standard deviation 0.23. The prior distribution is based on a two-level meta-analysis of prior evidence from the What Works Clearinghouse and incorporates a correction for bias resulting from the file drawer problem (that is, publication bias favoring studies that produce a statistically significant result). Effect sizes are calculated relative to the student-level standard deviation in outcomes. The study team assumes power of 80 percent.

MDD = minimum detectable difference.
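To show the type of calculation behind these MDDs, the sketch below inverts the posterior-probability criterion under the stated prior and 80 percent power. The standard error is a placeholder assumption (the actual value depends on the clustered sample design), so the output is illustrative rather than a reproduction of Table B.1.

```python
# Illustrative sketch of a Bayesian MDD calculation: find the smallest
# estimate whose posterior probability of a positive effect reaches the
# detection threshold, then the smallest true effect detected with
# 80 percent power. The standard error is a placeholder assumption.
from scipy.optimize import brentq
from scipy.stats import norm

PRIOR_MEAN, PRIOR_SD = 0.04, 0.23  # prior from the note to Table B.1

def posterior_prob_positive(estimate, se):
    post_var = 1.0 / (1.0 / PRIOR_SD**2 + 1.0 / se**2)
    post_mean = post_var * (PRIOR_MEAN / PRIOR_SD**2 + estimate / se**2)
    return norm.sf(0.0, loc=post_mean, scale=post_var**0.5)

def mdd(se, threshold, power=0.80):
    # Smallest estimate at which the posterior probability of a positive
    # effect reaches the detection threshold.
    critical = brentq(lambda t: posterior_prob_positive(t, se) - threshold, -2.0, 2.0)
    # Smallest true effect for which the estimate clears that cutoff with
    # the desired power, assuming the estimate is normal around the truth.
    return critical + norm.ppf(power) * se

for threshold in (0.90, 0.95, 0.975):
    print(f"threshold {threshold:.3f}: MDD = {mdd(se=0.05, threshold=threshold):.2f}")
```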

5. Unusual problems requiring specialized sampling procedures

The study team does not anticipate any unusual problems that require specialized sampling procedures.

6. Use of periodic (less frequent than annual) data collection cycles to reduce burden

To limit respondent burden as much as possible, the study team has carefully considered the minimum amount of data needed to answer the research questions and how to structure the data collection. For example, the study team will request multiple years of data within a single request to reduce the number of separate requests.

B3. Methods to maximize response rates

The study will employ multiple strategies to maximize response rates while minimizing burden on respondents. These strategies include sending emails to respondents to alert them to upcoming requests to complete the survey; providing the survey electronically so it is accessible to multiple grantee staff; and accepting administrative data files in formats that are most convenient for districts to provide. The study team will also build on the positive relationships they developed with grantees through previous work, including Mathematica’s previous study that examined the feasibility of evaluating Promise Neighborhoods and the technical assistance that Urban Institute and Erika Bernabei (through the Promise Neighborhoods Institute) provided to grantees during their grant period. To reassure respondents that the data they provide will be kept confidential, the study team will encrypt survey materials with a password and include a statement on confidentiality and data collection requirements (Education Sciences Reform Act of 2002, Title I, Part E, Section 183) in all letters and data collection instruments. Specific methods to maximize response rates on each data collection activity are as follows:

Implementation analysis

Survey for current and previous Promise Neighborhoods grantees. The study team will use email to distribute the survey and collect responses. Prior to sharing materials with grantees, the study team will encrypt electronic versions of the survey materials with a password so they can be shared securely via email. Encrypting files will enable respondents to (1) safely provide or confirm fine-grained service information electronically; (2) complete sections of the survey as they gather information from colleagues or other sources, without having to re-navigate to a specific question in a web survey; and (3) electronically share the survey with colleagues, if necessary. Respondents will complete the survey materials electronically and return them to Mathematica securely via email, using the password-protected files.

The study team will pre-populate the Excel workbook component of the survey with existing information on the current grantees, as described in Section B2 above. This approach will allow respondents to confirm and update much of the data instead of entering it themselves. The survey is designed to ask questions that can easily be answered in an electronic Word document, such as questions with predefined response categories from which respondents select.

The study team will set up an initial phone call with respondents to describe the survey and answer questions. The study team will also use follow-up calls to check on progress and allow respondents to ask additional questions. Study staff will be trained to respond to frequently asked questions about the study and the survey, so they can provide technical assistance and respond to any issues that come up in the field.

The study team anticipates achieving a response rate of 92 percent for surveys of current grantees and a response rate of 90 percent for surveys with previous grantees. These response rates allow for the possibility that one grantee from the cohort of 12 current grantees and one grantee from the cohort of 10 previous grantees may not be willing or able to complete the survey. The study team will obtain these high levels of response by building on existing relationships with grantee staff and asking respondents to confirm and update much of the data, instead of entering it all by hand. The study team also anticipates that the Promise Neighborhoods grantee staff will be motivated to participate based on their prior involvement in work with ED, including their receipt of grant funds.

Dealing with nonresponse. The study team will identify nonresponse and reporting errors by checking for complete and reasonable answers as soon as each completed survey is received and will follow up with respondents as needed about any errors. Additionally, the study team will follow up with grantees as needed to minimize overall nonresponse and ensure the target response rate is achieved.

Outcomes analysis

School-level administrative records. The study team anticipates full district participation for school-level administrative records that are not publicly available. To solidify districts' cooperation, the study team will adhere to any additional data collection requirements districts may have, such as preparing research applications and providing documentation of institutional review board (IRB) approvals.

B4. Test of procedures

The study team pretested the survey with two current grantees and two previous grantees to ensure that the questions were clear and that the average completion time was within expectations.

B5. Individuals consulted on statistical aspects of the design and on collecting and analyzing data

The following individuals were consulted on the statistical aspects of the study:

Table B.2. Individuals consulted on statistical design

Name                     Title                             Telephone Number
John Deke                Senior Fellow, Mathematica        609-275-2240
Lisa Dragoset            Senior Researcher, Mathematica    609-945-3348
Moira McCullough         Senior Researcher, Mathematica    617-301-8965
Susanne James-Burdumy    Vice President, Mathematica       609-275-2248





References

Deke, J., and M. Finucane. “Moving Beyond Statistical Significance: The BASIE (BAyeSian Interpretation of Estimates) Framework for Interpreting Findings from Impact Evaluations.” OPRE Report 2019-35. Washington, DC: Office of Planning, Research, and Evaluation, Administration for Children and Families, U.S. Department of Health and Human Services, 2019.

Deke, J., L. Dragoset, and R. Moore. “Precision Gains from Publically Available School Proficiency Measures Compared to Study-Collected Test Scores in Education Cluster-Randomized Trials.” NCEE 2010-4003. Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, October 2010.

Dobbie, W., and R. G. Fryer, Jr. “Are High-Quality Schools Enough to Increase Achievement Among the Poor? Evidence from the Harlem Children’s Zone.” American Economic Journal: Applied Economics, vol. 3, no. 3, July 2011, pp. 158–187.

Dobbie, W., and R. G. Fryer, Jr. “The Medium-Term Impacts of High-Achieving Charter Schools.” Journal of Political Economy, vol. 123, no. 5, October 2015. doi:10.1086/682718.

Greenland, S., S. J. Senn, K. J. Rothman, J. B. Carlin, C. Poole, S. N. Goodman, and D. G. Altman. “Statistical Tests, p-Values, Confidence Intervals, and Power: A Guide to Misinterpretations.” European Journal of Epidemiology, vol. 31, no. 4, 2016, pp. 337–350.

Wasserstein, R. L., and N. A. Lazar. “The ASA’s Statement on p-Values: Context, Process, and Purpose.” The American Statistician, vol. 70, no. 2, March 2016. doi:10.1080/00031305.2016.1154108.


1 Title IV Part F Section 4624(i).

2 One grantee that operated two separate Promise Neighborhoods had its grant terminated and no longer exists. An additional grantee dissolved at the end of its grant period. These grantees are excluded from the sample. The sample also excludes the FY2021 grantees because they will likely not have had enough time to fully implement their Promise Neighborhoods by the time the surveys are administered.

3 The Promise Neighborhoods cradle-to-career pipeline includes four stages: early childhood; K-12 education; college and career readiness; and family and community supports.
