ED Response to OMB Comments


Impact Evaluation of Race to the Top (RTT) and School Improvement Grants (SIG)


OMB Control Number: 1850-0884


RTT-SIG Education Branch Questions

(Original questions in black text; responses in blue text)

General

  1. The Department has a number of research activities and data collections focused on low-performing schools. Could ED give us a sense for how all these studies relate to each other? In particular, what is the timing of each study (data collection, public release), how does the purpose of each study complement the others, and how will ED use each study to answer questions such as which school interventions work and which provide the best return on their investment?

To minimize burden on study participants and avoid duplication of effort as much as possible, the RTT-SIG study team has carefully assessed coordination opportunities with other ongoing evaluations and data collections, including those focused on low-performing schools. For low-performing schools, the most notable of these efforts is the Study of School Turnaround (SST), which is also sponsored by IES. Following IES’ guidance, both studies have been designed to minimize overlap in study samples and data collection (to the extent possible) and to yield complementary, as opposed to duplicative, information.

  • Study Goals. The focus of the two studies differs in important ways. The RTT-SIG evaluation includes a rigorous analysis of the impacts of school turnaround models funded either by SIG or RTT and also aims to obtain quantifiable information on concrete reforms and improvement strategies being implemented, and to compare this information between RTT/SIG grant recipients and non-recipients. It will therefore provide critical information on which turnaround models are “working” along with useful correlational information on strategies associated with successful turnaround. The SST, in contrast, focuses on the change process and the way in which SIG reforms are rolled out in states, districts, and schools, and how these reforms are perceived by stakeholders. SST data collection is, therefore, focused much more on the qualitative details of describing SIG implementation.



  • Study Samples. The RTT-SIG study sample includes districts, schools, and states in which the planned regression discontinuity (RD) approach to estimating the impacts of school turnaround models is feasible (an illustrative sketch of an RD-style estimate follows this list). The SST sample focuses on schools suitable for in-depth study of SIG implementation (and their districts and states), regardless of whether they might be suitable for inclusion in an RD-based impact analysis. Because of the broader scope of the RTT-SIG study, its sample is much larger and covers many more grant and non-grant recipient schools, districts, and states than the SST sample (which includes approximately 35 case study SIG schools in 6 states).



  • Timing of Data Collection. The SST began data collection in the spring of 2011 and will continue through 2013. Contingent upon OMB approval, the RTT-SIG evaluation will begin data collection in spring of 2012; data collection will continue until 2014.



  • Reporting Schedules. The SST has published one report of descriptive baseline analyses of SIG eligible and awarded schools (April 2011); additional reports are planned for summer 2012, 2013, and 2014 based on each of the three years of data collection. Release of reports from the RTT-SIG evaluation is planned for winter 2014, fall 2014, and summer 2015.



  • How ED Plans to Use Findings from Both Studies. The RTT-SIG evaluation is the only source of information on the impacts of school turnaround models. This evaluation will also examine which improvement strategies are associated with changes in school outcomes. As noted, the SST will yield information on the change process and the details of implementing school improvement efforts. The SST will not include any analyses of student achievement outcomes, but will instead focus on understanding in detail the change process in a handful of case study schools that are trying to turn around with the help of SIG funds. In contrast, the RTT-SIG evaluation will focus on estimating the impacts of these SIG funds on student outcomes in a significantly broader sample of schools, and on understanding how those impacts might vary based on turnaround strategies. Both types of information should help inform ED’s policies regarding low-performing schools and the guidance and support that ED provides to schools, districts, and states to help improve low-performing schools.
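
To make the RD design mentioned above concrete, the following is a minimal, illustrative sketch (in Python) of a sharp regression discontinuity estimate. It is not the study's actual specification: the running variable, cutoff, bandwidth, and outcome measure are placeholders, and the real analysis would involve additional modeling choices (for example, bandwidth selection and covariate adjustment).

    import numpy as np

    def rd_impact_estimate(running_var, outcome, cutoff, bandwidth):
        # Sharp RD sketch: fit a separate linear trend on each side of the
        # cutoff (within the bandwidth) and take the difference in the two
        # predicted outcomes at the cutoff itself.
        x = np.asarray(running_var, dtype=float) - cutoff   # center at cutoff
        y = np.asarray(outcome, dtype=float)
        in_window = np.abs(x) <= bandwidth
        below = in_window & (x < 0)    # e.g., schools just missing eligibility
        above = in_window & (x >= 0)   # e.g., schools just meeting eligibility
        slope_b, intercept_b = np.polyfit(x[below], y[below], 1)
        slope_a, intercept_a = np.polyfit(x[above], y[above], 1)
        return intercept_a - intercept_b   # jump in the outcome at the cutoff

    # Simulated example with a true jump of 5 points at the cutoff
    rng = np.random.default_rng(0)
    rank = rng.uniform(-50, 50, size=2000)
    score = 600 + 0.3 * rank + 5 * (rank >= 0) + rng.normal(0, 10, size=2000)
    print(round(rd_impact_estimate(rank, score, cutoff=0.0, bandwidth=25.0), 2))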



  2. How will the research team factor RTT Phase 3 grantees into their analysis, given that these states received funds to carry out partial implementation of their Phase 2 applications? Will they be included in the treatment or comparison groups in the analysis?

Since our original RTT outcomes analysis plans were developed, Phase 3 grantees and award amounts have been announced. We will revise our outcomes analysis plans to account for this third round of grantees by examining changes in outcomes over time for three groups of states: (1) Phase 1 and 2 RTT grantees, (2) Phase 3 RTT grantees, and (3) RTT applicants that were not grantees. Specifically, we will calculate Interrupted Time Series (ITS)-style outcome gains for each of these three groups. Thus, Phase 3 winners will be analyzed as a separate treatment group. The table below lists the grantees in each phase, and an illustrative sketch of the gain calculation follows the table.

State                 Phase 1 Winner  Phase 2 Winner  Phase 3 Winner
Delaware                    X
Tennessee                   X
Georgia                                     X
Florida                                     X
Rhode Island                                X
Ohio                                        X
North Carolina                              X
Massachusetts                               X
New York                                    X
District of Columbia                        X
Hawaii                                      X
Maryland                                    X
Illinois                                                    X
Pennsylvania                                                X
Kentucky                                                    X
Louisiana                                                   X
Colorado                                                    X
New Jersey                                                  X
Arizona                                                     X


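The following is a minimal, illustrative sketch (in Python) of the ITS-style gain calculation described above. The years, scores, and group labels are made up for illustration; the actual analysis will use state-level NAEP scores and other outcomes, and the specification of the pre- and post-RTT periods and trend model may differ.

    import numpy as np

    def its_style_gain(years, scores, policy_year):
        # Fit a linear trend to the pre-policy years, project it forward,
        # and compare the observed post-policy scores with that projection.
        years = np.asarray(years, dtype=float)
        scores = np.asarray(scores, dtype=float)
        pre = years < policy_year
        slope, intercept = np.polyfit(years[pre], scores[pre], 1)
        projected = intercept + slope * years[~pre]        # counterfactual trend
        return float(np.mean(scores[~pre] - projected))    # average gain over trend

    # Made-up group averages of a NAEP-like score, every other year
    groups = {
        "Phase 1 and 2 grantees":  [278, 280, 281, 283, 287, 289],
        "Phase 3 grantees":        [275, 276, 278, 279, 281, 282],
        "Non-grantee applicants":  [276, 277, 279, 280, 281, 282],
    }
    years = [2003, 2005, 2007, 2009, 2011, 2013]
    for label, scores in groups.items():
        print(label, round(its_style_gain(years, scores, policy_year=2010), 2))
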

  3. Will you be using data from the Annual Performance Reports to supplement the data collected by the surveys and other administrative data?

We do not plan to use data from the RTT or SIG Annual Performance Reports (APR) to supplement the data collected in this evaluation through interviews, surveys, and administrative data because those data are only available for grantees. Because our data collection is being conducted in the context of an impact evaluation, it is important that the data we collect be available for both the treatment and comparison groups to support comparisons between these two groups (which is not the case for the APR data).

However, to reduce burden on respondents by shortening the data collection instruments as much as possible, we plan to draw on existing data from five other sources to supplement the data we will collect for the evaluation. These sources are:

  • NAEP. The primary outcome of interest for the RTT outcomes analysis will be state-level mean National Assessment of Educational Progress (NAEP) scores. The NAEP scores are available for grades four and eight, for both math and reading, every other year. This data will provide information on student outcomes prior to and after the implementation of RTT.

  • CCD. For the RTT outcomes analysis, we will collect state-level high school graduation rates from the Common Core of Data (CCD). We will also collect school characteristics data from the CCD that will be used to compare the baseline characteristics of treatment and comparison schools. For the purposes of calculating state-level college enrollment rates (an outcome of interest for the RTT outcomes analysis), we will also obtain data on the number of high school graduates and GED recipients from the CCD (the way in which this data will be used to calculate college enrollment rates is described in more detail below where IPEDS is discussed).

  • EDFacts. We plan to collect state-level college enrollment rates for the years 2004, 2006, and 2008 from the 2011 EDFacts State Trends Profiles. College enrollment rates for other years will be calculated using data from the CCD and the Integrated Postsecondary Education Data System (IPEDS), described below. Another outcome of interest for the RTT outcomes analysis is school-level high school graduation rates. These data are not publicly available, but will be collected from EDFacts through our project officer at IES. The school-level data enables us to differentiate between participating and non-participating districts in a given RTT state.



  • IPEDS. As noted above, IPEDS and CCD data will be used to calculate college enrollment rates for some years. Specifically, the college enrollment rate is calculated as the number of fall college freshmen who graduated from high school or received a GED in the previous 12 months (from IPEDS) divided by the number of high school graduates and GED recipients from the spring of the same year (from the CCD); an illustrative sketch of this calculation follows this list. We will calculate these rates consistently across all data sources to ensure comparability across states over time.



  • State Fiscal Stabilization Fund (SFSF) Annual Progress Reports (APR). We will collect data on whether states have implemented the twelve America COMPETES Act elements (along with narratives of different length and specificity about steps states are taking to address elements they have not yet implemented) from the second State Fiscal Stabilization Fund (SFSF) annual progress reports (APR). This data will be used to describe and compare progress on these elements in RTT and non-RTT states.
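
As an illustration of the college enrollment rate calculation described in the IPEDS bullet above, the following is a minimal sketch in Python. The counts and variable names are placeholders; extracting the actual counts from the IPEDS and CCD files involves additional detail not shown here.

    def college_enrollment_rate(ipeds_recent_grad_freshmen,
                                ccd_hs_graduates, ccd_ged_recipients):
        # Fall college freshmen who finished high school or a GED in the
        # prior 12 months (IPEDS), divided by that spring's high school
        # graduates plus GED recipients (CCD).
        denominator = ccd_hs_graduates + ccd_ged_recipients
        if denominator == 0:
            return float("nan")
        return ipeds_recent_grad_freshmen / denominator

    # Example with made-up counts for one state and year
    print(round(college_enrollment_rate(42500, 58000, 4000), 3))   # 0.685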



  4. We did not see any questions documenting whether states have statewide longitudinal data systems (SLDS) with all 12 America COMPETES Act elements, as required by RTT, or their progress toward doing so. What’s the rationale for not asking about this?

To reduce burden on study respondents, we plan to obtain information on the twelve America COMPETES Act elements from another source. See the last bulleted response to question 3 above for more detailed information.

State Survey

State Capacity

SC15 and SC16. Could this question be reframed? The concern is that State officials will likely agree that adding more staff in specific areas will improve support in those areas (this seems straightforward and will not yield very valuable information). If the intent of the question is to determine the areas where a State could improve its capacity, could the question be written something like: “In what areas does the State need greater expertise to aid in the support it provides to districts and schools?”

In response to feedback during pilot testing of the instrument, we changed the wording of these questions to read, “Do you have significant gaps in any of the following areas of expertise at the state level? Please briefly describe any yes responses” (now question SC16) and “Focusing specifically on the School Improvement Grants program, do you have significant gaps in any of the following areas of expertise at the state level? Again, please briefly describe any yes responses” (now question SC17).

SC20. Options 3 and 4 seem like they could overlap. Could you clarify?

We agree with this comment and have deleted option 4 to avoid overlap. (This question is now SC21.)

Data Systems

DA15. Could this question specify whether it is looking for the top three barriers at the State level or what the State respondent perceives to be the barriers at the district level? This is the State survey, but since the previous questions asked about district level supports, it may be confusing.

We have revised the question to read, “Which of the following would you say are currently the top three barriers to the use of data by state-level staff to make instructional improvements?”

Teachers and Leaders

TL4: Did the research team consider asking if there were changes to state regulations or policies that required certification to be based partially on teacher effectiveness (after a probationary period)?

The study team had to make difficult choices about what to include in the interview given the length of the instrument and opted to focus on the highest-priority items based on extensive ED guidance from the Office of Planning, Evaluation, and Policy Development (OPEPD) and the RTT Implementation and Support Unit (ISU). Note that respondents may still report whether there were important changes to state regulations or policies to require certification to be based partially on teacher effectiveness (after a probationary period) under the “other” response option for this question (now TL5). In addition, other questions in the instrument focus on the narrower concept of whether student growth is a factor in tenure decisions.

TL9a: We recommend including job placement and retention rates as descriptive statistics used to assess the effectiveness of teacher certification programs.

We have added two new sub-items to this question: one that reads “the percentage of enrollees placed in teaching jobs” and a second that reads “rates of retention in the profession.” We have also revised the text in the previous row to read “the percentage of enrollees who earn certification.” (This question is now TL11.) To maintain parallel wording, we also added similar response options to the corresponding question about principal certification. (This question is now TL28.) Note that, in the principal question, the item about job placement reads “the percentage of enrollees placed in school administration jobs.”

TL10: Did you think about asking about whether states used results of evaluations of teacher certification programs to provide TA/support to and help improve those programs identified as ineffective?

The study team opted not to include a question about whether states used results of evaluations of teacher certification programs to provide technical assistance or other support and help improve those programs identified as ineffective. We think that the response options currently listed for this question (now TL12) are the most prevalent uses of results from evaluations of teacher certification programs. Note that respondents can still report whether their states use the results of evaluations of teacher certification programs to provide support to ineffective programs in the “other” response option.

TL55: The term “teacher effectiveness” is used. What exactly does this mean (i.e. based on teacher evaluations?) and does it need to be clarified for the respondent? It seems somewhat confusing because TL50 uses the term “evaluation results” as a basis for human capital decisions; TL55 uses “teacher effectiveness” as a basis for reductions in force; and TL57 uses “student growth” as a basis for tenure. Was it intentional to use different terms? If so, please clarify why. If not, we suggest aligning the questions as much as possible because it seems confusing.

To avoid confusion, we no longer use “teacher effectiveness” in these questions. We revised the wording of TL51, TL52 and TL55 (now TL61, TL62, and TL65) to use the phrase “teacher evaluation results” and thus better align with TL49 and TL50 (now TL59 and TL60). We kept the wording of questions TL54 and TL57 (now TL64 and TL67) unchanged, since those questions refer specifically to estimates of student growth as a component of teacher evaluation results.

TL58: We suggest asking about whether states offer other supports such as induction or mentoring as a strategy to promote equitable distribution (it could be added as part of g.).

We have added “mentoring or induction” to item f of the response options for this question (now TL68), which asks about professional development as a form of support.

TL63: What is the objective of this question? It seems very broad. How will the research team ensure reliable responses that will be easily coded?

In response to feedback from pilot testing of the instrument, this question was dropped.

School Turnaround

TA2: Is this the best question to start this section?

We switched the order of questions TA2 and TA3 to improve the start of the section.

TA5: Suggested edit: For the 2010 round of School Improvement Grant awards that the state made, did the state consider any of the following factors, in addition to those required, when deciding which persistently lowest-achieving schools would receive School Improvement Grant funding, and which would not?

The suggested change has been made.

TA6: If the respondent answers “no” could the survey include a follow-up question to determine why?

We have added “If no, please explain why the state did not provide funds for this purpose” and also added a “specify” probe if the respondent answers “no.”

TA7: How will this data be collected—over the phone or through a later data submission by the State?

We plan to collect this information over the phone. However, if states have more than one or two schools receiving RTT funds for this purpose, we will offer respondents the option of sending the information separately.

TA11: Suggested edit: What percentage of the Race to the Top funds allocation are Race to the Top-participating districts required to spend on these turnaround grants to four school intervention models for the persistently lowest-achieving schools?

We have revised the wording of the question to read, “For participating districts, what percentage of Race to the Top funds must be used to implement one of the four school intervention models in the persistently lowest-achieving schools?”

TA11: Could this question include an option such as “State does not require specific percentage.” It may be possible that a State requires districts to spend part of their allocation for this purpose, but does not require a specific percentage.

The suggested addition has been made.

TA18: Option “SCHOOLS HAVE ADDITIONAL FLEXIBILITY” and Option “SCHOOLS EXEMPT FROM USUAL STATE POLICY” could overlap and is a potentially confusing distinction. Could the question include an option that combines both or further clarifies the distinction? (Also applies to District survey TA13 and TA16.)

We combined “additional flexibility” with “exemption from usual state policy” into a single response option. We have revised the prompt to read, “Do the state’s persistently lowest-achieving schools have additional flexibility with or exemptions from any of the following aspects of collective bargaining agreements or state policies?” For consistency, this change has also been made to corresponding questions on the district interview protocol.

TA39: Related to this question, could the survey ask if the State gets help from TA centers or labs in providing support to schools?

Questions SC13 through SC15 in the current instrument will collect information about states’ work with intermediaries, including the reform topics for which intermediaries provide support, the types of intermediaries with whom states work, and the groups that receive support from intermediaries (including schools). In addition, item ‘a’ in SC14 specifically asks about support provided by “Federally-supported comprehensive center, regional educational laboratory, equity assistance center, or content center.” Respondents can also use the “other (specify)” response option in any of these questions to provide information on this topic.

TA40: Related to this question, could the survey ask:

  • How do States judge success (i.e. what constitutes a turned around school)?

  • How many schools have met these criteria and could be deemed successful?

The study team opted not to include questions about how states judge success or how many schools have met their standard of success. Asking these questions in a systematic way would require adding a large number of items to the instrument, which would be too burdensome given its current length. Moreover, the evaluation is designed to rigorously examine the effectiveness of school turnaround models, regardless of how states judge the individual success of schools.

Charter Schools

  1. Did you think about including a question similar to CH9 that asks about students with disabilities?


The study team opted not to include extra questions on students with disabilities (SWDs) or sub-items about any other subgroups because of respondent burden concerns. The instrument questions and sub-items focused on English Language Learners (ELLs) are included in direct response to a request from ED’s Office of English Language Acquisition (OELA), which provided funding to the evaluation so that it can also focus on ELLs.

  2. Could ED add a question at the beginning of the section on Mechanisms for Charter Accountability that asks “Which of the following entities are responsible for monitoring the performance of charter schools?” (using the same response choices that are in the questions about entities that authorize charters).

    • Is ED interested in learning whether states monitor charter schools even if they are not the authorizer?  If so, adding this item is likely sufficient.  But if ED wants to know about the monitoring activities of any authorizer, CH12-CH15 may need to be reworded to include the situation where a non-state authorizer is doing the monitoring.


Given the length constraints of the interview, and because authorizers always have some monitoring responsibility, we think the additional information that a question about the entities responsible for monitoring charter school performance would provide is unlikely to be worth the cost of obtaining it. We think that the inclusion of the phrase “monitored by state or its agent(s)” in questions CH13 and CH15 is sufficient for understanding whether states monitor charter schools even when they are not the authorizer.

  3. Clarify the concept of “monitoring” in question CH12 – should a state answer yes if it renews charters, or is it supposed to refer to more frequent monitoring or monitoring outside of the charter renewal process?


We have revised this question (and also CH14) for clarity. It now reads, “Currently, does your state have mechanisms in place to monitor the performance of charter schools, either directly or via its agent(s)? Please include monitoring activities that occur as part of the charter renewal or reauthorization process.”

  4. Did you think about including students with disabilities as an example in answer option (c) in questions CH13 and CH15?


In the interest of brevity, we are not including students with disabilities as an example. However, respondents will be able to report a focus on students with disabilities in item c in both CH13 and CH15, since the instrument asks the respondent to specify particular student populations as part of their response.

District Survey

Could the survey ask:

  • Is the district attempting to compile information on how its lowest performing schools are implementing the specific interventions related to the four school models (in particular, the Transformation model)?

  • Does the district have data about which interventions have been successful? If so, is it sharing these data with its other low-performing schools?

  • As the district helped its low performing schools put in place the reform models, did any of the four reform models involve particularly difficult barriers to implementation?

Given the goal of the study’s data collection, which is to gather systematic, measurable information about implementation, the study team opted to focus on which reforms were implemented rather than on the process for implementing them. The implementation process is important, however, and is being examined in greater depth in the Study of School Turnaround (SST). Asking questions about the implementation process, measurement of success, and sharing of best practices would require adding a potentially large number of items to our instruments to collect this information in a systematic way. We do not believe the resulting increase in respondent burden is worth incurring, especially since these topics are already being explored in more depth in the Study of School Turnaround.

Teachers

  1. Did the study team consider asking questions about how districts ensure that evaluation systems are valid and reliable and whether/how states assess the reliability and validity of district evaluation systems? What’s the rationale for not asking about this?

The study team opted to focus on which reforms were implemented rather than the process for implementing reforms. For example, the study team will collect information about formal observations (such as who conducts observations and frequency of observation) but is not focused on the process for ensuring the validity and reliability of evaluation systems. We have concerns that asking questions about the process for ensuring that evaluation systems are valid and reliable would require adding a potentially large number of items to our instruments.

  2. It seems that evaluation systems are mostly implemented at the district level. Why are you not asking more questions related to implementation of evaluation systems at the district level, in particular, about human capital decisions in which evaluations are used (similar to item TL50 on p. 62 of the State Interview) and barriers to implementation (similar to item TL48 on p. 61 of the State Interview)?

Based on pilot test results, the study team decided not to ask districts many questions about implementation of evaluation systems because we had concerns that districts would not be able to provide a consistent answer for all of their schools. That is, the correct response could vary across the treatment (SIG) and comparison (non-SIG) schools in a given district (for instance, if additional flexibility in evaluation systems is given to schools implementing School Turnaround Models but not to other schools). Therefore, the district interview and school survey instruments were revised so that the details about the evaluation systems implemented are obtained directly from schools. This will allow the study team to make comparisons of the features of the evaluation systems in place in treatment and comparison schools.

TL5: How will the research team tease out which measures are used for tested and non-tested grades and subjects?

We have added a question that asks about non-tested grades and/or subjects (TL8 in current version), and refocused this question (now TL6) on tested grades and/or subjects.

TL16: Why is the research team not asking this item TL16 for non-tested grades or subjects?

We have added a question that asks about non-tested grades and/or subjects (TL18 in current version). We revised the text of this question (now TL17) to align with the new text of TL18.

School Survey

School Turnaround

TA9: Did ED consider asking for the top five (or top 3) factors, rather than a binary list of ten factors?

The study team is interested in whether each of the factors listed as response options was considered by schools, and is less concerned with which factors schools prioritized. In addition, to the extent possible, we have aligned the wording of questions in the school survey with the wording in the district interview.

TA12: Related to this question, could the survey ask:

  • Considering the strategies implemented at your school, which ones do you consider were the most successful? Which were the least successful?

The evaluation is designed to examine the effectiveness of school turnaround models regardless of how states, districts, or schools judge the individual success of the particular strategies used by schools. The study team opted to focus on more objective, measurable indicators of accomplishments and of the reforms and improvement strategies implemented in schools, rather than on perceptions of success. A potentially large number of items would need to be added to the instrument in order to ask such questions in a systematic way. Moreover, the complementary Study of School Turnaround (SST) includes a set of in-depth case studies that will yield information on how strategies and reforms are perceived by key school and district stakeholders.
