OMB: 1850-0872

Evaluation of Response to Intervention Practices in Elementary School Reading



Supporting Statement for OMB Site Recruitment Clearance Request



Part B



January 29, 2010





Part B: Collection of Information Employing Statistical Methods

The National Center for Education Evaluation (NCEE) of the Institute of Education Sciences (IES), U.S. Department of Education (ED) is conducting the National Assessment of the Individuals with Disabilities Education Improvement Act of 2004 (IDEA 2004, P.L. 108-446), part of which includes an evaluation of Response to Intervention (RtI) practices in reading. RtI may qualify as an early intervening service (EIS) intended to identify and serve students in general education classrooms who may be at risk for academic difficulties and who may be eligible for special education. IES has contracted with MDRC, SRI International, and RG Research Group to conduct the Evaluation of RtI Practices in Reading project.

This section provides supporting statements for each of the five points outlined in Part B of the guidelines for the collection of information for the RtI project. This submission seeks clearance for the site recruitment materials.

A subsequent OMB package will seek approval for instruments to collect data for a study of RtI design, implementation, and impact in sites operating mature RtI programs. This will involve data collection from principals, teachers, and students and collection of existing records on student academic performance.

B1. Respondent Universe and Sampling Methods

The Evaluation of RtI Practices in Reading aims to identify 40-50 schools with mature RtI practices for inclusion in the RtI design and implementation analysis, and an undetermined number of schools with and without mature RtI practices for inclusion in a quasi-experimental analysis of the impacts of RtI practices on student outcomes.1 We are not seeking a sample that is statistically representative of all schools; rather, we seek a sample that includes schools and districts in a diversity of settings and using a variety of RtI practices. For example, we will seek a sample that includes some regional variation and multiple states. Additionally, efforts will be made to recruit larger school districts that contain multiple elementary schools, because the clustering of schools in one central location will help to reduce data collection costs and facilitate possible analyses of the effect of RtI models on student outcomes by providing a pool of potential comparison schools. This planned analysis is outlined briefly below and will be described in more detail in a subsequent OMB package for data collection for the study.

Sampling Plan

As stated in Part A of this OMB submission, the study of RtI design, implementation, and impacts intends to characterize the range of mature RtI models exhibited by these schools; describe how these schools’ models are implemented in practice; draw comparisons between these schools and other schools with less well-developed RtI practices in the same districts and states; and, where feasible, conduct a quasi-experimental analysis of the impact of these practices on student academic outcomes.

If extant data and RtI implementation patterns permit, the study team will analyze the impact of RtI practices on student outcomes in clusters of schools within the overall sample and possibly other schools within the same districts. The study team has identified two quasi-experimental analytical approaches that can utilize extant data: a comparative interrupted time series (CITS) design and a regression discontinuity design (RDD).

Comparative Interrupted Time Series Design

Under a CITS design, deviations from trends in outcomes such as reading achievement and special education identification would be compared between schools that have implemented RtI for several years and comparison schools not implementing RtI over the same period. To assess this design’s feasibility, the study’s site nomination and screening process would gather information about the presence of the minimum conditions necessary for CITS. These conditions include:

  • Sufficient numbers of schools have mature RtI practices;

  • Mature RtI schools have good historical information about the timing and quality of RtI implementation;

  • Mature RtI schools implemented RtI practices with a clearly identifiable starting point;

  • Appropriate, statistically-equivalent comparison schools can be systematically identified (as outlined below and as will be further described in a subsequent OMB submission);

  • All mature RtI schools and comparison schools have historical data on student outcomes measured using consistent metrics for at least three years2 prior to the first year of RtI implementation in the mature RtI schools; and

  • All mature RtI schools and comparison schools have one or more years of follow-up data, measured using the same metrics as those used for the historical data, in the period following RtI implementation in the mature RtI schools.

If these conditions are present, then a sample of 40 schools that are equally split between mature RtI schools and comparison schools and that meet these criteria would provide adequate statistical power for estimating intervention effects in the first follow-up year after RtI implementation, with a minimum detectable effect size of approximately 0.20 standard deviations.3
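To make the power calculation concrete, the following minimal sketch (in Python) computes an approximate MDES from the design parameters listed in footnote 3. It uses a standard simplified variance formula for a two-level clustered comparison and the conventional multiplier of 2.8 for 80 percent power under a two-tailed test at the 0.05 level; the function name and the formula’s simplifications are ours, and because the formula omits the additional variance introduced by CITS trend estimation, it understates the study’s reported MDES of 0.20.

    # Illustrative sketch only: a simplified cluster-design MDES, not the
    # exact CITS variance formula behind footnote 3.
    import math

    def approx_cluster_mdes(n_schools, students_per_school, complete_rate, icc,
                            treatment_share=0.5, multiplier=2.8):
        """Approximate MDES, in student-level standard deviation units."""
        n = students_per_school * complete_rate      # analyzable students per school
        j_t = n_schools * treatment_share            # treatment schools
        j_c = n_schools * (1 - treatment_share)      # comparison schools
        # Variance of a difference in school-level means, in outcome-SD units.
        variance = (icc + (1 - icc) / n) * (1 / j_t + 1 / j_c)
        return multiplier * math.sqrt(variance)

    # Footnote 3 assumptions: 39 schools, 60 students per school, 85 percent
    # complete data, intra-class correlation of 0.01, 1:1 treatment ratio.
    print(round(approx_cluster_mdes(39, 60, 0.85, 0.01), 3))  # ~0.15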

Under the CITS design, the analytical sample would include three types of schools: mature RtI schools identified through the nomination process described in Part A and below; other schools in the same district that are also implementing mature RtI practices and that are identified through contacts with district and school staff; and comparison schools that are not currently implementing RtI or that only recently began implementing RtI.

The first step in using CITS would be to identify key student academic outcome measures that are available for the period before RtI start-up and into a post-start-up period. (See Bloom, 1999 for a discussion of the approach and its application.) Likely outcomes include measures of reading achievement, grade promotion, and identification for special education. The second step involves calculation of historical trends in these outcomes prior to the start-up of RtI in both the RtI schools and comparison schools and estimation of likely post-start-up trends based on these historical data. In the third step, these estimated trends would be compared with the actual trends in outcomes observed in the post-start-up period in both groups of schools; from this comparison, the “deviation from trend” can be calculated. In the fourth step, deviations from trend in the treatment schools would be compared to deviations from trend in the comparison schools in order to “subtract out” the effect of historical events or common policies/reforms that may have occurred concurrently with the implementation of RtI. Thus, the estimated impact of mature RtI would be the difference between the average deviation from trend in the treatment schools and the average deviation from trend in the comparison schools.
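The four steps can be illustrated with a minimal sketch (in Python, with entirely hypothetical data): fit each school’s baseline trend, project it into the follow-up year, compute the deviation from trend, and difference the average deviations across the two groups of schools.

    import numpy as np

    def deviation_from_trend(baseline_years, baseline_scores, followup_year,
                             followup_score):
        """Observed follow-up score minus the projection of the baseline trend."""
        slope, intercept = np.polyfit(baseline_years, baseline_scores, deg=1)
        projected = intercept + slope * followup_year
        return followup_score - projected

    # Hypothetical data: (baseline years, mean reading scores, follow-up year,
    # follow-up score) for each school.
    rti_schools = [([2004, 2005, 2006], [200.0, 203.0, 205.0], 2007, 212.0)]
    comparison_schools = [([2004, 2005, 2006], [198.0, 201.0, 204.0], 2007, 207.5)]

    dev_t = np.mean([deviation_from_trend(*s) for s in rti_schools])
    dev_c = np.mean([deviation_from_trend(*s) for s in comparison_schools])

    # Estimated impact: average treatment deviation minus average comparison deviation.
    print(round(dev_t - dev_c, 2))

In practice these quantities would be estimated in a pooled regression framework with appropriate standard errors rather than school by school, but the logic of the estimator is the same.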

One key component of the analytic process would be to select comparison schools so that they are statistically equivalent to the treatment schools in terms of their observed baseline (pre-start-up) characteristics. Many methods exist for selecting comparison schools, including various matching and/or blocking procedures. The study team plans to follow this general approach for identifying comparison schools (though the approach will be tailored to conditions in the field; a schematic sketch of the matching step follows the list):

  1. Identify all other elementary schools within the same district as the mature RtI schools that are to be included in the CITS sample (using information from the Common Core of Data and/or district web sites).

  2. From this pool of elementary schools, identify comparison schools that did not implement RtI practices during the selected study time period.4 As discussed in Part A of this submission, the study team would contact district staff for assistance with identifying these non-implementing comparison schools (see district screener in Appendix 5). Subsequently, the study team would contact these potential comparison schools (see comparison school screener in Appendix 6) to confirm the absence of RtI practices and the availability of baseline and comparative student data during the time period of interest.

  3. Match mature RtI schools with non-implementing comparison schools that are similar on demographic characteristics (e.g., racial/ethnic composition, socioeconomic composition), school characteristics (e.g., pupil/teacher ratios), and/or baseline achievement measures. Depending on the number of potential comparison schools within a district and other analytical considerations, this step might involve either one-to-one matching of mature RtI and comparison schools; matching a single mature RtI school to a group of comparison schools; or matching a group of mature RtI schools to a group of comparison schools.

  4. Ideally, comparison schools would be drawn from the same district as their corresponding mature RtI schools. However, comparison schools may be drawn from demographically-similar neighboring districts if a district’s small size or its history of RtI implementation (e.g., all schools implementing RtI simultaneously) renders within-district comparisons impossible. In this event, the study team would contact adjacent districts for assistance with identifying comparison schools (see Appendices 7 and 8).
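As a schematic illustration of the matching in step 3, the sketch below performs greedy one-to-one nearest-neighbor matching on z-scored school characteristics. The covariates, values, and function name are hypothetical placeholders; the study team’s actual procedure may use different matching or blocking methods.

    import numpy as np

    def match_one_to_one(rti, pool):
        """Greedy nearest-neighbor matching on z-scored covariates, without replacement."""
        combined = np.vstack([rti, pool])
        z = (combined - combined.mean(axis=0)) / combined.std(axis=0)
        z_rti, z_pool = z[:len(rti)], z[len(rti):]
        available = set(range(len(pool)))
        matches = {}
        for i, school in enumerate(z_rti):
            distances = {j: np.linalg.norm(school - z_pool[j]) for j in available}
            best = min(distances, key=distances.get)
            matches[i] = best
            available.remove(best)
        return matches

    # Columns: percent minority, percent free/reduced-price lunch, pupil/teacher ratio.
    rti_schools = np.array([[0.40, 0.55, 16.0], [0.70, 0.80, 18.5]])
    comparison_pool = np.array([[0.45, 0.50, 15.5], [0.72, 0.78, 19.0], [0.10, 0.20, 22.0]])

    print(match_one_to_one(rti_schools, comparison_pool))  # {0: 0, 1: 1}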

In addition, the study team may use a comparison outcome (Shadish et al., 2002) to account for school-specific historical events and to “validate” the trends in comparison schools as a counterfactual. A comparison outcome – also called the “non-equivalent dependent variable” or “untreated outcome” – is an outcome that is affected by the same school-specific historical events (for example, new or innovative leadership in RtI schools) as the outcome measure of interest (reading achievement), but that is not affected by reading RtI. An RtI school’s non-equivalent outcome can thus serve as a counterfactual for what would have happened to its reading achievement outcome in the absence of the intervention. Accordingly, a “comparison outcome” time series could be used in lieu of a “comparison group” time series: we would estimate the “deviation from trend” in this comparison outcome and compare this deviation to that of outcomes affected by reading RtI. The key step in this process is to identify a comparison outcome that in theory and in practice could not have been affected by the implementation of RtI.
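The comparison-outcome logic can be sketched in the same deviation-from-trend terms. In the hypothetical example below, the “untreated” series is a placeholder for whatever measure the study team ultimately judges could not be affected by reading RtI.

    import numpy as np

    def deviation(years, scores, followup_year, followup_score):
        # Deviation of the observed follow-up value from the baseline linear trend.
        slope, intercept = np.polyfit(years, scores, deg=1)
        return followup_score - (intercept + slope * followup_year)

    years = [2004, 2005, 2006]
    reading = deviation(years, [200.0, 203.0, 205.0], 2007, 212.0)    # RtI-affected outcome
    untreated = deviation(years, [180.0, 182.0, 184.5], 2007, 186.0)  # comparison outcome

    # Impact estimate net of school-specific historical events.
    print(round(reading - untreated, 2))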

Regression Discontinuity Design

An alternate quasi-experimental analysis of RtI impacts might involve the use of a regression discontinuity design (RDD). This approach would compare reading achievement outcomes for at-risk students who, based on their benchmark test scores, qualified to receive additional reading support with outcomes for students in the same school who met reading benchmarks initially (likely focusing on students near the cutoff for Tier 2 intervention). Often, mature RtI schools use a benchmark test at the beginning of the fall semester to identify at-risk students for additional support. Students whose benchmark test scores fall below a pre-determined cutoff point are deemed at risk and are referred to additional instructional support (treatment group), and those whose benchmark test scores are above the cutoff stay in the general education class (comparison group). The so-called “sharp” RDD assumes that the decision to provide the added support is entirely determined by the benchmark test score. The so-called “fuzzy” RDD can accommodate a situation in which another factor also influences the decision about receipt of extra support, such that some students identified for the treatment group based on the benchmark test score do not actually receive the extra support and some students identified to receive regular services do receive extra support.5

Therefore, by statistically controlling for the value of the benchmark test score in a regression model, one can (under appropriate conditions) account for any unobserved differences between the treatment and comparison groups and thereby obtain internally valid impact estimates for receiving more intensive, additional reading support.
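As a minimal sketch of this estimation strategy under a sharp design, the simulation below (in Python, with a fabricated cutoff, an assumed linear functional form, and simulated data) regresses a year-end outcome on a treatment indicator, the centered benchmark score, and their interaction; the coefficient on the treatment indicator estimates the jump at the cutoff.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n, cutoff = 500, 40.0

    benchmark = rng.uniform(20, 60, n)
    treated = (benchmark < cutoff).astype(float)   # below the cutoff -> Tier 2 support
    centered = benchmark - cutoff
    # Simulated outcome with a true treatment effect of 5 points at the cutoff.
    outcome = 50 + 0.8 * centered + 5.0 * treated + rng.normal(0, 4, n)

    X = sm.add_constant(np.column_stack([treated, centered, treated * centered]))
    fit = sm.OLS(outcome, X).fit()
    print(round(fit.params[1], 2))  # estimated effect of being identified for support

Under a fuzzy design, the below-cutoff indicator would instead serve as an instrument for actual receipt of support (for example, via two-stage least squares).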

As with CITS, the study team will assess the feasibility of an RDD analysis during its site nomination and screening process, which would gather information about the presence of minimal conditions necessary for RDD. These conditions include:

  • Mature RtI schools maintain benchmark test data for each student;

  • Mature RtI schools assign students to treatment or non-treatment status (i.e., receipt or non-receipt of Tier 2 intervention) based on whether their value for a numeric rating (benchmark test score) is above or below a cutoff point;6

  • Mature RtI schools maintain a record of the cutoff point(s) used to assign students to receive additional instructional support;

  • Mature RtI schools maintain records tracking students’ treatment status throughout the year;

  • Mature RtI schools can provide detailed information about the process of identifying students for additional instructional support, including whether identification involved a decision process based on a single benchmark score, or whether multiple benchmark test scores (and/or other factors) were used to identify students for support; and

  • Mature RtI schools are either (i) able to provide data from a year-end performance measure that would allow us to assess the impact of additional instructional support, or (ii) willing to allow study-administered year-end testing.

If these conditions are present, then a sample of 40 RtI-implementing schools would also provide adequate statistical power for estimating the effects of being identified to receive extra assistance, with a minimum detectable effect size of approximately 0.20 standard deviations.7

If the above conditions are present, and if we can correctly account for the relationship between the benchmark test score and the outcome measure in a statistical model, then this approach can provide internally valid estimates of the impact on at-risk students’ reading achievement of being identified to receive additional instructional support within a mature RtI system. This analysis is directly relevant to an important issue in educational practice. The Office of Special Education Programs of the Department of Education has recently issued a guideline explaining to states how they can use IDEA funds to provide coordinated early intervening services to students not currently identified as needing special education services (Knudsen, 2008). Information about the impact of providing more intensive reading support under an RtI framework will be useful in considering the effectiveness of services to students on the margin of needing extra assistance.

Feasibility of Analytical Approaches

Finally, as stated in Part A, the two analytic designs under consideration – CITS and RDD – present differing requirements for the student data systems and RtI practices that must be in place to support the analysis. CITS would require both a clearly identifiable starting point for RtI implementation at the RtI schools and state- or district-level data on student outcomes using consistent measures that begin prior to the implementation of RtI and continue after its startup. The RDD analysis requires data on the RtI schools’ ranking of children’s need for assistance, the cutoff points used in deciding on who is offered more intensive reading assistance, and children’s actual receipt of such assistance. Thus, the recruitment-related data collection proposed in this submission will provide information on the presence of these conditions in the nominated districts and sites in order to identify subsamples of schools that can be included in one or both of the quasi-experimental analyses of RtI impacts.

B2. Information Collection Procedures

As mentioned earlier, this OMB submission is requesting clearance for one portion of the data collection – the RtI screener materials for site recruitment. A second OMB submission will be prepared and submitted in winter 2010 to obtain clearance for the full set of data collection instruments for the Evaluation of RtI Practices in Reading. The site recruitment materials are included in this package in Appendices 1 through 8.

Nomination and Screening Process

The nomination process will result in approximately 40-50 schools that have been screened to confirm that they have used the core RtI components specified previously for at least two years. In addition, the school screening process should provide essential information on the presence or absence of the conditions needed for a quasi-experimental analysis of impacts. Three steps are involved in the nomination and screening process.

  1. Seek nominations of districts and schools: Contact experts who are knowledgeable about RtI as researchers or experienced practitioners. These experts will represent different stakeholders and perspectives in RtI, including researchers, practitioners, and representatives from organizations supporting RtI activities. A letter and description of the study will be sent to the nominators outlining the purpose of the nomination process and how the information they provide will be used (see Appendix 1 for the study description and Appendix 2 for the letter to experts).



    a. Researchers actively involved in RtI reading will be contacted. Among those to be contacted are Dr. Ed Shapiro, a Lehigh University professor who leads an OSEP-funded model demonstration grant on progress monitoring and conducts the Pennsylvania state RtI evaluation; Gerry Tindal, a University of Oregon professor who leads an OSEP-funded model demonstration grant on progress monitoring; evaluators of state Reading First programs, such as Scott Baker at Eugene Research Institute in Oregon; and the study’s Technical Working Group (TWG) members Carol Connor, Deborah Speece, Donald Compton, Rollanda O’Connor, and Sharon Vaughn, all of whom are nationally recognized researchers in RtI.



    b. Practitioners who have worked extensively with RtI at the ground level – and whose work is recognized by OSEP and IES – will be contacted, including: Amy Sichel, a member of the current study’s TWG and Superintendent of the Abington School District in Pennsylvania; Joy Eichelberger, Pennsylvania State RtI Lead; Judy Elliott, a member of the current study’s TWG and Assistant Superintendent of the Long Beach Unified School District in California; David Tilly, who has worked extensively with the Heartland Model and written scholarly papers about the problem-solving model (e.g., Tilly, 2007; Tilly, Harken, Robinson, & Kurns, 2008); and Douglas Marston, who has worked with the Minneapolis school district for two decades, leads an OSEP-funded model demonstration grant on progress monitoring, has written several scholarly papers on RtI, and is involved with RtI training (e.g., Marston et al., 2003).



    c. Representatives from national organizations working to advance the design and implementation of RtI will be contacted.8 Possibilities include the Council for Exceptional Children (CEC), National Association of State Directors of Special Education (NASDSE), National Center for Learning Disabilities (NCLD), National Center on Response to Intervention/RtI4Success, RTI Action Network, the Office of Special Education Programs (OSEP), and the U.S. Department of Education Regional Resource and Comprehensive Labs, particularly those that have conducted reviews of state RtI activities.9



  2. Organize, collect additional information, and prioritize the nominations for further screening.

    a. The study team will create a list of all the nominated sites, identify the type of nominator (e.g., researcher, practitioner, organization), and tally the number of responses for each site.



    b. The study team will review web sites for the nominated sites to ascertain school readiness for the study, including: district and school RtI policies and resources, the number of elementary schools in the district (potential comparison schools in a CITS analysis), district and school demographics, and the availability of longitudinal data (pre- and post-RtI startup) from standardized tests administered to students in 1st through 5th grades. In grades 1 and 2, we anticipate that testing will consist of short standardized tests of reading skills (often fluency) administered to all 1st and 2nd grade students, and perhaps district-administered tests of broader reading achievement. In grades 3 through 5, where Federal testing requirements apply, we anticipate that there will also be scores from state-wide standardized tests.



    c. For the possible CITS analysis, districts and schools will be prioritized for further screening on the basis of the likely availability of achievement data over time on students in grades 1-5 and the availability of likely comparison schools in the district. For the possible RDD analysis, nominations may include information relevant to feasibility, but determining whether the conditions for an RDD analysis exist will require further screening of sites to learn about their decision-making rules for providing intensive services and the availability of data on student benchmark performance and instructional placement.



  3. Further screen nominated sites: Begin with high-priority sites and screen them to determine the presence of core RtI components and practices and of the data needed for quasi-experimental analyses.

    a. Initial contact with district staff. Mail a letter with the study description to school district RtI coordinators, or the appropriate district administrator, to: (1) describe the study; (2) let them know that a school in their district has been identified by experts as a possible study site and that we plan to contact this school; and (3) tell them that we will contact the district for further discussion and more information collection regarding the nominated school and other district schools that may be mature in their implementation of RtI (see Appendix 3).



    b. Initiate contact with school staff. Research staff will call the school principal, or the appropriate person identified by the principal, to explain that a letter was sent to the district announcing our intent to contact the school, and to schedule a screening call to identify the presence of RtI core components, confirm data availability for the quasi-experimental analysis, and ascertain the school’s interest in participating in the study. The description of the study (Appendix 1) will be sent to the school administrator prior to the scheduled phone call.



    c. Screen school sites. Using a structured protocol, the study team will conduct screening calls with expert-nominated schools (see Appendix 4a for the screening protocol and Appendix 4b for the spreadsheet used to obtain information on the tests administered by the school to 1st and 2nd graders). This portion of the data collection will be conducted during the screening call or, if more convenient for school-level staff, the spreadsheet will be sent to school staff, completed by them, and returned electronically to the study team. Key goals of the call are to determine which of the critical RtI practices are in place and for what length of time, and whether the school has the RtI data needed for a quasi-experimental impact analysis.



    d. Contact the district (see Appendix 5) to gather additional information and to confirm that a school appears promising for participation in the study, either as a mature RtI school or a comparison school:

      i. Confirm that the district would consider the expert-nominated school as mature in its implementation of RtI;

      ii. Learn whether there are other demographically similar schools in the district that might be possible comparison schools or possible additional mature RtI schools in the CITS analysis; and

      iii. Learn about school/district use of standardized assessments in grades 1-5. Appendix 5 also includes a brief table in which the respondent can identify the standardized tests administered by the district across all schools.



    e. Screen district-nominated comparison schools and district-nominated mature RtI schools for the CITS analysis.

      i. Collect basic information from pre-existing data sources (e.g., the U.S. Department of Education’s Common Core of Data) on student body characteristics and trends in student outcomes for other schools that might be possible comparison or additional RtI treatment schools in the district.

      ii. Contact the district-nominated mature RtI schools using a structured screening protocol to learn about their RtI practices (Appendices 4a-4b).

      iii. Contact district-nominated potential comparison schools and use a quick screening instrument (Appendix 6) to learn about RtI-related practices, the timing of implementation, and a brief description of the school’s approach for teaching struggling readers, either currently (if the school has not implemented any RtI practices) or prior to their implementation of RtI. The goal is to learn whether the school would in fact present a service contrast to mature RtI implementers.

      iv. If there are insufficient numbers of nominated schools and districts for the study, adjacent districts will be contacted to determine their potential eligibility for the study. An information letter (Appendix 7) will be sent to the appropriate district administrator at the adjacent district that explains the study and indicates our interest in talking with them. Subsequently, a follow-up screening call (Appendix 8) will be made to determine whether the adjacent district has schools implementing RtI and the number of years, if any, that the schools have been implementing RtI, and to gauge their interest in participating in the study.



Following this data collection, the study team will assess the possible sites for the study and determine whether there is a sufficient number of appropriate sites. The team expects to recruit approximately 40-50 RtI schools for the study. It is possible that additional schools will be recruited for the potential CITS or RDD studies, depending upon the analytic design selected and the number of schools in the design study that meet the criteria for inclusion in a quasi-experimental analysis. If insufficient numbers of schools have been confirmed as appropriate for the study design, we will screen additional schools nominated by both experts and district representatives and, if necessary, contact adjacent districts and schools (see Appendices 7 and 8 for the adjacent district information letter and screening protocol).

Once a sufficient number of schools meeting study requirements have been identified, we will recommend to IES a list of schools/districts for inclusion in the study, indicating which can be part of one or both of the quasi-experimental analyses of RtI impacts. Our goal in making these recommendations is to identify a sample of schools and districts with regional diversity and a mix and range of RtI practices that can inform the decisions of schools considering how to implement RtI practices. In collaboration with IES, a final sample of schools will be selected.



B3. Methods to Maximize Response Rates

Site recruitment will be an intensive effort. The recruitment approach is based on establishing strong partnerships with the schools and actively addressing potential concerns. This will involve:

  • Early and ongoing communication with IES about potential schools;

  • Contacting national and state-level organizations to build general support and enthusiasm for the study, and contacting experts within these organizations with strong knowledge of and relationships with potential sites; and

  • Identifying sampled schools in multiple phases (expert nominations, reviews of school/district websites, school phone screen, district follow-up), where pre-existing information is used to the greatest extent possible to narrow the sampling pool prior to conducting phone screens.

Once recruitment efforts begin, the contractor will be persistent in attempts to reach the high-priority sampled schools (and associated district contacts, as necessary) via phone.10 Contractor staff will keep a log of all phone calls and emails to schools and districts and will keep IES apprised of any issues that emerge.

B4. Tests of Procedures to Be Undertaken

Because the structure of the instruments is similar to those used for other studies conducted by the contractor, no field testing of the phone screeners or accompanying spreadsheet is planned.

All screening instruments serve as guides for conversations between the contractor and school/district staff. Contractor staff will be able to ask clarifying questions to ensure that schools and districts are responding appropriately and that we are correctly interpreting their responses. The response time estimates are based on past experience with similar efforts.

B5. Individuals Consulted on Statistical Aspects of Design

Dr. Pei Zhu from MDRC is leading the research design subtask for MDRC. MDRC and SRI have also held multiple conference calls with sub-groups of the study’s Technical Working Group (TWG) with expertise on RtI designs and have met with the entire TWG on several occasions.

The TWG includes:

  • Carol Connor, Florida State University

  • Donald Compton, Vanderbilt University

  • Judy Elliott, Los Angeles Unified School District

  • David Francis, University of Houston

  • Paul McDermott, University of Pennsylvania

  • Rollanda (Randi) O’Connor, University of California-Riverside

  • Amy Sichel, Abington School District (Abington, Pennsylvania)

  • Jeff Smith, University of Michigan

  • Deborah Speece, University of Maryland-College Park

  • Sharon Vaughn, University of Texas-Austin





References

Bloom, Howard (1999). “Estimating Program Impacts on Student Achievement Using ‘Short’ Interrupted Time Series.” MDRC. Retrieved from http://www.mdrc.org/publications/82/full.pdf.



Design of the National Assessment of IDEA (July, 2006). Prepared under contract by Westat, Rockville, MD.



Fuchs, D. & Fuchs, L. (2006). “Introduction to Response to Intervention: What, why, and how valid is it?” Reading Research Quarterly, 41(1), 93-99. Retrieved from http://www.uoregon.edu/~cfc/classes/SPSY_607/Readings/Class%203/Fuchs.pdf.

Knudsen, W.E. (2008). “Coordinated Early Intervening Services Under Part B of the Individuals with Disabilities Education Act.” Memo from the Office of Special Education Programs to Chief State School Officers and State Directors of Special Education. Retrieved November 5, 2008 from http://www.rti4success.org/images/stories/TA_amy/08-09coordinatedearlyinterveningservices-ceisunderpartbofidea.pdf.



Shadish, William R., Thomas D. Cook, and Donald T. Campbell (2002). Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston: Houghton Mifflin Company.



Van Der Klaauw, W. (2008). “Regression-Discontinuity Analysis.” In The New Palgrave Dictionary of Economics. Palgrave Macmillan.

Vaughn, S. & Fuchs, L. (2003). “Redefining learning disabilities as inadequate response to instruction: The promise and potential problems.” Learning Disabilities Research & Practice, 18, 137–146.



Zirkel, P.A. (2008, Jan/Feb). “RTI After IDEA: A survey of state laws.” TEACHING Exceptional Children, 71-73.



1 The number of schools with and without mature RtI practices to be included in the quasi-experimental impact analysis depends upon the analytic design selected and the number of schools that meet the criteria for inclusion in this analysis, as will be discussed below. It is likely that only a subset of the mature RtI schools included in the design and implementation study will meet these criteria, and thus mature RtI schools in addition to those included in the design and implementation study may be recruited for inclusion in the impact analysis in order to attain the desired sample size. If a comparative interrupted time series design is used, at least 20 comparison sites will also need to be recruited, as discussed below.

2 The literature does not provide much guidance on the minimum number of baseline years needed. MDRC tends to use three years as a minimum requirement for CITS. In general, longer baseline periods yield better estimates of trends, and, accordingly, yield better estimates of impacts based on deviations from trends. However, the study team may reassess this minimum if significant numbers of schools have only two years of historical data on consistently-measured student outcomes prior to RtI implementation.

3 A minimum detectable effect size is defined as the smallest true program impact that would have an 80 percent chance of being detected (have 80 percent power) using a two-tail hypothesis test at the 0.05 level of statistical significance. The assumptions used to arrive at the comparative interrupted time series MDES are: 39 total RtI-implementing schools and comparison schools with requisite data; 60 students per school, on average; 85 percent of students with complete data; intra-class correlation of 0.01; at least 3 years of baseline data in all schools; and a 1:1 ratio between RtI-implementing schools and comparison schools in the analytical sample. Treatment-to-comparison school ratios other than 1:1 are possible under CITS, and would yield different MDES estimates.

4 Non-RtI-implementing schools might include schools that have never implemented RtI or “later-implementing” schools that initiated RtI practices more recently than the selected study time period. Thus, later-implementing schools could still serve as effective counterfactuals by providing a time series of student outcomes unaffected by RtI practices during the years between the start-up of RtI in the mature schools and the point at which the later-implementing schools began RtI.

5 See Van Der Klaauw (2008) and Shadish et al. (2002).

6 A fundamental RDD assumption is that students’ ratings and the cut-off point are determined independently of each other – such that assessments of individual students’ reading abilities are not influenced by considerations about whether to provide additional support to such students. The study team will verify this assumption’s validity during follow-up conversations with mature RtI schools.

7 The assumptions used to arrive at the regression discontinuity MDES are: 41 RtI-implementing schools with requisite data; 60 students per school, on average; 85 percent of students with complete data; 25 percent of students assumed to be in the treatment group; an R-squared value of 0.70 for the proportion of variation in treatment status predicted by the benchmark score; and an R-squared value of 0.40 for student-level regressors.

8 The list of organizations is drawn from a list of conference attendees at a Department of Education sponsored RtI Coordination meeting in June, 2008.

9 Bocala, Mello, Reedy, & Lacireno-Paquet (2009); Harr-Robins, Shambaugh, & Parrish (2009); Sawyer, Holland, & Detgen (2008); Stepanek & Peixotto (2009); Zirkel (2008).

10 During subsequent data collection (not described in the current OMB submission), teachers and other school staff may also be offered compensation for the time required to assist the study team (by scheduling visits, collecting data from administrative records, and participating in interviews or focus groups beyond the regular school day). Such compensation may be mentioned during initial efforts to contact the 70 initially-sampled schools as another strategy to maximize response rates early in the site recruitment process.
