Impact Evaluation of Data-Driven Instruction Professional Development for Teachers

Supporting Statement Part B

August 26, 2015


Contract Number:

ED-IES-12-C-0086

Mathematica Reference Number:

40166-700

Submitted to:

Institute of Education Sciences

IES/NCEE

U.S. Department of Education

555 New Jersey Avenue, NW

Washington, DC 20208

Project Officer: Erica Johnson

Submitted by:

Mathematica Policy Research

600 Alexander Park

Suite 100

Princeton, NJ 08540

Telephone: (609) 799-3535

Facsimile: (609) 799-0005

Project Director: Phil Gleason

Impact Evaluation of Data-Driven Instruction Professional Development for Teachers

Supporting Statement Part B

May 26, 2015





CONTENTS

PART B: COLLECTION OF INFORMATION EMPLOYING STATISTICAL METHODS

1. Respondent Universe and Sampling Methods

2. Statistical Methods for Sample Selection and Degree of Accuracy Needed

3. Methods to Maximize Response Rates and Deal with Nonresponse

4. Pilot Testing

5. Individuals Consulted on Statistical Aspects of the Design

REFERENCES


APPENDIX A: TEACHER ASSIGNMENT DATA REQUEST

APPENDIX B: TEACHER SURVEY QUESTIONNAIRE AND ACCOMPANYING LETTER

APPENDIX C: TEACHER ACTIVITY LOG AND ACCOMPANYING LETTER

APPENDIX D: PRINCIPAL SURVEY QUESTIONNAIRE AND ACCOMPANYING LETTER

APPENDIX E: STUDENT RECORDS REQUEST

APPENDIX F: CONFIDENTIALITY PLEDGE

APPENDIX G: REMINDER EMAIL AND CALL SCRIPTS






TABLES

Table 1. Data-related Activities, by Source

Table 2. Teacher and Principal Intermediate Outcome Measures, by Source

Table 3. Minimum Detectable Effects by Sample Size



SUPPORTING STATEMENT FOR PAPERWORK
REDUCTION ACT SUBMISSION

This OMB package requests clearance for data collection activities for a rigorous evaluation of data-driven instruction (DDI) in 104 schools from 12 school districts. Data-driven instruction involves the use of student assessment data to help teachers adapt their instruction and, ultimately, improve student achievement. The study’s intervention plan will build school capacity for DDI by: (1) helping schools set up structures and processes that enable teachers and other school staff to efficiently carry out data-driven instruction, and (2) training and coaching teachers in the skills needed to understand student data and implement improved instructional strategies to address student needs. By participating in professional development and ongoing DDI activities, teachers are expected to develop the knowledge and skills needed to help them adapt and improve their instructional strategies based on student data, and ultimately, improve student achievement.


We plan to collect student records and teacher-assignment data from participating districts and schools, and conduct a teacher survey, teacher logs, and a principal survey. The Institute of Education Sciences (IES) within the Department of Education (ED) has contracted with Mathematica Policy Research and its partners Abt Associates and Evidence-Based Education Research & Evaluation to conduct the evaluation, and Public Consulting Group–Focus on Results to provide technical assistance to schools implementing the DDI program.

The evaluation’s main objectives are to understand how DDI is implemented and to rigorously estimate the impact of a comprehensive DDI program on student achievement and teacher and principal practices. The implementation component will use information collected from the technical assistance (TA) team, a teacher survey and logs, and a principal survey. For the impact evaluation, the experimental design involves randomly assigning schools within a district to either a treatment or control group. The treatment schools will implement a comprehensive DDI program, with technical assistance provided by an organization that works with schools to implement such instruction, and the control schools will not implement new DDI initiatives during the study years. Student outcomes will include students’ achievement on math and reading state assessments. Teacher and principal outcomes will include teachers’ and principals’ use of data, teachers’ instructional strategies, and the extent and nature of teacher collaboration.

This OMB clearance request concentrates on materials that will be used to collect principal, teacher, and student data. Included in this request are the following: a teacher assignment data request (Appendix A), a teacher survey questionnaire and an accompanying letter (Appendix B), a form and letter for the administration of teacher logs (Appendix C), a principal survey questionnaire and accompanying letter (Appendix D), a student records request (Appendix E), and a confidentiality pledge (Appendix F).

PART B: COLLECTION OF INFORMATION EMPLOYING STATISTICAL METHODS

1. Respondent Universe and Sampling Methods

This study will not statistically sample districts and schools. It will, instead, rely on a convenience sample of 104 schools within 12 districts that volunteer to participate in the evaluation and meet the study requirements. The evaluation does not aim to make statements that generalize beyond the districts and schools under study.


The proposed data collection activities include:

  • Teacher and Principal Assignment Data: We will collect teacher assignment data from participating schools in winter 2016. We will ask schools to identify teachers currently teaching fourth and fifth grades, provide contact information for these teachers (such as school e-mail addresses), and indicate whether each teacher taught at the school in fall 2014 (Appendix A).1 The study team will attempt to identify principals at study schools from public sources (such as school websites). If that information is not publicly available, we will ask schools to provide principal assignment information. Teacher and principal assignment data will be used to determine the teacher and principal samples for the spring 2016 teacher and principal surveys. Information on teacher and principal assignments in fall 2014 will be used to examine whether the DDI intervention led to a treatment-control difference in educators’ mobility between fall 2014 and the 2015-2016 school year.

  • Teacher Survey and Logs: We will administer a web-based survey in spring 2016 to 500 teachers (Appendix B). Teachers will also have the option of completing the survey on hard copy or by phone, if they prefer. The survey will provide information on professional development and training received by teachers (particularly on DDI topics), school data culture, instructional planning and collaboration, teachers’ access to and use of data to guide instruction, teachers’ instructional strategies, and teachers’ demographic and background characteristics. We will administer web-based logs to teachers during the 2015-2016 school year (Appendix C). The same 500 teachers asked to complete the teacher survey will also be asked to complete these logs at two points during the year, in each case reporting on their activities over two consecutive days. The logs will provide information on teachers’ day-to-day activities, including planning activities (individually and in collaboration with other teachers) and instructional strategies in the classroom. Teacher survey and log data will be used to analyze teacher intermediate outcomes and to measure important aspects of the treatment-control contrast. We will also examine treatment-control differences in the training teachers receive and other activities surrounding the use of data in schools.

  • Principal Survey: We will administer a hard-copy survey to all 104 study principals in spring 2016 (Appendix D). Principals will also have the option to provide their answers to a trained interviewer over the phone. The survey will focus on topics such as school-wide leadership activities, school data culture, school-wide access to and use of data, and principals’ demographic and background characteristics. Principal survey data will provide additional information on the treatment-control contrast in schools’ training and activities surrounding the use of data, as well as shed light on some aspects of DDI implementation.

  • Student Records Data: In summer 2016, we will collect student outcome measures (for example, math and reading test scores from state assessments) for the year of full implementation (2015-2016) as well as for two prior years (2014-2015 and, if available, 2013-2014). We will also collect student demographic and background data, such as gender, race/ethnicity, and eligibility for free or reduced-price lunch; student attendance and disciplinary measures (if available); and student enrollment data for fall 2014 and winter 2016 (Appendix E). We will use test score data to estimate the impact of DDI on student achievement, the key outcome of interest. Information on students’ demographic and socioeconomic characteristics and their achievement test scores prior to the study school year will be used to describe the students in the study and to develop more precise impact estimates. Given the nature of the intervention, we do not expect that data-driven instruction would affect students’ mobility patterns during the study period. However, we will use student enrollment data from fall 2014 and winter 2016 to test this empirically.

Number of Districts and Schools. To detect the impact of DDI on student achievement at the level requested by ED, we will include in the evaluation 104 schools serving grades 4 and 5 (see Table 3 below). Each district will contribute an average of 8 to 9 schools; overall, 12 districts will take part in the evaluation.

Randomly Assigning Schools. Our random assignment approach involves identifying strata—or matched pairs—of similar schools within districts and then randomly assigning schools within each stratum to the treatment or control group. The matched pairs of schools were formed prior to random assignment, and each stratum was primarily based on average student achievement in prior years, the characteristic most likely to be predictive of student outcomes during the study period. Other characteristics to be considered in forming strata include: (1) the proportion of students receiving free or reduced-price meals, (2) the racial/ethnic distribution of students, (3) the proportion of students classified as English language learners, (4) Title I eligibility, and (5) school size. Information on these school characteristics comes from the Common Core of Data and state websites. Random assignment within strata will increase statistical precision by reducing random differences in the average baseline characteristics of treatment and control schools (Imai et al. 2009).
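To make the procedure concrete, the sketch below illustrates pair-matched random assignment in Python. It is an illustration only, not the study's actual assignment code, and the column names (district, pair_id) are hypothetical placeholders for a roster of study schools whose strata have already been formed.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(seed=2015)  # fixed seed so the assignment can be reproduced and audited

    def assign_within_pairs(schools: pd.DataFrame) -> pd.DataFrame:
        """Randomly assign one school in each matched pair to treatment, the other to control."""
        schools = schools.copy()
        schools["treatment"] = 0
        for _, pair in schools.groupby(["district", "pair_id"]):
            treated = rng.choice(pair.index)        # pick one member of the stratum at random
            schools.loc[treated, "treatment"] = 1   # the remaining member stays in the control group
        return schools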


Identifying and Recruiting Districts and Schools. The districts and schools included in this study will not be randomly sampled and so will not be statistically representative of the broader group of all public schools serving fourth and fifth graders nationally. Participating districts and schools have to be both interested in and eligible to participate in the study. Interested districts have to be comfortable with implementing DDI; with allowing the intervention to be randomly assigned to their participating schools; and with complying with data collection activities, including student data collection, a teacher survey, teacher logs, and a principal survey.2 Eligible districts must also meet the following criteria:


  • Summative and interim assessments. To effectively implement DDI, teachers and administrators in participating schools need to monitor student achievement on a regular basis using student data from both summative assessments (cumulative end-of-year assessments) and interim assessments (periodic assessments intended to evaluate student learning progress during the year). Teachers in study schools will rely upon existing summative and interim assessments used within each district in order to monitor student achievement; the evaluation will not implement study-specific assessments within participating schools. Both summative and interim assessments used by each participating district need to be uniformly administered in 2014-2015 and 2015-2016. Interim assessments have to be highly aligned to the state summative assessment and, ideally, made by the same developer as the summative assessment.3 Thus, recruitment targets districts that use summative and interim assessments meeting both of these criteria (including, but not limited to, districts in states that are part of the Smarter Balanced Assessment Consortium).

  • Districts serving high needs students. Recruitment efforts aim to include districts with a large number of high needs schools that would most benefit from DDI. Such districts have a relatively high proportion of students eligible for free or reduced-price meals (FRP), with FRP eligibility rates higher than 40 percent; within those districts, recruitment targets higher poverty schools.

  • District size. To meet our targeted sample size, the study focuses on districts with at least eight elementary schools that contain fourth and fifth grades. The study targets elementary schools for two reasons. Districts typically have a larger number of elementary schools than middle schools, so identifying and recruiting a sufficient number of elementary schools should be easier. In addition, elementary schools are less likely than middle schools to be departmentalized, and tracking by ability level is also less common at the elementary school level. Thus, the intervention will be able to work with groups of teachers who are with their students all day and may have more flexibility in refining their instructional practices.

  • No District-wide DDI Initiatives. To ensure a sufficient contrast between treatment and control schools, the study targets districts that have made minimal (or no) efforts to implement data-driven instruction. We will avoid districts that have implemented district-wide DDI initiatives, as well as districts that plan to implement new DDI programs during the intervention period. Similarly, the study will avoid districts that have implemented district-wide Response to Intervention (RtI) programs, given that RtI programs use a similar data-driven approach.

Eligible districts must also express a willingness to cooperate with data collection activities and agree to allow their schools to be randomly assigned to treatment or control conditions. Once we identify districts that meet the criteria above, we will identify a subset of eligible schools within those districts, based on the following eligibility criteria:

  • High needs schools. Within each district, recruitment efforts aim to include high needs schools, and thus target schools with a proportion of free or reduced price meal (FRP) students that is high relative to other schools in the district.

  • No school history of or plans to implement DDI. Within each district, the study will avoid schools that have a data coach in place, receive regular professional development focused on data use, or hold regular teacher collaboration group meetings focused on analyzing and making use of student data. We also will exclude schools that plan to implement new DDI programs during the intervention period and schools that have implemented RtI programs.

  • No Schools with School Improvement Grants (SIG). Schools that have received School Improvement Grants (SIG) are not eligible for the study, given that DDI was an important part of required SIG activities under some of the intervention models specified by ED.

  • Title I schools. The U.S. Department of Education has expressed a strong interest in studying effective approaches to help low-income children within Title I schools meet state standards for academic achievement. A comprehensive approach to DDI is consistent with this interest. While the study prefers to include Title I schools, Title I status is not a strict requirement for participation.

  • Excluding charter and magnet schools. Charter schools, which often use data-driven approaches to instruction and typically operate autonomously from other district schools, will be excluded from the study. Magnet programs will be included only when the district provides an even number of magnet schools of similar type.

Eligible schools also must express a willingness to cooperate with data collection activities. Eligible schools must agree to provide the study with student-level assessment and demographic data (if not provided by the district). In addition, their principals should express willingness to complete a survey in spring 2016, and agree to encourage their teachers to complete a teacher survey in spring 2016.



2. Statistical Methods for Sample Selection and Degree of Accuracy Needed

a. Statistical Methods for Sample Selection

As described in Section 1, the study is using a convenience sample of districts and schools that volunteer to participate in the evaluation and meet the study requirements. To meet the study’s target effect size, we will include 12 districts and 104 schools. We will conduct surveys with all 104 principals, and we will collect administrative records on all students in tested grades within the participating schools. Thus, the study will not statistically sample principals, schools, or students. We will sample teachers for the teacher survey as explained below.

We will administer a web-based survey to 500 teachers in the target grade levels in the spring of the 2015–2016 school year, and administer web-based logs in the winter and spring of that school year (Appendices B and C). The teacher sample will consist of five teachers from each study school, on average, randomly selected from the pool of all fourth and fifth grade teachers in the study schools. Selecting only teachers who teach students in tested subjects in grades 4 and 5 has the advantage of aligning the information we learn from teachers with the student achievement analyses.4 The disadvantage is that we will not have as complete a picture of DDI implementation in the school as we would if we also included all teachers in the participating grades.

We plan to identify teachers by collecting teacher assignment data from the study schools. We will use the teacher assignment data to identify fourth and fifth grade teachers in treatment and control schools. We will stratify teachers by grade and/or subject in order to obtain a similar number of teachers by grade and subject from treatment and control schools. From each stratum, we will randomly select teachers to receive the survey, for a total of 500 teachers.
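As an illustration only, the following Python sketch shows one way to draw such a stratified sample; the roster column names (treatment, grade) and the proportional-allocation rule are assumptions for the sketch, not the study's specified procedure.

    import pandas as pd

    def sample_teachers(roster: pd.DataFrame, target: int = 500, seed: int = 2016) -> pd.DataFrame:
        """Sample teachers proportionally within grade-by-treatment-status strata."""
        frac = target / len(roster)  # a common sampling fraction keeps strata proportions similar
        return (roster
                .groupby(["treatment", "grade"], group_keys=False)
                .apply(lambda stratum: stratum.sample(frac=frac, random_state=seed)))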

b. Estimation Procedures

Random assignment of schools within districts to a treatment group that will implement the study-provided DDI program or to a control group that will not is an ideal design for assessing the overall effectiveness of DDI. Our primary impact analysis will exploit this experimental design to provide rigorous estimates of the impact of DDI on student achievement and other student outcomes. Additional experimental analyses are designed to estimate the impact of DDI on teacher outcomes, such as teachers’ access to and use of student data and teachers’ use of instructional strategies.

Implementation Analysis. The implementation analyses have two main objectives. The first objective is to describe the DDI activities implemented in treatment schools and the extent to which they were implemented with fidelity. The second objective is to describe differences in treatment and control schools’ implementation of DDI-related activities. The analyses will rely upon information from the data coach weekly activity logs and the Focus on Results consultant activity logs, as well as from the principal survey, teacher survey, and teacher logs.

Fidelity of DDI implementation. The implementation analysis will describe DDI activities undertaken by treatment schools and the extent to which they were implemented with fidelity to the intervention plan. As a part of this analysis, we will review log entries during the intervention period. Because it may take time for treatment schools to set up necessary school structures and roll out intervention activities, we expect some variance in implementation at different points in time during the intervention. Thus, we will examine fidelity of implementation during and at the end of the intervention period.

To assess the extent to which treatment schools implemented DDI with fidelity, it will be important to quantify the data and summarize it uniformly. The DDI intervention aims to build school capacity in two ways, by (1) helping schools set up structures and activities that enable school staff to carry out DDI, and (2) training and coaching teachers in the skills needed to use and interpret student data. Implementation fidelity measures will therefore examine the training and coaching that treatment schools receive and the degree to which treatment schools fully implement the anticipated DDI structures and activities.

In describing the fidelity of implementation of a particular DDI component, we will focus on the activities we expect to occur if the component is implemented. We will measure both whether each activity occurred and, if appropriate, the level of participation in it. To measure the fidelity of implementation of the instructional leadership team meetings, for example, we will measure whether a school holds monthly instructional leadership team meetings among all key staff, staff attendance at these meetings, and the key activities taking place at these meetings. These activities include establishing school-wide performance goals, establishing areas of instructional focus, and examining student data.

Our examination of implementation fidelity will be accompanied by descriptive text that summarizes our findings, describes changes in implementation over the course of the intervention, and provides a range of examples of how schools implemented DDI with fidelity.

We will also describe treatment schools’ experiences and the challenges they faced in implementing DDI. This information will be especially important if the DDI approach is found to be effective and other districts wish to replicate the program. We will examine school-level factors that may influence DDI implementation, such as the level of engagement of the school principal and the challenges faced when implementing DDI activities (such as achieving consistent participation and strong engagement of teachers in teacher collaboration team meetings).


Implementation of DDI-related activities in treatment and control schools. The DDI intervention is expected to lead the principal, data coach, instructional leadership team, teacher collaboration teams, and individual teachers in treatment schools to engage in numerous data-focused activities. These include (1) activities undertaken by school leaders in directing data use, (2) school-wide communications about data use, (3) professional development and support for the principal and teachers on data use, and (4) collaboration among teachers to review student data and share instructional strategies. We will use data from the principal survey, teacher survey, and teacher logs to examine whether implementation of the DDI intervention leads to a treatment-control difference in these activities (Table 1).




Table 1. Data-related Activities, by Source

Activities undertaken by school leaders in directing data use
  • Frequency of leadership team meetings (Principal Survey)
  • Leadership team members (Principal Survey)
  • Leadership team activities (e.g., setting achievement and priority learning goals, monitoring progress, planning professional development activities) (Principal Survey)
  • Degree to which school leaders ensure that teachers have the time and resources needed to analyze and interpret student data (Principal Survey; Teacher Survey)

School-wide communications about data
  • Frequency of communication on student achievement goals and results (Principal Survey; Teacher Survey)
  • Frequency of communication on priority learning goals (Principal Survey; Teacher Survey)
  • Frequency of communication on expectations for and actual use of data (Principal Survey; Teacher Survey)
  • Use of data displays in classrooms and other public areas (Principal Survey; Teacher Survey)

Professional development and support for the principal, teachers around data use
  • Amount of professional development/training activities this school year (Principal Survey; Teacher Survey)
  • Topics covered by trainings (analyzing student data, establishing priority learning goals for the school, individualizing student learning goals, tracking progress toward goals, using evidence-based instructional strategies) (Principal Survey; Teacher Survey)
  • Availability of on-site coaching/support for data use (Principal Survey; Teacher Survey)
  • Data coach activities (Principal Survey)
  • Frequency of, and topics addressed during, individual coaching (Teacher Survey; Teacher Logs)
  • Frequency of classroom observations and feedback (Teacher Survey; Teacher Logs)

Collaboration among teachers around data use and sharing instructional strategies
  • Frequency of, and amount of time spent on, teacher collaboration (Teacher Survey; Teacher Logs)
  • Teacher collaboration activities (analyzing student data, setting common learning goals for students, sharing effective instructional practices, jointly modifying lesson plans, monitoring implementation and results of instructional changes) (Teacher Survey; Teacher Logs)



Impact analysis. The impact analysis will rigorously assess the effectiveness of a comprehensive DDI program. Calculating the statistical significance of the impacts requires that the nested structure of the data (students clustered within schools) be incorporated in the analysis. Due to clustering, the variance of the impact estimates is larger than it would be if each individual student were randomly assigned to DDI. Below we describe our approach to estimating impacts on student achievement and on intermediate teacher and principal outcomes.


Student final outcomes. Student achievement will be the study’s primary outcome, measured using spring 2016 math and reading scores on state standardized tests. Specifically, this main impact analysis will examine the effect of DDI on:

  • Math achievement among fourth and fifth graders in a school that implemented data-driven instruction

  • ELA achievement among fourth and fifth graders in a school that implemented data-driven instruction

To assess the impact of data-driven instruction on student achievement, we will use a place-based impact estimation strategy that compares outcomes for students in treatment schools to those of students in control schools using spring 2016 state test scores. This implies that the impacts could reflect either the impacts of data-driven instruction on student achievement or impacts on student mobility. Given the nature of the intervention, we do not expect that the data-driven instruction intervention would affect students’ mobility patterns during the study period. However, we will test this empirically through the analysis of impacts on student mobility based on the student sample in fall 2014 (following randomization) and spring 2016 (at the conclusion of the intervention).

The main impact analysis based upon spring 2016 test scores will estimate an impact model in which the overall impact estimate is based on treatment-control differences in the outcome of interest within each stratum (matched school pair). Although a simple treatment-control difference in mean outcomes will yield an unbiased estimate of the impact of DDI, the precision of estimates can be improved by controlling for baseline characteristics that may influence the outcomes of interest but are not related to the treatment itself. For all the student outcomes, we will control for baseline student and school covariates. If available from state or district records, specific student-level covariates will include prior years’ (spring 2015 and, if available, spring 2014) math and reading test scores (in z-score units), prior years’ student attendance, prior years’ student suspensions, gender, race/ethnicity, eligibility for free or reduced-price lunch, English language learner status, and special education status. We also anticipate including school-level aggregates of these variables and other relevant school characteristics in cases where individual student-level data are not available.

Accordingly, we will estimate student impacts using the following model:

(1)   y_ijk = β T_jk + γ′α_k + δ′X_ijk + u_jk + ε_ijk

where y_ijk is the outcome of individual student i in school j within stratum k; α_k is a vector of stratum (matched pair) indicators (fixed effects) included to control for differences across strata in average student, teacher, principal, and school characteristics; T_jk is a treatment indicator that equals one if the school was assigned to DDI and zero otherwise; X_ijk is a vector of baseline individual student characteristics; u_jk is a school-specific random error term; ε_ijk is an individual-level random error term; and β, γ, and δ are parameters to be estimated.

The estimate of β represents the overall impact of DDI on the student outcome of interest. We will estimate the model with ordinary least squares (OLS) using standard errors that account for school-level clustering.
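As a sketch of this estimation step (not the study's production code), the fragment below fits a model in the form of equation (1) with statsmodels; the variable names (z_score, treatment, stratum, prior_score, frl, ell, school_id) are hypothetical placeholders for the analysis file.

    import statsmodels.formula.api as smf

    def estimate_impact(df):
        """Fit equation (1): stratum fixed effects, a treatment indicator, and baseline
        covariates, with standard errors clustered at the school level."""
        model = smf.ols("z_score ~ C(stratum) + treatment + prior_score + frl + ell", data=df)
        fit = model.fit(cov_type="cluster", cov_kwds={"groups": df["school_id"]})
        return fit.params["treatment"], fit.bse["treatment"]  # estimated impact (beta) and its standard error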

We will consider student achievement in reading and math to be separate domains, across which the impact of data-driven instruction might differ. Given that assessments differ across state, grade level, and subject area, we will standardize the raw achievement scale scores by converting them to z-scores. We will calculate the z-scores by subtracting the state mean score from the raw scale score and dividing by the standard deviation of the state scores.
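A minimal sketch of that conversion, assuming the records are held in a pandas data file with placeholder column names; the study would use statewide means and standard deviations, whereas this sketch standardizes within the records at hand.

    import pandas as pd  # df below is assumed to be a pandas DataFrame of student test records

    def to_z_scores(df: pd.DataFrame) -> pd.DataFrame:
        """Standardize raw scale scores within each state-by-grade-by-subject assessment."""
        grouped = df.groupby(["state", "grade", "subject"])["scale_score"]
        df["z_score"] = (df["scale_score"] - grouped.transform("mean")) / grouped.transform("std")
        return df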

We will also estimate impacts on other student-level outcomes, such as attendance and suspensions, if possible. We will request student-level data on these outcomes from districts or states, but if the data are not available or are unreliable, we will estimate impacts on these outcomes measured at the school (or, if available, grade) level. Impacts on a school-level outcome y_jk can be estimated using model (1) above but including school-level averages of individual-level covariates rather than the individual-level covariates themselves.

The estimation of overall impacts on student achievement may mask differences in impacts across subgroups of students. For example, DDI may prove more or less effective at boosting student achievement for students with different baseline characteristics; it may, for example, raise achievement among lower performing students to a greater degree than it does among higher performing students. We will provide a subgroup analysis of the impacts of DDI for student groups based on their level of baseline achievement, focusing on impacts for particularly low-achieving students as well as for students at moderate to high baseline achievement levels. To estimate impacts for student subgroups, we will create a version of the model that interacts a subgroup indicator with the treatment indicator; the coefficient on the subgroup-treatment indicator interaction term will represent the impact estimate for the subgroup.
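One way to implement this, sketched below using the same placeholder names as the main-model fragment above, is to replace the single treatment term with treatment-by-subgroup interactions; low_base and high_base are hypothetical 0/1 indicators for below- and at-or-above-median baseline achievement.

    import statsmodels.formula.api as smf

    def estimate_subgroup_impacts(df):
        """Estimate subgroup-specific impacts via treatment-by-subgroup interaction terms."""
        formula = ("z_score ~ C(stratum) + low_base + prior_score + frl + ell + "
                   "treatment:low_base + treatment:high_base")
        fit = smf.ols(formula, data=df).fit(cov_type="cluster",
                                            cov_kwds={"groups": df["school_id"]})
        return fit.params[["treatment:low_base", "treatment:high_base"]]  # one impact estimate per subgroup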

Teacher and principal intermediate outcomes. The core set of professional development and technical assistance inputs under DDI are intended to help teachers and principals use data to improve instruction, which in turn would lead students to realize higher achievement gains. We will use responses from teacher surveys, teacher logs, and the principal survey to examine these intermediate outcomes. Teachers and principals offer different vantage points from which to assess the extent to which schools engage in data use activities. For example, principals may be able to provide detailed information on school leadership activities, while teachers may be able to provide detailed information on the frequency and content of teacher collaboration activities. The surveys will provide useful information on the frequency of activities over an extended period of time (such as how often a teacher attended professional development during the school year), while teacher logs will provide a snapshot of teacher activities on specific days, capturing activities that occur relatively frequently (such as lesson planning and collaboration based on analysis of data).

Table 2 lists the measures and their sources that will be used to estimate impacts on intermediate outcomes. The first two sets of items listed in the table measure intermediate outcomes related to teachers’ access to and use of data to guide instruction. The second two sets of items capture information on teachers’ instructional strategies.


Table 2. Teacher and Principal Intermediate Outcome Measures, by Source

Access to student-level data
  • Access to interim assessment results (Teacher Survey; Principal Survey)
  • Access to summative assessment results (Teacher Survey; Principal Survey)
  • Access to student background characteristics, attendance, and school behavior information (Teacher Survey; Principal Survey)
  • Barriers to data use (usable format, technology, tools)

Use of student-level data
  • Frequency of use by type (summative assessment results, interim assessments, formative assessments, samples of student work, student characteristics) (Teacher Survey; Teacher Logs; Principal Survey)
  • Purposes of data use (understand student needs, set learning goals, monitor progress toward goals, differentiate instruction, revise lesson plans) (Teacher Survey; Principal Survey)
  • Understanding of instructional changes to make based on data (Teacher Survey; Principal Survey)

Differentiated Instruction
  • Placing students in small groups based on student data (Teacher Survey)
  • Providing small group instruction (Teacher Survey; Teacher Logs)
  • Providing individualized instruction (Teacher Survey; Teacher Logs)
  • Identifying and referring students in need of pull-out services or other intensive interventions (Teacher Survey; Teacher Logs)
  • Changing instructional group assignments of students based on student data (Teacher Survey; Teacher Logs)

Whole-Class Instruction
  • Providing additional instruction in areas where students are struggling (Teacher Survey; Teacher Logs)
  • Identifying evidence-based instructional changes to help address students’ needs (Teacher Logs)
  • Using new instructional strategies to teach challenging concepts to students (Teacher Logs)


To assess the impact of DDI on teachers’ use of data and instructional strategies, we will compare outcomes for teachers in treatment schools to those of teachers in control schools. Our impact model for teacher outcomes will take a similar approach to that taken in the student outcomes model, calculating treatment-control differences among teachers. For all teacher outcomes, we will control for teacher covariates derived from the survey. Teacher covariates will include years of teaching experience, gender, race/ethnicity, teacher certification, and an indicator for a master’s degree.


Similar to our approach in the student model, we will estimate the model with ordinary least squares (OLS) using standard errors that account for school-level clustering, and we will compute the overall average impact of DDI by taking a weighted average of the coefficients on the treatment indicators. Each district-specific impact will be weighted by the number of study schools in each district.
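A minimal sketch of that aggregation step follows; the dictionary inputs and district labels are hypothetical.

    def overall_impact(district_impacts: dict, schools_per_district: dict) -> float:
        """Weight each district-specific impact estimate by its share of study schools."""
        total_schools = sum(schools_per_district.values())
        return sum(impact * schools_per_district[d] / total_schools
                   for d, impact in district_impacts.items())

    # Example: overall_impact({"A": 0.10, "B": 0.15}, {"A": 8, "B": 9}) is roughly 0.13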

Different types of teachers may be differentially affected by the implementation of DDI. For example, less experienced teachers may be more (or less) at ease in using data to inform instruction than more experienced teachers. They also may be able to benefit to a greater extent from the information on student performance provided by student data, since they will be less able to rely on experience to understand their students’ needs. The study will also examine impacts on teacher outcomes separately for subgroups of teachers defined by their level of experience. Similar to the approach used in the student subgroup analysis, we will estimate subgroup impacts by creating a version of the model that interacts a subgroup indicator with the treatment indicator.

Our approach to analyzing principal outcomes will similarly compare principal survey responses regarding the structures and professional development activities in treatment and control schools. However, while the teacher and student models will be estimated at the classroom and student levels, respectively, the principal model will be estimated at the school level. We will adapt the student-level model presented in equation (1) to estimate impacts on principal-reported measures. For all principal survey analyses, we will control for principal covariates, such as years of experience, gender, race/ethnicity, and an indicator for a master’s degree.

Nonexperimental analysis. Contextual factors, such as student characteristics or principal and teacher background characteristics, may aid in the interpretation of the impact of DDI in treatment schools relative to control schools. We will therefore examine how school contextual factors are related to impacts. Examples of contextual factors include:

  • School characteristics. School math and ELA proficiency measured at baseline, the percentage of students eligible for free or reduced-price meals, the percentage that are English language learners, the percentage in special education programs, and racial/ethnic composition of the school.

  • Teacher and principal characteristics. Education, experience, and background characteristics.


The study will use descriptive analyses and regression analyses to examine how impacts are related to school contextual factors. We will use the descriptive analysis to identify conditions and practices that are candidates for regression analyses of impacts.


Correlational analyses of impacts and implementation fidelity. The potential impact of DDI on student achievement may differ within treatment schools based on aspects of their fidelity of implementation of DDI. To explore how school characteristics and other contextual factors may influence student impacts, we will conduct a correlational analysis that examines the relationship between estimated impacts of DDI and key features of treatment schools (or of matched pairs of treatment and control schools when data are available for the control schools).


For example, we may examine whether the qualifications of the data coaches hired at each treatment school are related to impacts on student achievement. Because data coaches play a central role in supporting the DDI intervention at each school, the prior skills and knowledge they bring to the work may influence the degree to which teachers are supported in analyzing student data and using them to improve instruction, which may in turn influence student outcomes. Understanding the degree to which the success of the DDI intervention hinges upon the qualifications of the data coach may prove helpful in developing effective DDI interventions in the future. Following this approach, we will correlate each coach’s data proficiency score on an exercise developed by Focus on Results to assess coaches’ initial skills in analyzing student assessment data with the estimated impact for the coach’s school.
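As an illustration of that correlational step, a sketch using scipy; the inputs are hypothetical parallel lists with one entry per treatment school.

    from scipy.stats import pearsonr

    def coach_skill_impact_correlation(coach_scores, school_impacts):
        """Correlate coaches' data-proficiency scores with their schools' estimated impacts."""
        r, p_value = pearsonr(coach_scores, school_impacts)
        return r, p_value  # association strength and its significance (descriptive, not causal)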

Because schools cannot be randomly assigned to specific DDI contexts, this correlational analysis will be nonexperimental. We will stress that any significant relationships between impacts and DDI features or contextual factors might not be causal and might reflect the influence of other unobserved factors.

Degree of Accuracy Needed. In Table 3 we present the minimum detectable effect sizes (MDEs) for student achievement, teacher practices, and principal practices for the sample of 104 schools (52 treatment and 52 control), 5 teachers per school, and 112 students in grades 4 and 5 per school. The MDEs incorporate conservative assumptions about design effects due to the clustering of students or teachers in schools and about precision gains from regression adjustments and stratified random assignment. These assumptions (listed in the notes to Table 3) are based on estimates from recent large-scale studies in education with school-level random assignment. Under these assumptions, we will be able to detect an impact on student achievement of 0.12 standard deviations. With this sample size, we will also be able to detect an impact on teacher practices of 0.33 standard deviations. Minimum detectable effect sizes for student and school subgroups are also shown in the table.

Table 3. Minimum Detectable Effects by Sample Size

Balanced Design: 104 schools (52 T, 52 C), 112 students per school, 5 teachers per school, 1 principal per school
  MDE: 0.12 for student outcomes; 0.33 for teacher outcomes; 0.59 for principal outcomes

Student/teacher subgroup: 104 schools (52 T, 52 C), 56 students per school, 3 teachers per school, 1 principal per school
  MDE: 0.14 for student outcomes; 0.39 for teacher outcomes; -- for principal outcomes

School Subgroup: 52 schools (26 T, 26 C), 112 students per school, 5 teachers per school, 1 principal per school
  MDE: 0.18 for student outcomes; 0.48 for teacher outcomes; 0.90 for principal outcomes

Note: The MDEs were calculated assuming: (1) a stratified random assignment design; (2) a two-tailed test; (3) a 5 percent significance level and 80 percent power; (4) a school-level intraclass correlation of 0.15; (5) an 85 percent response rate to the teacher survey; and (6) reductions in variance from the inclusion of covariates of 40 percent at the student level and 70 percent at the school level in the student outcome models, and of 10 percent at the teacher and principal levels and 10 percent at the school level in the teacher and principal outcome models. T = treatment; C = control.
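As a rough check on the first row of Table 3, the sketch below applies a standard minimum-detectable-effect approximation for school-level random assignment under the assumptions listed in the note; it is an illustration, not the study's exact power calculation, which may handle degrees of freedom and stratification adjustments differently.

    import math

    def mde(n_schools=104, students_per_school=112, icc=0.15,
            r2_school=0.70, r2_student=0.40, p_treated=0.5, multiplier=2.8):
        """Approximate MDE in effect-size units for a cluster-randomized design.
        A multiplier of about 2.8 corresponds to 80 percent power with a 5 percent two-tailed test."""
        denom = n_schools * p_treated * (1 - p_treated)
        variance = (icc * (1 - r2_school) / denom
                    + (1 - icc) * (1 - r2_student) / (denom * students_per_school))
        return multiplier * math.sqrt(variance)

    print(round(mde(), 2))  # approximately 0.12, consistent with the student outcome MDE in Table 3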


3. Methods to Maximize Response Rates and Deal with Nonresponse

We will use multiple strategies to maximize response rates while minimizing burden on respondents. The following techniques are major contributors to a high completion rate: establishing positive relationships with respondents and with school and district staff, sending letters prior to the surveys, and establishing efficient and flexible scheduling. We will include a statement on confidentiality and data collection requirements (Education Sciences Reform Act of 2002, Title I, Part E, Section 183) in all letters, data collection instruments, and study information sheets. When a data collection activity is voluntary (the teacher survey, teacher logs, and principal survey), we will include a statement indicating that participation is voluntary, while emphasizing the importance of each response for the study findings and highlighting the incentive for completion.

Teacher and Principal Assignment Data. We will collect teacher assignment information using a form that requires minimal effort to complete (Appendix A). The study team will attempt to identify principals at study schools from public sources (such as school websites). If that information is not publicly available, we will ask schools to provide principal assignment information. Schools can choose to provide this information by posting an electronic file to our secure website or by sending a hard copy via a prepaid Federal Express packet that we provide. We will work with school staff if they prefer to provide the information by another method (such as by phone). We will be courteous but persistent in our follow-up with participants who do not respond quickly to our attempts to reach them. Based on our experience collecting this type of information, we expect a 100 percent response rate.

Teacher Survey and Teacher Logs. Based on Mathematica’s experience surveying teachers on other studies, we expect at least an 85 percent response rate for the teacher survey, and an 80 percent response rate for the teacher logs. To ensure a high response rate, we will send teachers a letter that will describe the study and provide instructions to complete the survey online at their convenience (Appendix B). In the case of the teacher logs, we will request that teachers complete them within a more limited time period so that they will be able to accurately recall the day’s activities. We will send out reminder emails and make reminder telephone calls to teachers who do not respond within two to three weeks of receiving the survey. For the logs, we will send the reminder emails and make the reminder calls during the shorter time period in which we hope to get a response. If necessary, Mathematica staff will follow up with nonrespondents and administer the survey over the telephone at the teachers’ convenience, or provide teachers with a hard-copy survey they can complete. Experienced interviewers will be recruited and extensively trained on data collection procedures, including methods for promoting cooperation among school staff. Interviewers especially skilled at encouraging cooperation will be available to persuade reluctant teachers to participate. To compensate teachers for their time to complete the survey and to increase the response rate, we propose offering teachers $20 for each completed survey. For each completed log, we propose offering teachers $15.

Principal Survey. To ensure a high response rate to the principal survey (Appendix D), we will draw on the collegial relationships established with principals during the recruitment effort. We will provide reminder materials about the study that explain the importance of the survey as an opportunity for principals to provide their perspective on DDI practices in their schools. As with the teacher survey, we will send out reminder emails and make reminder calls to principals who do not respond within two to three weeks of receiving the survey. Principals will also be given the option to complete the survey by telephone, if desired. To compensate principals for their time to complete the survey and to increase the response rate, we also propose offering principals $20 for each completed survey.

Student Records Data. To minimize burden on the districts and maximize the likelihood of obtaining the data, during the initial phases of recruiting we asked each district how administrative records data are stored, how we can obtain permission to collect this information, and which contact person we should work with to obtain the data (Appendix E). We will accept electronic data files or hard-copy lists. Federal rules permit ED and its designated agents to collect student demographic and existing achievement data from schools and districts without prior parental or student consent (Family Educational Rights and Privacy Act, 20 U.S.C. 1232g; 34 CFR Part 99). To maximize the response rate and minimize burden on schools and parents, we will follow these federal rules, and we plan to compensate districts for the time spent providing student records. In addition, we will be courteous but persistent in our follow-up with participants who do not respond quickly to our attempts to reach them. We assume that we will be able to obtain records for 100 percent of the participating students.

4. Pilot Testing

We pilot tested the survey instruments and logs for features such as clarity, accuracy, length, flow, and wording. Trained quality control staff checked responses for completeness and reasonableness and followed up with respondents when problems were identified. The web-based surveys and logs will not allow respondents to enter out-of-range or inconsistent responses, and data entry programs will also check for inconsistencies.

We pilot tested the teacher survey and teacher log with nine or fewer respondents who are either teaching or have taught fourth or fifth grade math or reading. We also pilot tested the principal survey with nine or fewer respondents who are or were school administrators. We monitored the time it took to complete the surveys and logs, to ensure the final versions can be completed within the allotted time. We asked respondents to identify any questions that they found problematic and to provide general feedback about their experience.

5. Individuals Consulted on Statistical Aspects of the Design

The following individuals were consulted on the statistical aspects of the study:

Phil Gleason, Senior Fellow, Mathematica, 202-264-3443

Alison Wellington, Senior Researcher, Mathematica, 202-484-4696

Allison McKie, Senior Researcher, Mathematica, 202-484-4681

Hanley Chiang, Senior Researcher, Mathematica, 617-674-8374

Elias Walsh, Researcher, Mathematica, 202-554-7516


The following individuals will be responsible for the data collection and analysis:

Phil Gleason, Senior Fellow, Mathematica, 202-264-3443

Sheila Heaviside, Associate Director of Survey Research, Mathematica, 202-484-3096

Irma Perez-Johnson, Senior Researcher, Mathematica, 609-275-2339

Tim Bruuresema, Survey Researcher, Mathematica, 202-484-3097

Fran O’Reilly, Principal Researcher, EBERE, 617-792-0422

Kim Dadisman, Associate, Abt Associates, 617-520-3040



REFERENCES

Imai, K., G. King, and C. Nall. “The Essential Role of Pair Matching in Cluster Randomized Experiments, with Application to the Mexican Universal Health Insurance Evaluation.” Statistical Science, vol. 24, no. 1, 2009, pp. 29–53.





Improving public well-being by conducting high-quality, objective research and surveys

Princeton, NJ Ann Arbor, MI Cambridge, MA Chicago, IL Oakland, CA Washington, DC


Mathematica® is a registered trademark of Mathematica Policy Research

www.mathematica-mpr.com



1 Teacher assignment data will be collected following approval of the OMB clearance package.

2 States that plan to use the PARCC assessments will be excluded from recruitment because it is unclear when those assessments will be available. One large state will also be excluded due to concerns about the ability to obtain student achievement data without parental consent.

3 This criterion resulted in excluding states that developed their own summative assessments.

4 Although we will collect student test scores on all students in tested grades in the study schools, our primary impact analyses will focus on fourth and fifth graders.

