Supporting Statement Part A Teacher Prep


Study of Promising Features of Teacher Preparation Programs; Phase I - Recruitment

OMB: 1850-0891




Part A: Supporting Statement for Paperwork Reduction Act Submission




Study of Promising Features of Teacher Preparation Programs







March 20, 2012



Prepared for:

Amanda DeGraff

Institute of Education Sciences

555 New Jersey Ave, NW

Room 5000I

Washington, DC 20208-5500




Submitted by:

Abt Associates Inc.

55 Wheeler Street

Cambridge, MA 02138



In Partnership with:

Abt SRBI

Chesapeake Research Associates

The Bench Group

Dillon-Goodson Research Associates


Part A: Supporting Statement for Paperwork Reduction Act Submission

Table of Contents

A. Justification

Introduction

A.1 Circumstances Making the Collection of Information Necessary

A.1.1 Statement of Need for a Rigorous Study of Teacher Preparation Programs

A.1.2 Research Questions

A.1.3 Study Design

A.1.4 Selection of Intensive Clinical Practice Features

A.1.5 Recruitment

A.1.6 Data Collection Activities

A.1.7 Analysis

A.1.8 Study Timeline

A.2 Purposes and Use of the Information Collection

A.3 Use of Information Technology and Burden Reduction

A.4 Efforts to Identify Duplication

A.5 Efforts to Minimize Burden on Small Businesses

A.6 Consequences of Not Collecting the Information

A.7 Special Circumstances Justifying Inconsistencies with Guidelines in 5 CFR 1320.6

A.8 Consultation Outside the Agency

A.8.1 Federal Register Announcement

A.8.2 Consultations Outside the Agency

A.8.3 Unresolved Issues

A.9 Payments or Gifts to Respondents

A.10 Assurance of Confidentiality

A.11 Questions of a Sensitive Nature

A.12 Estimate of Response Burden

A.13 Estimate of Total Capital and Startup Costs/Operation and Maintenance Costs to Respondents or Record-Keepers

A.14 Estimates of Costs to the Federal Government

A.15 Changes in Burden

A.16 Plans for Analysis, Publication and Schedule

A.16.1 Analysis Plans

A.16.2 Publication Plans and Schedule

A.17 Approval to Not Display Expiration Date

A.18 Exceptions to Item 19 of OMB Form 83-1

References





A. Justification

Introduction

This Information Collection Request (ICR) seeks clearance to select teacher preparation programs, recruit districts and schools, collect student rosters, and administer a baseline student achievement test for a rigorous study of the effect on student learning of teachers who have experienced intensive clinical practice within the university-based preparation programs they chose to attend. This is the first of two ICRs to be submitted in relation to this study. The second ICR will cover data collection, analysis, and reporting.

This study is being conducted by the Institute of Education Sciences, US Department of Education (ED); it is being implemented by Abt Associates Inc., its wholly owned subsidiary, Abt SRBI, and its partners, Chesapeake Research Associates, The Bench Group, Dillon-Goodson Research Associates, and Drs. Sharon Vaughn and Karen Wixson (together, the “Abt study team”).

The objective of this study is to use rigorous methods to examine certain university-based clinical practice features for novice teachers. Teachers who experienced intensive clinical practice features as part of the preservice teacher preparation program they chose to attend are hypothesized to produce higher average student test scores than teachers who did not experience intensive clinical practice in the program they chose to attend. Using a randomized controlled trial (RCT), students will be randomly assigned to one of a pair of teachers in the same school and grade level, one of whom will have experienced intensive clinical practice as part of their chosen preservice teacher preparation program (“treatment”) while the other will not have had that same experience (“control”). The study will then examine the impact on student achievement of teachers who chose to enter teaching through a traditional university-based teacher preparation program that includes promising preparation features versus teachers who chose to enter teaching through university-based programs with more typical features.

The information collection request (ICR) will be submitted in two phases because the study schedule requires that the process of identifying eligible teachers – including the preliminary steps of defining the specific features of intensive clinical practice upon which the study will focus, identifying and selecting the elementary school teacher preparation programs that provide such clinical practice features, district and school recruitment, and the identification of matched teacher pairs – begin before all of the data collection instruments are developed and tested. Additionally, this ICR includes the collection of student rosters and the fall administration of student achievement tests which will be used to examine the statistical equivalence of students in the randomly assigned classrooms. The student rosters are needed in order to prepare for random assignment prior to the beginning of the school year.

Phase I – Recruitment and Random Assignment ICR. The study will use a multi-step process to identify feasible states for the study, select specific features related to intensive clinical practice, identify university-based teacher preparation programs that require such clinical practice, identify feasible districts and schools for the study, confirm eligibility of potential teachers for the study, and implement random assignment of students to treatment and control classrooms. The Phase I - Recruitment and Random Assignment ICR requests approval to collect information from preparation programs about their requirements, focusing on features of clinical practice specifically, to collect information from teachers about their preparation to determine their eligibility for the study, and to randomly assign students to treatment and control classrooms. This package also provides an overview of the study, including its design and data collection procedures.

A second package, to be submitted in May 2012, will request clearance for winter and spring data collection activities for the study. The Phase II - Data Collection ICR will describe student and teacher data collection, analysis, and reporting. The second package will provide a detailed discussion of the data collection activities and copies or descriptions of the instruments (teacher survey and observation protocol).

In addition to the impact study described above, a second component of the study will use state-level administrative data and value-added analyses to examine the relationship between the achievement of students who are English language learners (EL) and (as a contract option that may be exercised) students with disabilities (SWD), and the preparation requirements their teachers experienced with regard to teaching such student populations. This component will gather information about preparation programs’ requirements, with a focus on requirements for preparing teachers to serve diverse populations. Using extant test-score data and information about teachers’ preparation programs in several states, it will examine the relationship between student test scores and different features of novice teachers’ preparation. This component is based on extant data and is not included in the information collection requests for this study.

A.1 Circumstances Making the Collection of Information Necessary

A.1.1 Statement of Need for a Rigorous Study of Teacher Preparation Programs

The specific legislation authorizing this data collection is Title II, Part A of the Elementary and Secondary Education Act (ESEA), section 9601 as amended by No Child Left Behind (NCLB) (20 USC 6621-6623). The ESEA emphasizes the importance of teacher quality in improving student achievement. Title II, Part A of ESEA – the Improving Teacher Quality State Grants program – provides nearly $3 billion a year to states to prepare, train, and recruit high-quality teachers, especially in the areas of mathematics and science. The purpose of Title II, Part A is to help states and local school districts improve student achievement through strategies for improving teacher quality and increasing the number of highly qualified teachers.

Promoting improved teacher quality also is among the core emphases of the current administration. For example, ED’s Blueprint for Reform focuses on “improving teacher and principal effectiveness to ensure that every classroom has a great teacher and every school has a great leader” and investing “in programs whose graduates are succeeding in the classroom.”1 Additionally, the Presidential Teaching Fellows program, part of the proposed reauthorization of ESEA, would provide formula grants to states to hold teacher preparation programs accountable for student performance as well as to improve the rigor of certification.

Research on teacher quality has demonstrated that one of the strongest indicators of students’ academic success is the competence and capability of their teachers (Clotfelter, Ladd & Vigdor, 2007; Boyd, Grossman, Lankford & Wyckoff, 2006; Rivkin, Hanushek & Kain, 2005). But there is no consensus about the specific pathways or operational mechanisms that prepare teachers to become effective, and no consistency across state policies that govern teacher preparation and certification. Each state has its own preparation requirements for K-12 teachers, content standards, curriculum, and assessments, all of which may influence teacher preparation programs.

This is the first large-scale experimental study to examine the specific impact on students of intensive clinical practice by comparing new teachers who differ on the basis of their clinical practice experiences within the university-based teacher preparation programs they chose to attend. This study complements IES’s ongoing research agenda and other studies in the field that focus on the relationships between various aspects of teacher preparation and student achievement. One IES study isolated the effects of variation in the amount and timing of preparation features. Other experimental evaluations have focused on alternative pathways to teaching, such as Teach for America (Decker, Mayer & Glazerman, 2004), alternative teacher certification programs (Constantine et al., 2009), and the effects of two highly selective alternative certification programs – Teach for America (TFA) and The New Teacher Project (TNTP) – on the math achievement of middle and high school students.2 Another IES study focused on induction programs for novice teachers (Glazerman et al., 2010).

There have been few experimental studies examining effects of college and university-based teacher preparation programs on student outcomes (NRC, 2010); there is, however, some suggestive evidence about which features of teacher preparation programs merit more rigorous investigation. Particularly relevant to this study, there is some evidence that the nature and quality of clinical experiences have an impact on teacher knowledge and skills and on student achievement (Grossman et al., 2008). Syntheses also point to the importance of teacher content or subject matter knowledge (NRC, 2010), and studies of mathematical knowledge for teaching suggest that preparation programs that help teachers develop their pedagogical content knowledge have positive impacts on student achievement (e.g., Grossman et al., 2008; Hill, Rowan & Ball, 2005; Ball and Forzani, 2011).

This study is designed to partly fill the knowledge gap about the effectiveness of intensive clinical practice. Using a random-assignment impact design, the study will measure the effect on students’ standardized test achievement of being taught by novice teachers who experienced intensive clinical practice as part of their preservice preparation program.

A.1.2 Research Questions

The primary research question for the impact study is:

What is the impact on student achievement of teachers who choose to enter teaching through a traditional university-based teacher preparation program that includes promising preparation features versus those teachers who choose to enter teaching through university-based programs that have more typical features?

Secondary research questions for the impact study are:

Among the teachers studied, what are the core features of teacher preparation? In particular, to what extent does preparation vary on selected dimensions of clinical practice?

What is the impact on the classroom practices of novice elementary school teachers who experienced intensive clinical practice as part of the preservice teacher preparation program that they chose to attend compared to novice elementary school teachers who did not have the same experience as part of the preservice teacher preparation program that they chose to attend?

What teacher preparation features (such as opportunities to teach throughout the preparation program, extent or nature of the clinical practice, and structured feedback during clinical practice) are associated with teacher effectiveness?

Additionally, the study will include analyses based on state administrative data. These data will include information about teacher-student links, student assessments, and teacher preparation programs, and will be used to answer the following research question:

What teacher preparation features are associated with teacher effectiveness for special populations (i.e., students with disabilities and English language learners)?

A table of the research questions, along with the data sources for each question, the analytic approach, and the outcomes of interest, is presented in Appendix A.

A.1.3 Study Design

This study will use an experimental design in which elementary school students in the same school and grade are randomly assigned either to (a) a class taught by a novice teacher who experienced intensive clinical practice as part of the preservice teacher preparation program that he/she chose to attend (“treatment”) or (b) a class taught by a novice teacher who did not have that same experience as part of the preservice teacher preparation program that he/she chose to attend (“control”). This approach was used in the previously cited studies of Teach for America and alternative certification programs. Random assignment minimizes differences in outcomes between the groups of study students that are due to unmeasured, pre-existing differences. Each matched pair of teachers constitutes a small experiment, and results across pairs can be combined to estimate the impact of “intensive clinical practice” preparation.

The ability of a study to detect outcome differences depends on its sample size. To have adequate power, the study’s target is 100 teacher pairs, or 200 teachers. Assuming an average of 25 students per class, the study will include approximately 5,000 students. It is expected that these study students and teachers will be in 100 schools, in about 30 districts, across about 8 states.
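The sample targets above follow from simple arithmetic; the worked figures below merely restate the numbers given in this paragraph.

\[
100 \;\text{pairs} \times 2 \;\text{teachers per pair} = 200 \;\text{teachers}, \qquad 200 \;\text{teachers} \times 25 \;\text{students per class} \approx 5{,}000 \;\text{students}
\]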

Finding and selecting the teacher pairs efficiently also requires identifying teacher preparation programs with intensive clinical practice requirements in the states selected for the study. Within these states, districts, schools, and teachers will be selected (purposively) based on the feasibility of their participation in the study, their eligibility for it, and their willingness to participate.

To use resources efficiently, recruitment will focus on select states that have many, varied, and large teacher preparation programs; schools and districts large enough to be feasible for the impact study; and district stability in hiring patterns and student enrollment. The candidate states are diverse, have a variety of policy environments and teacher education/certification requirements, and have different district and school configurations. The set of states is purposeful, and not representative of all states. The list focuses on states that have districts with 10 or more schools large enough to have three or more classrooms per grade level in the target grades (i.e., at least 75 students per grade in grades K-5), because the study can carry out random assignment of students to teachers only in schools that have at least three classrooms in a given grade level. While other states may be added based on the identification of other teacher preparation programs and in consultation with the study’s consultants (Section A.8), likely states for the study include: Colorado, Georgia, Maryland, New York, North Carolina, Tennessee, Texas, and Virginia.

Within these states, all districts that have recently hired, or are likely to hire, a sufficient number of elementary grade teachers per year, including graduates of teacher preparation programs that require intensive clinical practice, are “eligible” to participate in the study. The study team will prioritize districts based on stable or increasing enrollment patterns over the past two to three academic years; the number and size of available elementary schools; district policies regarding research; the proximity of, and hiring relationships with, study programs; and the proximity of other districts being considered for inclusion in the study sample (for cost effectiveness).

A school is “eligible” for the study if it is a public elementary school (including charter schools), contains at least one of grades K-5, and has at least one pair of relatively new teachers – one “treatment” teacher and one “control” teacher – in the 2012-2013 school year (i.e., the treatment teacher will have experienced intensive clinical practice as part of the preservice teacher preparation program that he/she chose to attend, while the control teacher will not have had that same experience as part of the preservice teacher preparation program that he/she chose to attend). The focus is on elementary schools because classrooms are generally self-contained, students are heterogeneously grouped, and random assignment of students to teachers is generally consistent with how students are ordinarily assigned to teachers. The process of randomly assigning a roster of incoming students to teachers within a matched teacher pair is more straightforward in elementary schools because students within a grade tend to have the same class schedule, levels and types of courses, and instructional staff.

A teacher pair will be “eligible” if these conditions are met:

  1. Both teachers are the lead teacher in their respective classes and teach both reading and math to students in those classes. This is a typical self-contained elementary school classroom.3

  2. The students to whom they are assigned come from the same “pool” (i.e., both teachers serve general education students, or both serve predominantly ELL students or students with disabilities).

  3. One teacher (whom the study calls the “treatment” teacher) will have experienced intensive clinical practice as part of the teacher preparation program from which he/she graduated within the past three years. The other teacher (the “control” teacher) will not have had that same experience as part of the preservice teacher preparation program from which he/she graduated within the past three years.

A teacher preparation program will be considered as producing potential “treatment” teachers if it implements universal intensive clinical practice for its elementary-education teachers in training and it has provided the features for at least three years. The latter requirement ensures that the features are recognized and stable program characteristics. It also allows for the identification and recruitment of teachers who are recent program graduates as well as spring 2012 program graduates. Teachers who received their training from institutions outside the United States, from graduate programs, or from alternative certification programs, will be excluded from the study.

Schools will be prioritized, like districts, to maximize recruiting success, targeting the largest schools and those identified with the most potentially eligible teacher pairs.

A.1.4 Selection of Intensive Clinical Practice Features

The study team will work with consultants and IES to specify a set of teacher preparation features, particularly those related to intensive clinical practice, on which the study will focus. Based on these features, the study team will create a rubric for coding information about potential candidate programs. The rubric will allow the study team to systematically review and catalogue information on teacher preparation programs in the study’s selected states, based on extant sources such as course catalogues and university/program websites, and on informational calls to program administrators to verify information (see below).

A.1.5 Recruitment

The goal of Phase I – Recruitment and Random Assignment is to identify teachers for the study (and their schools and districts) by first identifying which teacher preparation programs deliver intensive clinical practice features of interest, and then finding teachers who (likely) experienced those features by locating the districts and schools that hired them. The next step of recruitment is to collect data to verify that teachers participated in intensive clinical practice while attending the program. Finally, student rosters will be used to randomly assign students to treatment and control teachers. The process for Phase I – Recruitment and Random Assignment is illustrated in Exhibit A.1 and detailed in the remainder of this section. Phase II – Data Collection is the subject of a subsequent ICR.

Selection of Study States

State selection will be based on identifying states in which the study is most feasible, that is, where the study team is most likely, and most efficiently, able to recruit eligible teacher pairs for the study, as described above. Using the initial data on potential program candidates and the feedback from additional consultants to the study, the study team has selected the eight states listed above as the primary focus of study recruitment efforts.

Exhibit A.1: Teacher Preparation Program and District/School Recruiting and Random Assignment of Students Process

Selecting Teacher Preparation Programs

The study hypothesis is that teachers who receive intensive clinical practice will be more effective teachers, and their students will therefore perform better on standardized assessments. An essential step is to identify preparation programs that implement those features. Research has indicated that there can be a gap between what programs describe and what they actually deliver (Walsh, 2011). Confirming that the teacher preparation programs have these features is therefore critical to ensuring a meaningful experimental contrast.

The identification of teacher preparation programs will involve compiling extant data to obtain systematic information on the eligible teacher preparation programs in the eight selected states. The extant data will consist of publicly available program descriptions and requirements obtained from university and program websites, student handbooks, and course catalogues. The next step is a screen based on two criteria: (1) the program must be an undergraduate program that trains teachers for elementary grades, and (2) it must send at least some of its graduates to public schools (rather than non-public schools).

The study team will collect data from teacher preparation programs on the targeted clinical experience feature(s) to expedite the identification of programs for the district and school recruiters. As described above, this information will be collected via teacher preparation program websites and by conducting interviews with program administrators using the Teacher Preparation Program Interview Guide (Appendix B). The second activity involves prioritizing preparation programs based on two criteria: (1) information from large districts about the programs from which they often hire teachers; and (2) the study team’s prior knowledge that a program may implement intensive clinical experience features.

Based on the information listed above, the study team will then conduct an in-depth review and code the data on preparation programs. This information will then be used for the recruitment of districts and schools, and the identification of potential matched teacher pairs.

The next step involves verification of the information gleaned from extant data sources and collection of information through interviews with program administrators. The study team will send administrators of eligible teacher preparation programs in the study states (1) the Teacher Preparation Program Support Letter from IES (Appendix B), encouraging programs to participate in the study and letting the administrator know that the study team will contact him or her, and (2) the Teacher Preparation Program Study Fact Sheet (Appendix B), which describes the study. Interviews with program administrators then will be conducted to confirm that identified features are available to all students and have been in place for at least three academic years, as well as to learn about relationships with hiring districts. To facilitate these calls, the study team will use the topics outlined in the Teacher Preparation Program Interview Guide to develop a questionnaire, which will be programmed into an Access data entry form so it can be administered in a manner similar to Computer Aided Telephone Interviewing (CATI); Appendix B contains the topics that will be used to program the Access data entry form.

The data collected through the program review and interview process will be used to determine which programs are most likely to produce treatment or control teachers. These designations will be used by the recruitment team in recruiting districts, schools, and teachers. Although the data collection on preparation programs will provide preliminary information about the treatment and control status of program graduates, and will provide guidance to the recruitment team on likely treatment and control teachers, the final determination of a teacher’s assignment to the treatment or control condition will come from the Teacher Background Form, as described below.

Selecting Targeted School Districts

In parallel with investigating and selecting teacher preparation programs that will be the focus of this study, the study team will also identify school districts that are targets for recruitment activities. Within the selected study states, the study team will first use extant data to identify districts that provide a greater likelihood of finding graduates of the selected teacher preparation programs along with a matched “control” group teacher in the same school and grade. This will be based on two criteria: (1) “large” districts, i.e., those with 10 or more elementary schools with more than 75 students in at least one K-6 grade, that are also (2) located within 100 miles of one or more of the possible study teacher preparation programs. Geographic proximity of programs to hiring districts has been shown to be particularly important for teachers; recent research indicates that substantial proportions of teachers attend (teacher preparation programs in) college within 50 miles of their homes, and further, that substantial proportions of new teachers obtain employment within 50 miles of their homes (Boyd, Lankford, Loeb & Wyckoff, 2005). Therefore, to accommodate the fact that some candidate states are quite large (e.g., Texas), the study has expanded the proximity to a 100-mile radius to ensure that there are teacher preparation programs within a manageable distance from districts. The study team will also obtain information from district web sites on the typical sources of new teacher hires for districts meeting these two criteria, and will augment this with informational calls to district human resource directors.
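As a concrete, hedged illustration of this two-criterion screen, the sketch below filters a hypothetical list of districts by the count of large elementary schools and by straight-line distance to the nearest candidate preparation program. The data structure, field names, and haversine distance calculation are illustrative assumptions and are not drawn from the study's actual data systems.

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance in miles between two points."""
    r = 3959.0  # Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def district_passes_screen(district, programs, min_large_schools=10, radius_miles=100):
    """Apply the two screening criteria described above (illustrative field names)."""
    # Criterion 1: a "large" district has 10+ elementary schools with >75 students
    # in at least one target grade.
    large_schools = [s for s in district["schools"] if s["max_grade_enrollment"] > 75]
    if len(large_schools) < min_large_schools:
        return False
    # Criterion 2: located within 100 miles of at least one candidate preparation program.
    return any(
        haversine_miles(district["lat"], district["lon"], p["lat"], p["lon"]) <= radius_miles
        for p in programs
    )
```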

This information, combined with decisions about including particular programs which produce teachers trained with the features of interest, will be used to develop a short list of potential study districts that will be targeted for recruitment.

Recruiting Study Districts

The study team will contact targeted school districts (1) to confirm the feasibility of conducting the study in their elementary schools; (2) if implementation appears feasible, to obtain permission to begin contacting administrators about their possible study participation; and, (3) to move forward in districts that use their own approval processes for research studies.

This process will begin by sending letters to the district superintendents that include, as attachments, a District Letter of Support from IES, expressing support and encouragement for participation in the study, and a District Study Fact Sheet, both of which are provided in Appendix B. Following confirmation of delivery of the district recruitment package, study staff will call targeted districts. This contact will begin with a determination of the appropriate point of contact in the district. In some cases the appropriate contact may be the superintendent; in others, he/she may suggest working with another district-level administrator (e.g., the assistant administrator for elementary education, the head of human resources/teacher hiring, etc.).

Once the contact has been identified, recruiters will talk with him or her about the study’s feasibility and acceptability for implementation in the 2012-13 school year. Recruiters will cover the following points:

  • An overview of the study and the purpose of the call – the sponsorship of the study by IES/ED, the goal and importance of the study, the general design of the study, and what we’re trying to accomplish with this call;

  • District’s eligibility for the study – describe district and school eligibility criteria, and determine if there are potentially eligible schools in the district;

  • Interest in study participation – describe in broad terms what the study will entail and get an indication of possible district participation; and,

  • Necessary procedures to secure district permission to begin contacting schools for recruitment and for permission to conduct the study, in particular district requirements regarding parental consent, the requirements for submission of Institutional Review Board (IRB) applications, and the main contact point for the application, if required.

Especially with large districts (i.e., those with the potential for multiple eligible schools), the study team also plans to meet with district staff face to face. These meetings will cover the same points as the calls but will allow for the participation of a wider set of decision-makers and for a more in-depth exploration of study feasibility.

District authorization of the study will be formalized in a memorandum of understanding (MOU) to be signed by a district representative and the study director. In addition, the study will submit formal Institutional Review Board (IRB) research applications where necessary; in some instances, the district’s written application approval may substitute for the MOU to limit redundancy.

School Identification and Recruitment

In districts that give the study permission to contact schools, schools will be identified based on teacher placement information provided by the selected teacher preparation programs or school districts. As with districts, priority will be given to schools likely to have the highest potential for teacher matches. The school recruitment process will begin with a mailing to school principals that will include a School Letter of Support from IES, expressing support and encouragement for participation in the study, and the same District Study Fact Sheet (Appendix B). Recruiters will call principals to confirm the feasibility of finding matched pairs of eligible teachers, and to determine acceptability of conducting the study during the 2012-13 school year. Recruiters will touch on these discussion points:

  • Introduce recruiter/study and the purpose of the call (refer to the mailed study materials). Indicate the sponsorship of the study by IES/ED, and the district’s approval to contact them directly;

  • Provide a brief overview of the study;

  • Describe the need to find teacher pairs as a basic requirement of eligibility, and what the study team means by a teacher pair;

  • Review the key data collection requirements for a participating school;

  • Note the study team’s efforts to minimize the burden to be placed on schools and teachers; and

  • Address questions or concerns, and arrange for a visit if possible.

As noted, once interest and feasibility have been determined, the study team will schedule face-to-face meetings with the principal and other decision-makers at the school. This meeting will provide an opportunity to discuss the study’s eligibility requirement of matched treatment and control teacher pairs, random assignment of students, data collection activities, and the study timeline. The desired outcome of these school meetings is evidence that there is a strong likelihood of having at least one matched teacher pair at the school for fall 2012, and school administration agreement to participate in the study.

The next step is to collect information on potential teacher pairs for the study. The Teacher Background Form (see Appendix B) is a short paper questionnaire used to assess whether teacher pairs satisfy the eligibility criteria, including whether: (1) the candidate teachers are the lead teachers in self-contained elementary school classrooms in the same grade and teach both reading and math; (2) students in the teacher pair’s classrooms come from the same pool (e.g., all general education students); and (3) one teacher (i.e., the “treatment” teacher) in the match has experienced intensive clinical practice as part of the preservice teacher preparation program that he/she chose to attend and graduated from within the past three years, while the other teacher (i.e., the “control” teacher) has not had that same experience as part of the preservice teacher preparation program that he/she chose to attend and graduated from within the past three years. To ascertain teacher eligibility, the Teacher Background Form will include short questions about the characteristics of teachers (e.g., experience) and their preparation programs, in particular to verify that teachers report having experienced the features of interest (for treatment teachers) or not having experienced them (for control teachers) in the preservice teacher preparation programs they chose to attend. Once the teacher background forms are completed and reviewed, determinations can be made about treatment/control teacher pairs within schools.

Random Assignment

The final step in the Phase I – Recruitment and Random Assignment ICR is to collect student rosters to prepare for random assignment. Working closely with schools having eligible teacher pairs, the study team will obtain rosters of students who will be taught by those teachers. Using the student rosters, a randomization procedure will assign students to the two teachers’ classrooms, ensuring that students have an equal chance of being assigned to either of the teachers in the pair. In some schools, students are divided into groups (e.g., “houses,” “pods,” “families,” and “academies”), and random assignment may be coordinated with the creation of such groups. As necessary, the study will accommodate a small number of exceptions to random assignment, such as students who must be paired with a particular teacher or separated from other specific students, or students who must be purposefully assigned to achieve gender balance across classrooms (often a consideration in elementary classrooms). Because student enrollment will not be static, student rosters will be collected twice during the first two weeks of school, and random assignment will continue from spring 2012 through the second week of school in fall 2012.
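A minimal sketch of the core randomization step, under the simplifying assumption of a plain student list and an even split between the two classrooms, is shown below; the function name and roster format are illustrative, and the exception cases described above are not modeled.

```python
import random

def randomly_assign_pair(roster, treatment_teacher, control_teacher, seed=None):
    """Randomly split a student roster between the two classrooms of a matched
    teacher pair so that each student has an (essentially) equal chance of being
    assigned to either teacher. Illustrative sketch only; required placements,
    separations, and gender-balance exceptions would be handled separately."""
    rng = random.Random(seed)
    students = list(roster)
    rng.shuffle(students)
    half = len(students) // 2  # with an odd roster, one class receives the extra student
    return {treatment_teacher: students[:half], control_teacher: students[half:]}

# Example with a hypothetical 10-student roster and teacher pair.
assignment = randomly_assign_pair(
    [f"student_{i:02d}" for i in range(1, 11)],
    "Teacher A (treatment)",
    "Teacher B (control)",
    seed=2012,
)
```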

The study plans to administer a student achievement test in fall 2012 and spring 2013 to measure changes in students’ academic achievement over the course of the year. The fall (pre) test scores will be used to verify that there are no significant differences in achievement between students in the two randomly assigned classrooms and to improve the precision of the impact estimates. Therefore, consent forms will be distributed to parents of students in classrooms of the teacher pairs in fall 2012, prior to the initial student achievement testing (Appendix B). While final decisions about the student achievement measure will be made over the course of the next few months, a potential option is ECLS-K.4



A.1.6 Data Collection Activities

A brief description of data collection for the entire study is provided below and summarized in Exhibit A-2. This supporting statement requests clearance for the Teacher Preparation Program Interview and the Teacher Background Form, which are the only data collection instruments in Phase I – Recruitment and Random Assignment.

Teacher Preparation Program Interview

Members of the study team will conduct semi-structured interviews with administrators of teacher preparation programs. These interviews will be guided by the Teacher Preparation Program Interview Guide (Appendix B). The interview guide asks teacher preparation programs about their features related to clinical practice—that is, candidates’ experience in classrooms and schools. This data collection will be used for two purposes: to differentiate programs that may produce treatment and control teachers for recruitment, and to create a descriptive catalogue of teacher preparation programs in the study states.

Administration of the Teacher Background Form

Principals will be asked to request that teachers who are potentially eligible for the study complete a Teacher Background Form (Appendix B). This form asks teachers about their experience and the characteristics of their teacher preparation. It will be used for two purposes: to check the eligibility of study teachers – both potential treatment and control – and to ensure that the study team has key contact information for all study teachers.

Random Assignment

The study will work with participating schools to obtain rosters of students in the grades with eligible teacher pairs. Students will be randomly assigned to the participating teachers and will have an equal chance of being assigned to either of the teachers in the pair. Although the study team will try to get an early start on random assignment where possible in spring 2012, prior experience indicates that much of this random assignment will take place very near the start of the 2012-2013 school year.

Student Assessment

The study will administer standardized math and reading assessments to students in study classrooms at pre (fall) and post (spring) to measure changes in students’ academic achievement over the course of the year. The final choice of measures will be made in consultation with ED and the Technical Working Group on the basis of those that can most effectively assess the skills related to clinical practice program features.

Instruments for all other data collection activities for this study will be developed, tested, and submitted later as part of the addendum to this clearance request along with parental consent forms. Data collection activities for which clearance will be requested in an addendum Phase II – Data Collection ICR include:

Classroom Observations

To assess classroom practice, each teacher’s reading and math lessons will be observed and coded using a classroom observation protocol. While final choice of measures will be made in consultation with ED and the Technical Working Group on the basis of those that can most effectively assess the skills related to clinical practice program features, two possible options currently being explored are the Classroom Assessment Scoring System (CLASS; Pianta, LaParo & Hamre, 2008) and the Performance Assessment for California Teachers (PACT).5

Teacher Survey

The study team will administer an online survey to teachers to obtain information in three domains: (1) teachers’ professional backgrounds; (2) support received during their first years of teaching (including the current year); and (3) personal background characteristics. The survey will also ask teachers for their agreement to provide their college entrance exam scores to the study team. The survey will build upon surveys used in two prior studies of alternative certification.6

Student Records Data

The study will collect extant data on student demographic and socioeconomic characteristics for the 2012-13 school year, including gender, race/ethnicity, date of birth, grade, whether the student is repeating a grade, free or reduced-price lunch eligibility, EL status, whether the student has an individualized education plan or 504 plan, and attendance data. State/district student test scores from the 2011-12 and 2012-13 school years will also be collected. These data will be collected from the district; any data that are unavailable at the district level will be requested from the school.


Exhibit A-2: Data Collection Plan

Schedule | Data Collection | Activity | Respondent | Mode

Phase I – Recruitment and Random Assignment

Spring 2012 | Teacher preparation program interview | Interview teacher preparation program administrators about the features of their programs. | Program administrators | Phone interview

Spring-Summer 2012; Fall 2012-Spring 2013 | Teacher Background Form | Request teachers to complete the form during the school recruitment visit or after being hired; also request information from replacement teachers throughout the 2012-2013 school year. | Teachers | Hard copy

Spring-Summer 2012 | Classroom rosters | Obtain classroom rosters of students to randomly assign to either treatment or control classes. | School staff | Electronic or hard copy

Fall 2012 (first two weeks of fall semester) | List of late-enrolling students | Obtain names of students who enroll in school after initial random assignment has been conducted. | School staff | Electronic or hard copy

Fall 2012 | Consent forms for school records data collection and for testing | School records: consent not required; obtain passive consent if the district requires consent. Reading and math assessments: passive consent will be requested; active consent if required by the district. | Parents and legal guardians of students | Hard copy

Fall 2012, Spring 2013 | Student math and reading assessments | Conduct math and reading assessments with all students. | Students | Hard copy or computer-adaptive assessments

Phase II – Data Collection

Early Spring 2013, Spring 2013 | Classroom roster checks | Check classroom rosters at two points during the spring. | School staff | Electronic or hard copy

Spring 2013 | Consent forms for teacher-level activities (classroom observations and teacher survey) | Obtain consent for classroom observations (one reading, one math) and the teacher survey. | Teachers | Hard copy

Fall 2012 or Spring 2013 | Classroom observations | Conduct classroom observations (one reading, one math). | Teachers | Hard copy

Spring 2013 | Teacher survey | Conduct survey on preparation program features and on training and support received during the school year. | Teachers | Web

Spring 2013 (initial request); Summer 2013 (collect data) | Student records data collection | Collect student characteristics data for school year 2012-2013. | School staff | Electronic or hard copy



A.1.7 Analysis

This section briefly summarizes the approach to answering the research questions for the study. Research questions for the study were delineated above in Section A.1.2 and included in Appendix A of this Supporting Statement. Part B of this Supporting Statement describes these analyses in more detail.

To answer the primary research question about the effect on student achievement of teachers who experienced promising preparation features as part of the preservice teacher preparation program that they chose to attend versus teachers who did not have the same experience as part of the preservice teacher preparation program that they chose to attend, the study will estimate effects on students as a function of whether or not their teachers experienced promising features. To estimate the combined teacher and program effect on student outcomes, the study team will conduct confirmatory impact analyses using only student characteristics as covariates in the models.
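Part B describes the analytic models in detail; purely as an illustrative sketch, a confirmatory impact model consistent with this description might take a form such as:

\[
Y_{ijk} \;=\; \beta_0 \;+\; \beta_1 T_{jk} \;+\; \mathbf{X}_{ijk}\boldsymbol{\gamma} \;+\; \mu_k \;+\; \varepsilon_{ijk}
\]

where Y_ijk is the spring test score of student i taught by teacher j in matched pair (block) k, T_jk is an indicator for the treatment teacher in the pair, X_ijk is a vector of student characteristics (including the fall pretest), mu_k is a block effect, and beta_1 is the average impact of interest.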

Teacher candidates select the programs they want to attend, and this selection may overstate or understate program effectiveness to the extent that particularly skilled or proficient candidates select some kinds of programs over others. To isolate the program effect from the teacher effect, the study team will conduct two additional analyses. First, the study team will investigate which teacher characteristics explain the self-selection of treatment and control teachers into intensive clinical practice (via their choice of preparation programs that do or do not emphasize intensive clinical practice to improve beginning teacher effectiveness) by fitting a regression model that uses the treatment indicator as the outcome variable and teacher characteristics as the predictors. Second, the study team will estimate the impact model from the confirmatory analysis, adding teacher characteristics believed to be correlated with the selection of teachers into different preparation programs.
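As an illustrative sketch of these two exploratory analyses (the notation extends the hypothetical model above and is not the study's official specification):

\[
\operatorname{logit}\bigl[\Pr(T_{jk}=1)\bigr] \;=\; \alpha_0 + \mathbf{Z}_{jk}\boldsymbol{\delta},
\qquad
Y_{ijk} \;=\; \beta_0 + \beta_1 T_{jk} + \mathbf{X}_{ijk}\boldsymbol{\gamma} + \mathbf{Z}_{jk}\boldsymbol{\theta} + \mu_k + \varepsilon_{ijk}
\]

where Z_jk denotes teacher characteristics believed to be correlated with selection into different types of preparation programs.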

To answer the second research question about variation among core program features, the study team will provide a qualitative summary of the core features of preparation programs and present descriptive tables that display the percentage of teachers who attended a preparation program with that feature.

To answer the third research question regarding the impact on classroom practices of teachers who experienced intensive clinical practice as part of the preservice teacher preparation program they chose to attend, the study team will estimate the average treatment-control differences in the teacher classroom practice measures created from the classroom observations.

To answer the fourth research question regarding the relationship between program features and student test scores, the study team will estimate the relationship between the school-specific estimates yielded by the confirmatory impact model and the treatment-control differences in the indicator variables that capture whether the preparation program of a teacher implemented a particular feature or not.
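One illustrative way to write this analysis, again as a sketch rather than the study's specification, is a regression of the school-specific impact estimates on indicators of within-pair differences in program features:

\[
\hat{\beta}_{1s} \;=\; \pi_0 \;+\; \textstyle\sum_{f}\pi_f D_{fs} \;+\; u_s
\]

where beta-hat_1s is the impact estimate for school s from the confirmatory model and D_fs indicates whether the treatment and control teachers in school s differ on feature f.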

To answer the fifth research question, which uses extant data to examine features associated with high performance of EL students, the study will compare the test scores of EL students taught by teachers who experienced all or some of the key targeted program features in the preservice teacher preparation program that they chose to attend with the scores of EL students taught by teachers who did not have the same experience as part of the preservice teacher preparation programs that they chose to attend. If there are sufficient extant data, the study team will perform a similar analysis for students with disabilities. These analyses will use extant data collected from states in which it is possible to link student records to teachers. The study will control for student scores from the previous year and grade to account for the fact that particular types of students may be assigned to particular teachers.7
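A value-added specification consistent with this description, sketched here with illustrative notation only, would regress current-year scores on prior-year scores and indicators of the preparation features experienced by each student's teacher:

\[
Y_{it} \;=\; \lambda Y_{i,t-1} \;+\; \mathbf{F}_{j(i)}\boldsymbol{\phi} \;+\; \mathbf{X}_{it}\boldsymbol{\gamma} \;+\; \varepsilon_{it}
\]

where Y_it is the current-year test score of EL student i, Y_{i,t-1} is the prior-year score, F_{j(i)} is a vector of indicators for the preparation features experienced by the student's teacher j(i), and X_it contains other student characteristics available in the administrative data.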

A.1.8 Study Timeline

The study is expected to be completed in three and a half years. The experimental study will be implemented in the 2012-2013 school year. The final report will be available in late 2014 or early 2015.

A.2 Purposes and Use of the Information Collection

Information will be collected by the Abt study team. All information from Phase I – Recruitment and Random Assignment will be used to identify eligible respondents for the study, to randomly assign students to treatment and control classrooms, and to verify the effectiveness of the random assignment process. Specifically, the information collected in the Teacher Preparation Program Interview will be used to provide rich data on the types of features teachers may have experienced and to determine the treatment and control features. Teacher Background Forms will be used to confirm teacher eligibility for the study and ensure collection of key information on all study teachers. Fall student testing will verify the equivalence of student proficiency in treatment and control classrooms. An addendum Supporting Statement will request clearance for Phase II - Data Collection.

The study findings as a whole will be used to inform the efforts of national, state, and local policymakers, teacher preparation programs and certifying institutions, districts, and schools to improve student outcomes. Policymakers may learn from this information how to improve student achievement through improvements to the quality of teacher preparation programs and to the teacher certification standards enacted at the state level. School districts and schools may also use this information to guide their teacher hiring and placement decisions.

Findings will be presented in a final report in late 2014 or early 2015. In addition, the data collected by the study will be submitted to ED as restricted-use data files that will serve as a valuable resource to other researchers who wish to further examine this issue.

A.3 Use of Information Technology and Burden Reduction

The study will use a combination of mechanical and electronic technology to collect data. For each data collection task, the study team has selected the form of technology that enables the collection of valid and reliable information in an efficient way while minimizing respondent burden.

During Phase I – Recruitment and Random Assignment, the Teacher Preparation Program Interview will be used to collect information on certain teacher preparation features of interest, particularly clinical practice features. To minimize burden, the study team will use extant data – particularly program websites and online course catalogues – to gather as much of the information as possible. This program information will be prepopulated into the interview script using Microsoft Access software. Interviewers will use the Access software in a manner similar to Computer Aided Telephone Interviewing (CATI) software to guide their phone interviews with program administrators.

Additionally, the Teacher Background Form will be used in Phase I – Recruitment and Random Assignment to collect data to confirm potentially eligible teachers’ experiences of clinical practice teacher preparation features. To minimize burden, recruiters will deliver the hard-copy form in person to principals during the school visit. Because the form is short, requiring approximately 30 minutes to complete, respondents can complete the form and mail it back (using a postage-paid envelope) or fax it to the researchers. The study team will also provide a dedicated email address and toll-free number that teachers can use for assistance in completing the form.

A.4 Efforts to Identify Duplication

No other large-scale experimental study of the impact on student achievement of teachers who experience university-based teacher preparation features related to clinical practice has been conducted. IES has several completed and ongoing studies related to teacher preparation that capitalize on the variation in preparation offered by alternative routes to certification. For example, there have been a few experimental evaluations of highly selective alternative pathways to teaching, such as Teach for America (Decker, Mayer & Glazerman, 2004), Alternative Certification (Constantine et al., 2009), and Highly Selective Routes to Alternative Certification (Teach for America (TFA) and The New Teacher Project (TNTP)).8 Another study focused on induction services for novice teachers after they become the teacher of record. In addition to these studies, there have been non-experimental studies (e.g., NRC, 2010; NCTQ, 2011; Grossman et al., 2009; Boyd, Grossman, Lankford, Loeb & Wyckoff, 2006), but the findings are mixed, and the non-experimental methods used by these latter studies leave open the possibility that observed differences in student achievement are due to underlying differences between the students taught by teachers from preparation programs with clinical practice features and those taught by other teachers, rather than to the true causal effects of the teachers themselves. The focus of the proposed study is the clinical practice features of teacher preparation programs that prior nonexperimental research and experts suggest are likely to lead to improved teaching and student achievement, and that are substantively different from typical approaches to clinical practice.

To the extent possible, the study team will use existing data for the study rather than duplicate data collection efforts. The study team will use all publicly available online documents to obtain information about the teacher preparation programs. The information collected in the Teacher Preparation Program Interview is not available elsewhere, nor is the information collected on the Teacher Background Form. Student assessment will verify that the random assignment process resulted in statistically equivalent student groups assigned to treatment and control teachers.

A.5 Efforts to Minimize Burden on Small Businesses

The collection of information does not impact small businesses.

The primary small entities for this study are the teacher preparation programs and the districts and schools in which the study teachers teach. During Phase I – Recruitment and Random Assignment, the study team will minimize burden by training recruitment staff to make their contacts as straightforward and concise as possible. The recruitment mailings, conversations, and presentations are designed to be clear, brief, and informative. The study team will include all relevant staff at the district- and school-level meetings so that district superintendents and principals will not be required to convey the information individually to their staff members. At the district level, the study team will attempt to arrange conversations to include representatives of the superintendent’s office, the human resources office, and the research approval office; the top official for elementary schools; and officials who can discuss the availability of student records. For the school-level presentations, recruiters will offer to meet with the school principal, key individuals responsible for student scheduling, and, at the principal’s discretion, the teachers who might be included in the study. The study team will use a multi-stage screening process – the district phone call, the principal phone call and meeting, and the Teacher Background Form – to quickly eliminate from the pool any districts, schools, or teachers that are not eligible for the study, so that they will not receive further contact from the recruiting team. The Teacher Preparation Program Interview Guide and Teacher Background Form were designed to minimize burden on respondents. The random assignment and verification process will also be designed to minimize burden on participants.

A.6 Consequences of Not Collecting the Information

The full data collection plan described in this supporting statement is necessary for conducting this study, which is consistent with the goals of ESEA, Title II, Part A to raise student achievement through the preparation, training, and recruitment of high-quality teachers. Despite the evidence that one of the strongest indicators of students’ academic success is the competence and capabilities of their teachers, few sound research studies have examined the role that university-based teacher preparation programs play in teacher preparation and quality. These university-based teacher preparation programs are the primary source of new teachers, but there is little scientifically based evidence on what features of these programs produce effective classroom teachers. In the absence of this study, ED will not be able to gauge the effect of the clinical practice component of teacher preparation on student achievement. This study can contribute to the emerging consensus about teacher preparation program features by providing strong evidence on the teacher preparation features hypothesized to be pivotal to teacher quality, specifically intensive clinical practice. Its rigorous methodological design incorporating random assignment of students will ensure that highly credible evidence is obtained about the impact on students of teachers who experience intensive clinical practice as part of the preservice teacher preparation program that they chose to attend.

The consequences of not collecting specific data in Phase I – Recruitment and Random Assignment are described below:

Without collecting the information in the Teacher Preparation Program Interview, the study team could not define a clear contrast between treatment and control features, and could not effectively identify teachers for the study.

Without collecting the information on the Teacher Background Form, the study team could not ensure that the teacher would meet the study’s eligibility criteria.

Without verifying the statistical equivalence of the students randomly assigned to classrooms and including this information as a covariate in the analyses, the study would require a larger sample size to obtain the same minimum detectable effect (MDE).

    1. Special Circumstances Justifying Inconsistencies with Guidelines in 5 CFR 1320.6

There are no special circumstances concerning the collection of information in this study.

    1. Consultation Outside of the Agency

      1. Federal Register Announcement

A 60-day notice soliciting public comments was published in the Federal Register on January 27, 2012 (Vol. 77, p. 4287).

      1. Consultations Outside of the Agency

The Abt team will assemble a Technical Working Group (in consultation with ED) composed of consultants with various types of expertise in the areas relevant to this study. The Technical Working Group will be convened in early winter 2012 and will discuss the study design, recruitment, instrumentation, data collection, analysis, and reporting of study findings.

      1. Unresolved Issues

There are no unresolved issues.

    1. Payments or Gifts to Respondents

During Phase I – Recruitment and Random Assignment, there will be no payments or gifts to respondents.

    1. Assurance of Confidentiality

The study team will conduct Phase I – Recruitment and Random Assignment (the subject of this ICR) and Phase II – Data Collection (the subject of an addendum ICR) activities in accordance with all relevant regulations and requirements. These include the Education Sciences Reform Act of 2002, Title I, Part E, Section 183, which requires "[all] collection, maintenance, use, and wide dissemination of data by the Institute … to conform with the requirements of section 552 of Title 5, United States Code, the confidentiality standards of subsection (c) of this section, and sections 444 and 445 of the General Education Provisions Act (20 U.S.C. 1232g, 1232h)." These citations refer to the Privacy Act, the Family Educational Rights and Privacy Act, and the Protection of Pupil Rights Amendment.

In addition, all data collected for the study (Phases I and II) shall remain confidential in accordance with Section 552a of Title 5, United States Code, the confidentiality standards of subsection (c), and Sections 444 and 445 of the General Education Provisions Act. Subsection (c) of Section 183, referenced above, requires the Director of IES to "develop and enforce standards designed to protect the confidentiality of persons in the collection, reporting, and publication of data." The study will also adhere to the requirements of subsection (d) of Section 183, which prohibits disclosure of individually identifiable information and makes the publication or inappropriate communication of individually identifiable information by employees or staff a felony.

In addition, the following verbatim language will appear on all letters, fact sheets, and other study materials:

Per the policies and procedures required by the Education Sciences Reform Act of 2002, Title I, Part E, Section 183, responses to this data collection will be used only for statistical purposes. The reports prepared for this study will summarize findings across the sample and will not associate responses with a specific program, district or individual. Any willful disclosure of such information for nonstatistical purposes, except as required by law, is a class E felony.

Data will be presented in aggregate statistical form only. All study staff involved in collecting, reviewing, or analyzing individual-level data will be knowledgeable about data security procedures and will sign nondisclosure agreements. Respondents will be assured that all information identifying them or their school will be kept private to the extent allowed by law. The confidentiality procedures adopted for this study during all rounds of recruitment, data collection, data processing, and analysis consist of the following:

All study respondents will be assured that the information they provide is confidential and will be used only for the purpose of this research. To ensure data security, all individuals hired by the study team are required to adhere to strict standards and sign an oath of confidentiality as a condition of employment.

Hard-copy data collection forms will be delivered to a locked area for receipt and processing. Abt Associates Inc. maintains restricted access to all data preparation areas (i.e., receipt, coding, and data entry). All data files on multi-user systems will be under the control of a database manager, with access limited to project staff on a “need-to-know” basis only.

Individual identifying information will be maintained separately from completed data collection forms and from computerized data files used for analysis.

    1. Questions of a Sensitive Nature

There are no questions of a sensitive nature included in the information requested as part of Phase I – Recruitment and Random Assignment.

    1. Estimate of Response Burden

The total reporting burden for the data collection effort covered by this clearance request is 3,083.5 hours, for a total cost to respondents of approximately $131,098.45. Exhibit A.3 presents time estimates of respondent burden for the recruitment data collection activities requested for approval in this submission. Additionally, the ROCIS IC Burden Analysis Table has more detailed information. The burden estimates are based on the following assumptions:9

There will be approximately 400 teacher preparation programs in the study states; approximately 10 percent of those programs will be determined ineligible through the extant data review, leaving approximately 360 programs for interviews with program administrators.

The total cost to teacher preparation program administrators is approximately $25,099, based on an hourly wage of $46.48 in 2010-11 for Education Administrators: Postsecondary.10

There will be approximately 150 districts of appropriate size in study states. Extant data review will find approximately 15 percent of those districts ineligible for the study, leaving approximately 125 districts for eligibility verification calls.

Of the 125 districts receiving calls, approximately 65 (about half) will require site visits to discuss the study further and confirm eligibility.

The total cost to district administrators is approximately $11,160, based on an hourly wage of $43.34 in 2010-11 for Educational Administrators: Elementary and Secondary Schools. 11

Study districts will include approximately 1,200 public elementary schools. Those 1,200 schools will participate in eligibility verification calls.

Of the 1,200 schools, 40 percent will not meet study criteria, leaving approximately 720 school principals to participate in site visits.

The total cost to elementary school principals is approximately $88,414, based on an hourly wage of $43.34 in 2010-11 for Educational Administrators: Elementary and Secondary Schools. 12

Of the 720 schools that participate in site visits, approximately 75 percent will decline to participate or be found ineligible for the study, leaving 180 schools in the study sample.

Two teachers at each of the 180 schools will complete the Teacher Background Form (for a total of 360 teachers).

Of the 360 teachers who complete the Teacher Background Form, approximately 33 percent will be found ineligible or otherwise drop out of the study. This will leave 240 teachers (120 teacher pairs) in the eligible study sample.

Of the 240 teachers in the eligible study sample, approximately 17 percent will drop out of the study. This will leave 200 teachers (100 teacher pairs) in the final study sample. Initial student rosters and late enrollment rosters will be collected from these 200 teachers.

The total cost to elementary school teachers is approximately $6,426 based on an hourly wage of $26.12 in 2010-11 for Elementary and Secondary Schools: education, training and library occupations: teachers. 13

Exhibit A.3. Estimate of Respondent Burden

Informant/Data Collection Activity | # of Respondents | # of Responses | Hours per Response | Total Burden Hours per Respondent | Total Burden Hours | Hourly Wage | Total Cost per Respondent | Total Cost
Teacher Preparation Program Directors | | | | | | | |
  Program interview | 360 | 1 | 1.5 | 1.5 | 540.0 | $46.48 | $69.72 | $25,099.20
District Administrators | | | | | | | |
  Eligibility verification call | 125 | 1 | 0.5 | 0.5 | 62.5 | $43.34 | $21.67 | $2,708.75
  On-site meeting | 65 | 1 | 2.0 | 2.0 | 130.0 | $43.34 | $86.68 | $5,634.20
  MOU sign-off | 65 | 1 | 1.0 | 1.0 | 65.0 | $43.34 | $43.34 | $2,817.10
  Total – District Administrators | 255 | | | | 257.5 | $43.34 | | $11,160.05
Elementary School Principals | | | | | | | |
  Eligibility verification call | 1,200 | 1 | 0.5 | 0.5 | 600.0 | $43.34 | $21.67 | $26,004.00
  On-site meeting | 720 | 1 | 2.0 | 2.0 | 1,440.0 | $43.34 | $86.68 | $62,409.60
  Total – Elementary School Principals | 1,920 | | | | 2,040.0 | $43.34 | | $88,413.60
Elementary School Teachers | | | | | | | |
  Teacher Background Form | 360 | 1 | 0.5 | 0.5 | 180.0 | $26.12 | $13.06 | $4,701.60
  Collect initial student roster | 200 | 1 | 0.25 | 0.25 | 50.0 | $26.12 | $6.53 | $1,306.00
  Collect late enrollment roster | 200 | 1 | 0.08 | 0.08 | 16.0 | $26.12 | $2.09 | $417.92
  Total – Elementary School Teachers | 760 | | | | 246.0 | $26.12 | | $6,425.60
Total | 3,295 | | | | 3,083.5 | | | $131,098.45
Annual | 3,295 | | | | 3,083.5 | | | $131,098.45
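The figures in Exhibit A.3 follow directly from the assumptions listed above: burden hours equal respondents times hours per response, and cost equals burden hours times the hourly wage. The short sketch below, which is illustrative only and not part of the approved burden calculation, reproduces that arithmetic; the activity labels are shortened for readability.

```python
# Illustrative check of the burden arithmetic in Exhibit A.3.
# Counts, hours per response, and hourly wages are taken from the exhibit.
rows = [
    # (activity,                      respondents, hours_per_response, hourly_wage)
    ("Program interview",                    360, 1.5,  46.48),
    ("District eligibility call",            125, 0.5,  43.34),
    ("District on-site meeting",              65, 2.0,  43.34),
    ("District MOU sign-off",                 65, 1.0,  43.34),
    ("Principal eligibility call",          1200, 0.5,  43.34),
    ("Principal on-site meeting",            720, 2.0,  43.34),
    ("Teacher Background Form",              360, 0.5,  26.12),
    ("Initial student roster",               200, 0.25, 26.12),
    ("Late enrollment roster",               200, 0.08, 26.12),
]

total_hours = 0.0
total_cost = 0.0
for activity, n, hrs, wage in rows:
    burden_hours = n * hrs        # total burden hours for the activity
    cost = burden_hours * wage    # total respondent cost for the activity
    total_hours += burden_hours
    total_cost += cost
    print(f"{activity:30s} {burden_hours:8.1f} h   ${cost:,.2f}")

print(f"{'Total':30s} {total_hours:8.1f} h   ${total_cost:,.2f}")
# Expected totals: approximately 3,083.5 hours and approximately $131,098
# in respondent costs (small differences reflect rounding of per-respondent costs).
```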

    1. Estimate of Total Capital and Startup Costs/Operation and Maintenance Costs to Respondents or Record-Keepers

There are no annualized capital/startup or ongoing operation and maintenance costs involved in collecting the recruitment and random assignment information.

    1. Estimates of Costs to the Federal Government

The estimated cost to the federal government for the study – including recruiting districts and schools, designing and administering all data collection instruments, collecting administrative data, processing and analyzing the data, and preparing reports – is $10,454,503. Recruiting, data collection and reporting activities will be carried out over three and a half years (fall 2011 to winter 2014). Thus, the average annual cost to the federal government is $2,987,000.

    1. Changes in Burden

This is a request for a new collection of information.

    1. Plans for Analysis, Publication and Schedule

      1. Analysis Plans

This section presents the analytic approach for addressing the study's research questions (see Appendix A):

  1. What is the impact on student achievement of teachers who choose to enter teaching through a traditional university-based teacher preparation program that includes promising preparation features versus those teachers who choose to enter teaching through university-based programs that have more typical features?

  2. Among the teachers studied, what are the core features of their teacher preparation? In particular, to what extent does preparation vary on dimensions of clinical preparation?

  3. What is the impact on the classroom practices of novice elementary school teachers who experienced intensive clinical practice as part of the preservice teacher preparation program that they chose to attend, compared to novice elementary school teachers who did not have the same experience as part of the preservice teacher preparation program that they chose to attend?

  4. What teacher preparation features (such as opportunities to teach throughout the preparation program, extent or nature of the clinical practice, and structured feedback during clinical practice) are associated with teacher effectiveness?

  5. What teacher preparation features are associated with teacher effectiveness for special populations (i.e., Special Education Students and English Language Learners)?

Question 1 will be addressed within the experimental framework by impact analyses while the remaining questions will be addressed using descriptive and non-experimental analyses.

        1. Impact Analyses

Random assignment of students to treatment/control (T/C) teachers is a central feature of this study; it ensures that specific types of students (e.g., more challenging or more able) are randomly and equivalently placed across both T/C teachers, and that observed differences in average student outcomes are driven by differences in teachers. However, differences in the outcomes of treatment and control students cannot be attributed solely to the difference in the intensity of their teachers' clinical practice during preservice preparation, because this difference includes both "program" and "teacher" effects, where the latter reflects systematic differences between T/C teachers due to teachers' self-selection into the two conditions.

As its wording implies, research question 1 acknowledges the potential confound between the teacher and program effects. The most direct way of addressing this question is to produce an estimate of the combination of the two effects, which is relevant because it informs the hiring decision faced by a district administrator or a school principal: whether to hire a teacher who experienced intensive clinical practice as part of the preservice preparation program that he/she chose to attend or a teacher who did not. To estimate the combined teacher and program effect, the study will measure impacts using the following prototypical model:14

(1)  y_ij = Σ_j β_j S_ij + Σ_j δ_j (T_ij × S_ij) + γ′X_ij + ε_ij

where:

y_ij : post-test score (administered in spring of 2013) of student i in school j;

S_ij : indicator for the jth school, which equals 1 if student i is in school j and 0 otherwise;

T_ij : indicator set to 1 if student i is assigned to a treatment teacher and 0 otherwise;

X_ij : vector of student i's characteristics, such as his/her pre-test score (from fall of 2012) and other demographic attributes such as gender, race/ethnicity, and reduced-price lunch eligibility, with γ the corresponding vector of coefficients; and

ε_ij : residual for student i, assumed to be normally distributed with mean 0 and variance σ².

Following Schochet (2008) and Constantine et al. (2009), the model does not include a teacher (or classroom) level, since each school is likely to have only one T/C pair, leaving insufficient degrees of freedom to estimate both the classroom-level variance and the variance of the school-specific impact estimate. In addition, the study will model school effects as fixed (there is no random effect at the school level) because schools are selected purposively.

In the model in Equation 1, δ_j is the impact (i.e., the adjusted treatment vs. control difference) for school j and β_j captures the fixed effect for school j.15 The school-specific effects will be averaged with equal weights to yield an overall effect estimate, δ̄. The resulting impact estimates can be expressed in effect size units, calculated by dividing δ̄ by the standard deviation of the outcome in the control group.
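For illustration only, the sketch below shows one way the model in Equation 1 could be fit and the school-specific impacts averaged and converted to effect size units. It is written in Python with statsmodels; the file and column names (student_analysis_file.csv, post_test, pre_test, school_id, treatment) are hypothetical placeholders rather than the study's actual data structures, and the actual estimation will follow the study's analysis plan.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per student, with the fall pre-test,
# spring post-test, school ID, and the treatment/control indicator.
df = pd.read_csv("student_analysis_file.csv")

# Equation 1: school fixed effects plus school-specific treatment contrasts,
# adjusting for student covariates (pre-test shown; others would be added).
model = smf.ols(
    "post_test ~ 0 + C(school_id) + C(school_id):treatment + pre_test",
    data=df,
)
res = model.fit()

# Average the school-specific impacts (the delta_j terms) with equal weights.
school_impacts = [est for name, est in res.params.items() if ":treatment" in name]
overall_impact = np.mean(school_impacts)

# Express the impact in effect size units using the control-group SD of the outcome.
control_sd = df.loc[df["treatment"] == 0, "post_test"].std()
print("Average impact:", overall_impact, "Effect size:", overall_impact / control_sd)
```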

The study will also assess heterogeneity in estimated effects with respect to a limited number of student and teacher characteristics via subgroup analyses. Potential student characteristics for subgroup analyses include EL status, pre-test score, and grade-level. Subgroup analyses by teacher characteristics will be based on attributes of the treatment teacher, which may include experience and amount of course work required by the preparation program.

To supplement the analyses described above and to try to isolate the program effect from the teacher effect, the study team will conduct two additional analyses. First, it will investigate which teacher characteristics explain the self-selection of treatment and control teachers into preparation programs with and without intensive clinical practice, by fitting a regression model that uses the treatment indicator as the outcome variable and teacher characteristics as the predictors. Second, the team will estimate the impact model in Equation 1, adding teacher characteristics believed to be correlated with the selection of teachers into different preparation programs. Because this analysis attempts to account for teacher effects, it addresses a slightly different question than question 1. It is important to note that these analyses may not account for all differences between the treatment and control teachers, and the resulting impact estimate may still include a residual teacher effect; therefore, the results will be framed as exploratory, with appropriate caveats indicating that they fall outside the experimental framework.
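As an illustration of the first exploratory analysis, the sketch below regresses the treatment indicator on teacher characteristics, shown here as a logistic regression, which is one natural choice for a binary outcome. The file and characteristic names are hypothetical placeholders; the actual specification will be determined during analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical teacher-level file; characteristic names are placeholders.
teachers = pd.read_csv("teacher_analysis_file.csv")

# Which teacher characteristics predict self-selection into programs with
# intensive clinical practice (treatment = 1) versus without (treatment = 0)?
selection_model = smf.logit(
    "treatment ~ undergrad_gpa + years_experience + C(certification_area)",
    data=teachers,
).fit()
print(selection_model.summary())
```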

        1. Descriptive and Non-Experimental Analyses

To answer the second research question, the study will provide a qualitative summary of the core features of the preparation programs attended by both treatment and control teachers, drawing on the extant data review (confirmed by the Teacher Preparation Program Interview) and the surveys administered to teachers. In addition, for each dimension of clinical preparation, the study team will present descriptive tables that display the percentage of teachers who attended a preparation program implementing that dimension.

To address the third research question, the study team will estimate the average treatment-control differences in the teacher classroom practice measures that will be created from classroom observations. These analyses will help identify potential pathways through which intensive clinical practice influences teachers' effectiveness. The model below will be used:

(2)  C_k = α + λ T_k + ε_k

where:

C_k : teacher classroom practice measure for teacher k;

T_k : treatment indicator set to 1 for treatment teachers and to 0 for control teachers; and

ε_k : residual for teacher k, normally distributed with mean zero and variance τ².

In Equation 2, λ captures the average treatment-control difference across all treatment-control pairs. The estimate of λ reflects both the effect of preparation programs (the program effect) and the effect of differences between treatment and control teachers (the teacher effect) on the classroom practice measures, since the model does not include any teacher controls.
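A minimal sketch of Equation 2 appears below. The file and column names are hypothetical placeholders; the coefficient on the treatment indicator is the estimated treatment-control difference in classroom practice.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical teacher-level file with a classroom practice score derived from
# the observation protocol; column names are placeholders.
teachers = pd.read_csv("teacher_analysis_file.csv")

# Equation 2: the coefficient on the treatment indicator is the average
# treatment-control difference in classroom practice (no teacher controls).
practice_model = smf.ols("practice_score ~ treatment", data=teachers).fit()
print(practice_model.params["treatment"])
```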

The fourth research question aims to identify specific preparation features (such as opportunities to teach throughout the preparation program and the extent and nature of clinical practice) that are related to teacher effectiveness (as measured by higher student test scores), essentially considering each preparation feature as a potential mediator (or channel) through which preparation programs affect student test scores. To address this question, the study will estimate the relationship between the school-specific impact estimates yielded by the model in Equation 1 (δ̂_j) and the treatment-control differences in the indicator variables that capture whether or not the preparation program of a teacher implemented a particular feature:

(3)  δ̂_j = π_0 + Σ_m π_m D_jm + ν_j

where:

δ̂_j : impact estimate for school j;

D_jm : equals F^T_jm − F^C_jm, where F^T_jm equals one if the treatment teacher in school j experienced the mth targeted feature and zero otherwise, and F^C_jm is defined in a similar fashion for the control teacher in school j; and

ν_j : residual for school j, assumed to be normally distributed with mean zero and variance ω².

In this model, π_m captures the association between teachers' effectiveness and the mth targeted feature while controlling for the other targeted features. It is important to note that the indicators of treatment-control differences in the targeted features may be correlated, which may reduce the ability to disentangle one feature from another. Also note that the estimated associations between teachers' effectiveness and program features may not be causal because of the self-selection of teachers into preparation programs.
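The sketch below illustrates how Equation 3 could be estimated from a school-level file. The file name and the feature-difference column names are hypothetical placeholders standing in for the targeted features.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical school-level file: the school-specific impact estimate from
# Equation 1 and, for each targeted feature m, the treatment-control difference
# in whether the teacher's program implemented that feature (+1, 0, or -1).
schools = pd.read_csv("school_level_file.csv")

feature_diffs = [
    "d_extended_student_teaching",
    "d_structured_feedback",
    "d_early_teaching_opportunities",
]
X = sm.add_constant(schools[feature_diffs])
eq3 = sm.OLS(schools["impact_estimate"], X).fit()

# Each coefficient estimates the association between effectiveness and one
# targeted feature, holding the other targeted features constant (not causal).
print(eq3.params)
```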

Finally, the analyses conducted to address the fifth research question will compare the test scores of EL students taught by teachers who graduated from preparation programs implementing all or some of the key targeted program features (group A) to those taught by teachers who did not experience the targeted features (group B), using extant data collected from states in which it is possible to link student records to teachers. If sufficient extant data are available, the study team will perform a similar analysis for students with disabilities (SWDs). These analyses will also control for student scores from the previous year and grade to account for the fact that particular types of students (e.g., higher or lower performing) may be assigned to particular teachers, essentially comparing the value-added (for ELs) of teachers in group A to that of teachers in group B.
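For illustration, the sketch below shows one way the group A versus group B comparison for EL students could be implemented with prior-year score and grade controls. The file and column names are hypothetical placeholders, and the detailed approach to the extant data will be specified in the study's analysis plan.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical extant-data extract: EL students linked to teachers, with
# current and prior-year scores, grade level, and an indicator (group_a = 1)
# for teachers from programs implementing the targeted features.
el = pd.read_csv("el_student_records.csv")

# Compare scores of EL students taught by group A teachers to those taught by
# group B teachers, controlling for the prior-year score and grade level.
va_model = smf.ols("score ~ group_a + prior_score + C(grade)", data=el).fit()
print(va_model.params["group_a"])
```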

      1. Publication plans and schedule

During the third year of the study, the study team will prepare the draft of the final report, which will address each research question. The report will be written in a style and format accessible to policymakers and research-savvy practitioners. A draft will be delivered to ED in May 2014; a revised draft that addresses ED’s comments will be delivered in July 2014. The final report, which will address all of the peer-review comments, will be delivered by late 2014 or early 2015.

    1. Approval to Not Display Expiration Date

No exemption is requested. The data collection instruments will display the expiration date.

    1. Exceptions to Item 19 of OMB Form 83-1

The submission describing data collection requires no exemptions to the Certificate for Paperwork Reduction Act (5 CFR 1320.9).




References

Boyd, D., Lankford, H., Loeb, S., & Wyckoff, J. (2005). The Draw of Home: How Teachers' Preferences for Proximity Disadvantage Urban Schools. Journal of Policy Analysis and Management, 24, 113-132.

Boyd, D., Grossman, P., Lankford, H., & Wyckoff, J. (2006). How Changes in Entry Requirements Alter the Teacher Workforce and Affect Student Achievement. Education Finance and Policy, 1 (2), 176-216.

Boyd, D., Grossman, P., Lankford, H., Loeb, S., & Wyckoff, J. (2006). Complex by Design: Investigating Pathways into Teaching in New York City Schools. Journal of Teacher Education, 57 (2), 155-166.

Constantine, J., Player D., Silva, T., Hallgren, K., Grider, M., & Deke, J. (2009). An Evaluation of Teachers Trained Through Different Routes to Certification, Final Report (NCEE 2009- 4043). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.

Clotfelter, C.T., Ladd, H.F., & Vigdor, J.L. (2007). Teacher Credentials and Student Achievement: Longitudinal Analysis with Student Fixed Effects. Economics of Education Review 26 (December 2007), 673-682.

CTB/McGraw-Hill. Terra Nova, Third Edition (2007). Monterey, CA: CTB/McGraw-Hill.

Decker, P.T., Mayer, D.P., & Glazerman, S. (2004). The effects of Teach For America on students: Findings from a national evaluation. Princeton, NJ: Mathematica Policy Research Inc.

Greenberg, J., Pomerance, L., & Walsh, K. (2011). Student Teaching in the United States. Washington, D.C.: National Council on Teacher Quality (NCTQ).

Grossman, P., Hammerness, K., McDonald, M., and Ronfeldt, M. (2008). Constructing Coherence: Structural Predictors of Coherence in NYC Teacher Education Programs. Journal of Teacher Education (59), 273-287.

Harcourt Assessment, Inc. (2004). Stanford Achievement Test Series, Tenth Edition. San Antonio, TX: Harcourt Assessment, Inc.

Hill, H., Rowan, B., & Loewenberg Ball, D. (2005). Effects of Teachers’ Mathematical Knowledge for Teaching on Student Achievement. American Educational Research Journal, 42 (3), 371-406.

Loewenberg Ball, D. & Forzani, F. (2011). Building a Common Core for Learning to Teach and Connecting Professional Learning to Practice. American Educator (Summer), 17-38.

National Research Council. (2010). Preparing Teachers: Building Evidence for Sound Policy. Washington, DC: National Academies Press (NRC).

Pianta, R.C., La Paro, K., & Hamre, B.K. (2008). Classroom assessment scoring system. Baltimore: Paul H. Brookes.

Rivkin, S.G., Hanushek, E.A., & Kain, J.F. (2005). Teachers, Schools, and Academic Achievement. Econometrica, 73 (2), 417-458.

Rockoff, J.E. (2004). The impact of individual teachers on student achievement: Evidence from panel data. American Economic Review, 94(2), 247-252.

Schochet, P.Z. (2008). Statistical power for random assignment evaluations of education programs. Journal of Educational and Behavioral Statistics, 33 (1), 62-87.

Staiger, D. O., & Rockoff, J. E. (2010). Searching for Effective Teachers with Imperfect Information. Journal of Economic Perspectives, 24(3), 97-118.

Weiss, M.J. (2010). The implications of teacher selection and teacher effects in some education experiments. MDRC Working Papers on Research Methodology. New York: MDRC.

1 U.S. Department of Education, Office of Planning, Evaluation and Policy Development, ESEA Blueprint for Reform, Washington, D.C., 2010.

2 This last evaluation is ongoing and reports have not yet been published.

3 There may be some schools where upper elementary teachers are responsible for all the science or math instruction, for example, and students rotate through different teachers; random assignment in such schools is likely to be infeasible.

4 Information available at: http://nces.ed.gov/ecls/kinderinstruments.asp

5 Information available at: http://www.pacttpa.org/_main/hub.php?pageName=Home

6 (See Constantine, Player, Silva, Hallgren, Grider, and Deke, 2009; the same survey has also been adapted for NCEE’s current Impact on Secondary Math Achievement of Highly Selective Routes to Alternative Certification).

7 While the analytic approach to answering Research Question 5 is described here to provide information about the study as a whole, the data for addressing Research Question 5 are extant state administrative data, and the detailed approach to analysis of those data is therefore not included in Part B of this ICR.

8 This last evaluation is ongoing so no reports have yet been published with results.

9 Support for assumptions about turndown rates is not currently available. Therefore, these assumptions were developed with input from team members who have participated in similar school recruitment efforts for other national studies.

10 Bureau of Labor Statistics, U.S. Department of Labor, Occupational Outlook Handbook, 2010-11 Edition, accessed online at  http://www.bls.gov/oes/current/naics4_611100.htm#11-0000 (November 25, 2011).

11 Bureau of Labor Statistics, U.S. Department of Labor, Occupational Outlook Handbook, 2010-11 Edition, accessed online at  http://www.bls.gov/oes/current/naics4_611100.htm#11-0000 (November 25, 2011).

12 Bureau of Labor Statistics, U.S. Department of Labor, Occupational Outlook Handbook, 2010-11 Edition, accessed online at  http://www.bls.gov/oes/current/naics4_611100.htm#11-0000 (November 25, 2011).

13 Bureau of Labor Statistics, U.S. Department of Labor, Occupational Outlook Handbook, 2010-11 Edition, accessed online at  http://www.bls.gov/oco/ocos318.htm (November 25, 2011).

14 This approach is consistent with two recent evaluations with methodological and contextual similarities to the proposed study: the national evaluation of the Teach for America program (Decker, Mayer, and Glazerman, 2004) and the recent IES-funded study of the alternative certification programs (Constantine et al., 2009).

15 As mentioned above, the assumption is that each study school will have a T/C teacher pair. If there are any schools with more than one pair, fixed effects will be included and separate impact estimates will be calculated for each pair; essentially deeming each T/C teacher pair as a mini-experiment.

