Att_NWREL 6 1 Supporting Statement B


An Investigation of the Impact of a Traits-Based Writing Model on Student Achievement

OMB: 1850-0835











AN INVESTIGATION OF THE IMPACT

OF A TRAITS-BASED WRITING MODEL

ON STUDENT ACHIEVEMENT




PAPERWORK REDUCTION ACT

CLEARANCE REQUEST


SUPPORTING STATEMENT PART B




Prepared For:


Institute of Education Sciences

United States Department of Education

Contract No. ED-06-CO-0016



Prepared By:


Northwest Regional Educational Laboratory

Center for Research, Evaluation, and Assessment

March 2007



Table of Contents



List of Exhibits


Supporting Statement for Paperwork Reduction Act Submission


A. JUSTIFICATION (separate file)

B. DESCRIPTION OF STATISTICAL METHODS

  1. Respondent Universe and Sampling Methods

  2. Procedures for Collection of Information

Statistical Methodology for Stratification and Sample Selection

Data Collection Plans

Estimation Procedures

Statistical Power Estimates

Unusual Problems Requiring Specialized Sampling

Use of Periodic Data Collection Cycles to Reduce Burden

  3. Methods to Maximize Response Rates and to Deal With Issues of Non-response

  4. Pilot Testing of Instruments

  5. Contractor Name Responsible for Design, Analysis, and Data Collection for the Study


REFERENCES (in Part A, separate file)


APPENDICES

Appendix A: Teacher Survey on Writing Instruction (separate file)

Appendix B: Student Essay Instructions and Sample Student Essay Booklet (separate file)

Appendix C: Coded Student Data Form (separate file)

List of Exhibits





1. Statistical Power Estimates



INTRODUCTION

Improving student literacy is identified as a high priority in regional needs assessments conducted over the past few years by NWREL; teachers and principals expressed a need for professional development for teaching writing, and for research-based practices to support the various writing modes. Improving student literacy is also one of the main goals of NCLB, which includes goals to narrow the achievement gap for a number of student subgroups. The sample will include students in many of the AYP reporting categories for NCLB and will enable an exploration of the impact of the treatment with these student populations. The model is focused on formative classroom assessment, which is also identified as a high priority in regional needs assessments.

While writing is not a focus of the AYP provisions of NCLB, it is a critical component of language and literacy broadly, and includes an essential set of skills for continued academic development as well as for success in many occupations. Writing is crucial for higher educational achievement and the development of higher order thinking skills, and is also one of the Foundation Skills identified by the U.S. Department of Labor’s Commission on Achieving Necessary Skills for a broad range of jobs.

National NAEP results in 2002 showed that just over a quarter of fourth grade students (27 percent) were proficient in writing, with similar proficiency levels at 8th grade (31 percent) and 12th grade (24 percent). In the Pacific Northwest, overall fourth grade proficiency rates on the 2002 NAEP writing tests were below the national average in three states: Idaho, Montana, and Oregon each had proficiency rates of 22 percent, compared with the national average of 27 percent. The proficiency rate in Washington was 30 percent. Alaska did not participate in state-level NAEP.

Proficiency rates on the NAEP writing test were much lower among at-risk groups. For students eligible for free or reduced-price lunch, the proficiency rates ranged from 13 to 17 percent in Idaho, Montana, Oregon, and Washington, near the national average of 16 percent. Proficiency rates for Black students were 14 percent in Oregon and 20 percent in Washington, compared to the national average of 17 percent; for Hispanic students, the proficiency rates were 9 percent in Oregon, 11 percent in Idaho, and 12 percent in Washington, all below the national average of 17 percent. American Indian and Alaska Native students, a subgroup encompassing 5 percent of all students in the Northwest region, had a proficiency rate of only 8 percent in Montana, well below the national average for this group of 15 percent. (States not listed in this section did not have data reported because the sample size was insufficient to permit a reliable estimate.)

In statewide achievement test results, students in the Northwest region demonstrate lower rates of proficiency in writing than in other aspects of language arts. For example, in the 2005 Oregon Statewide Assessment Testing (OSAT) results, 32 percent of Oregon fourth graders met state writing standards. This is less than half the rate of students meeting state reading and literature standards.

Similarly, in the 2005 Idaho Direct Writing Assessment (DWA) results, 34 percent of fifth graders were proficient or advanced. This is less than half the rate of fifth graders meeting state reading standards. Only 17 percent of Idaho Hispanic students and 21 percent of American Indian students scored at the proficient or advanced levels in writing.

In Alaska, a substantial achievement gap exists between Alaska Native and white students in writing. In fifth grade, 54 percent of Alaska Native students met standards in 2005, compared to 84 percent of white students. In sixth grade, 51 percent of Alaska Native students met standards in 2005, compared to 87 percent of white students.

These assessment results raise concerns because writing is a critical life skill and writing also supports the development of reading and thinking skills. There is a strong connection between writing and learning to think systematically (Sommers, 1982; Zinsser, 1988). While reading and mathematics are typically the core subjects of greatest concern to schools, writing also received a substantial emphasis in regional needs assessments (Barnett & Greenough, 2004; 2005). Many schools implement “writing across the curriculum” or use writing projects as integrative, interdisciplinary projects to raise student engagement and achievement in reading, mathematics, science and social studies, and to prepare students for advanced studies or careers. Better understanding of what works in the teaching of writing will benefit schools and students in the region and in the nation.


The trait-based approach to writing instruction is a popular model in widespread use in the Northwest region and across the nation, despite a relative paucity of well controlled studies demonstrating its effectiveness. Many school staff members, administrators, policy makers and parents view this as a valuable approach to the teaching of writing, and it is the subject of numerous publications and training programs, including but not limited to those provided by NWREL. The model has been incorporated into major published language arts curriculum materials, guides to writing instruction provided by several major educational publishers, and a host of training workshops, web sites and other resources for schools.

However, this popularity is based largely on rational arguments for the underlying model, perceived face validity of the assessment instruments and training materials, and practitioner perceptions that the model has utility and improves student performance. Given this widespread interest, it is important that the education community has access to high quality scientific evidence on the effectiveness of the approach. Schools and policy makers will benefit from improved understanding of the extent to which the approach works, for whom, and under what conditions. The proposed research will contribute to that knowledge base, so that decisions about whether to expand, contract, or modify the adoption of this approach can be based on reliable data.

Only two experimental studies have been completed examining the efficacy of the model, and these are insufficient to provide the broad research base that should underlie such a widely implemented approach. Therefore, the aim of this project is to provide a rigorous, relatively large scale scientific test of the approach, in conditions that are commensurate with the way the intervention is typically implemented in many school settings. The present study will improve on previous studies by involving a larger number of schools, strengthening implementation fidelity, and improving the research methodology, including properly modeling the nested data structure.

In particular, the 6+1 Trait® Writing classroom professional development is currently in wide use across the United States and in other countries. Since 1990, more than 500 training sessions have been conducted by NWREL trainers, and these sessions have involved more than 20,000 participants from all 50 states and 17 countries. NWREL also conducts a training-of-trainers institute; educators who attend this institute receive a full training package and are given permission to conduct training in their local school jurisdictions. Since 1998, a total of 1,365 trainers have been trained by NWREL. NWREL has produced three key publications and a number of classroom support materials for 6+1 Trait® Writing that are used extensively by classroom teachers. Over 350,000 copies of the three key publications have been sold. Given this widespread use, it is important to expand the available research base on the effectiveness of the approach under various conditions and for specific subgroups of students and teachers.

The 6+1 Trait® Writing intervention is an approach to teaching and assessing student writing that consists of a set of strategies to integrate assessment and instruction, with a specific focus on seven traits of effective writing. The intervention includes strategies for direct instruction on the traits and a writing process that emphasizes the effective use of teacher and peer feedback to support students in the continual review and revision of their writing, using rubrics containing specific criteria. It is supported by professional development and materials designed to build teacher understanding of the traits and their instructional use, and to enhance teacher skills in helping students improve their writing.

The proposed study will extend previous studies on the 6+1 Trait® Writing model by involving a larger number of schools and providing more training and systematic support for teachers throughout implementation. The study will explore relationships among district, school, teacher, and student variables to provide more reliable information for decision-makers to use in determining whether and how to implement the model. The study will also contribute to an area with a shortage of current research, since much of the research on the impact of professional development on student achievement has been conducted in science and mathematics education (Guskey, 2003).

DESCRIPTION OF THE NWREL TRAITS-BASED WRITING MODEL STUDY

PURPOSE OF THE STUDY

The goal of the study is to provide high quality evidence on the effectiveness of professional development for an analytical trait-based model for teaching and assessing student writing. Toward this goal, the impact of the model on the writing achievement of 5th grade students will be examined. The model, 6+1 Trait® Writing, will be implemented by the NWREL Traits Writing Assessment Training Unit. The model is designed to improve student writing through an integrated approach to teaching and assessing writing skills, and it incorporates ten instructional strategies to develop the specific traits of writing. Key components of the model include effective feedback to students and the engagement of students in self-assessment.





EXISTING EVIDENCE SUPPORTING THE MODEL

In the first major study of the writing process, Emig (1971) drew on her own knowledge and experience as a writer to develop a process model that emphasized writing processes that are recursive rather than linear, including planning, organization, drafting, editing, and writing processes that vary according to task and instructional context. By 1986, in What Works, the Department of Education identified process as the most effective way to teach writing (cited in Newell, 1998). Meanwhile, Diederich (1974; Diederich, French & Carlton, 1961) and Purves (1988) were instrumental in moving writing research away from holistic assessments to classroom-based analytical assessments of student writing, to meet the needs of teachers for diagnostic assessment data on which to base decisions about instruction.

In 1983, the writing process and Diederich’s scales were integrated by teachers in Beaverton, Oregon (Grundy, 1986) to build the foundation for what became the 6+1 Trait® Writing model. This model is based on recursively planning, analyzing and revising writing using a framework of analytic traits that characterize quality writing. The model was also informed by a review of 20 years of research on student writing conducted by Hillocks (1987). His review of 2,000 studies identified the use of scales as having a positive effect on improving student writing, second only in effect size to inquiry. This method, of which the 6+1 Trait Writing model is an example, employs sets of criteria to evaluate pieces of work, which helps students develop an understanding of discourse knowledge, improving the ability to present ideas and information in a coherent manner.

The 6+1 Trait Writing model is intended to help teachers provide effective feedback to students, and to develop student self-assessment skills and metacognition related to writing. Reviews of the research on formative assessment have consistently demonstrated that assessment that provides students with feedback about their performance increases subsequent achievements (Natriello, 1987; Crooks, 1988; Black & Wiliam, 1998). Hattie (1992) found that “the most powerful single moderator that enhances achievement is feedback.” Formative assessment is grounded in the feedback models of Crooks (1988) and Sadler (1989), in which students have access to three types of information about a particular performance: (a) the intended instructional outcome, (b) how current performance matches or does not match that expectation, and (c) a mechanism to move students from current performance toward that vision. Marzano (2003) identified a number of features of feedback that make it successful, such as being timely and ongoing throughout the learning process, specific to the content being learned, aligned with assessment, and corrective. Fontana & Fernandes (1994) reported that self-assessment based on an understanding of learning goals and evaluation criteria improves student achievement.

As noted above, there is strong evidence for the effectiveness of formative assessment and feedback as general strategies to support learning. Specific prior research on the 6+1 Trait Writing model includes two experimental studies that examined the impact of the model on student achievement and teacher practices, a correlational study examining the relationship of trait-based assessments of student writing to statewide assessment scores, and several studies of teacher opinions of the analytical trait model.

Two experimental research studies, each involving a small number of schools, have been conducted to examine the impact of the 6+1 Trait Writing model on student learning (Arter, Spandel, Culham and Pollard, 1994; Kozlow & Bellamy, 2004). The first study was conducted during the 1992-1993 school year in six grade 5 classrooms that were randomly assigned to either an experimental or control condition. Teachers in the treatment group received training and specific lesson plans and strategies to enhance student writing performance, plus follow-up assistance during the year. The study found that students in the treatment group had a significantly higher increase in test scores for the ideas trait when compared to students in the control group. Gains in all other traits also favored the treatment group, but were marginally significant or non-significant. The original analysis did not properly account for the nested data structure, and the raw data no longer exist, preventing a proper re-analysis that could yield valid estimates of effect size. However, the overall pattern of mean differences favoring the treatment group on all six outcome measures is unlikely to have occurred by chance.

The second study, conducted during the 2003-2004 school year, involved 72 classes in grades 3 to 6 in one school district. Within each grade, half of the classes were randomly assigned to the treatment group and half to the control group. Teachers in the treatment group received two days of training in November, and were asked to implement the trait approach to teaching and assessing writing for the remainder of the school year. Teachers in the control group were asked to continue teaching and assessing writing following their normal practices. Classroom visits showed considerable variation in the extent of implementation by treatment teachers, as well as substantial implementation of similar practices in the control group. The outcome measure was administered only a few months after the training, and survey comments indicated a need for more training and more time to implement the model. The data were properly analyzed using a mixed model framework to account for the nested data structure. Differences between the treatment and control groups were not statistically significant, although some moderate effect sizes suggested the potential for significant differences with larger numbers of teachers at each grade level. The findings were deemed inconclusive due to the failed implementation and contaminated control group.
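Both studies turn on the same analytic issue: students are nested within classrooms, and classrooms are the unit of randomization. A minimal sketch of the kind of mixed-model analysis described above, using simulated data (not the studies' data) and the statsmodels library, might look like this:

```python
# Hypothetical illustration of analyzing nested data: students (level 1)
# within classrooms (level 2), with classrooms randomized to condition.
# All numbers below are simulated; they are not data from the studies above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
rows = []
for classroom in range(20):
    treated = classroom % 2              # half the classrooms treated
    class_effect = rng.normal(0, 0.5)    # shared classroom-level variation
    for _ in range(25):                  # 25 students per classroom
        pre = rng.normal(3.0, 0.8)       # pretest trait score
        post = pre + 0.3 * treated + class_effect + rng.normal(0, 0.6)
        rows.append((classroom, treated, pre, post))
df = pd.DataFrame(rows, columns=["classroom", "treated", "pre", "post"])

# A random intercept per classroom models the nesting; an ordinary
# student-level regression would understate the standard error of the
# treatment effect, which is the flaw noted in the 1994 study's analysis.
model = smf.mixedlm("post ~ treated + pre", df, groups=df["classroom"])
result = model.fit()
print(result.params["treated"])          # estimated treatment effect
```

The design point is that the classroom-level error component, not the student-level one, governs how precisely a classroom-randomized treatment effect can be estimated.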

In a study of the correspondence between six-trait writing scores and performance on the Washington Assessment of Student Learning (WASL) writing test (scored holistically), Coe (2000) found that students who scored low on the six traits tended also to score low on the WASL. For example, only 28.6 percent of students who had at least one trait score below three passed the WASL (trait scores used a 5-point scale, with half-point scores possible due to averaging the scores given by two raters). Conversely, 83.1 percent of students with scores of 3 or above on all six traits passed the WASL. Among students with all trait scores above 3.5, 93.8 percent were successful on the WASL writing test. These findings lend support for the idea that formative assessment based on the six-trait model may provide students with useful feedback as they work toward meeting state standards.
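The cross-tabulation Coe reports can be illustrated with a short sketch on simulated data (the scores, pass outcomes, and printed rates are illustrative, not Coe's figures):

```python
# Hypothetical illustration: compute pass rates within the trait-score
# bands described above. Trait scores and pass outcomes are simulated.
import numpy as np

rng = np.random.default_rng(0)
# Six correlated trait scores per student on a 1-5 scale in half-point steps
ability = rng.normal(3.5, 0.8, size=300)
noise = rng.normal(0, 0.3, size=(300, 6))
traits = np.clip(np.round((ability[:, None] + noise) * 2) / 2, 1.0, 5.0)
min_trait = traits.min(axis=1)
# Simulated pass indicator that rises with the student's weakest trait score
passed = rng.random(300) < (min_trait - 1) / 4

bands = {
    "any trait below 3": min_trait < 3,
    "all traits 3 or above": min_trait >= 3,
    "all traits above 3.5": min_trait > 3.5,
}
for label, mask in bands.items():
    print(f"{label}: {passed[mask].mean() * 100:.1f}% passed")
```

Grouping on the weakest trait score mirrors Coe's bands: the first two bands partition the students, and the third is a stricter subset of the second.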

Teacher opinions of the approach are very positive, as evidenced by the ongoing popularity and widespread use of models of this genre. In the Kozlow & Bellamy (2004) study, over 80 percent of teachers reported that the approach helped them to improve their writing instruction, their understanding of the qualities of good writing, and their ability to provide effective feedback to students. Van Hoet-Hill and Wright (2000), in a study of teacher practices and attitudes toward writing instruction, found that, given the choice, two thirds of responding teachers chose to use the six-trait model for assessing student writing, citing the strength of the elements (criteria), rigor, and alignment with state assessments as reasons. Of the teachers using the trait model, 100 percent reported the perception that their students' writing performance improved as a result of using the model.

Additional support for the intervention is provided below through a synthesis of the research literature on professional development that informs the approach taken in the 6+1 Trait® Writing training, and a compilation of information on the potential efficacy of the intervention.


Research literature on professional development. The professional development for the intervention draws on the research on effective strategies for professional development. Although research findings on the effectiveness of various approaches to professional development are inconsistent, six general features have been linked, in varying degrees, to changes in teacher performance and/or student learning in four of the more rigorous studies identified in the literature (Desimone et al., 2002; Garet et al., 2001; Cohen & Hill, 1998; Kennedy, 1998): form of activity, collective participation and collaboration, content and pedagogical focus, active learning, sustained time and duration, and coherence.

The professional development for the proposed study incorporates these six features of effective professional development. The form of activity is a workshop supported by coaches, teacher team meetings, and online support extending over a two-year period. Collective participation and collaboration are supported by the coaches, who will facilitate teacher meetings, and by the online support, which will enable teachers to communicate with each other. Content and pedagogical focus is maintained as the coach motivates teachers to stay focused on the targeted improvement efforts and reinforces learned concepts through discussion. Active learning is built into the workshop, and the coaches support teachers in engaging in active learning at the school site. Sustained time and duration are addressed through ongoing direct coaching and feedback to teachers, teacher team meetings, and online support. Finally, coaches support alignment with school and district goals, assist with mapping to district and state standards, and provide consistency across grades and classrooms, thereby addressing coherence.

Another general finding from the research on professional development is that a focus on specific teaching practices increases the probability of the use of those practices (Desimone et al., 2002). The content of the professional development for 6+1 Trait® Writing places a high emphasis on specific practices and materials that teachers can use immediately following the training. The follow-up processes provide opportunities for teachers to use those strategies and to interact with a trainer, a coach, and other teachers to share their experiences.

Potential efficacy of the intervention. Factors critical to demonstrating the potential benefits of the intervention are (1) the likelihood that the professional development will result in the intended changes in teacher practices, and (2) the likelihood that these changes in teacher practices will result in improved student achievement. There are two sources of evidence concerning the potential efficacy of the professional development for 6+1 Trait® Writing: teacher opinions about past training, and the two experimental studies described above that examined the impact of the training on student achievement and teacher practices.

Teacher opinions in two areas are relevant to potential efficacy: their perceptions of the effectiveness of the professional development in providing knowledge, skills, and strategies that will assist them in changing their practices; and their perceptions of the effectiveness of these strategies for improving student achievement in writing. Results from evaluation forms completed by 720 teachers who attended workshops on 6+1 Trait® Writing in recent years are very positive. When asked if the materials provided would be immediately useful, 94 percent of teachers said “yes” or “mostly,” and 87 percent of teachers gave these responses when asked if the training met their expectations. Survey responses from 34 teachers in grades 3 to 6 who received training for an experimental study on 6+1 Trait® Writing indicate that teachers found the training to be effective, based on implementing the model for six months (Kozlow & Bellamy, 2004). Concerning the effects of training on their teaching, 80 to 90 percent of teachers agreed that the training helped them to improve their writing instruction, their understanding of the qualities of good writing, and their ability to provide effective feedback to students; and agreed that their students developed a good understanding of the traits.

In a study of teacher practices and attitudes toward writing instruction, Van Hoet-Hill and Wright (2000) found that, given the choice, two thirds of responding teachers chose to use the six-trait model for assessing student writing, citing the strength of the elements (criteria), the model's rigor, and its alignment with state assessments as reasons. Of the teachers using the trait model, 100 percent reported improved student writing performance as a result.

Two experimental research studies, involving a small number of schools, have been conducted by NWREL to examine the impact of the 6+1 Trait® Writing model on student learning (Arter et al., 1994; Kozlow & Bellamy, 2004). The first study was conducted during the 1992-1993 school year in six grade 5 classrooms that were randomly assigned to either an experimental or a control condition. Teachers in the treatment group received training, specific lesson plans, and strategies to enhance student writing performance, plus follow-up assistance during the year. The study found that students in the treatment group showed a significantly greater increase in test scores for the ideas trait than students in the control group.

The second study, conducted during the 2003-2004 school year, involved 72 classes in grades 3 to 6 in one school district. Within each grade, half of the classes were randomly assigned to the treatment group and half to the control group. Teachers in the treatment group received two days of training in November and were asked to implement the trait approach to teaching and assessing writing for the remainder of the school year. Teachers in the control group were asked to continue teaching and assessing writing following their normal practices. A small number of classroom visits showed considerable variation in the extent of implementation by treatment teachers, and survey comments indicated a need for more training and more time to implement the model. Although a linear mixed model testing for differences between treatment and control groups, and for a treatment-by-time interaction, did not identify significant differences, some moderate effect sizes were obtained, suggesting that significant differences might be detected with larger numbers of teachers at each grade level.

In a research study of the correspondence between six-trait writing scores and performance on the Washington Assessment of Student Learning (WASL) writing test, which is scored holistically, Coe (2000) found that students who scored low on the six traits also tended to score low on the WASL. For example, only 28.6 percent of students who had at least one trait score below 3 on a 5-point scale succeeded in passing the WASL. Conversely, 83.1 percent of students with scores of 3 or above on all six traits passed the WASL, and among students with all trait scores above 3.5, 93.8 percent were successful on the WASL writing test. These findings lend support to the idea that formative assessment based on the six-trait model may provide students with useful feedback as they work toward meeting state standards.

Teacher interviews from the experimental study conducted in 2004 indicated that teachers needed more time to implement the 6+1 Trait® Writing model; this was supported by classroom observations that showed a wide range of levels of implementation. As a result, the training and support for the 6+1 Trait® Writing model have subsequently been strengthened, and the amount of time allotted to absorb the training and implement effective change has been lengthened. The intervention timeline now allows for the thorough planning, teaching, and practicing of each trait to ensure that students understand the traits and use them in their writing.



RESEARCH DESIGN

The study participants will be 5th grade students and teachers from 64 elementary schools in Oregon. The study will employ a cluster-randomized design with random assignment at the school level; students will be nested within classrooms (teachers) which will be nested within schools.

There are several reasons for random assignment at the school level rather than at the teacher or student level. Chief among these are (1) the difficulty of eliminating potential contamination across conditions if both treatment and control teachers were present in the same school building, and (2) the fact that the intervention is intended to be implemented schoolwide whenever possible, so preventing teacher collaboration and interaction within a school would be an artificial change to the intervention.

Since the intervention is delivered as professional development for teachers, with the understanding that changes in teaching practice will cascade down to students, both teacher and student data will be collected. The analysis of student outcomes will estimate the effect of the intervention on student achievement. Appropriate statistical tests will be used to take into account the nested, multilevel structure of the data. Additional analyses of student outcomes will examine the heterogeneity of impacts using baseline measures of school, teacher, and student characteristics.
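One common specification for such nested data is a three-level hierarchical linear model, with students (i) nested in classrooms (j) nested in schools (k). The sketch below illustrates the general approach rather than the study's final model; covariates such as baseline essay scores would enter as additional fixed effects:

```latex
\begin{aligned}
\text{Level 1 (students):}\quad & Y_{ijk} = \pi_{0jk} + e_{ijk} \\
\text{Level 2 (teachers):}\quad & \pi_{0jk} = \beta_{00k} + r_{0jk} \\
\text{Level 3 (schools):}\quad  & \beta_{00k} = \gamma_{000} + \gamma_{001}\,T_k + u_{00k} \\
\text{Combined:}\quad           & Y_{ijk} = \gamma_{000} + \gamma_{001}\,T_k + u_{00k} + r_{0jk} + e_{ijk}
\end{aligned}
```

Here T_k indicates the treatment assignment of school k, so gamma_001 is the estimated treatment effect; u, r, and e are the school-, classroom-, and student-level random components.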

Teacher-level data will be examined for an analysis of implementation fidelity and as part of fully describing the treatment and counterfactual conditions. Since the 6+1 Trait Writing model has achieved popularity in the region and nationally, it is likely that many teachers at control group schools will have awareness and perhaps some experience with this model. In order to address this problem of prior exposure, it will be necessary to exclude schools from the study before randomization if they are already thoroughly trained in a trait-based approach and employing this approach with students. We will also collect information from teachers in treatment and control group schools concerning their prior experience with trait-based writing models in general and the 6+1 Trait Writing model in particular.


Teacher knowledge, attitudes and behaviors will be analyzed to determine the level of implementation of classroom practices designed to impact students and to describe the characteristics of the classrooms in both control and experimental schools. All data collection, data management, and data analysis will be conducted by the research team or research contractors, not by the intervention delivery team.

Research Questions. This study will answer two experimental questions to determine the impact of the intervention on student achievement in writing:

What is the impact of 6+1 Trait Writing on student achievement in writing?

How do student impacts vary by pre-existing characteristics of schools, teachers and students?

In addition, descriptive studies of both treatment and control classrooms will be conducted to help interpret and understand the results of the experimental research questions, including the fidelity of treatment implementation and the differences and similarities between treatment and control classrooms. The questions with respect to implementation are the following:

To what extent did teachers in the treatment group implement the intervention with fidelity to the intended model?

What differences and similarities exist with respect to implementation of writing instruction between treatment and control classrooms?



Sample. At present, discussions are in progress with Oregon school districts and buildings to build a pool of study sites. The results of this experimental study will be potentially relevant to all elementary schools, particularly those in the U.S. northwest states, and specifically to 5th grade writing teachers and students. The universe of cases will be those 5th grade students in schools that volunteer to participate in the study, and other students and schools like them. Since the schools are not randomly sampled from a known population, generalization to other schools cannot be made strictly on statistical grounds based on this single study; it cannot be known with certainty whether any other specific school or student is part of the population from which this sample was drawn. Rather, as in most group-randomized trials, generalization to other sites must be based on considered judgments about the similarity of those sites to schools in the study, and/or on replication studies in additional sites and under additional circumstances.

The study will be conducted in approximately 64 elementary schools (32 experimental schools and 32 control schools) in Oregon. Oregon schools are similar to those in other Pacific Northwest states in proportions of English Language Learners, students from low-income families, and students from language/racial/ethnic minority groups.

The study will be conducted in two waves of data collection. Training, implementation and data collection will occur during a one year period for each of the two waves. Control group schools will receive the training and online support resources during the year following the experimental year for each wave. See the timeline on page six above for a visual representation of the schedule for data collection.

The initial sample of schools will consist of a set of elementary schools that are not already thoroughly implementing a 6-trait based writing approach, and that are willing to participate in the research protocol. Oregon uses a trait-based framework for statewide writing assessments in grades 4, 7 and 10, but the state provides little support for integration of this framework in classroom teaching and learning. Many schools provide teachers with professional development in trait-based writing instruction to fill this gap. A screening process will exclude those schools in which elementary level teachers are already thoroughly trained in this model and are already integrating this approach in their classroom instruction and student assignments. It is expected that principals and teachers at schools which have not made strong efforts in the past to use this approach in their classrooms will be interested in participating in the study, in order to receive the professional development that is provided. Furthermore, these schools will provide a reasonable counterfactual, with teachers who have some awareness of the popular trait-based writing models but who have not been thoroughly trained and have not used this model with students.

All participating elementary schools will be randomly assigned to either the control or experimental condition. Randomization of schools will occur within strata defined by districts. Within districts, the two schools with the highest percentage of students eligible for free or reduced-price lunch will be randomly assigned to conditions, then the next highest pair of schools, and so on, so that the experimental and control groups will be reasonably well balanced in terms of socioeconomic status and variables that correlate highly with SES. In districts with an odd number of participating schools, the unpaired school will be randomly assigned to a condition. Within a participating school, all 5th grade writing teachers will participate regardless of previous writing content skills. Typical Oregon schools have two 5th grade teachers, which will yield approximately 64 teachers and 1,536 students in each condition at baseline.
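The pairing-and-assignment procedure described above can be sketched in a few lines of code. This is an illustration only; the school names, field names, and free/reduced-price-lunch figures are invented for the example:

```python
import random

def assign_within_district(schools, seed=None):
    """Pair schools by descending free/reduced-price-lunch percentage and
    randomly assign one school of each pair to treatment, the other to control.
    In a district with an odd number of schools, the unpaired (lowest-FRL)
    school is assigned to a condition at random."""
    rng = random.Random(seed)
    ordered = sorted(schools, key=lambda s: s["frl_pct"], reverse=True)
    assignment = {}
    # Walk down the ordered list two schools at a time.
    for i in range(0, len(ordered) - 1, 2):
        pair = [ordered[i]["name"], ordered[i + 1]["name"]]
        rng.shuffle(pair)
        assignment[pair[0]] = "treatment"
        assignment[pair[1]] = "control"
    if len(ordered) % 2 == 1:  # leftover school in an odd-sized district
        assignment[ordered[-1]["name"]] = rng.choice(["treatment", "control"])
    return assignment

# Hypothetical district of five schools
district = [
    {"name": "A", "frl_pct": 62}, {"name": "B", "frl_pct": 55},
    {"name": "C", "frl_pct": 41}, {"name": "D", "frl_pct": 30},
    {"name": "E", "frl_pct": 12},
]
print(assign_within_district(district, seed=1))
```

Pairing on FRL percentage before randomizing is what balances the two groups on socioeconomic status while keeping the assignment itself random.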

The planned baseline samples of teachers and students are large enough to withstand naturally occurring attrition during the study. The student sample will include on average approximately 48 students from each school at baseline. The average 5th grade class size in Oregon schools is 24 students per classroom. Assuming that as many as one sixth of students present at baseline may not complete the post-test, yielding a conservative estimate of 20 students per classroom, the final student sample for this analysis is expected to be at least 1,280 in the experimental group and 1,280 in the control group.
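The expected counts above follow from simple arithmetic; the per-school figures are those stated in this section:

```python
# Expected sample sizes per condition, using the figures from this section.
schools_per_condition = 32
teachers_per_school = 2      # typical Oregon school: two 5th-grade classrooms
students_per_class = 24      # average Oregon 5th-grade class size
retained_per_class = 20      # conservative: up to one sixth may miss the post-test

teachers = schools_per_condition * teachers_per_school    # 64 per condition
students_baseline = teachers * students_per_class         # 1,536 per condition
students_final = teachers * retained_per_class            # 1,280 per condition

print(teachers, students_baseline, students_final)  # 64 1536 1280
```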



Student attrition, as well as crossovers and students who enter study classrooms after baseline, will be monitored; teacher attrition will be monitored through personnel records and the project database. Attrition rates will be reported in detail, and if attrition is large, or differs across conditions, the final sample may be weighted during the analysis to preserve the representative nature of the sample. Students who cross over from treatment schools to control schools will be included in the treatment sample for the main analysis, which will be an intent-to-treat (ITT) analysis. If particular teachers within the treatment group fail to implement the model in their classrooms ("no-shows"), those teachers and students will still be included in the treatment group for the ITT analysis. Students entering study schools during the year will not be included in the analysis.
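The intent-to-treat convention described above reduces to a simple assignment rule: analyze every student under the condition of the school where he or she was enrolled at baseline. A minimal sketch, with hypothetical record fields:

```python
def itt_group(student):
    """Intent-to-treat rule: a student is analyzed under the condition of the
    school in which he or she was enrolled at baseline, regardless of later
    crossovers or of whether the teacher actually implemented the model.
    Students with no baseline enrollment in a study school are excluded."""
    return student.get("baseline_condition")  # "treatment", "control", or None

# A crossover keeps the original assignment; a late entrant is excluded (None).
mover = {"id": 17, "baseline_condition": "treatment", "current_condition": "control"}
late = {"id": 42, "baseline_condition": None}
print(itt_group(mover), itt_group(late))  # treatment None
```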

Treatment. The intervention under study is the 6+1 Trait® Writing model, an analytic approach to teaching and assessing student writing developed and disseminated by the NWREL Traits Writing Assessment Training unit. In order to decrease any bias or appearance of bias in the study, all key research activities, such as data collection, management, and analysis, will be completed by NWREL research staff or independent subcontractors, not by intervention delivery staff. Independent contractors will manage the raw data and the master data set to ensure transparency and replicability of methods, as described later in this document. The program unit responsible for the development and dissemination of the intervention will be involved only in training and implementation activities, and will not have access that would allow conscious or unconscious contamination of the data or study procedures.

The 6+1 Trait® Writing model consists of a set of strategies to facilitate the integration of assessment with instruction, targeting seven “traits” of effective writing: Ideas, Organization, Voice, Word Choice, Sentence Fluency, Conventions, and Presentation. The model employs specific instructional strategies that provide teachers with a range of activities to support direct instruction on the traits, and to engage students in learning about and practicing the use of the traits in planning, assessing and revising their writing. The approach is focused on formative assessments providing effective and timely feedback on performance. These strategies include teaching students the language of rubrics for the traits, having students score and justify their scores on writing samples, and using a writing process that emphasizes feedback, revision, and editing. The intervention is not an alternative writing curriculum designed to replace existing writing programs in schools, but rather an additional, complementary set of tools to aid in conceptualizing, assessing and describing the qualities of writing. It is used in conjunction with existing writing curricula to provide a framework for dialog, designed to improve the ability of teachers and students to plan, evaluate, discuss and revise their writing.

Professional development and support materials will be provided to build teacher understanding of the traits and the instructional strategies that are aimed at supporting students in the development of their writing. Teachers will receive training and resource materials during a summer institute and additional meetings and site visits during the school year. In addition, an online support system will provide additional resource materials and additional access to interaction with trainers and other teachers.

The implementation of the classroom writing intervention will consist of the following activities:

Teachers will attend a four-day summer institute that will provide comprehensive training and resource materials on the traits model.

The 6+1 Trait Writing trainers will support teachers on an ongoing basis at their schools by observing and critiquing teachers’ lessons and meeting with teams of teachers to facilitate discussions of implementation issues and the sharing of experiences among teachers. Teachers will attend 3 days of meetings during the school year to further their understanding of how to implement the model, including practice using the scoring rubric and other resources. Trainers will also visit each treatment group teacher in the classroom four times during the year to answer questions, provide feedback, and suggest ways to improve implementation of the model.

The online support system will provide the capability for teachers to interact with NWREL trainers and other teachers. The facility will include sample papers that teachers can access for illustrations of the traits, opportunities to practice and refine their scoring skills, and a forum in which they may share sample lessons and activities they have found to be particularly effective.

Teachers will integrate the traits model into their existing classroom writing instruction and activities with students. The strategies cover issues of curriculum and instruction, formative assessment, and student engagement in their learning. Table 1 shows the ten strategies by these three purposes.



Table 1 – Instructional Strategies by Purpose
(C&I = Curriculum & Instruction; FA = Formative Assessment; SE = Student Engagement)

      Instructional Strategy                                   C&I  FA   SE
  1.  Teaching the language of rubrics for writing assessment   X
  2.  Reading and scoring papers and justifying the scores      X    X
  3.  Teaching focused revision strategies                      X
  4.  Teachers modeling participation in the writing process         X    X
  5.  Reading a lot of materials that demonstrate varying       X         X
      writing quality
  6.  Writing to effective prompts                                   X    X
  7.  Weave writing lessons into other subjects                 X
  8.  Students setting goals and monitoring their progress           X    X
  9.  Integrating learning goals for writing into curriculum    X
      planning
 10.  Teaching ways to structure nonfiction writing             X

The comparison condition will be the regularly planned writing instruction at each control group school.



DATA COLLECTION

Student Outcome Measure

Student achievement in writing will be assessed by raters trained to score essays using the 6+1 trait model. NWREL maintains a trained pool of raters who are regularly monitored and calibrated to ensure inter-rater reliability and fidelity to the assessment model. Each student essay will be scored by two teams of raters: one team will produce a holistic score reflecting the overall quality of the student writing, and the other will produce a set of scores based on the traits of writing that are included in the intervention. The primary hypothesis tests will be conducted using the holistic ratings, which are not specifically aligned with the intervention, as the dependent variable. The trait-based ratings will be available for supplementary, parallel analyses of the impact of the intervention on specific aspects of student writing. The raters will score the essays without any knowledge of a student's experimental condition or of whether a given essay was a pre- or post-test. The spring essay scores (at the end of the treatment year) will be used in the outcome analyses as dependent variables, while essay scores from the beginning of the year will be entered as covariate measures of baseline writing performance.


In addition, an independent scoring of a random subsample of the end-of-year essays will be made using one of the currently available automated essay scoring systems to provide comparison scores for essays at the end of the year. This will provide additional evidence about the reliability and validity of the human ratings, as well as add to the growing literature on the validity of writing scores produced by artificial intelligence engines.


School-level performance on the Oregon statewide writing assessment will also be used as a school-level covariate, to reduce the between-school variance and improve the efficiency of the design. This assessment is only administered in 4th, 7th and 10th grades, so the 4th grade aggregate school-level data from the experimental and control group students will be used as a school-level covariate. As noted above, each student’s essay score from the beginning of the year will also be entered as a student-level covariate.


Students typically produce their best writing with prompts that give them some options for personalizing a topic. Prompts will be selected that provide students a clearly defined audience, role, and mode (purpose) for their writing. All prompts for the writing assessment will be in the expository mode because it is generally introduced in the 4th grade classroom and should be familiar to 5th grade students. The essay writing sessions will be proctored by participating teachers from both control and treatment groups. Each essay will be scored by raters blind to condition, using the holistic and six analytic rubrics from the 6+1 Trait Writing model.


Teacher Implementation Measure

Teacher classroom practices will be measured in both treatment and control classrooms using teacher surveys to collect data related to teacher implementation of the ten instructional strategies that form the basis of training. This will provide data on both fidelity of implementation in the treatment group schools and contamination in the control group schools. The instruments will be pilot tested during the planning period for this study in separate schools from those which will later make up the treatment and control groups. Analysis of inter-rater reliability and validity will occur prior to the treatment year and will be used to make necessary instrument refinements. Results will be included in project reports. Section 5.2 provides more specific information on the data collection procedures for the following three instruments.


The teacher survey will be administered to all teachers in both the treatment and control schools. The survey contains Likert-type questions about teacher practices and opinions, descriptive information about use of instructional time, and open-ended questions about instructional and learning issues. The results will be used by the research team to describe the treatment and control teachers and their classrooms.

Student Outcome Data. Student achievement in writing will be measured using two instruments: (1) the Alaska State Writing Assessment (ASWA), and (2) the NWREL Writing Essay Assessment.

The 5th grade ASWA score for participating students in the treatment year will be the primary dependent variable and the previous year’s 4th grade ASWA score will serve as a covariate for those students. The ASWA is administered in May of each year from grades 3 through 10. In addition, an in-class student writing essay assessment will be administered in the fall and spring of the treatment year. These essays will be collected and scored using analytic rubrics for the six traits of the 6+1 Trait® Writing Model by trained NWREL raters. The raters will be blind to the experimental condition of each student. The spring essay scores will be used in the outcome analyses as dependent variables and the fall essay scores will be entered as covariate measures of beginning level of writing performance.




MEASUREMENT INSTRUMENTS FOR WHICH CLEARANCE IS SOUGHT

Student Outcome Data. Student achievement in writing will be measured using two instruments: (1) the Alaska State Writing Assessment (ASWA), and (2) the NWREL Writing Essay Assessment, modeled closely after the Oregon State Writing Assessment.

Alaska State Writing Assessment

The Alaska State Writing Assessment was developed in 2004-2005 by a testing contractor, Data Recognition Corporation (DRC), as one part of the new, comprehensive Alaska Standards Based Assessment. DRC has test development experience with 18 states. The steps in DRC’s item and test development process were designed in accordance with the Standards for Educational and Psychological Testing (AERA, APA, NCME, 1999). The development process began with an analysis of the Alaska content standards and grade-level expectations, leading to a preliminary test blueprint and item specifications that defined the standards to be assessed at each grade, arranged by the relative importance of the content domains. DRC then conducted a series of meetings with Alaska teachers and subject-area supervisors over a 20-month period to develop the ASWA.

Educators from diverse Alaska geographic and demographic populations participated in all stages of the item and test development process. They were involved in a) reviewing grade-level expectations, b) deciding what the test should measure, c) writing and reviewing test questions, and d) deciding which test questions actually appeared on the test. The review process included thorough attention to bias, fairness, and sensitivity issues. In addition, all test questions were reviewed for content validity by an independent team of writing experts from Alaska and across the country.

Results from a reliability analysis for the state writing assessment are reported in the “Spring 2005 Alaska Standards Based Assessments (SBAs) Operational and Field Test Technical Report.” A series of tables presents reliability of the writing assessment broken down by student special population groups, including ethnicity, socioeconomic status, English language proficiency status, migrant status, special education status, and gender. The reliability coefficients for the 5th grade writing assessment ranged from r = .89 to r = .92, indicating a high level of consistency of scores across the writing items for all special populations. These results are based on 9,700 5th grade students who took the test in spring 2005.

Question types for the ASWA include multiple-choice items, short-answer items, and extended written responses in the narrative, informative, and descriptive modes. Some multiple-choice questions are based on short fiction or nonfiction passages. The content strands of the ASWA measure student ability to: a) write paragraphs that maintain focus, b) write different types of compositions, such as stories and personal letters, and c) edit sentences using the punctuation and capitalization conventions of Standard English.

NWREL Writing Essay Assessment

Student achievement in writing will be analyzed by having student essays scored by teams of raters, using a process parallel to that used in the 4th and 7th grade Oregon statewide writing assessment. Students will receive the essay prompt and instructions to guide them through a three-day process of planning, drafting, and finalizing an essay. This process is very common in writing classrooms whether or not a trait-based approach to instruction is used and, therefore, should provide students a good opportunity to show their best writing. Specific directions for assessment administration will be provided to teachers, explaining the purpose and procedures for each of the three days. Also, teachers will be provided copies of the current state of Oregon Accommodations Table and Modifications Table for writing test administration. Student accommodations do not change the content and/or performance standards of what is being measured by the assessment. Student essays written with assessment accommodations will be included in all data analyses.


Baseline writing samples will be collected in September or early October of each data collection year; follow-up samples will be collected the following May. All prompts for the writing assessment will be in the expository mode for consistency of performance comparison across all subgroups of students. Students will work on their essays for 45 minutes on each of three successive days, to provide the opportunity for a natural writing process including planning, drafting, and revision. The intervention is largely aimed at improving the willingness and ability of students to substantively revise their writing after reflection on first drafts; it is therefore not possible to estimate the impact of the intervention from an assessment of first drafts produced during a single writing session. The essay writing sessions will be proctored by the participating teachers in both control and treatment groups.


Each essay will be scored using the holistic and six analytic rubrics from the 6+1 Trait Writing model; presentation will not be scored. NWREL maintains a pool of writing assessment raters who will score the essays without any knowledge of a student’s experimental condition. These raters have scored papers for school district clients in the U.S. and foreign countries for over 20 years, and have recently begun scoring writing samples provided by job applicants for major corporations. The team is experienced in using the 6+1 Trait rubrics as well as client-specific rubrics on papers written in persuasive, expository, and narrative modes. For this study, one scoring team will apply the holistic rubric while a second team will apply the six analytic rubrics.


The rating team is re-calibrated at the beginning of each set of papers to ensure proper application of the rating scales to each sample and to ensure inter-rater reliability, which is monitored in real time throughout each scoring job. Each essay is scored by two raters; if their scores on any trait differ by more than one point, a third rater resolves the discrepancy. Ratings are entered directly into an online database prepared for each job using identifying information provided by the district or client. This application includes a variety of reporting features, including facilities for ongoing monitoring of rater reliability using the AC1 statistic (Gwet, 2002a, 2002b) and a variety of detailed and summary reports.
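The two-rater adjudication rule described above can be sketched as follows. This is a hypothetical helper for illustration only, not NWREL's actual scoring software; in particular, the rule for combining the third rating with the originals is an assumption.

```python
def final_trait_score(rater1, rater2, rater3=None):
    """Combine two trait ratings; use a third rating to resolve discrepancies."""
    if abs(rater1 - rater2) <= 1:
        # Scores within one point: treat the raters as in agreement and average.
        return (rater1 + rater2) / 2
    if rater3 is None:
        raise ValueError("scores differ by more than one point; third rating required")
    # Adjudication: average the third rating with whichever original rating it
    # is closer to (one common resolution rule; the actual rule may differ).
    closer = min((rater1, rater2), key=lambda r: abs(r - rater3))
    return (closer + rater3) / 2
```

For example, `final_trait_score(4, 5)` averages the two agreeing ratings to 4.5, while a 2-versus-5 discrepancy is routed to a third rater before a final score is produced.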


We plan to have each student essay scored by two teams of raters. One team will produce a holistic score reflecting the overall quality of the student writing. The other team will produce a set of scores based on the traits of writing that are included in the intervention. The first set of hypothesis tests will be conducted using the holistic ratings, which are not overly aligned with the intervention, as the dependent variable. The trait-based ratings will then be used for supplementary, parallel analyses of the impact of the intervention on specific aspects of student writing.


We are making arrangements with Chesapeake Research Associates (CRA) to receive the original raw student essays from classrooms and to repackage the student papers for processing by the rating teams. All participating teachers will use a pre-addressed, stamped package to send all completed student essays to CRA, which will code each essay to maintain anonymity and then forward the essays to NWREL for scoring. The raters will be blind to whether each essay is a pre-test or post-test, and to whether it comes from an experimental or control group school. The essay ratings will then be returned to Chesapeake Research Associates and assembled into a data file for analysis, along with relevant school-level data collected from NCES and state department of education sources.


By using this method, all research team members at NWREL will be blind to the source of the essays during the rating process. Chesapeake Research Associates will maintain the master copy of the raw essays and the complete raw data file; CRA (or other IES consultants) will thus be able to verify that all reported statistical analyses are completed as appropriate, using the entire original data set.

The Writing Essay Assessment will be administered in the fall and spring of the treatment year and will provide a more specific look at student essay writing performance, complementing the ASWA results, which combine one short writing item with a larger number of multiple-choice items.

Teacher Implementation Data. Information on teacher classroom practices will be collected through: (1) teacher surveys, (2) teacher interviews, and (3) classroom observations administered three times during the treatment year.

Teacher Survey

A teacher survey will be administered to all teachers in both treatment and control schools. The survey contains Likert-type questions about teacher practices and opinions, descriptive information about use of instructional time, and open-ended questions about instructional and learning issues. The results will be used by the research team to describe the treatment and control teachers and their classrooms.

Teacher Interview Protocol

A teacher interview protocol will be used for collecting teacher input on their instructional and assessment practices, curriculum design, and specific learning objectives for students. It will be used in connection with a classroom observation protocol described next.

Classroom Observation Protocol

A classroom observation protocol will document how teachers implement a specific lesson in their classrooms. The protocol will ask observers to make note of specific aspects of classroom environment, teacher instructional and assessment activities, and student engagement. The data will correspond to the ten 6+1 Trait® Writing instructional strategies. The information gathered through classroom observations will be compared to the interview data, and used to build a more complete picture of classroom instruction and learning.

SUPPORTING STATEMENT FOR PAPERWORK REDUCTION ACT SUBMISSION

B. DESCRIPTION OF STATISTICAL METHODS

Collecting information using statistical methods (that is, statistical sampling) can reduce respondent burden while preserving the external validity of the study. The following sections describe the statistical and alternative sampling methods proposed for this study.

  1. Respondent Universe and Sampling Methods

The study will not statistically sample students or schools from a fixed universe; instead, it will identify and recruit eligible Oregon schools containing 5th grade classrooms with a minimum of 30 5th grade students. Eligible schools will be those that are not already thoroughly implementing a 6-trait based writing approach and that are willing and able to participate in the research protocol. There are approximately 555 schools with 5th grade classrooms and a 5th grade enrollment of 30 or more; it is unknown how many of these are not already thoroughly implementing a 6-trait based writing approach, or how many will be interested in participating.

Accordingly, the universe of cases will be those 5th grade students in schools which volunteer to participate in the study, and other students/schools like them. Since the schools are not randomly sampled from a known population, generalization to other schools cannot be made strictly on statistical grounds based on this single study. In other words, it cannot be known with certainty whether any other specific school or student is part of the population from which this sample was drawn. Rather, as in most group randomized trials, generalization to other sites must be based on considered judgments about the similarity of those sites to schools in the study, and/or schools from replication studies in additional sites and under additional circumstances.

In a cluster randomized design, the response of units of randomization is the key response rate issue. In this study, schools are the unit of randomization. Response rates for schools are expected to be at or very near 100% since participating schools will have chosen to participate.


  2. Procedures for Collection of Information

Statistical Methodology for Stratification and Sample Selection

  1. Procedures for Statistical Sampling

As described in the previous section “1. Respondent Universe and Sampling Methods”, the study will not involve statistical sampling of schools. From the pool of eligible elementary schools, 64 schools will be selected and half will be randomly assigned to the treatment group that will implement the traits-writing model, and half will be assigned to the control group (for a total of 32 experimental schools and 32 control schools). Random assignment will be completed independently by Chesapeake Research Associates to ensure that such assignment is independent of school recruitment efforts.

Eligible schools will be those that are not already thoroughly implementing a 6-trait based writing approach, and that are willing to participate in the research protocol. Oregon uses a trait-based framework for statewide writing assessments in grades 4, 7 and 10, but the state provides little support for integration of this framework in classroom teaching and learning. Some schools provide teachers with professional development in trait-based writing instruction to fill this gap. A screening process will exclude those schools in which elementary level teachers are already thoroughly trained in this model and are already integrating this approach in their classroom instruction and student assignments. It is expected that principals and teachers at schools which have not made strong efforts in the past to use this approach in their classrooms will be interested in participating in the study, in order to receive the professional development that is provided. Furthermore, these schools will provide a reasonable counterfactual, with teachers who have some awareness of the popular trait-based writing models but who have not been thoroughly trained and have not used this model with students.

Randomization of schools will occur within strata defined by districts. Within districts, the two schools with the highest percentage of students eligible for free or reduced price lunch will be randomly assigned to conditions, then the next highest pair of schools, and so on, so that the experimental and control groups will be reasonably well balanced in terms of socio-economic status (SES) and variables that correlate highly with SES. In districts with an odd number of participating schools, the unpaired school will be randomly assigned to condition.
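A minimal sketch of this pairing-and-assignment scheme is shown below, assuming a list of (school, percent free/reduced-price lunch) tuples for one district. The data and function name are hypothetical; in the study itself the assignment will be performed independently by Chesapeake Research Associates.

```python
import random

def assign_within_district(schools, rng=random):
    """schools: list of (name, pct_free_reduced_lunch) tuples for one district."""
    # Rank schools by free/reduced-price lunch percentage, highest first.
    ranked = sorted(schools, key=lambda s: s[1], reverse=True)
    assignment = {}
    # Walk down the ranking two schools at a time, randomizing within each pair.
    for i in range(0, len(ranked) - 1, 2):
        pair = [ranked[i][0], ranked[i + 1][0]]
        rng.shuffle(pair)
        assignment[pair[0]] = "treatment"
        assignment[pair[1]] = "control"
    # An unpaired school in an odd-sized district is assigned by coin flip.
    if len(ranked) % 2 == 1:
        assignment[ranked[-1][0]] = rng.choice(["treatment", "control"])
    return assignment
```

Because every adjacent pair contributes one school to each condition, the two groups stay balanced on SES regardless of how the individual coin flips come out.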

Within a participating school, all 5th grade writing teachers will participate regardless of previous writing content skills, as will all of their 5th grade students (there will be no sampling within schools). Typical Oregon schools have two 5th grade teachers, which will yield approximately 64 teachers and 1,536 students in each condition at baseline (64 teachers and an average of 24 students per class). Assuming that as many as one sixth of students present at baseline may not complete the post-test, yielding a conservative estimate of 20 students per classroom, the final student sample for this analysis is expected to be at least 1,280 in the experimental group and 1,280 in the control group.

Student attrition, as well as crossovers and students who enter study classrooms after baseline, will be monitored. Attrition rates will be reported in detail; if attrition is large, or differs across conditions, the final sample may be weighted during the analysis to preserve the representative nature of the sample. Students who cross over from treatment schools to control schools will be included in the treatment sample for the main analysis, which will be an intent-to-treat (ITT) analysis. If particular teachers within the treatment group fail to implement the model in their classrooms (“no-shows”), those teachers and students will still be included in the treatment group for the ITT analysis. Students entering study schools during the year will not be included in the analysis.


Data Collection Plans

Data collection will occur for two groups of schools corresponding to the 2007-08 and 2008-09 school years. Activities for each group will be coordinated for three data levels: student, teacher, and school.


Student achievement in writing will be assessed with holistic and trait scores of a written essay from all participating students in September and May of the experimental year. The writing essays will be administered by participating teachers from both control and treatment groups. This is consistent with the Oregon statewide writing assessment. NWREL staff has developed a student assessment booklet that will include the essay prompt and instructions to guide the student through a three-day process of planning, drafting, and finalizing an essay. Teachers will also provide coded, non-identifiable data on a few demographic and assessment-related variables for each student using the Coded Student Data Form. Teachers will return completed forms to Chesapeake Research Associates (CRA) in a stamped, addressed envelope, where they will be recorded and prepared for scoring by NWREL raters.

The teacher survey will be completed by all teachers in both treatment and control groups prior to the beginning of treatment (September), in mid-year (February), and again toward the end of the treatment year (May). NWREL will provide teachers a copy of the survey and a stamped, addressed return envelope. NWREL will record receipt of surveys and follow up via e-mail or telephone with teachers who do not return the survey within two weeks. School-level data, including demographic and student achievement data, will be collected in the summer preceding the treatment year via the internet.


Estimation Procedures


The impact of the intervention on student performance will be estimated using an intent-to-treat (ITT) analysis, in which data from all subjects will be analyzed as part of the group to which those subjects were originally randomly assigned. We believe that crossovers and no-shows will be minimal; therefore, no provisions are included below for separate estimates of the impact on the treated (IOT). If this assumption proves flawed, the plan may be revised to include supplemental analyses bracketing the lower and upper bounds of the IOT estimate.


Student writing scores will be the dependent variable, predicted primarily by membership in the treatment or control group. Several covariates will be used to increase the efficiency of the design. All students with post-test scores will be included in the analyses; baseline scores will be imputed for any students who do not complete the pre-test. A hierarchical linear model will be used to analyze the treatment and control group differences in student achievement. The data analysis will be accomplished using a mixed model ANOVA in which the effect of the experimental manipulation is estimated as a fixed effect, while the effects of school level variables and the individual differences among teachers and students will be estimated as random effects. Specific data analysis techniques are described below for each research question:


Research Question 1: What is the impact of 6+1 Trait® Writing on student achievement in writing?

The analysis of student outcomes will have two components. First, outcome data will be subjected to a series of descriptive analyses to examine the distributions and ensure that statistical assumptions are reasonably met for the primary analyses. Second, outcome data will be subjected to a series of inferential analyses for hypothesis testing, which will involve the comparison of the treatment group and the control group. Since our data will have a nested structure, we will use hierarchical linear modeling (HLM) for this purpose. This will be done for the entire data set, then repeated for the subgroups. Subgroup analyses, however, can also be carried out by entering the subgroup identity as a fixed factor at Level-1 (student level).


In the following, we will outline the statistical models that will be used for hypothesis testing. Multilevel modeling techniques will be used to properly account for the nested structure of the data, i.e., the fact that observations within groups are not independent. The purpose of the multilevel modeling is, therefore, primarily to obtain an efficient, unbiased estimate of the impact of the treatment. Covariates will be used primarily to improve the efficiency of the model by controlling extraneous variance, although some covariates will also be explored as possible moderators of the treatment effect.


Two models are presented in the following. The first is a two-level CRT with a Level-2 covariate. The second is the generalized two-level CRT, which can include any number of covariates at both levels; it is presented for the purpose of discussing possible expansion of the model.


  1. Two-level CRT with Level 2 Covariate


    • Level 1 Model (i.e., Student Level Model)


Yij = β0j + eij,    eij ~ N(0, σ2)


    • Level 2 Model (i.e., School Level Model)


β0j = γ00 + γ01Wj + γ02Sj + u0j,    u0j ~ N(0, τ|s)


where:

Yij: outcome measure of student i at school j.

β0j: mean outcome measure of students at school j.

eij: residual associated with each student. It is assumed to be normally distributed with the mean of 0 and the variance of σ2.

γ00: grand mean for the outcome measure.

γ01: treatment effect.

Wj: indicator variable. Treatment group is indicated by 0.5; control group, -0.5.

γ02: coefficient for the school-level covariate.

Sj: school-level covariate, which is the school mean for the previous year’s 5th grade test score.

u0j : residual associated with the school mean of the outcome measure. It is assumed to be normally distributed with the mean of 0 and the variance of τ|s.


In this model, the test score of a student is defined as the school-level mean plus the random error associated with each student. The school-level mean, in turn, is defined as the grand mean plus the effect of the treatment plus the random effect associated with the school. The school-level mean, however, is adjusted for the covariate which is the previous year’s school mean.
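Substituting the school-level (Level-2) equation into the student-level (Level-1) equation yields the single combined equation that is actually estimated, shown here in LaTeX notation consistent with the definitions above:

```latex
% Reduced-form (combined) version of the two-level model above
Y_{ij} = \gamma_{00} + \gamma_{01} W_j + \gamma_{02} S_j + u_{0j} + e_{ij},
\qquad e_{ij} \sim N(0, \sigma^2), \quad u_{0j} \sim N(0, \tau_{|s})
```

The fixed part (the γ terms) carries the treatment contrast and covariate adjustment; the two random terms partition the unexplained variance between schools and students.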


The random assignment to conditions will take place at the school level in the study. Consequently, the treatment effect will show up at the school level, and reducing the error variance at the school level will therefore yield a gain in power.


The HLM model above is essentially a 2-level nested mixed-model ANCOVA, in which treatment and the school-level covariate are entered as fixed effects, whereas the cluster (school) is entered as a random effect.
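To make the fixed-effect logic concrete, the following simulation (hypothetical effect size and variance values, pure Python) shows that with the ±0.5 effect coding of Wj, the γ01 coefficient is simply the difference between treatment and control school means. The actual analysis will of course fit the full mixed model with covariates and student-level variance.

```python
import random
import statistics

rng = random.Random(42)
true_effect = 0.25          # assumed treatment effect, in SD units
schools_per_group = 32      # 32 treatment + 32 control schools, as planned

# Simulate school-mean outcomes: treatment effect * W plus a school random effect
def school_mean(w, sd_between=0.3):
    return true_effect * w + rng.gauss(0, sd_between)

treatment = [school_mean(+0.5) for _ in range(schools_per_group)]
control = [school_mean(-0.5) for _ in range(schools_per_group)]

# With W coded +0.5 / -0.5, the gamma01 coefficient reduces to the simple
# difference between the two groups' mean outcomes.
gamma01_hat = statistics.mean(treatment) - statistics.mean(control)
print(f"estimated treatment effect: {gamma01_hat:.2f}")
```

The estimate recovers a value close to the assumed 0.25, with sampling error governed by the between-school variance and the number of schools.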



  2. Generalized Two-level CRT


The model can be expanded by identifying and entering more covariates. Such covariates can be either at the student or school level. The following shows the general two-level HLM with multiple covariates.


    • Level 1 Model (i.e., Student Level Model)


Yij = β0j + β1ja1ij + β2ja2ij + …+ βpjapij +eij


    • Level 2 Model (i.e., School Level Model)


β0j = γ00 + γ01W1j + γ02W2j + … + γ0sWsj + u0j


β1j = γ10 + γ11W1j + γ12W2j + … + γ1sWsj + u1j

:

:

βpj = γp0 + γp1W1j + γp2W2j + … + γpsWsj + upj



Theoretically, any number of covariates could be entered. However, more covariates mean that a larger sample is necessary to estimate the coefficients. Given the limited sample size we can afford, and because in an experimental study the effects of covariates are randomly distributed across conditions, our plan is to use the simplest model that reflects the data structure accurately enough. This means exercising due diligence in choosing only good covariates, with a maximum of four covariates.



Research Question 2: How do student impacts vary by pre-existing characteristics of schools, teachers and students?

Additional analyses will be performed to provide further information about the impact of the 6+1 Trait® Writing model by uncovering factors that moderate the level of impact of the treatment. The primary concerns here are that student or school variables examined must not be reactive to treatment condition, and a valid counterfactual must be available and identifiable in the control condition.


Analyses of demographic subgroups will be performed in this context, as well as the contribution of a measure of prior exposure of teachers to trait-based writing models. These additional factors will be entered at the appropriate levels of the model specified in the primary analysis; for instance, the measure of prior exposure will be entered into the Level-2 (school-level) equation. Subgroup analyses, though exploratory, will also be performed by entering such school-level variables as “%[subgroup]” into the Level-2 equations, while entering a student-level variable “[subgroup] membership” into the Level-1 equation. These analyses are intended to aid interpretation of the results of the primary analysis.


Each step in the above data analyses will be fully documented and reported, including the processes and techniques used and the detailed results and effect sizes. Student level demographic variables that may have implications when aggregated at the classroom or school level will initially be modeled at each level, and removed from particular levels if no effects or interactions are found. For example, race/ethnicity may have effects at the level of individual students, but may also have classroom or school effects when aggregated to those levels. Additional exploratory analyses will be conducted to examine relationships between level of implementation and impact on student achievement.


Statistical Power Estimates


To determine the level of statistical power attainable from various sample sizes, a set of analyses was performed to estimate the minimum detectable effect size under a set of scenarios. The minimum detectable effect size was defined here as the effect size necessary to maintain statistical power of 0.8. The power analysis presented in the following was specifically performed for detection of the main effect of treatment on the student outcome.

Collaborating with the Oregon Department of Education, we calculated the ICC for a 2-level CRT model very similar to that proposed for the current study (students nested within schools). The ICC was calculated using the 2005-2006 Grade 4 Oregon Writing Assessment data (N = 39,057). First, the unconditional ICC was calculated by fitting a 2-level CRT model without any covariate. Then, the conditional ICC was calculated by fitting a 2-level CRT model with the previous year’s building mean as the school-level (L2, for Level 2) covariate. The effect of this covariate, R2L2, was calculated from the school-level variance in the two models. Specifically:


  • When a 2-level CRT model without covariate was fit to the data, school-level variance (τ) was 2.917, and student-level variance (σ2) was 18.221. The unconditional ICC was, therefore, calculated as τ / (τ + σ2) = 0.138.


  • When a 2-level CRT model with the covariate was fit to the data, school-level conditional variance (τ|x) was 1.574, and student-level variance (σ2) was 18.225. The conditional ICC was, therefore, calculated as τ|x / (τ|x + σ2) = 0.079.


The effect of the covariate, R2L2, was calculated from the initial school-level variance (τ) and the school-level variance conditional on the covariate (τ|x): R2L2 = 1 – (τ|x / τ) = 0.428. It appears that the previous year’s school mean (the school-level covariate) and the true school mean are moderately to strongly correlated, RL2 = 0.654.
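The ICC arithmetic above can be reproduced directly from the published variance components (values quoted from the Oregon Grade 4 analysis described above):

```python
# School-level (tau) and student-level (sigma^2) variance components
tau_uncond, sigma2_uncond = 2.917, 18.221   # model without covariate
tau_cond, sigma2_cond = 1.574, 18.225       # model with prior-year school mean

# ICC = between-school variance as a share of total variance
icc_uncond = tau_uncond / (tau_uncond + sigma2_uncond)
icc_cond = tau_cond / (tau_cond + sigma2_cond)
print(round(icc_uncond, 3), round(icc_cond, 3))  # 0.138 0.079
```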


Based on this information, a power analysis was performed for the present study using the Optimal Design Software, the unconditional ICC of 0.14, and the effect of the covariate (R2L2) of 0.43. Exhibit 1 shows the number of schools required to attain MDEs ranging from .10 to .25.



Exhibit 1. Statistical Power Estimates

Unconditional ICC: 0.14        R²L2: 0.43

  Number of Schools | Minimum Detectable Effect Size (at power of .8)
  ------------------+------------------------------------------------
        320         |  .100
        144         |  .150
         84         |  .200
         54         |  .250

The general goal of the power analysis was to estimate the number of schools that must be sampled to maintain power of 0.8 for a minimum detectable effect size of δ = 0.25. In performing the power analysis, the number of students per teacher (n) was set to 20, the number of teachers per school (J) was set to 2, and the default alpha level of 0.05 was used.
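The Exhibit 1 figures can be approximated with the standard closed-form MDE expression for a two-level cluster-randomized trial with a cluster-level covariate (a sketch, not the Optimal Design computation itself: the multiplier m ≈ 2.8 corresponds to two-tailed α = .05 at power .8, and 40 students per school reflects 2 teachers × 20 students).

```python
import math

def approx_mde(n_schools, n_per_school=40, rho=0.14, r2=0.43, m=2.8):
    """Approximate MDE for a two-level cluster-randomized trial with
    schools split evenly between conditions and a school-level covariate
    explaining r2 of the between-school variance.
    m ~ 1.96 + 0.84 (two-tailed alpha = .05, power = .8)."""
    variance_term = rho * (1 - r2) + (1 - rho) / n_per_school
    return m * math.sqrt(4 * variance_term / n_schools)

for n_schools in (320, 144, 84, 54):
    print(n_schools, round(approx_mde(n_schools), 3))
```

These approximations land within about .01 of the Exhibit 1 values; Optimal Design uses exact t-distribution critical values, so its figures differ slightly.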


The results of this power analysis indicate that the planned sample of 64 elementary schools will provide adequate power to detect an MDE of .24 or larger given no school-level attrition, and enough power to detect an MDE of .25 or larger even with considerable school-level attrition, which we hope to avoid. The number of schools included in Wave 2 of the study can be adjusted up or down depending on preliminary analyses of data collected from the Wave 1 schools. Actual power for the study is likely to be somewhat better than this conservative estimate, since the estimate does not account for the individual-level baseline writing measure as a covariate; the degree to which an individual-level covariate improves power at the school level is unpredictable and is not modeled in the Optimal Design software.


Unusual Problems Requiring Specialized Sampling

There are no such unusual circumstances.

Use of Periodic Data Collection Cycles to Reduce Burden

This is a one-time research study.


All 61 regular elementary schools in the Anchorage School District will be contacted for participation. A participation rate of 70 percent or above would yield more than 40 schools, enough to ensure an adequate level of statistical power.

Although no statistical sampling of schools will take place in connection with this study, statistical sampling of students and teachers within participating schools will be used to reduce respondent burden and study cost for three of our five measures.

Alaska State Writing Assessment

No sampling of students will be used. Data will be collected from all 5th graders in the participating schools (both treatment and control) twice during the study year, from approximately 1,320 students.

NWREL Writing Essay Assessment

A representative sample of students at participating schools (both treatment and control) will take the NWREL Writing Essay Assessment twice during the study year. Stratified sampling, with schools as strata, will be used to identify students.
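A school-stratified draw of this kind can be sketched as follows; the school names, roster sizes, and per-school sample size here are hypothetical, not NWREL's actual sampling plan.

```python
import random

def stratified_student_sample(rosters, per_school, seed=2007):
    """Draw the same number of students from each school stratum.
    `rosters` maps school name -> list of student IDs (hypothetical data)."""
    rng = random.Random(seed)
    return {
        school: sorted(rng.sample(students, min(per_school, len(students))))
        for school, students in rosters.items()
    }

rosters = {
    "School A": [f"A-{i:03d}" for i in range(55)],
    "School B": [f"B-{i:03d}" for i in range(48)],
}
sample = stratified_student_sample(rosters, per_school=20)
print({school: len(ids) for school, ids in sample.items()})  # 20 per school
```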

Teacher Survey

No sampling of teachers will be used. Data will be collected from all 5th-grade teachers in the participating schools (both treatment and control), approximately 120 teachers.

Teacher Interview Protocol

A representative sample of teachers, about 20 from each condition, will be interviewed three times during the study year. The interviews will take place when the NWREL observer visits their classrooms. We are currently weighing the advantages and disadvantages of stratified sampling with schools as strata, which would result in one teacher representing each participating school. Factors that may affect this decision are (a) the relative size of between-school versus within-school response variance, and (b) the cost of visiting a school.

Classroom Observation Protocol

Classroom observations and teacher interviews will be conducted as a set, three times during the study year. Accordingly, classroom observations will be conducted with the same sample of teachers who are interviewed.

  3. Methods to Maximize Response Rates and to Deal with Issues of Non-response

We expect very high response rates for this study. Since the study is conducted within a single school year, as part of schoolwide curriculum and instruction implemented for all students in 5th-grade classrooms, it is unlikely that attrition or non-response of students or teachers will be great or that it will differ across conditions. Some students may be absent for one of the assessments, and some students will likely move into or out of each school; this will be monitored closely. Study personnel will have regular contact with participating teachers and will have ample opportunities to facilitate the collection of teacher surveys and student essays. As noted above, the key issue is not non-response of individuals but non-response of entire schools, since schools are the unit of assignment and analysis, and school estimates are robust to the minor expectable loss of individual responses. Recruitment of schools will include procedures to ensure that each school as a whole is willing to complete the study.


  4. Pilot Testing of Instruments

Pilot testing is planned for the three instruments used to collect teacher implementation data: (1) the Teacher Survey, (2) the Teacher Interview Protocol, and (3) the Classroom Observation Protocol. Nine or fewer respondents will be involved in the pilot testing of these instruments. Teachers will be asked to complete each instrument as if they were in the full-scale study, to confirm the time required to complete it, and to be debriefed in a phone or face-to-face interview about their experiences, to ensure that items are clear and that they gather the information intended.

  5. Contractor Name Responsible for Design, Analysis, and Data Collection for the Study

This study will be conducted by the Center for Research, Evaluation, and Assessment at the Northwest Regional Educational Laboratory (NWREL), under the Regional Educational Laboratory contract with the Institute of Education Sciences, U.S. Department of Education. Chesapeake Research Associates (CRA) is providing consultation on research design and analysis for the study.

Michael Coe, Principal Investigator, NWREL, 503-275-9497

Gary Nave, Project Analyst, NWREL, 503-275-9573

Makoto Hanita, Project Analyst, NWREL, 503-275-9628

Michael Puma, Project Consultant, CRA, 410-897-4968

David Connell, Project Consultant, CRA, 410-897-4968




