Study of Education Data Systems and Decision Making

OMB: 1875-0241


B. Collections of Information Employing Statistical Methods

1. Respondent Universe

Case Studies

Our case study data collection consists of school site visits designed to provide a more detailed look at teacher and school leader practice. For the case study sample, SRI relied on a purposive approach to sampling. By focusing our fieldwork on schools where we know that many teachers are actively looking at student data, we greatly increase the likelihood of seeing effects of data use on practice, compared with a sample of schools drawn at random. During the spring of 2006, project staff engaged in an in-depth process of selecting appropriate schools and districts for site visits in the fall of the 2006-07 school year. School sites were identified through three methods:

  • Polling of TWG members and other leaders in educational technology, researchers, vendors, and staff of professional associations such as the Consortium for School Networking (CoSN) and the State Education Technology Directors Association (SETDA), supplemented by a search of conventionally and Web-published literature on data-driven decision making, to identify schools and districts with reputations as leaders in this area.

  • Phone interviews with selected districts active in data-driven decision making, during which districts were asked to identify both their most exemplary school in terms of data use and a school that typifies data use within their district.

  • Self-nomination of schools at the 2006 National Educational Computing Conference (NECC), which brings together teachers, technology coordinators, administrators, policy makers, and industry representatives to discuss education technology issues.

The list of nominated districts was then grouped by the type of data system used: commercial systems or locally developed systems. The plan was to select three districts with locally developed systems and one district for each of seven frequently used commercial systems (e.g., SchoolNet, eScholar). Publicly available data were also reviewed to characterize nominated districts in terms of the demographics of their student bodies and their students’ achievement trends (i.e., we did not select any district with declining achievement). Geographic diversity was also sought across the 10 districts.

The study currently provides for 10 district case studies and three schools per district, for a total of 30 schools. As part of the school site visits, six teachers at each school are participating in an assessment scenario interview, designed to provide insight into teachers’ understanding of some of the basic assessment and statistical concepts needed to make sense of student data. As the data collection unfolded this spring, it became apparent that there were very large differences across teachers and that some teachers struggled even to “read” a data table or graph. The fact that a sizable proportion of the teacher sample in districts considered leaders in the use of data could not form valid interpretations of data when assessed individually suggests that school use of data to drive decision making may be an unrealistic goal. It can be argued, however, that the individual assessments administered by the contractor underestimate what school staff can do because they preclude teacher communication and collaboration around the data. Many school staffs engage in data analysis and reflection as a group activity, and it may be that groups arrive at sounder conclusions and consider more variables than do individual teachers working alone. Thus, the assessment scenario interviews may underestimate the quality of the conclusions actually drawn from data in schools where this is a group activity. The proposed fall 2007 site visits will provide an opportunity to evaluate the differences in teacher responses in group versus individual settings.

In addition, we have become aware that even though our 10 case study districts were nominated as leaders in the use of student data systems to inform instruction, their learning and implementation in this area are still underway. We expect that many of the sites will make significant progress between spring 2007 and spring 2008, and we would like to return to these districts in spring 2008.

The district representatives for the nine districts visited during spring 2007 will be contacted by phone to discuss the willingness of the district and its three schools to participate in another round of data collection in spring 2008. SRI staff will explain the reasons for this additional data collection activity and answer any questions that the respondent might have. We propose to make these calls in mid-August to determine how many districts will consent to the visits. (The site visit to the 10th district in the original sample will be conducted in September 2007 using the revised data collection approach and, therefore, that district would not be revisited in spring 2008.) The number of districts included in the second round of site visits will determine how many additional districts will need to be recruited for site visits in fall 2007.

To obtain additional analytic power and to anticipate potential attrition from the case study sample (i.e., the possibility that some schools will choose not to participate in the second data collection activity), we propose adding up to five additional districts, with three schools in each (15 schools in total), which would receive first site visits in fall 2007. These districts would be selected using the same criteria as the initial 10 case study sites: districts already identified through polling of leaders in educational technology and TWG members, and through interactions with district leaders active in data-driven decision making during the study’s various outreach activities. For each district selected, project staff will contact the district representative by phone to explain the study, answer any questions, and obtain recommendations for three site visit schools at either the elementary or middle school level, based on the following criteria:

  1. One school that the district considers exemplary in its data use policies.

  2. One school that has shown dramatic improvement in its use of data to improve instruction and student outcomes.

  3. One school that is typical of the district with respect to use of data systems.

To the extent possible, each district will be asked to recommend schools that serve demographically similar student populations at the same grade levels. Priority will also be given to schools serving large numbers of low-income students and to schools that have experienced improved student achievement. Project staff will work with the district representative to establish contact with school staff and coordinate the site visit schedule as soon as possible after the initial contact.

District Survey

The proposed sampling plan for the district survey was developed with the primary goal of providing a nationally representative sample of districts to assess the prevalence of district support for data-driven decision making. A secondary goal was to provide numbers of districts adequate to support analyses focused on subgroups of districts. To conduct such analyses with reasonable statistical precision, we have created a six-cell sampling frame stratified by district size and poverty rate. From the population of districts in each cell, we will sample at least 90 districts. Below, we discuss our stratifying variables, describe the overall sampling frame, and show the distribution of districts in the six-cell design.

District Size

Districts vary considerably in size, the most useful available measure of which is pupil enrollment. A host of organizational and contextual variables that are associated with size exert considerable potential influence over how districts can support data-driven decision making. Most important of these is the capacity of the districts to design data management systems, actively promote the use of data for educational improvement, and provide professional development and technical support for data interpretation. Very large districts are likely to have professional development and research offices with staff to support school data use, whereas extremely small districts typically do not have such capacity. Larger districts also are more likely to have their own assessment and accountability processes in place, which may support accountability and data-driven decision making practices. District size is also important because of the small number of large districts that serve a large proportion of the nation’s students. A simple random sample of districts would include few—if any—of these large districts. Finally, accountability-related school improvement efforts are much more pronounced in large districts. For example, longitudinal analyses of district-level data from the Title I Accountability Systems and School Improvement Efforts study indicated that although the total number of schools identified for improvement has remained approximately the same from 2001 to 2004, there has been a steady trend toward a greater concentration of identified schools in large districts (U.S. Department of Education, 2006).

We propose to sort the population of districts nationally into three categories so that each category serves approximately equal numbers of students, based on enrollment data provided by the National Center for Education Statistics’ Common Core of Data (CCD):

  • Large (estimated enrollment 25,800 or greater). These are either districts in large urban centers or large county systems, which typically are organizationally complex and often are broken up into subdistricts.

  • Medium (estimated enrollment from 5,444 to 25,799). These are districts set in small to medium-size cities or large county systems. They also are organizationally complex, but these systems tend to be centralized.

  • Small (estimated enrollment from 300 to 5,443). The small district group typically includes suburban districts, districts in large rural towns, small county systems, and small rural districts. These districts tend to have more limited organizational capacity.

Districts with 299 or fewer students will be excluded from the study. Such districts account for approximately one percent of all students and 21 percent of districts nationwide. The distribution of districts among the size strata and the proportion of public school students accounted for by each stratum are displayed in Exhibit 6. The proportions of districts in the three size strata of the district sample (excluding districts with 299 or fewer students) are: large (2.2 percent), medium (13.5 percent), and small (84.3 percent).

Exhibit 6
Distribution of Districts and Student Population, by District Size*

Enrollment Size Category       Number of Districts   Percent of Districts   Number of Students (000s)   Percent of Students
Large (25,800 or more)                  249                   1.8                    15,834                    33.0
Medium (5,444 – 25,799)               1,497                  10.6                    15,844                    33.0
Small (300 – 5,443)                   9,378                  66.6                    15,853                    33.0
Very small (299 or fewer)             2,956                  21.0                       478                     1.0
TOTAL                                14,080                 100.0                    48,008                   100.0

* Based on 2004-05 NCES Common Core of Data (CCD).


District Poverty Rate

Because of the relationship between poverty and achievement, schools with large proportions of high-poverty students are also more likely than schools with fewer high-poverty students to be low achieving, and thus to be identified as in need of improvement. Under NCLB, districts are required to provide identified Title I schools with technical assistance to support school improvement activities, including assistance in analyzing data from assessments and other student work to identify and address problems in instruction and assistance in identifying and implementing professional development strategies and methods of instruction that have proven effective in addressing the specific instructional issues that caused the school to be identified for improvement. We expect that high-poverty districts face greater demands for educational improvement, as well as the demands of working with larger numbers (or higher proportions) of schools identified for improvement. Consequently, we want our sample to include a sufficient number of both relatively high-poverty and relatively low-poverty districts so that survey results from these districts can be compared.

As a measure of district poverty, we will use the percentage of children ages 5 to 17 who are living in poverty, as reported by the U.S. Census Bureau and applied to districts by the National Center for Education Statistics. The distribution of districts among strata and the proportion of students accounted for by each stratum are displayed in Exhibit 7.

Exhibit 7
Distribution of Districts and Student Population, by District Poverty Rate*

District Poverty Rate    Number of Districts   Percent of Districts   Number of Students (000s)   Percent of Students
Other (≤20%)                     8,555                 76.9                   33,370                    70.2
High (>20%)                      2,569                 23.1                   14,160                    29.8
TOTAL                           11,124                100.0                   47,530                   100.0

* Excluding districts with 299 or fewer students. Based on data from the 2003 U.S. Census for the percentage of children ages 5 to 17 who are living in poverty and applied to districts by NCES.
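
To make the classification concrete, the following minimal sketch (in Python) assigns a district to one of the six size-by-poverty cells using the enrollment thresholds and the 20 percent poverty cutoff described above. The field names and the dictionary representation of a district are illustrative assumptions; the actual CCD and Census extracts use their own variable names, and this is not the study's processing code.

```python
# Minimal sketch of the six-cell stratification described above.
# The "enrollment" and "poverty_rate" field names are illustrative.

def size_stratum(enrollment):
    """Assign a district to a size stratum; districts under 300 students are excluded."""
    if enrollment < 300:
        return None            # very small districts are out of scope
    if enrollment <= 5443:
        return "small"
    if enrollment <= 25799:
        return "medium"
    return "large"             # 25,800 or more

def poverty_stratum(poverty_rate):
    """Poverty stratum based on the percent of children ages 5 to 17 in poverty."""
    return "high" if poverty_rate > 20.0 else "low"

def cell(district):
    """Return the (size, poverty) cell for a district, or None if excluded."""
    size = size_stratum(district["enrollment"])
    if size is None:
        return None
    return (size, poverty_stratum(district["poverty_rate"]))

# Example: a district with 6,200 students and a 24 percent child poverty rate.
print(cell({"enrollment": 6200, "poverty_rate": 24.0}))   # ('medium', 'high')
```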


District Sample Selection Strategy

Our original proposal called for a sample of 500 districts. We had anticipated an 85 percent participation rate, which would result in approximately 425 respondents. We currently propose to increase our sample size (up to 588 districts) with the goal of obtaining approximately 500 respondents.

The two variables of district size and poverty rate generate a six-cell grid into which the universe of districts (excluding very small districts) can be fit. Exhibit 8 shows the strata, a preliminary distribution of the number of districts in each stratum, and the initial sample size in each cell.

For most analyses, we will be combining data across cells. When examining data from the full 498 responding districts, the confidence interval is ± 6.5 percent. When looking at data from all 249 high-poverty district respondents, the confidence interval is ± 9 percent. In those cases where we are examining survey responses in a single cell, 83 respondents will yield a statistic with a confidence interval of no more than ± 11 percent.1 For example, if we find that 50 percent of medium-size, high-poverty districts report a particular approach to supporting schools for data-based decision making activities, then the true population proportion is likely to be within 11 percentage points of 50 percent. More precisely, in 95 out of every 100 samples, the sample value should be within 11 percentage points of the true population value.
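
As a rough check on the single-cell figure, the sketch below computes the half-width of an approximate 95 percent confidence interval for a proportion under simple random sampling, using the 50 percent "yes" assumption noted in footnote 1. For 83 respondents this gives roughly ± 11 percentage points; the ± 6.5 and ± 9 percent figures quoted above for the combined groups are larger than this simple formula alone would produce, presumably because they incorporate design effects from the disproportionate stratified sample, an adjustment not shown here.

```python
import math

def ci_half_width(n, p=0.5, z=1.96):
    """Half-width of an approximate 95% confidence interval for a proportion
    under simple random sampling (footnote 1's 50 percent 'yes' assumption)."""
    return z * math.sqrt(p * (1.0 - p) / n)

# A single cell of 83 respondents: roughly +/- 11 percentage points.
print(round(100 * ci_half_width(83), 1))          # ~10.8

# As footnote 1 notes, the interval narrows when p differs from 0.5.
print(round(100 * ci_half_width(83, p=0.2), 1))   # ~8.6
```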

Exhibit 8
Number of Districts in the Universe and Quota Sample Size, by Stratum

                                      District Poverty Rate
District Size                Low (≤20%)      High (>20%)        Total
Large        Sample                  83               76          166
             Universe               160               89          249
Medium       Sample                  83               83          166
             Universe             1,155              342        1,497
Small        Sample                  83               83          166
             Universe             7,240            1,155        9,378
TOTAL        Sample                 249              249         491*
             Universe             8,555            2,569       11,124

* Initially we will sample 89 districts in each cell (i.e., 534 districts). As required, additional samples will be added (up to a total of 588 districts) to meet our quota of respondents in each stratum.
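
The footnote's quota logic (draw 89 districts per cell initially and release additional sample as needed, up to 588 districts overall) could be implemented along the lines of the sketch below. The frame here is assumed to be a mapping from each of the six cells to the list of district IDs in that cell; the release-in-waves approach is one plausible reading of the footnote, not a description of the study's actual sampling program.

```python
import random

def draw_initial_sample(frame, per_cell=89, seed=2007):
    """Draw a simple random sample of `per_cell` districts from each cell.

    `frame` maps a (size, poverty) cell to a list of district IDs.  Cells
    with fewer districts than `per_cell` (e.g., large/high-poverty) are
    taken with certainty.
    """
    rng = random.Random(seed)
    return {key: rng.sample(ids, min(per_cell, len(ids)))
            for key, ids in frame.items()}

def release_reserve(frame, sample, cell_key, n_more, cap=588, seed=2008):
    """Release additional districts in one cell when its respondent quota
    falls short, without exceeding the overall cap on sampled districts."""
    rng = random.Random(seed)
    total = sum(len(ids) for ids in sample.values())
    already = set(sample[cell_key])
    available = [d for d in frame[cell_key] if d not in already]
    k = min(n_more, len(available), max(0, cap - total))
    sample[cell_key].extend(rng.sample(available, k))
    return sample
```

In practice, the release decision would be made cell by cell as returns come in, with the 588-district cap enforced across all cells jointly.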

2. Data Collection Procedures

As described in the first section of this document, the district survey is a component of an interrelated data collection plan that also includes case studies and a review of secondary sources addressing the same set of evaluation questions. Exhibit 9 outlines the schedule for all of the study’s data collection activities (OMB has already approved the initial case study and district survey activities under OMB Control Number 1875-0241).


Exhibit 9
General Timeline of Data Collection Activities

Data Collection Period          Conduct Case Studies    Survey Districts
Winter 2006 to Spring 2007
Fall 2007
Spring 2008

Case Studies

We propose to expand the case study work by making a second visit to willing case study schools, during which we can (1) interview the school leader or onsite data coach to update the description of their activities and accomplishments and (2) re-administer the assessment scenarios to small groups of teachers. We propose using two kinds of groups: (1) three teachers working together and (2) two teachers and the principal or data coach. With these additional data, we could compare the quality of teachers’ thinking and conclusions about data (1) when working alone, (2) when working with other teachers, and (3) when working with another teacher and a coach. These findings would have direct implications for strategies for implementing data-driven decision making (DDDM).

As noted above, the number of districts included in the second round of site visits will determine how many additional districts will need to be recruited for site visits in fall 2007. We propose adding up to five additional districts, with three schools in each (15 schools in total), which would receive first site visits in fall 2007. The original and proposed school data collections are summarized in Exhibit 10.


Exhibit 10
Original and Proposed Sample Sizes for Site Visit Data Collections

                                              Original Sample                           Additional Sample
                                    LEAs   Schools   Teachers   Principals    LEAs   Schools   Teachers+   Principals
First Interviews                      10        30        180           30       5        15         120           15
District staff (3/LEA)                                                           15
Solitary Data Interpretations
  (3 tchrs/school)                              30        180                                         45
Small-Group Data Interpretation
  (5 tchrs & coach/school)                                                                             90
Follow-up Interviews*                                                             7        21         126           21
Solitary Data Interpretations
  (3 tchrs/school)                                                                         21          63
Small-Group Data Interpretation
  (2 tchrs & coach/school)                                                                 21          63
Note: We were unable to schedule one district site visit prior to the end of the 2006-07 school year, so this visit will be made in fall 2007, along with visits to the five districts proposed here as additions to the sample.

*Follow-up visits would be conducted with all schools in our original sample willing to host a second visit; we have assumed that 7 districts (and 21 schools) would agree.

+ Includes data coach.

The same protocols used for the 2006-07 site visits will be used for the new districts, while a revised set of protocols will be used with districts that receive a second round of visits in spring 2008. The revised protocols can be found in Appendix D.

District Survey

The evaluation questions outlined in the first section of this document, along with an analysis of the literature and pretesting of case study protocols, have generated a list of key constructs that have guided survey development. Below we describe the district survey in greater detail.

The district survey will focus on the characteristics of district data systems and district supports for data-driven decision-making processes within schools. Proposed survey questions address the transferability of data between state and district systems through the use of unique student and teacher identifiers, uses of the systems for accountability, and the nature and scale of the supports districts are providing for school-level use of data to improve instruction. In addition, the district survey will include a section on the types of student information available in district systems (information on the types of student data available from state systems will be drawn from the survey conducted by NCEA). Our district survey will cover the topics shown in Exhibit 11.

Exhibit 11

District Survey Topics

Data systems (addresses Q1: What kinds of systems are available to support district and school data-driven decision making?)

  • Types of student data in district data systems.

  • Features and functions of district systems.

  • Linkages between unique student and teacher identifiers at the local and state levels (interoperability).

  • Limitations of district data systems.

  • Accessibility of student data to school staff.

  • Data quality.

DDDM tools for generating and acting on data (addresses Q2: Within these systems, how prevalent are tools for generating and acting on data?)

  • Features and tools of district systems.

  • Types of queries the system supports to link student performance with other data.

  • Frequency of DDDM activities related to student-level data and other types of decision making related to improving instructional practice.

Supports for DDDM (addresses Q3: How prevalent are state and district supports for school use of data systems to inform instruction?)

  • Extent to which DDDM has supported district improvement goals/activities.

  • Steps taken to increase the capacity of district staff to engage in DDDM.

  • District provision of training, resources, technical assistance, and the establishment of policies and practices to increase school-level capacity in DDDM.

  • How long districts have been providing supports for school use of data.

  • Areas where districts and schools need more support for data system use and DDDM.

  • Current barriers to expanding district DDDM practices.



The district survey will be formatted to contain structured responses that allow for the quantification of data, as well as open-ended responses in which district staff can provide more descriptive information on how DDDM is carried out in their district.

As noted earlier, respondents will also be given the option to complete the district survey online. The paper survey will contain the URL for the electronic version. To determine which districts have responded using the online survey, respondents will be asked to enter the identification number printed at the top of the paper survey, as well as the name and location of their district, on the online survey form.

Our approach to survey administration is designed to elicit a high response rate and includes a comprehensive notification process to achieve “buy-in” prior to data collection as well as multiple mailings and contacts with nonrespondents described later in this document. In addition, a computer-based system will be used to monitor the flow of data collection—from survey administration to processing and coding to entry into the database. This monitoring will help to ensure the efficiency and completeness of the data collection process.
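
As one illustration of how such monitoring could be structured, the sketch below tracks each sampled district through the stages named above (administration, processing, coding, and database entry). The stage names, district IDs, and record structure are assumptions made for illustration; they do not describe the study's actual tracking system.

```python
from enum import Enum

class Stage(Enum):
    """Stages of the survey data collection flow described above."""
    NOTIFIED = 1
    SURVEY_MAILED = 2
    RETURNED = 3
    PROCESSED = 4
    CODED = 5
    ENTERED_IN_DATABASE = 6

def advance(tracker, district_id, new_stage):
    """Move a district forward in the flow; refuse to move it backward."""
    current = tracker.get(district_id, Stage.NOTIFIED)
    if new_stage.value < current.value:
        raise ValueError(f"{district_id}: cannot move from {current.name} to {new_stage.name}")
    tracker[district_id] = new_stage

# Hypothetical district IDs and statuses.
tracker = {"D-0001": Stage.SURVEY_MAILED, "D-0002": Stage.CODED}
advance(tracker, "D-0001", Stage.RETURNED)
print({d: s.name for d, s in tracker.items()})
```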

Secondary Data Sources

The use of secondary data sources will enhance our analysis and avoid duplication of efforts. We have currently identified two main sources of additional data on how states, districts, and schools are using data systems.

The first source of data for secondary analysis is the NETTS study, which focuses on the implementation of the Enhancing Education Through Technology (EETT) program at the state and local levels. A teacher survey was completed through NETTS in January 2005 that gathered data from over 5,000 teachers in approximately 850 districts nationwide. Teachers were asked about their use of technology in the classroom, including the use of technology-supported databases. Questions about data systems addressed the accessibility of an electronic data management system with student-level data, the source of the system (state, district, or school), the kinds of data and supports provided to teachers for accessing data from the system, the frequency with which teachers use data to carry out specific educational activities, and the types of supports available to teachers to help them use student data.

The second major source of data for secondary analysis is the National Center for Educational Accountability (NCEA) state survey, first administered in August 2005, which focused on data system issues related to longitudinal data analysis. The second administration of the NCEA state survey was scheduled for completion by the end of September 2006. The 2006 survey updates data from the 2005 survey and adds some new items as well. The NCEA state survey will continue to be used as a secondary data resource for this study in the future. NCEA data provide key information on the data systems that states are building and maintaining as they gear up to meet NCLB requirements for longitudinal data systems (i.e., NCEA’s “ten essential elements”).2

Prepare Notification Materials and Gain District Cooperation

Gaining the cooperation of district representatives is a formidable task in large-scale data collection efforts. Increasingly, districts are beset with requests for information, and many have become reluctant to participate. Our efforts will be guided by three key strategies to ensure adequate participation: (1) an introductory letter signed by the Department, (2) preparation of high-quality informational materials, and (3) follow-up contacts with nonrespondents.

U.S. Department of Education Letter. A letter from the Department will be prepared that describes the purpose, objectives, and importance of the study (i.e., documenting the prevalence of data-driven decision making, identifying practices for effective data-driven decision making, and identifying challenges to implementation) and the steps taken to ensure privacy. The letter will encourage cooperation and will include a reference to the Education Department General Administrative Regulations (EDGAR) participation requirements, stating that the law requires grantees to cooperate with evaluations of ESEA-supported programs (EDGAR Section 76.591). As noted earlier, the study is part of the national technology activities supported under Section 2404(b)(2) of Title II, Part D, of the Elementary and Secondary Education Act, and 85 percent of districts receive EETT funds under ESEA. A draft of the letter is included in Appendix A.

High-Quality Informational Materials. Preparing relevant, easily accessible, and persuasive informational materials is critical to gaining cooperation. The primary component of the project’s informational materials will be a tri-fold brochure. This brochure includes the following information:

  • The study’s purpose.

  • Information about the design of the sample and the schedule for data collection.

  • The organizations involved in designing and conducting the study.

A draft copy of the brochure is included in Appendix B. All informational materials will be submitted to ED for approval before they are mailed. Mailing of informational materials to districts will begin in spring 2007, prior to the mail out of the survey.

Contacting Districts. The first step in contacting districts will be to send the notification letter and information packet to the district superintendent. As part of the notification process, we will ask the superintendent to identify the most appropriate respondent for the district survey (i.e., the district staff member who has primary responsibility for leading data-driven activities related to instructional improvement). As initial pretest activities have shown, the position held by this staff member is not consistent across districts (e.g., Director of Technology, Director of Research and Assessment, Director of Curriculum). Therefore, we will take extra steps during the notification process to identify the best respondent for the survey (this will be particularly important in very large districts). The survey will then be shipped via Priority Mail to the district staff member identified by the superintendent. (A copy of the notification letter is included in Appendix A.)

Every effort is being made to minimize the burden on districts; at the same time, very large districts are likely to be included in multiple studies, given the proportion of the student population they serve. In these districts, great care will be taken during notification activities to respond to the concerns and questions of participants. If needed, project staff will be prepared to submit proposals to district research committees.

3. Methods to Maximize Response Rates

A number of steps have been built into the data collection process to maximize response rates. Special packaging (e.g., Priority Mail) and a cover letter from the U.S. Department of Education have served to increase survey response rates in other recent national studies (e.g., NETTS, Evaluation of Title I Accountability Systems and School Improvement Efforts). In addition, by targeting the appropriate respondent for the survey, we are more likely to obtain a completed survey. Finally, all notification materials will include a reference to the Education Department General Administrative Regulations (EDGAR) participation requirements, stating that the law requires grantees to cooperate with evaluations of ESEA-supported programs. The Study of Education Data Systems is part of the national technology activities supported under Section 2404(b)(2) of Title II, Part D, of the Elementary and Secondary Education Act, and 85 percent of districts receive EETT funds under ESEA.

Other steps to be taken to maximize response rates include multiple mailings and contacts with nonrespondents:

  • The surveys will be mailed with postage-paid return envelopes and instructions to respondents to complete and return the survey within 3 weeks.

  • Three weeks after the initial mailing, a postcard will be sent out reminding respondents of the survey closing date and offering to send out replacement surveys as needed or the option of completing the survey online.

  • Four weeks after the initial mailing, a second survey will be sent to all nonrespondents, requesting that they complete and return the survey (in the postage-paid envelopes included in the mailing) or complete the survey online within 2 weeks.

  • Six weeks after the initial mailing, telephone calls will be placed to all nonrespondents reminding them to complete and return the survey. A third round of surveys will be sent after telephone contact, if necessary.

If the response rate falls below 80 percent, the final step will be for SRI to conduct telephone interviews with nonrespondents to increase the response rate. We will use the data gathered through the phone interviews to study nonresponse bias. The responses obtained in the phone interviews will be compared with those obtained from mail and online respondents to see whether nonrespondents differ in systematic ways from respondents. Phone interviewees will also be asked why they did not respond to the mail or online survey.
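
One simple way to carry out the comparison described above is to test whether phone-interviewed nonrespondents differ from mail and online respondents on a key yes/no survey item. The sketch below uses a two-proportion z-test with a normal approximation; the counts, variable names, and choice of test are illustrative assumptions rather than the study's specified analysis plan.

```python
import math

def two_proportion_ztest(yes_a, n_a, yes_b, n_b):
    """Two-sided z-test comparing the proportion answering 'yes' among
    mail/online respondents (group a) and phone-interviewed
    nonrespondents (group b)."""
    p_a, p_b = yes_a / n_a, yes_b / n_b
    p_pool = (yes_a + yes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal approximation.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 300 of 450 mail/online respondents versus 20 of 40
# phone-interviewed nonrespondents report a given data-use practice.
z, p = two_proportion_ztest(300, 450, 20, 40)
print(round(z, 2), round(p, 3))
```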

4. Pilot Testing

To improve the quality of the data collection instruments and control the burden on respondents, all instruments will be pretested. Pilot tests of the district survey will be conducted with several respondents in districts near SRI offices in Arlington and Menlo Park, with districts in the case study pool that were not selected for inclusion in the case study sample, and with selected members of the TWG. The results of the pretesting will be incorporated into revised instruments that will become part of the final OMB clearance package. If needed, the revised survey will be piloted in a small set of local districts with nine or fewer respondents prior to data collection. The district survey can be found in Appendix C.

5. Contact Information

Dr. Barbara Means is the Project Director for the study. Her mailing address is SRI International, 333 Ravenswood Avenue, Menlo Park, CA 94025. Dr. Means can also be reached at 650-859-4004 or via e-mail at [email protected].

Christine Padilla is the Deputy Director for the study. Her mailing address is SRI International, 333 Ravenswood Avenue, Menlo Park, CA 94025. Ms. Padilla can also be reached at 650-859-3908 or via e-mail at [email protected].



References

Choppin, J. (2002). Data use in practice: Examples from the school level. Paper presented at the annual meeting of the American Educational Research Association, New Orleans.

Confrey, J., and Makar, K. M. (2005). Critiquing and improving the use of data from high-stakes tests with the aid of dynamic statistics software. In C. Dede, J. P. Honan, and L. C. Peters (Eds.), Scaling up success: Lessons learned from technology-based educational improvement (pp. 198-226). San Francisco: Jossey-Bass.

Cromey, A. (2000). Using student assessment data: What can we learn from schools? Policy Issues, Issue 6. Oak Brook, Illinois: North Central Regional Educational Laboratory.

Feldman, J., and Tung, R. (2001). Using data based inquiry and decision-making to improve instruction. ERS Spectrum 19(3), 10-19.

Halverson, R., Grigg, J., Prichett, R., and Thomas, C. (2005). The new instructional leadership: Creating data-driven instructional systems in schools. Madison, Wisconsin: Wisconsin Center for Education Research, University of Wisconsin.

Herman, J., and Gribbons, B. (2001). Lessons learned in using data to support school inquiry and continuous improvement: Final report to the Stuart Foundation. Los Angeles, California: UCLA Center for the Study of Evaluation.

Light, D., Wexler, D., and Heinze, J. (2004). How practitioners interpret and link data to instruction: Research findings on New York City schools’ implementation of the Grow Network. Paper presented at the annual meeting of the American Educational Research Association, San Diego.

Mason, S. (2002). Turning data into knowledge: Lessons from six Milwaukee public schools. Paper presented at the annual meeting of the American Educational Research Association, New Orleans.

McCarthy, M.S. (2001). Making standards work: Washington elementary schools on the slow track under standards-based reform. Seattle, Washington: Center on Reinventing Public Education.

Means, B. (2006). Prospects for transforming schools with technology-supported assessment. In R. K. Sawyer (Ed.), The Cambridge handbook of the learning sciences. Cambridge, UK: Cambridge University Press, pp. 505-519.

Means, B. (2005). Evaluating the impact of implementing student information and instructional management systems. Background paper prepared for the Policy and Program Studies Service, U.S. Department of Education.

Means, B., Roschelle, J., Penuel, W., Sabelli, N., and Haertel, G. (2004). Technology’s contribution to teaching and policy: Efficiency, standardization, or transformation? In R. E. Floden (Ed.), Review of research in education (Vol. 27, pp. 159-181). Washington, D.C.: American Educational Research Association.

Palaich, R. M., Good, D. G., and van der Ploeg, A. (2004). State education data systems that increase learning and improve accountability. Policy Issues (no. 16). Naperville, Illinois: Learning Point Associates.

Roschelle, J., Pea, R., Hoadley, C., Gordin, D., and Means, B. (2001). Changing how and what children learn in school with computer-based technologies [Special Issue, Children and Computer Technology]. The Future of Children, 10(2), 76-101.

Stringfield, S., Wayman, J. C., and Yakimowski-Srebnick, M. E. (2005). Scaling up data use in classrooms, schools, and districts. In C. Dede, J. P. Honan, and L. C. Peters (Eds.), Scaling up success: Lessons learned from technology-based educational improvement (pp. 133-152). San Francisco: Jossey-Bass.

Turnbull, B. J., and Hannaway, J. (2000). Standards-based reform at the school district level: Findings from a national survey and case studies. Washington, D.C.: U.S. Department of Education.

U.S. Department of Education. (2006). Title I accountability and school improvement from 2001 to 2004. Washington, D.C.: Author.

U.S. Department of Education, Office of Educational Technology. (2004). Toward a new golden age in American education: How the Internet, the law and today’s students are revolutionizing expectations. Washington, D.C.: Author.

Wayman, J. C., Stringfield, S., and Yakimowski, M. (2004). Software enabling school improvement through analysis of student data (Report No. 67). Baltimore: Center for Research on the Education of Students Placed at Risk, Johns Hopkins University.







1 The value of 11 percent assumes responses to yes/no questions with 50 percent probability of a “yes” response. The half-width of the confidence interval will be smaller for other probabilities of a “yes” response.

2 Creating a longitudinal data system that will provide data to improve student achievement is one of the major goals of the Data Quality Campaign. The campaign is a national, collaborative effort to encourage and support state policymakers to improve the collection, availability and use of high-quality education data.
