
Study of Education Data Systems and Decision Making

OMB: 1875-0241


II. Supporting Statement for Paperwork Reduction Act Submission

A. Justification

1. Circumstances Making Collection of Information Necessary

Section 2404(b)(2) of Title II, Part D, of the Elementary and Secondary Education Act (ESEA), as amended by the No Child Left Behind Act of 2001 (P.L. 107-110), provides for the support of national technology activities, including technical assistance and dissemination of effective practices to state and local education agencies (SEAs and LEAs) that receive funds to support the use of technology to enhance student achievement. In addition, Sections 2415(5) and 2416(b)(9) call on states and districts to develop performance measurement systems as a tool for determining the effectiveness of uses of educational technology supported under Title IID. The Study of Education Data Systems and Decision Making will address both of these provisions by providing SEAs and LEAs with critical information to support the effective use of technology to generate data that inform instructional decision making.

New standards and accountability requirements have led states to reexamine the appropriate uses of technology to support learning and achievement. For some years now, researchers and developers have promoted technology systems that give students access to high-quality content and carefully designed instructional materials (Roschelle, Pea, Hoadley, Gordin, and Means, 2001). More recently, both technology developers and education policymakers have seen great promise in the use of technology for delivering assessments that can be used in classrooms (Means, Roschelle, Penuel, Sabelli, and Haertel, 2004; Means, 2006) and for making student data available to support instructional decision making (Cromey, 2000; Feldman and Tung, 2001; Light, Wexler, and Heinze, 2004; Mason, 2002; U.S. Department of Education, 2004). These strategies are not only allowable uses of ESEA Title IID funds but also of great interest to educational technology policymakers because of their potential to link technology use with changes in practice that can raise student achievement, thus helping to fulfill the primary goal of both Title IID (Enhancing Education Through Technology) and ESEA as a whole. The purpose of the present study is to provide further information on these potential links between the use of data housed in technology systems and school-level practices that support better teaching and learning.

While data have been collected on the characteristics of database systems that collect, manage, and analyze student data (Means, 2005; Wayman, Stringfield, and Yakimowski, 2004), much less is known about the relationship between system features and the ways systems are used to support better teaching and learning at the school and classroom levels. The wide variety of system architectures and components, and the multiple ways in which educators have been encouraged to use them, suggest that a range of different practices and supports is in place in different settings. However, survey questions used in the past have been at too large a grain size to capture important distinctions. Moreover, the field has limited knowledge about how educators can best use data from different kinds of assessments (e.g., annual state tests versus classroom-based assessments) to inform instruction. Further study is needed to determine the system features and data types to which school staff have access, and the kinds of supports they receive from their school districts.

Stringfield, Wayman, and Yakimowski-Srebnick (2005) have been studying these data systems for some time and believe that school-level data-driven decision making using such systems has great potential for improving student achievement but is not yet widely implemented in schools:

Formal research and our own observations indicate that being data-driven is a phenomenon that generally does not occur in the basic structure of the typical school. We believe that the absence of data-informed decision making is not due to educators’ aversion to being informed. Rather, the wealth of data potentially available in schools is often stored in ways that make it virtually inaccessible to teachers and principals and generally far from analysis-friendly. (p. 150).

Our conceptual framework suggests that in addition to making usable, system-wide data available through technology systems, schools and districts need to put in place processes for preparing educators to use these systems and for cultivating practices of inquiry around data within schools. Far more detailed studies of the use of these systems are needed if we are to extract principles for designing systems and practices that support frequent and effective use (the six supports identified in our conceptual framework). This is the goal of the Study of Education Data Systems and Decision Making.

2. Use of Information

The data collected for this study will be used to inform educational policy and practice. More specifically, the data can be used:

  • By SEAs and LEAs to compare the contents and capabilities of their student data systems with those of a nationally representative sample of districts, and to identify and develop policies that support implementation practices associated with greater teacher use of student data in planning and delivering instruction.

  • By ED staff to design outreach efforts to stimulate and enhance the use of data systems to improve instruction.

  • By researchers, who may use the data to inform future studies of data-driven decision making to improve instructional practice.

3. Use of Information Technology

The contractors will use a variety of advanced information technologies to maximize the efficiency and completeness of the information gathered for this evaluation and to minimize the burden the evaluation places on respondents at the district and school levels. For example, respondents will be given the option to complete either a paper-and-pencil or an online version of the district survey. During the data collection period, an e-mail address and phone number will be available so that respondents can contact the contractor with questions or requests for assistance. The e-mail address will be printed on all data collection instruments, along with the name and phone number of a member of the data collection team. For the district survey, a notification and response-rate tracking system will be set up in advance so that task leaders and ED can receive weekly updates on response rates. Finally, a project Web site, hosted on an SRI server, will be available for communicating data collection activities and results (e.g., a copy of the study brochure, a copy of the district survey, and study reports as they become available).
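As a simple illustration of the kind of weekly tracking report envisioned, the sketch below tallies returned, refused, and pending surveys against the notified sample; the record layout, status codes, and district identifiers are hypothetical and do not represent the contractor's actual tracking system.

```python
from collections import Counter
from datetime import date

# Hypothetical tracking records: (district_id, status). IDs and status codes
# are illustrative only.
tracking = [
    ("D001", "returned"),
    ("D002", "pending"),
    ("D003", "returned"),
    ("D004", "refused"),
]

def weekly_summary(records, notified):
    """Tally survey dispositions and compute the response rate for the weekly update."""
    counts = Counter(status for _, status in records)
    returned = counts.get("returned", 0)
    refused = counts.get("refused", 0)
    return {
        "week_of": date.today().isoformat(),
        "returned": returned,
        "refused": refused,
        "pending": notified - returned - refused,
        "response_rate": round(returned / notified, 3),
    }

# 534 districts receive the survey notification (see Exhibit 4).
print(weekly_summary(tracking, notified=534))
```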

4. Efforts to Identify Duplication

The contractor is working with the U.S. Department of Education to minimize burden through coordination across Department studies and to prevent unnecessary duplication (e.g., not repeating questions for which sufficient data are already available). In addition to using teacher survey data collected through other Department-sponsored evaluation activities to assess national trends in some aspects of data-driven decision making, the study team has identified secondary sources for data on state-supported data systems, thereby eliminating the need for state-level data collection. Additional information on these secondary sources is provided in the discussion of data collection procedures.

5. Methods to Minimize Burden on Small Entities

No small businesses or entities will be involved as respondents.

6. Consequences If Information Is Not Collected or Is Collected Less Frequently

Currently, the Department is working to improve the quality of K-12 performance data provided by SEAs to assess the effectiveness of federally supported programs. EDFacts is an initiative of the Department, SEAs, and other members of the education community to use state-reported performance data to support planning, policy, and program management at the federal, state, and local levels. Because much of the data that states gather on program effectiveness comes from the local level, it is important that local performance measurement systems collect high-quality data. No Child Left Behind also cites the importance of using data to improve the instruction that students receive, but many data system efforts to date have focused on improving state data systems without comparable attention to making the data in those systems available and meaningful to the school staff who make instructional decisions. Improved use of data at the local level can improve the teaching and learning process by giving teachers the information they need to guide their instructional practices. A potential byproduct would be improvement in the quality of the data that the Department receives from states, because states depend on districts and schools for their data.

To address the need for high-quality, actionable data at the local level, the Study of Education Data Systems and Decision Making will provide in-depth information in several key issue areas: the availability of education data systems, their characteristics, the prevalence and nature of their use in districts and schools, and the conditions and practices associated with data usage that adheres to professional standards for data interpretation and application. The study will also provide more detailed information on implementation of data-driven decision making than has previously been available at the local level.

If the data are not collected from the proposed study, there will be no current national data collection on the availability of education data systems at the local level and the nature of their use in districts and schools. There will also be only limited information on classroom and other school-level use of data. (Several investigations are exploring data use in individual schools, but no other study is producing cross-case analysis.) The result will be a continued gap in information on how districts and schools are using data to inform their instructional decision making, the barriers to effective use of data, and the supports that staff need to better employ data in their teaching in a way that can positively impact student achievement.

7. Special Circumstances

None of the special circumstances listed apply to this data collection.

8. Federal Register Comments and Persons Consulted Outside the Agency

A notice about the study was published in the Federal Register, Vol. 72, on January 31, 2007 (p. 4494). No comments were received during the public comment period. Additional notices were published when the subsequent request was made for clearance of the district survey (approval was received June 15, 2007).

In addition, throughout the course of this study, SRI will draw on the experience and expertise of a technical working group (TWG) that provides a diverse range of experience and perspectives, including representatives from the district and state levels, as well as researchers with expertise in relevant methodological and content areas. The members of this group and their affiliations are listed in Exhibit 3. The first meeting of the technical working group was held on January 26, 2006. Individual members of the TWG have provided additional input as needed (e.g., recommendations for revisions to the case study sample, review of the district survey).

Exhibit 3
Technical Working Group Membership

Member                      Affiliation
Marty Daybell               Washington State Education Authority
Dr. Jeff Fouts              Fouts & Associates
Aimee Guidera               National Center for Educational Accountability
Jackie Lain                 Standard and Poor's School Evaluation Services
Dr. Glynn Ligon             ESP Solutions
Dr. Ellen Mandinach         Center for Children and Technology
Dr. Jim Pellegrino          University of Illinois-Chicago
Dr. Arie van der Ploeg      North Central Regional Education Laboratory (NCREL)/Learning Points
Dr. Jeff Wayman             University of Texas, Austin
Katherine Conoly            Executive Director of Instructional Support, Corpus Christi Independent School District



9. Respondent Payments or Gifts

No payments or gifts will be provided to respondents.

10. Assurances of Confidentiality

SRI is dedicated to maintaining the confidentiality of participant information and the protection of human subjects. SRI recognizes the following minimum rights of every subject in the study: (1) the right to an accurate representation of the extent to which privacy can be assured, (2) the right to informed consent, and (3) the right to refuse participation at any point during the study. Because much of the Policy Division's education research involves collecting data about children or students, we are very familiar with the Department's regulation on protection of human subjects of research. In addition, SRI maintains its own Institutional Review Board. All proposals for studies in which human subjects might be used are reviewed by SRI's Human Subjects Committee, appointed by the President and Chief Executive Officer. For consideration by the reviewing committee, proposals must include information on the nature of the research and its purpose; anticipated results; the subjects involved and any risks to subjects, including sources of substantial stress or discomfort; and the safeguards to be taken against any risks described.

SRI project staff have extensive experience collecting information and maintaining the confidentiality, security, and integrity of interview and survey data. In accordance with SRI's institutional policies, privacy and data protection procedures will be in place. These standards and procedures for district survey data are summarized below.

  • Project team members will be educated about the privacy assurances given to respondents and about the sensitive nature of the materials and data to be handled. Each person assigned to the study will be cautioned not to discuss confidential data.

  • Each survey will be accompanied by a return envelope so that the respondent can seal it once it has been completed. All respondents will mail their postpaid surveys individually to SRI. The intention is to allow respondents to respond honestly without the chance that their answers will be inadvertently read or personally associated with them. If the survey is completed online, security measures are in place to safeguard respondent identities and to ensure the integrity of the data.

  • Participants will be informed of the purposes of the data collection and the uses that may be made of the data collected.

  • Respondents’ names and addresses will be disassociated from the data as they are entered into the database and will be used for data collection purposes only. As information is gathered on districts, each will be assigned a unique identification number, which will be used on printouts displaying the data and in analysis files. The unique identification number also will be used for data linkage.

  • All paper surveys will be stored in secure areas accessible only to authorized staff members.

  • Access to the survey database will be limited to authorized project staff members. Multilevel user codes will be used, and entry passwords will be changed frequently.

  • All identifiable data (e.g., tracking data) will be shredded as soon as the need for the hard copy no longer exists.

  • The reports prepared for this study will summarize findings across the samples and will not associate responses with a specific district, school or individual. Information that identifies a school or district will not be provided to anyone outside the study team, except as required by law.

At the time of data collection, all survey participants will be assured of privacy to the extent possible (see draft of survey cover letter in Appendix A). Case study participants have also been assured of privacy to the extent possible in the initial invitation to participate in the study as described in the first OMB package. Any new districts included in the case study activities will receive the same assurances.

11. Questions of a Sensitive Nature

No questions of a sensitive nature will be included in the district survey.

12. Estimate of Hour Burden

The estimates in Exhibit 4 reflect the burden for both the original set of case studies and the district survey data collection activity previously approved, as well as the additional case study data collection activity for which we are seeking approval. Based on the information in Exhibit 4, 360 respondents and 292.5 hours of burden will be added through the additional case study activity. The additional estimated burden will be $11,700, based on a $40/hour salary cost. The burden estimates cover the following activities:

  • District personnel—time associated with reviewing study information, and if required, reviewing study proposals submitted to the district research committee, and preparing a list of schools that are active data users (e.g., marking up existing list of schools); time associated with asking questions about the study and answering questions about the district’s use of data systems.

  • School personnel—time associated with asking questions about the study and time associated with answering interview questions and conducting data scenarios with teachers.

Exhibit 4
Estimated Burden for Study

Group                               Participants                             Total No.   Hours per      Total     Estimated
                                                                                         Participant    Hours     Burden

District Personnel (survey)         Superintendent (notification)            534         0.5            267       $10,680
                                    District staff (survey)                  500*        1.0            500       $20,000

District Personnel (case studies)   Superintendent (notification)            22          0.5            11        $440
                                    District staff (interviews)              45          1.0            45        $1,800

School Personnel (case studies)     School principal (notification)          66          0.5            33        $1,320
                                    School principal (interview)             66          1.0            66        $2,640
                                    Teachers (interviews & focus groups)     360         1.0            240       $9,600
                                    Teachers (solitary interviews)           108         1.0            108       $4,320
                                    Teachers (group)                         153         1.5            229.5     $9,180

Totals                                                                       1,854                      1,499.5   $59,980

*Anticipate some non-response; hence only 500 survey respondents.
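
The figures in Exhibit 4 can be verified with the simple arithmetic sketch below, which applies the $40/hour salary cost from item A.12 to the per-row hours. The row values are copied from the exhibit; the sketch is illustrative only and is not part of the study design.

```python
# Arithmetic check of the Exhibit 4 totals (illustrative only).
# Each row: (total number of participants, hours per participant, total hours).
rows = [
    (534, 0.5, 267.0),   # Superintendent (notification), survey districts
    (500, 1.0, 500.0),   # District staff (survey)
    (22,  0.5, 11.0),    # Superintendent (notification), case study districts
    (45,  1.0, 45.0),    # District staff (interviews)
    (66,  0.5, 33.0),    # School principal (notification)
    (66,  1.0, 66.0),    # School principal (interview)
    (360, 1.0, 240.0),   # Teachers (interviews & focus groups)
    (108, 1.0, 108.0),   # Teachers (solitary interviews)
    (153, 1.5, 229.5),   # Teachers (group)
]

HOURLY_COST = 40  # dollars per hour, per item A.12

total_participants = sum(n for n, _, _ in rows)   # 1,854
total_hours = sum(h for _, _, h in rows)          # 1,499.5
total_burden = total_hours * HOURLY_COST          # $59,980
added_case_study_burden = 292.5 * HOURLY_COST     # $11,700 (item A.12)

print(total_participants, total_hours, total_burden, added_case_study_burden)
```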



13. Estimate of Cost Burden to Respondents

There are no additional respondent costs associated with this data collection other than the burden estimated in item A12.

14. Estimate of Annual Costs to the Federal Government

The annual costs to the federal government for this study, as specified in the contract, are:


Fiscal year 2005        $272,698
Fiscal year 2006        $849,532
Fiscal year 2007        $242,696
Fiscal year 2008        $634,329
Total                 $1,999,255


15. Change in Annual Reporting Burden

The program change of 292.5 hours is due to the additional case study work being added to the collection.

16. Plans for Tabulation and Publication of Results

During the summer of 2006, the data collection team worked with the secondary analysis team to analyze data from the NETTS 2005-06 teacher survey. Descriptive statistics were calculated to show the proportion of teachers nationwide who are aware of their access to student data systems and the support teachers have had for data-driven decision making (DDDM). Although an evaluation report is not required for the first year of the study, an issue brief is being prepared on the secondary analyses of the NETTS teacher survey data (additional information on this database is provided in Section B).

In 2007, analysis activities will focus on the case study data (OMB Control Number 1875-0241). These data will provide a more in-depth look at the kinds of systems that are available to support district and school data-driven decision making and the supports for school use of data systems to inform instruction. We will analyze teacher focus group and interview data on these topics. A separate analysis will examine responses to the portion of the teacher case study interviews in which teachers respond to hypothetical data sets. We will develop and apply rubrics for judging the quality of the inferences teachers make from data. The school-level and teacher-level data from the first round of site visits will be reported in an interim report, to be drafted by October 2007, that answers the following evaluation questions:

  • To what extent do teachers use data systems in planning and implementing instruction?

  • How are school staff using data systems?

  • How does school staff’s use of data systems influence instruction?

In 2008, analysis activities will focus on data from the district survey (OMB Control Number 1875-0241) and additional case study activities. These analyses, along with prior analyses of data collected at the state, district, and teacher levels, will be combined in a final report to be drafted by August 2008. The final report will focus on the following additional evaluation questions:

  • What kinds of systems are available to support district and school data-driven decision making (prevalence of system access; student information and functions of these data systems)?

  • Within these systems, how prevalent are tools for generating and acting on data?

  • How prevalent are state and district supports for school use of data systems to inform instruction?

Exhibit 5 outlines the report dissemination schedule.


Exhibit 5
Schedule for Dissemination of Study Results

Activity/Deliverable                     Due Date

Interim Report
  Outline of interim report              8/07
  First draft of interim report          10/07
  Second draft of interim report         12/07
  Third draft of interim report          2/08
  Final version of interim report        3/08

Final Report
  Outline of final report                6/08
  First draft of final report            8/08
  Second draft of final report           10/08
  Third draft of final report            11/08
  Final version of final report          12/08



Data Analysis

The Study of Education Data Systems and Decision Making will analyze quantitative data gathered through surveys and qualitative data gathered through case studies. Survey data will include the 2007-08 district survey of a nationally representative sample of approximately 500 districts, administered in fall 2007; as noted above, secondary analyses of other data sets will also be conducted by the study team. Qualitative data will be collected through case studies of 30 schools conducted during winter and spring of the 2006-07 school year and of approximately 36 schools in fall and spring of the 2007-08 school year.

The evaluation will integrate findings derived from the quantitative and qualitative data collection activities. Results from the case studies may help to explain survey findings, and the survey data may be used to test hypotheses derived from the case studies.

Case Study Analyses

The steps taken in the analysis of qualitative data were described previously. For the newly proposed group interviews, we will videotape a subset of these interactions so that we can conduct a detailed examination of the collaboration process in addition to estimating the quality of the data inferences derived. This detailed analysis of how teachers’ reasoning about data is influenced through interactions with colleagues and school leaders may reveal strategies for scaffolding teachers’ interactions with data that can be incorporated into professional development activities or decision support systems.

Survey Analyses

The processing and analysis of survey responses are designed to minimize sources of error that can mask or bias findings. Data management techniques are designed to produce accurate translation of survey responses to computerized databases for statistical analysis. The statistical analyses will use appropriate analytical weights so that the survey findings are unbiased estimates of parameters of the populations of interest.

The evaluation questions can be addressed with a few different statistical techniques. Many of the questions require descriptive information about the population of districts or schools on a single variable of interest. Some evaluation questions may be explored in more detail by examining relationships among variables or by examining whether the distributions of a variable differ from one subpopulation to another. These questions will be addressed through multivariate analyses.

Univariate Analyses. Characteristics of the measurement scale for a variable will determine the type of summary provided. For many variables, the measurement scale is categorical because survey respondents select one choice from a small number of categories. For other variables, the measurement scale is continuous because respondents provide a numerical answer that can take on many values, even fractional values, along an ordered scale. The distributions of categorical variables will be summarized by presenting, for each choice category, the (weighted) percentage of respondents who selected that category.
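A minimal sketch of such a weighted categorical summary follows, assuming each district record carries an analytical weight reflecting its selection probability; the variable names and values are hypothetical.

```python
import numpy as np

# Hypothetical survey responses and design weights for five districts.
responses = np.array(["annual", "quarterly", "annual", "weekly", "quarterly"])
weights = np.array([120.0, 80.0, 95.0, 60.0, 145.0])

# Weighted percentage of districts selecting each response category.
weighted_pct = {
    category: 100.0 * weights[responses == category].sum() / weights.sum()
    for category in np.unique(responses)
}
print(weighted_pct)
```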

The distributions of continuous variables will be summarized by an index of the central tendency and variability of the distribution. These might be the mean and standard deviation, if the distribution is symmetrical, or the median and interquartile range, if the distribution is skewed.
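A sketch of this decision rule is shown below; the skewness cutoff of 1.0 is an illustrative assumption rather than a study specification.

```python
import numpy as np
from scipy import stats

def summarize(values, skew_cutoff=1.0):
    """Report mean/SD for roughly symmetric data, median/IQR for skewed data."""
    values = np.asarray(values, dtype=float)
    if abs(stats.skew(values)) < skew_cutoff:
        return {"mean": values.mean(), "sd": values.std(ddof=1)}
    q1, q3 = np.percentile(values, [25, 75])
    return {"median": float(np.median(values)), "iqr": float(q3 - q1)}

print(summarize([3.0, 4.5, 5.0, 5.5, 6.0, 7.0]))         # symmetric -> mean/SD
print(summarize([1.0, 1.2, 1.5, 2.0, 3.0, 15.0, 40.0]))  # skewed -> median/IQR
```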

In some cases, the variable of interest can be measured by a single survey item. In other cases (e.g., for more complex or abstract constructs), it is useful to develop a measurement scale from responses to a series of survey items. The analyses will provide simple descriptive statistics that summarize the distribution of measurements, whether for single items or for scales. Whenever it is possible to do so within the set of survey data, we also will examine the validity of the measurement scale. Techniques, such as factor analysis, that examine hypothesized relationships among variables will be used to examine the validity of scale scores.
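As one hedged illustration of such a scale check, the sketch below computes Cronbach's alpha and inspects the eigenvalues of the item correlation matrix for simulated responses; the data are invented, and these statistics are stand-ins for the fuller factor-analytic procedures the study may apply.

```python
import numpy as np

# Simulated responses: 200 respondents answering four items that tap one construct.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
items = latent + 0.5 * rng.normal(size=(200, 4))

def cronbach_alpha(x):
    """Internal-consistency reliability of a multi-item scale."""
    k = x.shape[1]
    return (k / (k - 1)) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]  # one dominant eigenvalue suggests a single factor

print(round(cronbach_alpha(items), 2), np.round(eigenvalues, 2))
```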

Bivariate and Multivariate Analyses. In addition to the univariate analyses, we plan to conduct a series of bivariate and multivariate analyses of the survey data. For example, we will examine relationships between specific demographic variables and district and school reports of the implementation of supports for data-driven decision making. Our primary purpose in exploring the relationships between context variables and responses to items about implementation of support systems is explanatory. For example, we will examine relationships of survey responses with district size because we expect that larger districts have more resources and, therefore, greater capacity to assist schools with data-driven decision making. Conversely, small districts (often rural) have very limited central office capacity to assist schools (McCarthy, 2001; Turnbull and Hannaway, 2000) and may be challenged to implement data-driven decision making well.

By looking at relationships among two or more variables, the bivariate and multivariate analyses may provide more refined answers to the evaluation questions than the simple summary statistics can provide. A variety of statistical techniques will be used, depending on the characteristics of the variables included in the analyses and the questions to be addressed. Pearson product-moment correlations will summarize bivariate relationships between continuous variables, unless the parametric assumptions are unreasonable, in which case Spearman rank order correlations will be used. Chi-square analyses will examine relationships between two categorical variables. We will use t-tests and analysis of variance (ANOVA) to examine relationships between categorical and continuous variables. Regression analyses will be used to examine multivariate relationships.
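The sketch below runs each of the techniques named above on simulated data; the variable names (district_size, support_score) and all values are hypothetical, and the calls illustrate the techniques rather than the study's actual analysis code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
district_size = rng.integers(500, 50_000, size=100).astype(float)     # hypothetical enrollment
support_score = 2.0 + 0.00005 * district_size + rng.normal(size=100)  # hypothetical support index

# Continuous-continuous: Pearson correlation, or Spearman if parametric assumptions are doubtful.
r, p_r = stats.pearsonr(district_size, support_score)
rho, p_rho = stats.spearmanr(district_size, support_score)

# Categorical-categorical: chi-square test on a contingency table (counts are illustrative).
chi2, p_chi, dof, expected = stats.chi2_contingency(np.array([[30, 20], [15, 35]]))

# Categorical-continuous: t-test comparing two groups split at the median district size.
small = support_score[district_size < np.median(district_size)]
large = support_score[district_size >= np.median(district_size)]
t, p_t = stats.ttest_ind(small, large, equal_var=False)

# Multivariate: ordinary least-squares regression of the support index on district size.
X = np.column_stack([np.ones_like(district_size), district_size])
coef, *_ = np.linalg.lstsq(X, support_score, rcond=None)

print(round(r, 2), round(rho, 2), round(chi2, 1), round(t, 2), np.round(coef, 5))
```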

Integration of Quantitative and Qualitative Data. Findings from the surveys and case studies provide different perspectives from which to address the evaluation questions. For example, whereas the district survey can estimate the number of districts in the nation that provide assistance for data-driven decision making, the case studies can describe the nature and quality of the assistance that districts might provide. The survey findings can be generalized to national populations, but they may have limited descriptive power. The opposite is true for the case studies, which can provide detailed descriptions and explanations that may have limited ability to generalize.

A goal of the analysis is to integrate findings from the different methods, building on the strengths of each. When they are consistent, the findings from different perspectives can provide rich answers to evaluation questions. When they are inconsistent, the findings point to areas for further study. For example, the case studies may identify hypotheses worth testing in the survey data, and survey results might identify an unexpected correlation between variables that case study data could help to explain.

Our strategy for integration is to share findings across the perspectives in a way that maintains the integrity of each methodology. The evaluation questions provide the framework for integration. Separately, the quantitative and qualitative data will be analyzed to a point where initial findings are available to address the evaluation questions. Then the findings will be shared and discussed to identify robust findings from both types of data, as well as inconsistencies suggesting the need for further analyses. Frequent communication at key points in this process will be important. We will have formal meetings of the case study and quantitative analysts to share findings. The first will be held after preliminary results are available from each type of study, for the purpose of identifying additional analyses to conduct. Subsequent meetings will focus on interpretation of analyses and planning for new examinations of the data. Communication between analytic teams will be facilitated by having some core members common to both.

The integration of different types of data, like the integration of data from different sites within the qualitative analyses, requires an iterative approach. Each step builds on the strengths of the individual data sets to provide answers to the evaluation questions. If, for example, we find through analyses of survey data that schools vary in the degree to which they receive technical support for data interpretation, we will first test with the survey data whether responses vary with characteristics identified through previous research (e.g., size of the district in which the school is located, number of years the school has been identified as in need of improvement). If a characteristic is predictive of the amount of assistance that schools receive, it will be explored further through our case studies. Data from the case study schools will be examined to see whether they support the relationship between the characteristic and technical assistance; to provide additional details, such as other mitigating factors; and to identify examples that could enhance the reporting of this finding.
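A brief sketch of this iterative step follows; the variable names (enrollment, support_hours), the significance threshold, and the percentile cutoffs are hypothetical placeholders rather than study specifications.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
enrollment = rng.integers(200, 40_000, size=500).astype(float)    # hypothetical district size
support_hours = 1.0 + 0.0001 * enrollment + rng.normal(size=500)  # hypothetical assistance measure

# Step 1: test with the survey data whether district size predicts reported assistance.
r, p = stats.pearsonr(enrollment, support_hours)

# Step 2: if the characteristic is predictive, flag districts at the extremes of the
# distribution so the case study team can examine the relationship qualitatively.
if p < 0.05:
    low, high = np.percentile(enrollment, [10, 90])
    follow_up = np.flatnonzero((enrollment <= low) | (enrollment >= high))
    print(f"r={r:.2f}, p={p:.3f}; {follow_up.size} districts flagged for case study follow-up")
```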

17. OMB Expiration Date

All data collection instruments will include the OMB expiration date.

18. Exceptions to Certification Statement

No exceptions are requested.

