NCES Quick Response Information System

District Technical Support Memo - Supporting Justification

OMB: 1850-0733


TO:       Rochelle W. Martinez                      DATE: April 29, 2008

THROUGH:  Katrina Ingalls

FROM:     Edith McArthur

SUBJECT:  Request for Clearance for the Proposed Fast Response Survey System (FRSS) 93: Educational Technology in Public School Districts


Justification


The National Center for Education Statistics (NCES), U.S. Department of Education, proposes to employ the Fast Response Survey System (FRSS) to conduct a survey on educational technology in public school districts. The survey was requested by the Office of Educational Technology (OET) to provide national data on technology access and use within the nation’s public school districts. The survey will cover a wide range of topics, including technology infrastructure (e.g., types of Internet connectivity and Internet capacity, wired and wireless networks); treatment of older computers; district policies on acceptable uses of technologies; digital resources provided to schools and teachers by districts (e.g., online assessments, data management systems for analyzing and tracking student progress, and server space); types of formal teacher professional development offered by districts; and respondent perceptions about technology use within the district. Most of this information has yet to be captured at the national level. By addressing these topics, the survey will provide valuable and timely national data to education policymakers, school district administrators, other educators, researchers, and the educational technology community (e.g., school technology specialists).


This is one of three proposed surveys that OET has requested to be conducted with the FRSS. The other two are a school-level and a teacher-level survey (the teacher survey is submitted under 1850-NEW). The three new surveys will serve as a barometer of technology access and use within the nation’s public elementary and secondary school districts, schools, and classrooms. The district survey will be mailed to a sample of approximately 1,550 public school districts from the NCES Common Core of Data (CCD) Local Education Agency (School District) Universe File. We will mail the survey to district superintendents and ask that it be completed by the person most knowledgeable about educational technology within the district. Respondents will have the option of completing the survey by mail or on the web. The FRSS, under OMB clearance #1850-0733, is authorized under Section 153(a) of the Education Sciences Reform Act of 2002 (Public Law 107-279).


Overview of Data Collection


Westat will collect the information for the Early Childhood, International and Crosscutting Studies Division, NCES, U.S. Department of Education, using the FRSS. Westat is responsible for the questionnaire development; sample design and selection; data collection; telephone follow up; editing, coding, keying, and verification of the data; and production of tabulations and the report detailing the results of the survey.


The data collection will be accomplished by means of a self-administered survey. Respondents will have the option of completing a traditional paper-and-pencil questionnaire or a Web version of the questionnaire accessed through the Internet. The sample will consist of about 1,550 public school districts identified through a stratified sample of schools selected from the NCES 2005-06 Common Core of Data (CCD) Public School Universe File (see the Statistical Methodology section below). The questionnaire (see Attachment 2) is limited to three pages of information readily available to respondents and can be completed by most districts in 30 minutes or less. These procedures are typical for FRSS surveys and result in minimal burden on respondents.


Since this survey includes new topics, substantial development work was conducted in several phases. First, after discussions with OET about desired survey topics, Westat conducted a brief literature review and a search of existing survey instruments. The initial draft of each survey instrument included some newly crafted items as well as some items adapted from existing surveys. Second, Westat conducted four rounds of feasibility calls to test and improve the instrument. Respondents around the country were asked to review and discuss the survey in 30- to 45-minute telephone interviews. Respondents were asked about the clarity and relevance of the survey items and whether they could provide the information requested in each question without undue burden. After each round of calls, the instrument was revised and submitted to OET and NCES for review and further revision.


Following the NCES Questionnaire Review Board (QRB) meeting, the questionnaire was revised and submitted for NCES review and approval. This draft was then pretested through calls to technology specialists in selected public school districts. Following the pretest, the questionnaire was revised again and submitted with this official request for OMB clearance. Questionnaires will be mailed in June 2008 to the superintendent of each sampled district. The cover letter (see Attachment 1) will explain that the survey is designed to be completed by the person most knowledgeable about educational technology within the district. Included in the mailing will be information about the option to complete a Web version of the survey on the Internet. Telephone follow up for nonresponse will begin about 3 weeks after the questionnaires have been mailed to the districts. Experienced telephone interviewers will be trained to conduct the nonresponse follow up and will be monitored by Westat supervisory personnel. Response rates for FRSS district surveys typically have been 90 percent or greater.



Data Collection Instrument


A questionnaire and cover letter (enclosed) will be mailed to each sampled school district in June 2008. The cover letter requests the district’s participation and introduces the purpose and content of the survey. It also notes that the survey should be completed by the person or persons most knowledgeable about educational technology in the district, and it includes instructions on how to complete and return the survey, as well as contact information for queries. Included in the mailing will be information about the option to complete a Web version of the survey.


The questionnaire will collect information on various aspects of educational technology access and use in public school districts. Questions 1-6 address features of technology infrastructures in school districts, including local area networks (question 2), district networks (questions 3-5), and backup connections (question 6). Questions 7-9 involve the treatment of older computers, and question 10 asks about district written policies that restrict the use of various technologies in the schools. Questions 11 and 12 address the sorts of technologies made available to teachers and students by districts, and question 13 examines the types of student data kept by districts in electronic data systems, such as attendance data and assessment scores. Question 14 asks districts whether they employ an individual who is responsible for educational technology leadership. Question 15 involves types of teacher professional development in educational technology offered or required by districts, and question 16 elicits the opinions of respondents regarding various aspects of technology in their districts, for example, about the perceived adequacy of funding and technology support.



Review by Persons Outside the Agency


All development work occurred in close collaboration with the Office of Educational Technology. The various draft versions of the instrument were also tested thoroughly with individuals in the field, for example, educational technology specialists in school districts. In addition to multiple rounds of feasibility calls, the questionnaire was most recently pretested through calls to educational technology specialists in school districts. Based on input from these respondents, NCES, and OET, the questionnaire was revised and submitted as Attachment 2 in this official request for OMB clearance.


Survey Cost


The survey is estimated to cost the Federal government about $330,000, including about $300,000 for contractual costs and $30,000 for salaries and expenses. Based upon costs of past FRSS sample surveys, contractual costs are divided into the subtask costs shown in Exhibit 1.


Exhibit 1. Estimated contractual costs by subtask


Subtask                                        Cost

Sampling                                    $10,000
Survey preparation                          $50,000
Data collection                            $125,000
Data analysis                               $40,000
Report preparation and dissemination        $75,000

Total                                      $300,000


Time Schedule


Mailing of the survey is planned for June 2008. About 3 weeks after mail-out of the surveys, Westat will begin telephone follow up for nonresponse. Data collection is scheduled for completion about 12 weeks after the initial mail-out. Exhibit 2 shows the anticipated schedule.


Exhibit 2. Anticipated data collection schedule



                                          Cumulative workdays
Activity                          From submission       From RIMG/OMB
                                  to RIMG/OMB           approval

Package to OMB                           0                    -
Package approved by OMB                 45                    0
Mail-out of questionnaire               55                   10
Follow up started                       80                   25
Follow up completed                    115                   70


Plan for Tabulation and Publication


Most of the analyses of the questionnaire data will be descriptive in nature, providing NCES and other data users with tables, charts, and appropriate explanatory text. Survey responses will be weighted to produce national estimates. Crosstabulations of data items will be made with selected classification variables including:


  • District enrollment size (less than 2,500, 2,500-9,999, and 10,000 or more);

  • Geographical region (Northeast, Southeast, Central, and West);

  • Metropolitan status (central city, suburban, and rural); and

  • Percent of students eligible for free or reduced-price lunch (less than 10 percent, 10 to 19 percent, and 20 percent or more).


Weighted frequency distributions will be produced for all items. Crosstabulations by the analysis variables listed above will be produced for all the categorical items.
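
As an illustration of the planned tabulations, the minimal sketch below shows how weighted frequency distributions and crosstabulations might be produced. It is not part of the survey plan; the file name and column names (weight, enrollment_class, q10_policy) are hypothetical.

    # Minimal sketch of weighted tabulation with pandas; all names are
    # hypothetical and for illustration only.
    import pandas as pd

    df = pd.read_csv("frss93_districts.csv")  # hypothetical response file

    # Weighted national estimate for a yes/no item: the weighted share of
    # districts answering "yes".
    pct_yes = (df.loc[df["q10_policy"] == "yes", "weight"].sum()
               / df["weight"].sum())

    # Weighted crosstabulation of the same item by enrollment size class,
    # expressed as row percentages.
    xtab = pd.crosstab(
        df["enrollment_class"], df["q10_policy"],
        values=df["weight"], aggfunc="sum", normalize="index",
    )
    print(pct_yes)
    print(xtab)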

Statistical Methodology


Reviewing Statisticians

Adam Chu, Senior Statistician, Westat, (301) 251-4326, was consulted about the statistical aspects of the design.


Respondent Universe


The respondent universe for the proposed survey on educational technology will include all regular public school districts in the United States. This survey is one of three related surveys to be conducted under a nested design involving a sample of districts, schools within districts, and teachers within schools. Since the primary focus of the study will be on the school and teacher surveys, a stratified sample design will be employed to select a sample of schools for those two school-based surveys. The resulting school sample will then be used to identify the corresponding sample of districts. An advantage of this approach is that it avoids the need to introduce an additional stage of sampling to select the districts. Moreover, the resulting district sample is unbiased with proper weighting and is expected to be reasonably efficient for estimating district-level statistics that are correlated with district enrollment size. Such an approach has been used successfully in prior FRSS surveys (e.g., the linked Safe Schools Surveys conducted in 1991) and in the ongoing Schools and Staffing Survey (SASS) conducted by NCES.


Ordinarily, the Common Core of Data (CCD) Local Education Agency (LEA) Universe File maintained by NCES would be used to create a frame for sample selection purposes. However, this is not necessary for the present study because the sample of districts will be selected indirectly through the selection of a stratified sample of schools. Note that although the CCD LEA Universe File will not explicitly be used in the sampling process, the sample of districts derived from the school sample will nonetheless be representative of all districts in the nation. As indicated in Table 1, a total of 13,832 “regular” school districts are included in the respondent universe (i.e., the population of inference), where regular districts are defined to be those with an NCES type-of-agency code of 1 (local school district that is not a component of a supervisory union) or 2 (local school district component of a supervisory union).


Table 1. Distribution of regular public school districts in the 2005-06 NCES Common Core of Data Local Education Agency (LEA) Universe File by Enrollment Size Class and Region




                              Number of                   Region
Enrollment size class         districts*    Northeast  Southeast  Central    West

Less than 1,000                   6,520         1,134        238    2,733   2,415
1,000 to 2,499                    3,335           849        431    1,352     703
2,500 to 9,999                    3,082           825        645      872     740
10,000 to 99,999                    869           123        220      149     377
100,000 or more                      26             4         14        2       6

TOTAL                            13,832         2,935      1,548    5,108   4,241

* Includes district types 1 (local school district not part of a supervisory union) and 2 (local school district component of a supervisory union). Counts exclude districts with 0 or missing enrollment as reported in the CCD LEA Universe File.



Sample Design


As indicated previously, the sample of districts for this component of the study will be identified through the selection of a stratified sample of schools, where strata are defined by instructional level, enrollment size class, and selected poverty categories based on the percent of students eligible for free or reduced-price lunch. A total of 2,000 schools will be sampled, including approximately 1,000 elementary schools and 1,000 secondary/combined schools. Initially, the 1,000 elementary schools and 1,000 secondary/combined schools will be allocated to the strata in rough proportion to the aggregate measure of size of the schools in each stratum, where the measure of size is defined to be the square root of the number of full-time-equivalent (FTE) teachers in the school. Within strata, schools in the sampling frame will be sorted by type of locale (central city, urban fringe, town, rural) and Office of Education (OE) region to induce additional implicit stratification. The sample of schools will then be selected systematically with probabilities proportionate to the measure of size. We estimate that the resulting sample of schools will yield a total of 1,550 to 1,570 school districts. Table 2 summarizes the expected sample sizes from the proposed design by enrollment size class; an illustrative sketch of the selection procedure follows the table.


Table 2. Initial and expected sample sizes for the proposed district survey on educational technology



District enrollment        Number of districts*    Expected number of    Districts completing
size class                 in population           sample districts      survey†

Less than 1,000                  6,520                    231                   208
1,000 to 2,499                   3,335                    286                   257
2,500 to 9,999                   3,082                    552                   497
10,000 to 99,999                   869                    455                   410
100,000 or more                     26                     26                    23

Total                           13,832                  1,550                 1,395

* Regular districts classified as type 1 or 2 in the CCD LEA Universe File.
† Assumes 90 percent response rate.
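
To make the selection procedure concrete, here is a minimal sketch of systematic probability-proportionate-to-size (PPS) selection within a single stratum, using the square root of FTE teachers as the measure of size as described above. The frame field names are hypothetical, and the sketch ignores certainty selections (schools whose size measure exceeds the sampling interval), which would be handled separately in practice.

    # Minimal sketch (hypothetical field names): systematic PPS selection
    # within one stratum, with size measure sqrt(FTE teachers).
    import math
    import random

    def pps_systematic_sample(frame, n):
        """Select n schools from one stratum with PPS systematic sampling."""
        # Sort by locale and OE region to induce implicit stratification.
        frame = sorted(frame, key=lambda s: (s["locale"], s["oe_region"]))
        sizes = [math.sqrt(s["fte_teachers"]) for s in frame]
        interval = sum(sizes) / n          # sampling interval
        start = random.uniform(0, interval)
        points = [start + k * interval for k in range(n)]
        selected, cum, i = [], 0.0, 0
        for school, size in zip(frame, sizes):
            cum += size
            while i < n and points[i] < cum:
                selected.append(school)    # a school larger than the
                i += 1                     # interval could be hit twice
        return selected

The district sample would then be obtained by deduplicating the district identifiers of the selected schools, which is why the 2,000 sampled schools are expected to yield only 1,550 to 1,570 distinct districts.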



Expected Levels of Precision


Table 3 summarizes the approximate sample sizes and standard errors to be expected under the proposed design for selected analytic domains. The sample sizes refer to the expected numbers of districts completing the questionnaire (not the initial sample sizes), assuming an overall response rate of 90 percent; because the district sample sizes are subject to sampling variability, the final sample sizes may differ from those shown. The standard errors in the table reflect design effects ranging from 1.12 to 1.50. These design effects (i.e., unequal weighting effects) arise because large districts will be selected at relatively higher rates (i.e., have smaller sampling weights) than small districts. The standard errors in Table 3 can be converted to approximate 95 percent confidence bounds by multiplying the entries by 2. For example, an estimated proportion on the order of 20 percent (P = 0.20) for suburban districts would be subject to a margin of error of ±0.036 (±3.6 percent) at the 95 percent confidence level. Similarly, an estimated proportion on the order of 50 percent (P = 0.50) for the total sample would be subject to a margin of error of ±0.032 (±3.2 percent).
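
For readers who wish to reproduce the table entries, a minimal sketch follows. It uses the design-effect-adjusted formula SE = sqrt(deff × p(1 − p) / n), which is consistent with the figures above; the design effect of 1.5 applied to the suburban domain is an assumed value within the stated 1.12 to 1.50 range.

    # Minimal sketch (not from the memo): standard error of an estimated
    # proportion under unequal weighting, SE = sqrt(deff * p * (1-p) / n).
    # The design effect of 1.5 is an assumption; domain n is from Table 3.
    import math

    def design_se(p: float, n: int, deff: float) -> float:
        """Design-effect-adjusted standard error of a proportion."""
        return math.sqrt(deff * p * (1.0 - p) / n)

    se = design_se(0.20, 715, 1.5)  # suburban domain, P = 0.20
    print(round(se, 3))             # 0.018, matching Table 3
    print(round(2 * se, 3))         # ~0.037; the memo's +/-0.036 doubles
                                    # the rounded table entry 0.018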


Table 3. Expected sample sizes (number of responding districts) and corresponding standard errors for estimates of proportions by selected analytic domains




                            Expected       Standard error† of an estimated
Domain                      sample size*       proportion equal to ...
                                           P = 0.20   P = 0.33   P = 0.50

Total sample                   1,395         0.013      0.015      0.016

Metropolitan status
  Central city                   254         0.031      0.036      0.038
  Suburban                       715         0.018      0.022      0.023
  Rural                          427         0.024      0.028      0.030

Region
  Northeast                      290         0.029      0.034      0.036
  Southeast                      305         0.028      0.033      0.035
  Central                        393         0.025      0.029      0.031
  West                           407         0.024      0.029      0.030

Enrollment size class
  Less than 2,500                465         0.020      0.023      0.025
  2,500 to 9,999                 497         0.019      0.022      0.024
  10,000 or more                 433         0.020      0.024      0.025

* Expected number of responding districts assuming a 90 percent response rate.

† Assumes unequal weighting design effects ranging from 1.12 to 1.50 depending on analytic domain.



Estimation and Calculation of Sampling Errors


For estimation purposes, sampling weights reflecting the overall probabilities of selection will be attached to each district data record. These weights will also include upward adjustments for unit nonresponse. To properly reflect the complex features of the sample design, standard errors of the survey-based estimates will be calculated using jackknife replication. Under the jackknife replication approach, 50 subsamples or “replicates” will be formed in a way that preserves the basic features of the full sample design. A set of estimation weights (referred to as “replicate weights”) will then be generated for each jackknife replicate. Using the full sample weights and the replicate weights, estimates of any survey statistic can be calculated for the full sample and for each of the 50 jackknife replicates. The mean square error of the replicate estimates around the full-sample estimate then provides an estimate of the variance of the survey statistic, and its square root provides the standard error. Previous surveys using similar sample designs have yielded relative standard errors (i.e., coefficients of variation) in the range of 2 to 10 percent for most national estimates. Similar results are expected for this survey.
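
As an illustration of the replication approach described above, the sketch below computes a jackknife standard error for a weighted mean. The (R − 1)/R scaling shown is one common convention (JK1) and is an assumption here, not the contractor’s actual specification; the array names are hypothetical.

    # Minimal sketch of jackknife variance estimation with replicate
    # weights; the (R - 1) / R scaling is an assumed JK1 convention.
    import numpy as np

    def jackknife_se(y, full_wt, rep_wts):
        """Jackknife standard error of a weighted mean.

        y       : item responses, shape (n,)
        full_wt : full-sample weights, shape (n,)
        rep_wts : replicate weights, shape (n, R), e.g., R = 50
        """
        full_est = np.average(y, weights=full_wt)
        rep_ests = np.array([
            np.average(y, weights=rep_wts[:, r])
            for r in range(rep_wts.shape[1])
        ])
        R = rep_ests.size
        # Variance = scaled sum of squared deviations of replicate
        # estimates from the full-sample estimate; its square root is
        # the standard error.
        variance = (R - 1) / R * np.sum((rep_ests - full_est) ** 2)
        return float(np.sqrt(variance))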


