





National Center for Education Statistics

National Assessment of Educational Progress






National Assessment of Educational Progress (NAEP) 2021 School Survey



Supporting Statement

Part B




OMB# 1850-0957 v.2









February 2021





Part B. Collection of Information Employing Statistical Methods

B.1. Potential Respondent Universe and Sample Design

This survey collects data only at the school level; however, it relies on the standard NAEP sample methodology, which is designed to sample and collect data from students. The following methods were used to adapt the standard NAEP sample for this study.

During a typical NAEP administration, the possible universe of student respondents for NAEP is estimated to be 8.5 million at grades 4 and 8, attending the approximately 125,000 public and private elementary and secondary schools in the 50 states and the District of Columbia, including Bureau of Indian Education and Department of Defense Education Activity (DoDEA) schools. Note that territories, including Puerto Rico, are not included in the national samples.

Respondents are selected according to student sampling procedures with these possible exclusions:

  • The student is identified as an English language learner (ELL) and cannot participate in NAEP, even with the accommodations allowed in NAEP.

  • The student is identified as having a disability (SD) that prevents participation in NAEP, even with the accommodations allowed in NAEP, and has an Individualized Education Program (IEP) or equivalent classification, such as a Section 504 plan.

Additional information regarding the classification of students is provided in Section B.2.b.

B.1.a. Sampling Procedures

During a typical NAEP administration, the goal is to assess a representative sample of students. Therefore, the process begins by identifying a sample of schools with student populations that reflect the varying demographics of a specific jurisdiction, be it the nation, a state, or a district. Within each selected school, students are chosen at random to participate and each has the same chance of being chosen, regardless of socio-economic status, disability, status as an English language learner, or any other factors. Selecting schools that are representative helps ensure that the student sample is representative.

The following are characteristic features of typical NAEP sample designs:

  • for state-level assessments, approximately equal sample sizes (1,750–3,000 assessed students) from each participating state’s1 public schools;

  • for district-level assessments, sample sizes of approximately 1,000–2,000 from each participating district’s public schools;

  • sample sizes of approximately 6,000–20,000 for national-only operational subjects, depending on the size of the item pool;2

  • sample sizes of approximately 3,000–12,000 for pilot assessments, depending on the size of the item pool;3 and

  • in each school, some students to be assessed in each subject.



Initially, the program proceeded with a modified sample design for NAEP 2021 due to COVID-19 outbreak considerations. This design specifies one-half of the typical state-level assessment student sample size (as noted above) for each participating state’s public schools, and no district-level assessments. The NAEP 2021 School Survey will be conducted in the schools that were selected to support the modified design just described. In addition, certain urban districts (i.e., TUDA) were invited to participate in the NAEP 2021 School Survey at the district level under the modified design. This design specifies one-half to two-thirds of the typical district-level assessment student sample size (as noted above) for each participating district’s public schools. Note that although student assessments will not be conducted, these student sample size targets yield the reduced school sample sizes desired under the modified sample design.

The NAEP 2021 School Survey will be conducted in the same schools selected for the NAEP 2021 Teacher and School Questionnaire Special Study administration (see OMB# 1850-0956 v.2).

Additional information about the sampling procedures used in NAEP can be found in the technical documentation at http://nces.ed.gov/nationsreportcard/tdw/sample_design/. Note, while the latest documentation for main NAEP that has been published (as of the drafting of this document) is from 2013, the procedures for selecting schools have essentially remained the same. A summary of the sampling procedures for schools is included below.

Although typical NAEP samples are based on multistage designs, the NAEP 2021 School Survey only requires a one-stage sample of schools.

The following steps were used to select a sample of public schools for the NAEP 2021 School Survey. Private schools are not included in a state-level sample, which focuses solely on public schools.

  1. Generate a sampling frame.
    For sampling frames, NAEP uses the most current versions of the NCES Common Core of Data (CCD; public schools) and Private School Universe Survey (PSS; private schools) files. In addition, to address the fact that the CCD file does not necessarily include the most recent changes to schools by the time of the assessment, NAEP also conducts a survey of NAEP State Coordinators to check for additional new schools in a sample of public school districts.

  2. Classify schools into groups.
    Using the list, schools are classified into groups, first by type of location and then by the race/ethnicity classification within those locations. This step takes into account the distribution of schools and students across rural, suburban, and urban areas in each state, and the diversity of the student population at each school.

  3. Within each group, order schools by a measure related to student achievement.
    Within each group, schools are sorted by student achievement to ensure that schools with varying levels of student achievement are represented in the NAEP sample. This is done using school-level results on state achievement tests. In a few cases where recent achievement data are not available, schools are sorted by the median household income for the area where the school is located.

  4. Assign a measure of size to all schools.
    All schools on the list are assigned a measure of size. A school’s measure of size is based on the size of its enrollment in relation to the size of the state’s student population at the selected grade-level. Larger schools have a larger measure of size as they represent a larger proportion of the state’s student population. This step ensures that students from schools of different sizes are appropriately represented in the sample.

  5. Select the school sample.
    After schools are assigned a measure of size and grouped on an ordered list based on the characteristics that are referred to in previous steps, the sample is selected using stratified systematic sampling with probability proportional to the measure of size using a sampling interval. This procedure ensures that each school has the required selection probability. By proceeding systematically throughout the entire list, schools of different sizes and varying demographics are selected, and a representative sample of students will be chosen for the assessment. Additional details regarding the selection of the school sample are included in the technical documentation (https://nces.ed.gov/nationsreportcard/tdw/sample_design/2013/sample_design_for_the_2013_state_assessment.aspx).

  6. Confirm school eligibility.
    The list of schools selected to participate is sent to each state to verify that the school is eligible for participation. Some factors that would make a school ineligible include schools that have closed or if the grade span has changed so that a grade level or age assessed by NAEP is no longer in the school. Eligibility counts are included in the technical documentation (https://nces.ed.gov/nationsreportcard/tdw/sample_design/2013/eligible_schools_sampled_for_the_2013_state_assessment.aspx). Information on response rates can be found in Section B.3.b.

The process for private school selection is similar to the public school selection process but depends on the U.S. Department of Education's private education system databases to create the initial list of all known private schools. Private schools are sampled to be representative of private schools nationwide. Results for private schools are not included in state-level results, which cover public schools only.


B.1.b. Weighting Procedures

Since each selected school that participates in the survey effort constitutes only a portion of the full population of interest, weights are applied to schools. The weights permit valid inferences to be drawn from the samples about the respective populations from which they were drawn and, most importantly, ensure that the results of the surveys are fully representative of the target populations.

Additional information about the typical weighting procedures used in NAEP can be found in the technical documentation at http://nces.ed.gov/nationsreportcard/tdw/weighting/. Note, while the latest documentation that has been published (as of the drafting of this document) is from 2013, the procedures have essentially remained the same. These procedures will be adapted to produce school weights for the NAEP 2021 School Survey.

The final weights assigned to each school as a result of the estimation procedures are the product of the following steps (which are described in additional detail below):

  • assignment of a “base” weight, the reciprocal of the overall initial probability of selection;

  • adjustment of the school base weights to reduce extreme variability arising from special circumstances; and

  • adjustments for school nonresponse.

School base weights are assigned separately by grade and, as noted, are the reciprocal of the school’s probability of selection for that grade.

Each sampled school receives a base weight, whether or not the school participated in the assessment process. The base weight reflects the number of schools that the sampled school represents in the population of interest. The sum of the school base weights for a given subgroup provides an estimate of the total number of schools in that subgroup.
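The relationship between selection probability, base weight, and population estimate can be sketched as follows; the selection probabilities below are hypothetical, not NAEP values.

```python
# Hypothetical selection probabilities for four sampled schools;
# the base weight is the reciprocal of each selection probability.
probabilities = [0.25, 0.10, 0.50, 0.05]
base_weights = [1 / p for p in probabilities]

# The sum of base weights estimates the total number of schools in the
# population (or subgroup) that the sampled schools represent.
estimated_total = sum(base_weights)
print(estimated_total)
```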

Since nonresponse is unavoidable in any survey of a human population, a weighting adjustment is introduced to compensate for the loss of sample data and to improve the precision of the assessment estimates. Nonresponse adjustments are applied at the school level; the weights of responding schools are adjusted to reflect the nonresponding schools. School nonresponse adjustment cells are formed in part by geography (state or TUDA), urbanicity, and race/ethnicity.
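The cell-level adjustment described above can be sketched as follows; the cells, weights, and response statuses are hypothetical, and the sketch simplifies NAEP's production procedure.

```python
from collections import defaultdict

# Hypothetical sampled schools: (nonresponse cell, base weight, responded?)
schools = [
    ("urban",  4.0, True), ("urban", 10.0, False), ("urban",  6.0, True),
    ("rural", 20.0, True), ("rural",  5.0, True),  ("rural", 15.0, False),
]

# Adjustment factor per cell: total base weight in the cell divided by
# the base weight of the responding schools in that cell.
totals, resp = defaultdict(float), defaultdict(float)
for cell, w, responded in schools:
    totals[cell] += w
    if responded:
        resp[cell] += w
factors = {cell: totals[cell] / resp[cell] for cell in totals}

# Responding schools carry the weight of the nonrespondents in their cell,
# so the adjusted weights still sum to the full-sample total.
adjusted = [(cell, w * factors[cell])
            for cell, w, responded in schools if responded]
```

The key property is that within each cell the adjusted weights of respondents sum to the original total weight, preserving the population estimate while using only responding schools.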

The complexity of the sample design can result in extremely large weights for schools. Since unusually large weights are likely to produce large sampling variances for statistics of interest, and especially so when the large weights are associated with sample cases reflective of rare or atypical characteristics, such weights usually undergo an adjustment procedure that “trims” or reduces extreme weights. Again, the motivation is to improve the precision of the survey estimates. The weight trimming procedure for NAEP uses a multiple median rule to detect excessively large weights.
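A minimal sketch of a multiple-median trimming rule follows; the multiplier `k` and the weights are hypothetical, and NAEP's production procedure differs in detail.

```python
import statistics

def trim_weights(weights, k=3.5):
    """Cap any weight exceeding k times the median weight.
    k = 3.5 is a hypothetical multiplier, not NAEP's published value."""
    cap = k * statistics.median(weights)
    return [min(w, cap) for w in weights]

# One extreme weight gets trimmed down to the cap; the rest are untouched.
weights = [2.0, 3.0, 4.0, 3.5, 40.0]
print(trim_weights(weights))
```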

Estimates of the sampling variance of statistics derived through the survey effort are developed through a replication method known as “jackknife.” This process of replication involves the repeated selection of portions of the sample (replicates). A separate set of weights is produced for each replicate, using the same weighting procedures as for the full sample. The replicate weights, in turn, are used to produce estimates for each replicate (replicate estimates). The variability among the calculated replicate estimates is then used to obtain the variance of the full-sample estimate.
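The jackknife idea can be illustrated with a simplified delete-one version; NAEP in practice forms replicate weight sets from grouped schools rather than dropping single units, and the weights and outcome values below are hypothetical.

```python
# Hypothetical school weights and a 0/1 school-level outcome.
weights = [4.0, 10.0, 2.0, 20.0, 6.0]
values = [1, 0, 1, 1, 0]
n = len(weights)

def weighted_mean(w, y):
    return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

full_estimate = weighted_mean(weights, values)

# Delete-one jackknife: drop each school in turn and recompute the
# estimate from the remaining sample (one replicate per school).
replicates = [
    weighted_mean([w for j, w in enumerate(weights) if j != i],
                  [y for j, y in enumerate(values) if j != i])
    for i in range(n)
]

# The variability among replicate estimates yields the variance
# of the full-sample estimate.
jk_variance = (n - 1) / n * sum((r - full_estimate) ** 2 for r in replicates)
```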


B.2. Procedures for Collection of Information

B.2.a. Recruitment of Schools

Note: Appendix A of this package includes the communication and recruitment materials that will be used for NAEP 2021 School Survey recruitment.

Once the sample of schools is selected, the NAEP State Coordinators typically follow a standard set of procedures for securing the voluntary participation of public and nonpublic schools. The process includes:

  • sending a letter/email4 from the NAEP State Coordinators to district superintendents (see Appendix A-1);

  • sending a letter/email from NAEP State Coordinators to school principals (see Appendix A-2);

  • sending appropriate attachments (for this study, the MyNAEP survey instructions) to accompany the letters sent to school principals (see Appendix A-3);

  • sending a letter/email from the Gaining Cooperation Recruiter (GCR) to private school principals (see Appendix A-4);

  • sending a letter/email from Mary Erbe, Private School Task Force Leader, to diocese superintendents (see Appendix A-5);

  • sending a sample endorsement letter/email from private school organization to private school administrator (see Appendix A-6);

  • sending a sample email from a Catholic diocese to Catholic school administrators for the NAEP 2021 School and Teacher Questionnaire Special Study and the NAEP 2021 School Survey (see Appendix A-7).

In addition, translated versions of the public-school letters will be utilized in Puerto Rico (see Appendices A-9 and A-10).

B.2.b School Coordinator Responsibilities

School coordinators are responsible for completing the NAEP 2021 School Survey on behalf of their schools using the MyNAEP system, a secure online site that gives participating schools a convenient way to take part in the study. School coordinators can download a PDF of the survey items to facilitate gathering the information and then enter it into MyNAEP once all the information has been collected. MyNAEP serves as the primary resource and action center throughout the study and allows school coordinators to participate at their own pace.

B.3. Methods to Secure Cooperation, Maximize Response Rates, and Deal with Nonresponse

Schools within each state will be selected, and the chief state school officer and the NAEP State or TUDA Coordinator will be asked to solicit their cooperation. Department of Education officials may assist in gaining the cooperation of states and TUDAs, if necessary. In addition to the normal nonresponse procedures that NAEP conducts, NCES will explore the use of administrative or publicly available data to determine whether response rates differ across in-person, fully online, and hybrid schools.

B.4. Pilot Testing and Data Uses

Given the need to provide timely information regarding the state of the pandemic, the survey items were not pilot tested. Rather, the items were reviewed by numerous individuals and consultant groups.

B.5. Consultants on NAEP 2021 School Survey

ETS, Westat, and NCES staff have collaborated on aspects of the design. The primary persons responsible from NCES are: James Woodworth, Peggy Carr, Patricia Etienne, Holly Spurlock, and James Deaton; from ETS: Jay Campbell, Amy Dresher, Jonas Bertling; and from Westat: Chris Averett, Keith Rust, and Greg Binzer. In addition, the NAEP Principals Panel reviewed the survey to determine its suitability.

1 Participating states vary depending on the subject and grade assessed, but may include the 50 states, the District of Columbia, the Department of Defense Education Activity, and (for mathematics assessments only) Puerto Rico.

2 NAEP IRT scaling requires a minimum sample size of 1,500-2,000 students per item in order to estimate stable item parameters. Therefore, national assessments with larger item pools have larger samples.

3 NAEP IRT scaling is conducted for most pilot assessments, requiring a minimum of 1,500-2,000 students per item in order to estimate stable item parameters. Therefore, pilot assessments with larger item pools have larger samples.

4 NAEP State Coordinators will determine whether the best method of sending these letters is via Postal Service or via e-mail.




