









2015-16 National Postsecondary Student Aid Study (NPSAS:16)


Student Interview and Student Records





Supporting Statement Part B

OMB # 1850-0666 v.13







Submitted by

National Center for Education Statistics

U.S. Department of Education







October 2014












  1. Collection of Information Employing Statistical Methods

This submission requests clearance for the 2015-16 National Postsecondary Student Aid Study (NPSAS:16) field test activities. Specific plans are provided below. Materials for field test institution contacting, enrollment list collection, and list sampling activities were submitted in a separate package and approved in July 2014.

    1. Respondent Universe

      1. Institution Universe

To be eligible for NPSAS:16, an institution will be required, during the 2014-15 academic year for the field test and the 2015-16 academic year for the full-scale, to:

  • Offer an educational program designed for persons who have completed secondary education;

  • Offer at least one academic, occupational, or vocational program of study lasting at least 3 months or 300 clock hours;

  • Offer courses that are open to more than the employees or members of the company or group (e.g., union) that administered the institution;

  • Be located in the 50 states, the District of Columbia, or Puerto Rico;1

  • Be other than a U.S. Service Academy; and

  • Have a signed Title IV participation agreement with the U.S. Department of Education.

Institutions providing only avocational, recreational, or remedial courses or only in-house courses for their own employees will be excluded. The five U.S. Service Academies are excluded because of their unique funding/tuition base.

      2. Student Universe

The students eligible for inclusion in the NPSAS:16 sample are those enrolled in an NPSAS-eligible institution, in any term or course of instruction, between July 1, 2014, and April 30, 2015, for the field test, or between July 1, 2015, and April 30, 2016, for the full-scale study, who are:

  • Enrolled in (a) an academic program; (b) at least one course for credit that could be applied toward fulfilling the requirements for an academic degree; (c) exclusively non-credit remedial coursework but who the institution has determined are eligible for Title IV aid; or (d) an occupational or vocational program that required at least 3 months or 300 clock hours of instruction to receive a degree, certificate, or other formal award;

  • Not currently enrolled in high school; and

  • Not enrolled solely in a GED or other high school completion program.

    2. Statistical Methodology

      1. Institution Sample

The NPSAS:16 field test and full-scale institution samples will be selected in a different manner than in previous NPSAS studies. The field test institution frame will be constructed from the IPEDS:2012-13 header, Institutional Characteristics (IC), Completions, and Full-year Enrollment files.2 The full-scale institution frame will be constructed a year later from the IPEDS:2013-14 header, IC, Completions, and Full-year Enrollment files. Creating a separate institution frame for each study yields a more accurate and current full-scale institution sample, because the frame will be constructed from the most up-to-date IPEDS files available; it also eliminates the need to freshen the institution sample. So that we do not burden institutions with both the field test and full-scale data collections, we will remove from the field test frame any large systems (reporters) and individual institutions likely to be selected with certainty (i.e., with a probability of selection equal to one) for the full-scale study.3 We will also remove field test sample institutions from the full-scale frame and later adjust the weights of the full-scale sample institutions so that they represent the full population of eligible institutions.

For the small number of institutions on the frames with missing enrollment information, we will impute the data using the latest IPEDS imputation procedures to guarantee complete frame data. A statistical sample of 600 institutions will then be selected from the field test frame, and about 2,000 institutions will be selected from the full-scale frame. Institutions for both studies will be selected using stratified random sampling with probabilities proportional to a composite measure of size,4 the same methodology used since NPSAS:96. Institution measures of size will be determined using full-year enrollment and baccalaureate completions data. Composite measure of size sampling will ensure that the full-scale target sample sizes are achieved within institution and student sampling strata while also achieving approximately equal student weights across institutions. We will purposively subsample 300 of the 600 field test institutions to allow for some flexibility in the sample, such as excluding institutions unlikely to participate based on past experience.
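To make the selection mechanism concrete, the following minimal Python sketch illustrates systematic probability-proportional-to-size (PPS) selection driven by a composite measure of size. It is an illustration only: the institution counts and target sampling fractions are hypothetical, and a production run would first remove certainty institutions and operate within the explicit strata described below.

    import numpy as np

    def composite_mos(stratum_counts, target_fractions):
        # Composite measure of size: the institution's student-stratum
        # counts weighted by the overall target sampling fraction for
        # each student stratum.
        return float(np.dot(stratum_counts, target_fractions))

    def pps_systematic(sizes, n, rng=np.random.default_rng(16)):
        # Systematic PPS selection: lay the sizes end to end, step
        # through them at a fixed interval from a random start, and keep
        # the units that the selection points land in.
        cum = np.cumsum(np.asarray(sizes, dtype=float))
        interval = cum[-1] / n
        points = rng.uniform(0.0, interval) + interval * np.arange(n)
        return np.searchsorted(cum, points, side="right")

    # Five hypothetical institutions with (baccalaureate, other
    # undergraduate) full-year enrollment counts.
    counts = [(120, 600), (40, 900), (500, 2500), (10, 80), (60, 400)]
    mos = [composite_mos(c, target_fractions=(0.05, 0.01)) for c in counts]
    print(pps_systematic(mos, n=2))   # indices of the selected institutions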

The institutional strata will be the ten sectors that were used for NPSAS:12, which are based on institutional level, control, and highest level of offering:

  • Public less-than-2-Year

  • Public 2-year

  • Public 4-year non-doctorate-granting

  • Public 4-year doctorate-granting

  • Private for-profit less-than-2-year

  • Private for-profit 2-year

  • Private for-profit 4-year

  • Private nonprofit less-than-4-year

  • Private nonprofit 4-year non-doctorate-granting

  • Private nonprofit 4-year doctorate-granting

Further refinement of the ten sectors may be deemed necessary for the full-scale in order to target specific types of institutions that are not being captured sufficiently with the current ten sectors or to adapt to the changing landscape in postsecondary education. For example, the private for-profit 4-year sector could possibly be split into two strata based on academic offerings.

For the field test and full-scale study, we expect overall eligibility rates of 97 and 99 percent, respectively, and an overall institutional participation (response) rate of at least 85 percent. The eligibility and response rates will likely vary across institutional strata. Based on these expected rates, the estimated institution sample sizes and sample yields by the ten institutional strata (described above) are presented in tables 7 and 8 for the field test and full-scale, respectively.

Within each institutional stratum, additional implicit stratification will be accomplished by sorting the sampling frame on the following classifications: (1) historically Black colleges and universities (HBCU) indicator; (2) Hispanic-serving institutions (HSI) indicator;5 (3) Carnegie classification of postsecondary institutions;6 (4) the Office of Business Economics (OBE) Region from the IPEDS header file (Bureau of Economic Analysis of the U.S. Department of Commerce Region);7 (5) state and, within states with large systems (e.g., the SUNY and CUNY systems in New York, the state and technical colleges in Georgia, and the California State University and University of California systems in California), system; and (6) the institution measure of size. The objective of this implicit stratification is to approximate proportional representation of institutions on these measures.
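The implicit stratification amounts to a hierarchical sort of each sector's frame before the systematic selection is applied, as in this minimal sketch (the field names and records are hypothetical):

    # Sorting the frame on the implicit stratification keys before
    # systematic PPS selection spreads the sample approximately
    # proportionally across those categories within each sector.
    frame = [
        {"unitid": 1001, "hbcu": 1, "hsi": 0, "carnegie": "M1",
         "obe_region": 5, "state_system": "GA-USG", "mos": 120.0},
        {"unitid": 1002, "hbcu": 0, "hsi": 1, "carnegie": "B1",
         "obe_region": 8, "state_system": "CA-CSU", "mos": 310.0},
        # ... remaining institutions in the sector ...
    ]
    frame.sort(key=lambda r: (r["hbcu"], r["hsi"], r["carnegie"],
                              r["obe_region"], r["state_system"], r["mos"]))
    sizes = [r["mos"] for r in frame]   # input to pps_systematic() above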

Table 7. NPSAS:16 field test estimated institution sample sizes and yield

Institutional sector | Frame count1 | Number sampled | Number eligible | List respondents
Total | 7,278 | 300 | 290 | 247
Public less-than-2-year | 256 | 5 | 5 | 4
Public 2-year | 1,046 | 11 | 11 | 9
Public 4-year non-doctorate-granting | 348 | 110 | 106 | 95
Public 4-year doctorate-granting | 338 | 0 | 0 | 0
Private nonprofit less-than-4-year | 256 | 6 | 6 | 4
Private nonprofit 4-year non-doctorate-granting | 973 | 125 | 122 | 102
Private nonprofit 4-year doctorate-granting | 609 | 21 | 20 | 17
Private for-profit less-than-2-year | 1,637 | 8 | 7 | 5
Private for-profit 2-year | 1,030 | 5 | 5 | 4
Private for-profit 4-year | 785 | 9 | 9 | 7

1 Institution counts based on IPEDS:2011-12 header files.

NOTE: Detail may not sum to totals because of rounding.

Table 8. NPSAS:16 preliminary full-scale institution sample sizes and yield

Institutional sector | Frame count1 | Number sampled | Number eligible | List respondents
Total | 7,278 | 2,000 | 1,980 | 1,683
Public less-than-2-year | 256 | 22 | 22 | 19
Public 2-year | 1,046 | 376 | 375 | 332
Public 4-year non-doctorate-granting | 348 | 180 | 179 | 162
Public 4-year doctorate-granting | 338 | 338 | 337 | 295
Private nonprofit less-than-4-year | 256 | 20 | 19 | 15
Private nonprofit 4-year non-doctorate-granting | 973 | 325 | 325 | 277
Private nonprofit 4-year doctorate-granting | 609 | 268 | 266 | 222
Private for-profit less-than-2-year | 1,637 | 70 | 67 | 49
Private for-profit 2-year | 1,030 | 120 | 117 | 93
Private for-profit 4-year | 785 | 280 | 273 | 218

1 Institution counts based on IPEDS:2011-12 header files.

NOTE: Detail may not sum to totals because of rounding.

      2. Student Sample

Student Enrollment List Collection

To begin NPSAS data collection, sampled institutions are asked to provide a list of all their NPSAS-eligible undergraduate and graduate students enrolled in the targeted academic year, covering July 1 through June 30. Since NPSAS:2000, institutions have been asked to limit listed students to only those enrolled through April 30. This truncated enrollment period excludes students who first enrolled in May or June, but it allows lists to be collected earlier and, in turn, data collection to be completed in less than 12 months. When evaluated during NPSAS:96, the abbreviated schedule missed only about three percent of the target population, and weighting can account for the minimal lack of coverage.

Given the short time frame for the NPSAS:16 field test, institutions with continuous enrollment will be asked to include students enrolled only through March 31, instead of April 30, to expedite data collection.8 However, following completion of the field test, we will again evaluate the impact of this truncated enrollment period. We will request that the date first enrolled at the institution be included on the lists and that some field test institutions provide lists with students enrolled through the end of June. We will not select student samples from these later lists, but will use administrative data and frame data from the lists to conduct a bias analysis to determine if there are differences between May/June enrollees and all other students. If this analysis shows that there are differences, we will modify our approach prior to the full-scale list collection.

NPSAS:16 will serve as the base year data collection for the 2016/17 Baccalaureate and Beyond Longitudinal Study (B&B:16/17) and will be used to qualify students for cohort membership. To that end, we will ask institutions that award baccalaureate degrees to identify students who are expected to receive the baccalaureate degree by June 30 of the NPSAS year (2015 for the field test; 2016 for the full-scale). Instead of waiting until June for institutions to positively confirm degree award to these students, we will request that enrollment lists include an indicator (B&B flag) of cohort eligibility for students who have received or are expected to receive the baccalaureate degree during the NPSAS year.

As shown in table 9, the percentage of students initially flagged as potential baccalaureate recipients who do not actually receive their bachelor’s degree in the NPSAS year (i.e., the false positive rate) is expected to be high. Therefore, the NPSAS sampling rates for potential baccalaureates and other undergraduate students will be adjusted to yield the appropriate sample sizes, after accounting for the expected false positive and false negative rates by sector (a worked sketch of this adjustment follows table 9).

Table 9. Weighted false positive rate observed in baccalaureate identification, by sector: NPSAS:08

Institutional sector | False positive rate (weighted)
Public 4-year non-doctorate-granting | 34.7
Public 4-year doctorate-granting | 27.2
Private nonprofit 4-year non-doctorate-granting | 22.3
Private nonprofit 4-year doctorate-granting | 20.7
Private for-profit 4-year | 32.9
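Because the expected yields of true baccalaureates and other undergraduates are linear in the two sampling rates, the adjusted rates can be obtained by solving a small linear system. The sketch below illustrates the idea with hypothetical counts, rates, and targets; it is not the study's specification.

    import numpy as np

    n_flagged, n_unflagged = 4_000, 20_000   # undergraduates on the lists
    fp, fn = 0.30, 0.10   # assumed false positive / false negative rates

    bacc_flagged = n_flagged * (1 - fp)              # correctly flagged
    bacc_unflagged = bacc_flagged * fn / (1 - fn)    # missed by the flag
    other_flagged = n_flagged * fp                   # false positives
    other_unflagged = n_unflagged - bacc_unflagged

    # Expected yields of (true baccalaureates, other undergraduates) as a
    # linear function of the rates for flagged and unflagged students.
    A = np.array([[bacc_flagged, bacc_unflagged],
                  [other_flagged, other_unflagged]])
    targets = np.array([500.0, 450.0])   # desired yields by student type
    r_flagged, r_unflagged = np.linalg.solve(A, targets)
    print(round(r_flagged, 4), round(r_unflagged, 4))   # ~0.1772, ~0.0121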



Student Stratification

The student sampling strata for the field test will be:

  • Baccalaureate STEM majors9

  • Baccalaureate business majors

  • Baccalaureate teacher majors

  • All other baccalaureate students

  • Other undergraduate students

  • Master’s students

  • Doctoral STEM majors

  • Doctoral other majors

  • Other graduate students

As in prior NPSAS collections with a B&B spinoff, several student subgroups will be intentionally sampled at rates different from their natural occurrence in the population, to meet specific full-scale analytic objectives. We anticipate that the following four groups will be oversampled in the field test:

  1. Baccalaureate STEM majors

  2. Baccalaureate teacher majors

  3. Doctoral STEM majors

  4. Undergraduate students at all award levels enrolled in for-profit institutions

In addition, because of their sheer number, we anticipate that baccalaureate business majors will be under-sampled. Sampling business majors in proportion to the population would make it difficult to draw inferences about the experiences of baccalaureates more broadly.

In the field test, we will investigate the possibility of identifying federal financial aid applicants or recipients prior to student sampling. If this proves feasible, then in the full-scale study we could stratify students by financial aid application status, Pell Grant or Direct Loan receipt, or Pell Grant or Direct Loan amount. This additional stratification may improve the poststratification weighting adjustment, which is typically done using Pell Grant and Direct Loan control totals.10 To determine feasibility, NCES will talk with Federal Student Aid (FSA) about obtaining student data from CPS, Pell, and/or Direct Loan files prior to sampling. The timing of when the relevant data become available from FSA may be an issue. We will explore how best to combine the financial aid strata with the other strata mentioned above, and we will examine the resulting design effects.

Sample Sizes and Student Sampling

Based on past experience, NCES expects to obtain, at minimum, a 95 percent eligibility rate and a 70 percent student interview response rate overall and in each sector. The expected student sample sizes and sample yields for the field test are presented in table 10. The field test is designed to sample about 4,500 students, similar to NPSAS:12. Table 10 does not show sample sizes adjusted for false positives and false negatives, but a large percentage of the field test sample will be composed of potential baccalaureates in order to obtain a sufficient sample yield for the B&B field tests. The NPSAS field test sample of graduate students is relatively small due to the large baccalaureate sample size.
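As a quick check, the total row of table 10 follows from applying these rates to the planned sample (a sketch; the table values differ slightly because the rates are applied within strata and rounded):

    sampled = 4_511                 # planned field test student sample
    eligible = sampled * 0.95       # ~4,285; table 10 shows 4,286
    respondents = eligible * 0.70   # ~3,000 responding students
    print(round(eligible), round(respondents))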

To meet the truncated field test schedule, students must be selected by mid-May of 2015. As in past NPSAS field tests, the expected 3,000 student respondents will be sufficient to test the NPSAS:16 data collection instruments. However, in order to also achieve good representation of students across the ten sectors, the number of participating institutions needs to be at least 150. If more than 150 lists are received by mid-May, students will be sampled from only 150 of those institutions. Limiting sampling to 150 institutions will increase the student sample size per institution, making the Student Records burden closer to what it will be in the full-scale study.

Students will be sampled on a flow basis as student lists are received, using stratified systematic sampling procedures. Sample yield will be monitored by institutional and student sampling strata, and the sampling rates will be adjusted early, if necessary, to achieve the desired yields.
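A minimal sketch of the within-stratum selection step, with a hypothetical list and rate (in production the rate comes from the design and the yield monitoring just described):

    import random

    def systematic_sample(units, rate, rng=random.Random(2015)):
        # Classic systematic sampling: random start in [0, 1/rate), then
        # take every (1/rate)-th unit; each unit's selection probability
        # equals `rate`.
        interval = 1.0 / rate
        point = rng.uniform(0.0, interval)
        picks = []
        while point < len(units):
            picks.append(units[int(point)])
            point += interval
        return picks

    # One (institution, student stratum) cell of an incoming list.
    students = [f"S{i:04d}" for i in range(411)]
    print(len(systematic_sample(students, rate=0.12)))   # about 49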

The student sampling procedures implemented in the field test will be as comparable as possible to those planned for the full-scale study to evaluate the feasibility of the processes and procedures required by the full-scale plan. In particular, following the field test, we will evaluate whether differences in NPSAS and IPEDS definitions of student eligibility result in student counts that either under- or over-estimate NPSAS-eligible students. The field test sample design will not change while we investigate aspects of the sampling plan for the full-scale study. We will describe the outcomes of these explorations and any sample design changes in the full-scale sampling plan and OMB package.

Table 10. Expected student sample sizes and yields for the NPSAS:16 field test

Institutional sector | Sample students (Total / Bacc. / Other UG / Grad.) | Eligible students (Total / Bacc. / Other UG / Grad.) | Responding students (Total / Bacc. / Other UG / Grad.) | Responding students per responding institution1
Total | 4,511 / 1,695 / 2,616 / 200 | 4,286 / 1,610 / 2,486 / 190 | 3,000 / 1,127 / 1,740 / 133 | 12
Public less-than-2-year | 123 / 0 / 123 / 0 | 109 / 0 / 109 / 0 | 67 / 0 / 67 / 0 | 16
Public 2-year | 445 / 0 / 445 / 0 | 407 / 0 / 407 / 0 | 270 / 0 / 270 / 0 | 29
Public 4-year non-doctorate-granting | 951 / 501 / 429 / 21 | 910 / 476 / 414 / 20 | 669 / 337 / 317 / 14 | 7
Public 4-year doctorate-granting | 0 / 0 / 0 / 0 | 0 / 0 / 0 / 0 | 0 / 0 / 0 / 0 | 0
Private nonprofit less-than-4-year | 149 / 0 / 149 / 0 | 142 / 0 / 142 / 0 | 90 / 0 / 90 / 0 | 21
Private nonprofit 4-year non-doctorate-granting | 969 / 506 / 432 / 31 | 921 / 478 / 414 / 29 | 692 / 345 / 323 / 21 | 7
Private nonprofit 4-year doctorate-granting | 948 / 442 / 379 / 127 | 907 / 421 / 366 / 121 | 693 / 304 / 285 / 86 | 42
Private for-profit less-than-2-year | 249 / 0 / 249 / 0 | 236 / 0 / 236 / 0 | 132 / 0 / 132 / 0 | 25
Private for-profit 2-year | 61 / 0 / 61 / 0 | 59 / 0 / 59 / 0 | 38 / 0 / 38 / 0 | 10
Private for-profit 4-year | 616 / 246 / 349 / 21 | 594 / 235 / 339 / 20 | 350 / 141 / 219 / 12 | 50

1 The number of responding students per participating institution is based on the 247 list respondents shown above in table 7, rather than on the 150 institutions from which students will be selected.

NOTE: Detail may not sum to totals because of rounding. Bacc. = baccalaureates; Other UG = other undergraduate students; Grad. = graduate students.

Table 11. Preliminary student sample sizes and yields, NPSAS:16 full-scale

Institutional sector | Sample students (Total / Bacc. / Other UG / Grad.) | Eligible students (Total / Bacc. / Other UG / Grad.) | Responding students (Total / Bacc. / Other UG / Grad.) | Responding students per responding institution
Total | 126,316 / 51,277 / 53,986 / 21,053 | 120,000 / 48,713 / 51,287 / 20,000 | 84,000 / 34,099 / 35,901 / 14,000 | 50
Public less-than-2-year | 680 / 0 / 680 / 0 | 608 / 0 / 608 / 0 | 382 / 0 / 382 / 0 | 20
Public 2-year | 21,296 / 0 / 21,296 / 0 | 19,617 / 0 / 19,617 / 0 | 13,321 / 0 / 13,321 / 0 | 40
Public 4-year non-doctorate-granting | 12,890 / 7,141 / 3,751 / 1,998 | 12,342 / 6,792 / 3,649 / 1,901 | 9,166 / 4,940 / 2,858 / 1,369 | 57
Public 4-year doctorate-granting | 26,120 / 13,224 / 6,346 / 6,550 | 24,806 / 12,487 / 6,129 / 6,189 | 18,892 / 9,358 / 4,945 / 4,589 | 64
Private nonprofit less-than-4-year | 870 / 0 / 870 / 0 | 838 / 0 / 838 / 0 | 543 / 0 / 543 / 0 | 37
Private nonprofit 4-year non-doctorate-granting | 12,160 / 6,813 / 2,601 / 2,746 | 11,540 / 6,434 / 2,512 / 2,595 | 8,682 / 4,772 / 2,006 / 1,904 | 31
Private nonprofit 4-year doctorate-granting | 13,890 / 7,590 / 2,271 / 4,029 | 13,262 / 7,219 / 2,209 / 3,834 | 9,920 / 5,347 / 1,762 / 2,811 | 45
Private for-profit less-than-2-year | 3,650 / 0 / 3,650 / 0 | 3,482 / 0 / 3,482 / 0 | 1,998 / 0 / 1,998 / 0 | 41
Private for-profit 2-year | 6,890 / 0 / 6,890 / 0 | 6,737 / 0 / 6,737 / 0 | 4,450 / 0 / 4,450 / 0 | 48
Private for-profit 4-year | 27,870 / 16,509 / 5,631 / 5,730 | 26,768 / 15,782 / 5,506 / 5,481 | 16,646 / 9,682 / 3,636 / 3,327 | 76

NOTE: Detail may not sum to totals because of rounding. Bacc. = baccalaureates; Other UG = other undergraduate students; Grad. = graduate students.

Quality Control Checks for Lists and Sampling

The number of enrollees on each institution’s student list will be checked against the latest IPEDS full-year enrollment and completions data. The comparisons will be made for each student level: baccalaureate, undergraduate, and graduate. Based on past experience, only counts within 50 percent of the non-imputed IPEDS counts will pass quality control (QC) and move on to student sampling. We will re-evaluate these checks after the field test for use in the full-scale study.
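The QC rule can be expressed as a simple ratio check, sketched below with illustrative counts (the 50 percent tolerance is the rule described above):

    def passes_qc(list_count, ipeds_count, tolerance=0.50):
        # Pass if the list count is within `tolerance` of the
        # (non-imputed) IPEDS count for the same student level.
        if ipeds_count <= 0:
            return None   # no usable benchmark; route to manual review
        ratio = list_count / ipeds_count
        return (1 - tolerance) <= ratio <= (1 + tolerance)

    print(passes_qc(5_200, 4_800))   # True: ratio is about 1.08
    print(passes_qc(250, 700))       # False: ratio is about 0.36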

Institutions that fail QC will be re-contacted to resolve the discrepancy and to verify that the institution coordinator who prepared the student list clearly understood our request and provided a list of the appropriate students. When we determine that the initial list provided by the institution was not satisfactory, we will request a replacement list. We will proceed with selecting sample students when we have either confirmed that the list received is correct or have received a corrected list.

QC is critical for sampling and all statistical activities, and all statistical procedures will undergo thorough QC checks. We have technical operating procedures (TOPs) in place for sampling and general programming; these TOPs describe how to properly implement statistical procedures and QC checks. All statisticians will use a checklist to ensure that all appropriate QC checks are completed for student sampling.

Some specific sampling QC checks will include, but are not limited to, checking that the:

  • Institutions and students on the sampling frames all have a known, non-zero probability of selection;

  • Distribution of implicit stratification for institutions is reasonable; and

  • Number of institutions and students selected match the target sample sizes.

    3. Methods for Maximizing Response Rates

Response rates in the NPSAS:16 field test and full-scale study are a function of success in two basic activities: identifying and locating the sample members, then contacting them and gaining their cooperation. Two classes of respondents are involved: institutions and the students enrolled in those institutions. Institutions will be asked to provide data from institutional records for sampled students. In this section, we describe our plans for maximizing response to the request for data from institutional records, as well as our plans for maximizing response to the student survey.

The data collection contractor for this effort will be RTI International. RTI has worked with postsecondary institutions on multiple studies on behalf of the Department and has experience both in developing rapport with data providers at postsecondary institutions and in converting student nonrespondents via telephone or web interviews.

      1. Collection of Data from Institutional Records

Our plans for contacting and communicating with institutions, beginning with the process of list acquisition, are designed to ensure the cooperation of as many institutions as possible and to establish rapport with institutional staff. This process will include sending the chief administrator of each institution a package of descriptive materials about the study, making follow-up telephone calls to obtain the chief administrator’s consent and cooperation, and asking the chief administrator to designate an Institutional Coordinator (IC) who will serve as our primary point of contact.

All institution coordinators will receive information about the purposes of NPSAS, a description of their tasks, and assurance of our commitment to maintaining the confidentiality of data. Written materials will be provided to coordinators explaining each phase of the study, as well as their role in each phase. Training of institution coordinators is geared toward the method of data collection selected by the institution (see below). The system used for collecting institutional record data is accessible only with an ID and password and provides institution coordinators with instructions for all phases of study participation. Copies of all written materials, as well as answers to frequently asked questions, are available on the website.

Experienced NPSAS interview staff carry out these contacts and are assigned to specific institutions, which remain their responsibility throughout the data collection process. This allows NPSAS staff members to establish rapport with the institution’s staff and provides those individuals with a consistent point of contact. Staff members are thoroughly trained in basic financial aid concepts and in the purposes and requirements of the study, which helps them establish credibility with the institution staff.

As an additional means of maximizing institutional participation, we have secured endorsements from 24 professional associations for NPSAS:16 (see appendix B).

NPSAS staff will offer several options for providing the Student Records for sampled students (as in prior NPSAS studies) and will invite the coordinator to select the method that is least burdensome and most convenient for the institution. The optional methods for providing student record data are:

  • Student Records obtained via a web-based data entry interface. The web-based data entry interface displays one student at a time, and the coordinator may enter data from top to bottom before moving on to the next student.

  • Student Records obtained by completing an Excel workbook. An Excel workbook will be created for each institution and preloaded with the sampled students’ ID, name, and SSN (if available). To facilitate simultaneous data entry by different offices within the institution, the workbook contains a separate worksheet for each topic area. The user will download the Excel workbook from the secure NPSAS institution website, enter the data, and then upload the completed workbook to the website. Validation checks occur both within Excel as data are entered and when the data are uploaded via the website.

  • Student Records obtained by uploading CSV (comma separated values) files. Institutions with the means to export data from their internal database systems to a flat file may opt for this method of supplying Student Records. Over the last several NPSAS studies, the number of institutions providing data files has increased. Institutions that select this method will be provided with detailed import specifications, and all data uploading will occur through the project’s secure website.

Institution coordinators who elect to use the web-based data entry interface will receive detailed instructions for accessing and using the site.

Prior to data collection, student records will be matched to the U.S. Department of Education Central Processing System (CPS), which contains data on federal financial aid applications, for locating purposes and to reduce the burden of student record abstraction on the institutions. The vast majority of federal aid applicants (about 95 percent) will match successfully to the CPS prior to Student Records data collection. During data collection, institutions will be asked to provide the student’s last name and Social Security number for the small number of federal aid applicants who did not match to the CPS on the first attempt. After Student Records data collection ends, we will submit the new names and Social Security numbers to CPS for file matching. Any new data obtained for these additional students will be delivered in the Electronic Code Book (ECB) along with the data obtained prior to Student Records data collection.

      2. Student Survey: Self-Administered Web and CATI

Methods for maximizing response to the study survey include: (1) tracing of sample members; (2) thorough training for all staff involved in data collection; (3) use of a sophisticated case management system; (4) a carefully designed survey instrument; and (5) detailed plans for averting and converting refusals.

  1. Tracing of Sample Members

To achieve the desired response rate, we propose an integrated tracing approach that consists of up to 12 steps designed to yield the maximum number of locates with the least expense. During the field test, we will evaluate the effectiveness of these procedures for the full-scale study effort. The steps of our tracing plan include the following elements.

    • Matching student list information with NCOA, Telematch, CPS, and other databases, which will yield locating information for the students sampled for NPSAS:16.

    • Providing a system for moving locator information obtained during collection of student record data quickly into CATI so that this new information can be put to immediate use.

    • Lead letter and other mailings as necessary to sample members. A personalized letter (signed by an NCES official) and study brochure will be mailed to all sample members to initiate data collection. This letter will include a toll-free number, study website address, and study ID and password, and will request that sample members complete the self-administered interview over the Internet. A few days after the lead letter mailing, an email message mirroring the letter will also be sent to sample members.

    • Conducting batch tracing before data collection and before and after the start of CATI, as needed.

    • Advance tracing prior to the start of CATI efforts. Not all schools will be able to provide complete or up-to-date locating information for each student, and some cases will require more advanced tracing before mailings can be sent or the cases can be worked in CATI. NPSAS staff will conduct batch tracing on all cases to obtain updated address information prior to mailing the lead letters, which will minimize the number of returned letters and maximize the number of early completes. For cases in which the mailing address, phone number, or other contact information is invalid or unavailable, NPSAS staff will conduct advance tracing prior to lead letter mailout and data collection, searching for address and telephone information. As leads are found, additional searches will be conducted through interactive databases to expand on them. Once locating information is found, more advanced database searches, such as Experian, will be used to provide more comprehensive information for the individual.

    • CATI tracing

    • Pre-intensive tracing including FastData and Accurint. We plan to send cases to both FastData and Accurint to identify a new phone number, to minimize the number of cases requiring more expensive intensive interactive tracing. Through FastData we can tap into 260 million consumer records and over 33 million public records. We are also able to access a national directory assistance database—updated daily—of over 156 million phone numbers. FastData has also recently added a more comprehensive cell phone search (SuperPhones & Phone+Premium) built into existing searches; obtaining reliable cell phone numbers is becoming an increasingly critical component of locating and interviewing this population. Accurint is a flexible search vendor capable of providing a variety of contact information for a very low cost per case. This vendor provides an indicator that the phone number returned has been verified as accurate and belonging to a subject in the past 24 hours. Accurint uses SSN to search, making it a viable tool for NPSAS:16 and the follow-up studies due to the high percentage of SSNs we expect to obtain on the student enrollment lists and through tracing sources and student interviews.

Also during the field test, NPSAS staff will be evaluating whether matching sample members to the National Student Loan Data System (NSLDS), prior to the start of intensive tracing, improves locating rates.

    • Conducting intensive in-house tracing, including proprietary database searches. RTI’s tracing specialists conduct intensive interactive searches to locate contact information for sample members. In NPSAS:08, about 60 percent of sample members requiring intensive tracing were located, and about 59 percent of those located responded to the interview. Intensive interactive tracing differs from batch tracing in that a tracer can assess each case on an individual basis to determine which resources are most appropriate and the order in which they should be used. Intensive interactive tracing is also much more detailed due to the personal review of information. During interactive tracing, tracers utilize all previously obtained contact information to make tracing decisions about each case. These intensive interactive searches are completed using a special program that works with RTI’s CMS to provide organization and efficiency in the intensive tracing process. Sources that may be used, as appropriate, include credit database searches, such as Experian, various public websites, and other integrated database services.

    • Conducting NPSAS List Completer (NLC) searches. NLC is an RTI software application that compiles all available information about a school and its sampled students and sends it to Tracing Services for additional address, phone, and e-mail searches. The application will send Tracing Services the school name, school web address, and total number of students to be worked; if student name, address, and phone number are available, this information will also be sent. Tracing Services will then use the school web page directly to conduct searches and update records in the NLC with any new school or student information found. This application was used in the last round of IES’s National Study of Postsecondary Faculty, locating more than 4,000 new e-mail addresses. Many major universities make student directory information available, and NPSAS staff believe this application could allow for additional success on NPSAS:16.

    • University, college, or personal web pages.

  2. Training for Data Collection Staff

Telephone data collection will be conducted at the contractor’s call center. NPSAS staff at the call center will include Quality Control Supervisors (QCSs), Help Desk Agents (HDAs), Telephone Interviewers (TIs), and Refusal Conversion Specialists. Training programs for these staff members are critical to maximizing response rates and collecting accurate and reliable data.

Quality control supervisors, who are responsible for all supervisory tasks, will attend project-specific training for QCSs in addition to the HDA and TI training. They will receive an overview of the study’s background and objectives and a question-by-question review of the data collection instrument. Supervisors will also receive training in the following areas: providing direct supervision during data collection; handling refusals; monitoring interviews and maintaining records of monitoring results; problem resolution; case review; specific project procedures and protocols; reviewing CATI reports; and monitoring data collection progress.

Training for HDAs, who assist sample members who call the project-specific toll-free line, and Telephone Interviewers is designed to help staff become familiar with and practice using the Help Desk application and survey instrument, as well as to learn project procedures and requirements. Particular attention will be paid to quality control initiatives, including refusal avoidance and methods to ensure that quality data are collected. Both HDAs and TIs will receive project-specific training on telephone interviewing, and HDAs will receive additional training specifically geared toward solving technical problems and answering questions from web participants regarding the study or related to specific items within the interview. They will also be able to unlock cases, reissue passwords, and respond to sample member e-mail messages, using prepared text approved by the NCES Contracting Officer’s Representative. At the conclusion of training, all HDAs and TIs must meet certification requirements by successfully completing a certification interview. This evaluation consists of a full-length interview with project staff observing and evaluating interviewers, as well as an oral evaluation of interviewers’ knowledge of the study’s Frequently Asked Questions.

  3. Case Management System

Student interviews will be conducted using a single web-based survey instrument for both self-administered and CATI data collection. The data collection activities will be accomplished through the Case Management System (CMS), which is equipped with the following capabilities:

    • on-line access to locating information and histories of locating efforts for each case;

    • state-of-the-art questionnaire administration module with full “front-end cleaning” capabilities (i.e., editing as information is obtained from respondents);

    • sample management module for tracking case progress and status; and

    • automated scheduling module which delivers cases to interviewers and incorporates the following features:

      • Automatic delivery of appointment and call-back cases at specified times. This reduces the need for tracking appointments and helps ensure the interviewer is punctual. The scheduler automatically calculates the delivery time of the case in reference to the appropriate time zone.

      • Sorting of non-appointment cases according to parameters and priorities set by project staff. For instance, priorities may be set to give first preference to cases within certain sub-samples or geographic areas; cases may be sorted to establish priorities between cases of differing status. Furthermore, the historic pattern of calling outcomes may be used to set priorities (e.g., cases with more than a certain number of unsuccessful attempts during a given time of day may be passed over until the next time period). These parameters ensure that cases are delivered to interviewers in a consistent manner according to specified project priorities.

      • Restriction on allowable interviewers. Groups of cases (or individual cases) may be designated for delivery to specific interviewers or groups of interviewers. This feature is most commonly used in filtering refusal cases, locating problems, or foreign language cases to specific interviewers with specialized skills.

      • Complete records of calls and tracking of all previous outcomes. The scheduler tracks all outcomes for each case, labeling each with type, date, and time. These are easily accessed by the interviewer upon entering the individual case, along with interviewer notes, thereby eliminating the need for a paper record of calls of any kind.

      • Flagging of problem cases for supervisor action or supervisor review. For example, refusal cases may be routed to supervisors for decisions about whether and when a refusal letter should be mailed, or whether another interviewer should be assigned.

      • Complete reporting capabilities. These include default reports on the aggregate status of cases and custom report generation capabilities.

The integration of these capabilities reduces the number of discrete stages required in data collection and data preparation activities and increases capabilities for immediate error reconciliation, which results in better data quality and reduced cost. Overall, the scheduler provides a highly efficient case assignment and delivery function by reducing supervisory and clerical time, improving execution on the part of interviewers and supervisors by automatically monitoring appointments and call-backs, and reducing variation in implementing survey priorities and objectives.

  4. Survey Instrument Design

To prepare the student records instrument, NCES convened a technical review panel in January 2014 to discuss the challenges of responding to the NPSAS list and student records data collection requests, and approaches that might facilitate the process. In June 2014, NCES received approval to conduct focus groups with institution staff who had participated in past NPSAS student records data collections. This qualitative evaluation informed refinements to the items used in the student records instrument for which clearance is requested, as well as to system functionality.

Student interview preparation has involved two meetings of the NPSAS technical review panel. The June 2014 meeting focused specifically on the design and content of the graduate student portion of the survey, while the August 2014 meeting covered the survey more broadly, across all topics and student levels. Since the August meeting, cognitive testing with graduate students has helped to clarify concepts to be used in the student survey and has highlighted differences across the various levels and degrees of graduate students. Following the field test data collection, additional cognitive and usability testing is planned with the programmed, mobile-friendly instrument.

The NPSAS:16 instruments will be developed in a web-based instrument design and deployment system created by RTI, known as Hatteras, which has been in use since NPSAS:08. Hatteras is a flexible system that provides multimode functionality, whereby the survey instrument is created once and can be used for self-administration (including on mobile devices), CATI, CAPI, or data entry.

In addition to the functional capabilities of the CMS and web instruments described above, our efforts to achieve the desired response rate will include using established procedures proven effective in other large-scale studies we have completed. These include:

    • Providing multiple response modes, including mobile-friendly self-administered and interviewer-administered options.

    • Offering incentives to encourage response (see incentive structure described below).

    • Assigning experienced CATI data collectors who have proven their ability to contact and obtain cooperation from a high proportion of sample members.

    • Training the interviewers thoroughly on study objectives, study population characteristics, and approaches that will help gain cooperation from sample members.

    • Maintaining a high level of monitoring and direct supervision so that interviewers who are experiencing low cooperation rates are identified quickly and corrective action is taken.

    • Making every reasonable effort to obtain an interview at the initial contact, but allowing respondent flexibility in scheduling appointments to be interviewed.

    • Thoroughly reviewing all refusal cases and making special conversion efforts whenever feasible (see next section).

  5. Refusal Aversion and Conversion

Recognizing and avoiding refusals is important to maximize the response rate. We will emphasize this and other topics related to obtaining cooperation during data collector training. Supervisors will monitor interviewers intensely during the early days of data collection and provide retraining as necessary. In addition, the supervisors will review daily interviewer production reports produced by the CATI system to identify and retrain any data collectors who are producing unacceptable numbers of refusals or other problems.

After encountering a refusal, the data collector enters comments into the CMS record. These comments include all pertinent data regarding the refusal situation, including any unusual circumstances and any reasons given by the sample member for refusing. Supervisors will review these comments to determine what action to take with each refusal. No refusal or partial interview will be coded as final without supervisory review and approval. In completing the review, the supervisor will consider all available information about the case and will initiate appropriate action.

If a follow-up is clearly inappropriate (e.g., there are extenuating circumstances, such as illness or the sample member firmly requested that no further contact be made), the case will be coded as final and will not be recontacted. If the case appears to be a “soft” refusal, follow-up will be assigned to an interviewer other than the one who received the initial refusal. The case will be assigned to a member of a special refusal conversion team made up of interviewers who have proven especially adept at converting refusals.

Refusal conversion efforts will be delayed for at least one week to give the respondent some time after the initial refusal. Attempts at refusal conversion will not be made with individuals who become verbally aggressive or who threaten to take legal or other action. Refusal conversion efforts will not be conducted to a degree that would constitute harassment. We will respect a sample member’s right to decide not to participate and will not infringe on this right by carrying conversion efforts beyond the bounds of propriety.

    4. Tests of Procedures and Methods

NCES’s goal for the full-scale NPSAS:16 study is to reduce total error relative to NPSAS:12 so that informed decisions may be made given the resources provided. The NPSAS:16 field test will address some, although not all, of the initiatives that will be implemented in the full-scale study. Through an analysis of previous NPSAS studies, NPSAS staff are focusing efforts on the following sources of error, which are likely to have the largest impact on improving estimates. Specifically:

Frame Error – To reduce error arising from the sampling frame, we will use more recent IPEDS data for the frame, reducing reliance on the poststratification adjustment and yielding smaller standard errors.

Coverage Error – NPSAS staff will re-test tracing and contacting procedures to reduce non-response due to bad contact information.

Measurement Error due to questionnaire design – The field test will test new questions that were previously tested in cognitive labs. Other questions are being piloted and will be revised using cognitive labs following this data collection. The field test experiments are also designed to test whether error is being introduced by how the items are designed, and whether supplemental help text has the desired effect.

Measurement Error due to variability across telephone interviewers – The field test will test procedures for training telephone interview staff.

We are exploring additional strategies to employ in the full-scale study to reduce total error, including responsive/adaptive design to reduce non-response error, adding loan information/data from NSLDS to the sampling design to reduce frame error, and updating poststratification control totals to better match the NPSAS target population.

The NPSAS:16 field test will include two sets of data collection experiments: the first set of experiments focuses on survey participation, to address nonresponse, and the second set focuses on data accuracy to further investigate measurement error.

      1. Experiment #1: Evaluation of Burden and Motivation to Participate

In part because NPSAS:16 will launch a mobile option for the entire survey, a considerable challenge during data collection will likely be convincing sample members to start a 30-minute survey, raising concern that participation rates in the student survey will decline. Motivated by the “foot in the door” technique (Freedman and Fraser, 1966), in which a small request is followed by a larger request, we propose dividing the survey instrument into two modules of approximately 10 to 15 minutes each, to determine whether breaking the survey into two smaller tasks increases the likelihood that sample members will participate.

Sample members will be assigned at random to one of the three conditions described below.

  1. Treatment Group 1: Sample members will be asked to first complete a 10- to 15-minute survey (Module 1) for an initial incentive offer of $15. At the end of the first module, respondents will be offered the option to continue with the second half of the survey (Module 2), also 10 to 15 minutes in duration, to receive another $15 (for a total of $30 for the survey).

  2. Treatment Group 2: Sample members will be asked to complete the same 10- to 15-minute Module 1 for an initial incentive offer of $20, then will be offered the opportunity to continue with Module 2, also 10 to 15 minutes in duration, to receive another $10 (for a total of $30 for the survey).

  3. Control Group: Sample members will be asked to complete the usual 30-minute survey, as in prior NPSAS data collections. They will receive a $30 incentive for the completed survey.

For the first time, sample members will be offered the option of receiving their incentives by check, which typically takes 3 to 4 weeks for delivery, or by PayPal, which delivers payment immediately upon completion of a module.

The modules will include questions designed specifically to assess measurement error due to recall, fatigue, lack of motivation, and so on. Module 1 will include questions to obtain information also available from administrative sources to evaluate accuracy (N16CFEDAMT). Some of those same items will be repeated in Module 2. In addition, Module 2 will include questions on fictitious issues (N16ASNOW and N16SPNNOW) and some questions with reversed wording (N16ACDSATIS and N16SATISACD). The three groups will be evaluated on participation and response rates, breakoff, and timing.
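A minimal sketch of the random assignment to the three conditions (the labels, seed, and sample size are illustrative, not study specifications):

    import random

    CONDITIONS = ("treatment_1_15_15", "treatment_2_20_10", "control_30")

    def assign_conditions(sample_ids, seed=2016):
        # Shuffle with a fixed seed for reproducibility, then deal the
        # cases out round-robin so the three groups are equal in size.
        rng = random.Random(seed)
        ids = list(sample_ids)
        rng.shuffle(ids)
        return {sid: CONDITIONS[i % 3] for i, sid in enumerate(ids)}

    groups = assign_conditions(range(4_500))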

      2. Experiment #2: Questionnaire Design

Another potential source of measurement error is the survey instrument itself. Given the mobile survey option that will be launched for NPSAS:16, we will need to introduce changes to the survey instrument design to facilitate completion on a smaller screen size with mobile phone navigation. Two types of question designs will be restructured and evaluated across devices.

Questionnaire Design 1

  • Design Group 1: Rather than presenting the questions on high school coursework as a grid, which asks and obtains responses to several questions on a single screen, a series of three yes/no questions will be asked in succession.

  • Control Group 1: The standard presentation, in which the questions about high school courses are asked in a single grid.

Questionnaire Design 2

  • Design Group 2: Rather than asking parents’ education as a single question with several response options, which may not display properly on the screen of a mobile device, we will use a branching design in which a general question with limited response options is asked first, followed by a series of more specific branching questions to obtain the details.

  • Control Group 2: The standard presentation, in which parents’ education is asked as a single question.

Questionnaire Design 3

  • Design Group 3: Among students who report having studied abroad, a follow-up question will ask respondents to select from a list all countries in which they have studied abroad.

  • Control Group 3: Respondents will be presented with a text box in which to list all countries in which they have studied abroad.

      3. Experimental Design

The “motivation to participate” and three questionnaire design experiments described above will test the 23 null hypotheses outlined below. The experimental design includes estimation of the minimum difference between the control and treatment groups necessary to detect statistically significant differences.

The control and treatment groups with the null hypotheses to be tested are defined as follows:

Experiment #1: Evaluation of Burden and Motivation to Participate

  • Treatment group 1 – Participants will be asked to complete a 10- to 15-minute survey (Module 1) and be offered $15. At the end of the first module, participants will be asked if they would like to continue with another 10- to 15-minute module and receive an additional $15.

  • Treatment group 2 – Participants will be asked to complete a 10- to 15-minute survey (Module 1) and be offered $20. At the end of the first module, participants will be asked if they would like to continue with another 10- to 15-minute module and receive an additional $10.

  • Control group – Participants will be asked to complete a 30-minute survey and will receive a $30 incentive.

  1. Response rates will not be different for the two treatment groups combined than for the control group.

  2. Response rates for the first module will not be different for the two treatment groups combined than for the control group.

  3. Response rates for the second module will not be different for the two treatment groups combined than for the control group.

  4. Breakoff rates will not be different for the control group than for the two treatment groups combined.

  5. Participation rates will not be different for the two treatment groups combined than for the control group.

  6. Mean time to complete the first module will not be different for the two treatment groups combined than for the control group.

  7. Mean time to complete will not be higher for the first module for the two treatment groups combined than for the second module for the two treatment groups combined.

  8. Item missingness rates for the first module will not be different for the control group than for the two treatment groups combined.

  9. Item missingness rates will not be higher for the second module of the two treatment groups combined than for the first module of the two treatment groups combined.

  10. Percentage differences between estimates and benchmarks will not be different for the control group than for the two treatment groups combined.

  11. Percentage differences between estimates and benchmarks will not be higher for the second module of the two treatment groups combined than for the first module of the two treatment groups combined.

  12. Percentages of substantive responses to fictitious issues will not be different for the control group than for the two treatment groups combined.

  13. Percentages of substantive responses to fictitious issues will not be higher for the second module of the two treatment groups combined than for the first module of the two treatment groups combined.

  14. Participation rates will not be different for treatment group 1 than for treatment group 2.

  15. Response rates will not be different for treatment group 1 than for treatment group 2.

Experiment #2: Questionnaire Design

Questionnaire Design 1

  • Treatment group – Participants will receive the high school course questions as three yes/no questions.

  • Control group – Participants will receive the high school course questions in a grid format.

  16. Mean time to complete the interview will not be higher for the treatment group than for the control group.

  17. Item missingness rates will not be different for the treatment group than for the control group.

  18. Percentage differences between estimates and benchmarks will not be different for the treatment group than for the control group.

Questionnaire Design 2

  • Treatment group – Participants will receive the parents’ education questions as a series of branching questions.

  • Control group – Participants will receive the parents’ education questions in a checkbox format.

  19. Mean time to complete the interview will not be higher for the treatment group than for the control group.

  20. Item missingness rates will not be different for the treatment group than for the control group.

  21. Percentage differences between estimates and benchmarks will not be different for the treatment group than for the control group.

Questionnaire Design 3

  • Treatment group – Participants will receive the study abroad question in a closed-ended format (selecting countries from a list, as described for Design Group 3 above).

  • Control group – Participants will receive the study abroad question in an open-ended format (a text box).

  22. Item missingness rates will not be different for the treatment group than for the control group.

  23. The percentage difference between the mean number of countries provided for the treatment group and the control group will be zero.

      4. Detectable Differences

The differences between the control and treatment group(s), between the two modules, and between the two treatment groups necessary to detect statistically significant differences are shown in table 12. The following assumptions were made in computing detectable differences:

  1. Detectable differences with 95 percent confidence were calculated as follows:

    1. Hypotheses 7, 9, 11, 13, 16, and 19 assume a one-tailed test.

    2. Hypotheses 1, 2, 3, 4, 5, 6, 8, 10, 12, 14, 15, 17, 18, 20, 21, 22, and 23 assume a two-tailed test.

  2. The sample will be equally distributed across the three experimental groups (control and two treatment groups) for hypotheses 1 through 15.

  3. The sample will be equally distributed between the two experimental groups (control and treatment) for hypotheses 16 through 23.

  4. Analysis of participation rates in hypotheses 5 and 14 will include all sample members, both eligible and ineligible students.

  5. Analysis of response rates in hypotheses 1, 2, 3, and 15 will include all eligible sample members.

  6. Analysis of hypotheses 4, 6, 7, 8, 9, 10, 11, 12, 13, 16, 17, 18, 19, 20, 21, 22, and 23 will include respondents.

  7. The eligibility rate will be 95 percent.

  8. Comparisons between the first and second modules for the two treatment groups combined will include only those students who completed both modules; 50 percent of students responding to the first module are assumed to also respond to the second module.

  9. The response rate for the control group in hypotheses 1, 2, and 3 will be 60 percent.

  10. The breakoff rate for the control group in hypothesis 4 will be 20 percent.

  11. The participation rate for the control group in hypothesis 5 will be 60 percent.

  12. The mean time to complete the first module for the control group for hypothesis 6 will be 15 minutes.

  13. The mean time to complete the second module for the two treatment groups combined for hypothesis 7 will be 15 minutes.

  14. The item missingness rate for the first module of the control group for hypothesis 8 will be 10 percent.

  15. The item missingness rate for the second module of the two treatment groups combined for hypothesis 9 will be 10 percent.

  16. The percentage difference between estimates and benchmarks for the control group for hypothesis 10 will be 10 percent.

  17. The percentage difference between estimates and benchmarks for the second module of the two treatment groups combined for hypothesis 11 will be 10 percent.

  18. The percentage of substantive responses to fictitious issues for the control group for hypothesis 12 will be 30 percent.

  19. The percentage of substantive responses to fictitious issues for the second module of the two treatment groups combined for hypothesis 13 will be 30 percent.

  20. The participation rate for treatment group 1 in hypothesis 14 will be 60 percent.

  21. The response rate for treatment group 2 in hypothesis 15 will be 60 percent.

  22. The mean time to complete the interview for the control group for hypotheses 16 and 19 will be 15 minutes.

  23. The item missingness rate for the control group for hypotheses 17, 20, and 22 will be 10 percent.

  24. The percentage difference between estimates and benchmarks for the control group for hypotheses 18 and 21 will be 10 percent.

  25. The statistical tests will have 80 percent power with an alpha of 0.05.

  26. The study design effect will be about 2.0.

  27. The intraclass correlation will be about 0.2 when comparing the control group with the treatment group(s).

  28. The intraclass correlation will be about 0.8 when comparing the first module of the treatment groups with the second module.
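
The following minimal sketch (in Python) illustrates the detectable-difference calculation under the assumptions above: a standard normal-approximation formula for two proportions, with sample sizes deflated by the design effect and, for within-person module comparisons, variance reduced by the intraclass correlation. Because the study's calculations may include additional adjustments (e.g., the eligibility and module-completion rates in assumptions 7 and 8), this sketch approximates rather than reproduces the values in table 12.

    from statistics import NormalDist

    Z = NormalDist()  # standard normal distribution

    def detectable_diff_between_groups(p, n1, n2, deff=2.0, alpha=0.05,
                                       power=0.80, two_tailed=True):
        """Minimum detectable difference in a proportion between two
        independent groups, with sample sizes deflated by the design effect."""
        tail = 1 - alpha / 2 if two_tailed else 1 - alpha
        z_total = Z.inv_cdf(tail) + Z.inv_cdf(power)
        var = p * (1 - p) * deff * (1 / n1 + 1 / n2)
        return z_total * var ** 0.5

    def detectable_diff_within_person(p, n, icc, deff=2.0, alpha=0.05,
                                      power=0.80, two_tailed=False):
        """Minimum detectable difference in a proportion between two modules
        answered by the same respondents; the intraclass correlation (icc)
        shrinks the variance of the within-person difference."""
        tail = 1 - alpha / 2 if two_tailed else 1 - alpha
        z_total = Z.inv_cdf(tail) + Z.inv_cdf(power)
        var = 2 * p * (1 - p) * deff * (1 - icc) / n
        return z_total * var ** 0.5

    # Hypothesis 8 (two-tailed): 10% missingness, 1,000 vs. 2,000 respondents.
    print(round(100 * detectable_diff_between_groups(0.10, 1000, 2000), 1))   # 4.6 (table 12: 4.4)
    # Hypothesis 9 (one-tailed): 10% missingness, 1,000 paired respondents, ICC 0.8.
    print(round(100 * detectable_diff_within_person(0.10, 1000, icc=0.8), 1)) # 2.1 (table 12: 2.0)

Detectable differences in mean completion time (hypotheses 6, 7, 16, and 19) follow the same pattern, with the proportion variance p(1 - p) replaced by an assumed variance of completion times.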


Table 12. Detectable differences for field test experiment hypotheses

Hypothesis | Control group (sample size) | Treatment group (sample size) | Detectable difference with 95 percent confidence
1 | 30-minute survey (1,429) | Two 10-15 minute survey modules (2,857) | 5.7
2 | 30-minute survey (1,429) | Two 10-15 minute survey modules (2,857) | 5.7
3 | 30-minute survey (1,429) | Two 10-15 minute survey modules (2,857) | 5.7
4 | 30-minute survey (1,000) | Two 10-15 minute survey modules (2,000) | 5.7
5 | 30-minute survey (1,504) | Two 10-15 minute survey modules (3,007) | 5.6
6 | 30-minute survey (1,000) | Two 10-15 minute survey modules (2,000) | 3.7 minutes
7 | Second of the two 10-15 minute survey modules (1,000) | First of the two 10-15 minute survey modules (1,000) | 1.9 minutes
8 | 30-minute survey (1,000) | Two 10-15 minute survey modules (2,000) | 4.4
9 | Second of the two 10-15 minute survey modules (1,000) | First of the two 10-15 minute survey modules (1,000) | 2.0
10 | 30-minute survey (1,000) | Two 10-15 minute survey modules (2,000) | 4.4
11 | Second of the two 10-15 minute survey modules (1,000) | First of the two 10-15 minute survey modules (1,000) | 2.0
12 | 30-minute survey (1,000) | Two 10-15 minute survey modules (2,000) | 6.4
13 | Second of the two 10-15 minute survey modules (1,000) | First of the two 10-15 minute survey modules (1,000) | 3.2
14 | Two 10-15 minute survey modules for $15 each (1,504) | Two 10-15 minute survey modules for $20/$10 (1,504) | 6.4
15 | Two 10-15 minute survey modules for $15 each (1,429) | Two 10-15 minute survey modules for $20/$10 (1,429) | 6.6
16 | Grid format (1,500) | Three yes/no questions (1,500) | 3.1 minutes
17 | Grid format (1,500) | Three yes/no questions (1,500) | 4.2
18 | Grid format (1,500) | Three yes/no questions (1,500) | 4.2
19 | Checkbox format (1,500) | Branching questions (1,500) | 3.1 minutes
20 | Checkbox format (1,500) | Branching questions (1,500) | 4.2
21 | Checkbox format (1,500) | Branching questions (1,500) | 4.2
22 | Closed-ended format (225) | Open-ended format (225) | 12.2
23 | Closed-ended format (225) | Open-ended format (225) | 6.7

NOTE: Sample sizes appear in parentheses. Detectable differences are in percentage points except where noted in minutes.


    1. Reviewing Statisticians and Individuals Responsible for Designing and Conducting the Study

The study is being conducted for the National Center for Education Statistics (NCES), U.S. Department of Education. NCES’s prime contractor is RTI International (RTI). Subcontractors include Coffey Consulting; Hermes; HR Directions; Kforce Government Solutions, Inc.; Research Support Services; Shugoll Research; and Strategic Communications, Inc. Consultants are Dr. Sandy Baum, Dr. Stephen Porter, and Ms. Alisa Cunningham. Principal professional RTI staff, not listed above, who are assigned to the study include Mr. Jeff Franklin, Ms. Christine Rasmussen, Ms. Kristin Dudley, Ms. Annaliza Nunnery, and Ms. Tiffany Mattox.

The following statisticians at NCES are responsible for the statistical aspects of the study: Dr. Tracy Hunt-White, Dr. Sarah Crissey, Dr. Sean Simone, Mr. Ted Socha, and Dr. Elise Christopher. The following staff members at RTI are working on the statistical aspects of the study design: Dr. Jennifer Wine, Dr. James Chromy, Dr. Natasha Janson, Mr. Peter Siegel, Dr. David Wilson, Dr. Bryan Shepherd, Dr. Emilia Peytcheva, Dr. Andy Peytchev, Dr. John Riccobono, Mr. David Radwin, and Dr. Jennie Woo.



  1. References

Folsom, R.E., Potter, F.J., and Williams, S.R. (1987). Notes on a Composite Size Measure for Self-Weighting Samples in Multiple Domains. Proceedings of the Section on Survey Research Methods of the American Statistical Association, 792-796.



1 Institutions in Puerto Rico were not eligible for NPSAS:12.

2 A preliminary sampling frame has been created using IPEDS:2011-12 data, on which frame counts in table 7 are based. The frame will be recreated with the most up-to-date data prior to both the field test and full-scale sample selections.

3 There is a small chance that certain institutions, such as small systems, may be selected for both the field test and full-scale studies.

4 Folsom, R.E., Potter, F.J., and Williams, S.R. (1987). Notes on a Composite Size Measure for Self-Weighting Samples in Multiple Domains. Proceedings of the Section on Survey Research Methods of the American Statistical Association, 792-796.

5 A Hispanic-serving institution indicator is no longer available from IPEDS, so we will create an indicator following the logic that was previously used for IPEDS.

6 We will decide what, if any, collapsing is needed of the categories for the purposes of implicit stratification.

7 For sorting purposes, Alaska and Hawaii will be combined with Puerto Rico in the Outlying Areas region rather than in the Far West region.

8 All institutions will be asked to include students enrolled through April 30 in the full-scale.

9 STEM (Science, Technology, Engineering, and Mathematics) fields include mathematics; physical sciences; biological/life sciences; computer and information sciences; engineering and engineering technologies; and science technologies.

10 In NPSAS:12, poststratification caused an increase in bias and design effects. Accounting for financial aid in the sampling stratification may help avoid these issues for NPSAS:16 poststratification.

