
2019–20 NATIONAL POSTSECONDARY STUDENT AID STUDY (NPSAS:20)



Supporting Statement Part B

OMB # 1850-0666 v.25

Submitted by

National Center for Education Statistics

U.S. Department of Education

August 2019

revised October 2019


Tables

Table 1. Number of institutions to be sampled, by control and level of institution and state

Table 2. Number of institutions to be sampled, by control and level of institution

Table 3. Potential first-time beginners’ false positive rates, by source and control and level of institution: 2011–12

Table 4. First-time beginner status determination, by student type: 2011–12

Table 5. First-time beginner false positive rates, by control and level of institution: 2011–12

Table 6. Preliminary graduate student sample sizes, by control and level of institution

Table 7. Preliminary undergraduate student survey sample sizes, by control and level of institution

Table 8. Preliminary undergraduate student sample sizes, by control and level of institution and state

Table 9. Preliminary first-time beginning student (FTB) sample sizes, by control and level of institution

Table 10. Calibration sample design by condition and phase of data collection

Table 11. NPSAS:20 Data Collection Design



Figures

Figure 1. Institution contacting

Figure 2. NPSAS:20 calibration timeline


  B. Collection of Information Employing Statistical Methods

This submission requests clearance for: (1) the 2019–20 National Postsecondary Student Aid Study (NPSAS:20) full-scale student data collection materials and procedures, which include the institution student record data abstraction and the student survey; (2) panel maintenance activities with the field test sample for the 20/22 Beginning Postsecondary Students Longitudinal Study (BPS:20/22); and (3) carry-over of respondent burden, procedures, and materials related to the NPSAS:20 institution sampling, enrollment list collection, and matching to administrative data files, as approved by OMB in July and September 2019 (OMB # 1850-0666 v.23-24). Specific plans are provided below.

    B.1. Respondent Universe

NPSAS:20 will be nationally representative of both undergraduate and graduate students and state-representative of undergraduate students overall and in public 2-year and public 4-year institutions, using a two-stage sampling design: as described below, institutions will be selected in the first stage, and students will be selected from within the sampled institutions in the second stage. Because NPSAS:20 will serve as the base year for the 2020 cohort of the Beginning Postsecondary Students (BPS) Longitudinal Study, it will include a nationally representative sample of first-time beginning students (FTBs).

      B.1.1. Institution Universe

To construct the full-scale institution sampling frame for NPSAS:20, we used institution data collected from various surveys of the Integrated Postsecondary Education Data System (IPEDS). The NPSAS:20 institution (first stage) sampling frame includes all levels (less-than-2-year, 2-year, and 4-year) and control classifications (public, private nonprofit, and private for-profit) of Title IV eligible postsecondary institutions in the 50 states, the District of Columbia, and Puerto Rico.

To be eligible for NPSAS:20, an institution must do the following during the 2019–20 academic year:

  • offer an educational program designed for persons who have completed secondary education;

  • offer at least one academic, occupational, or vocational program of study lasting at least 3 months or 300 clock hours;

  • offer courses that are open to more than the employees or members of the company or group (e.g., union) that administers the institution;

  • be located in at least one of the 50 states, the District of Columbia, or Puerto Rico;

  • be other than a U.S. service academy;1 and

  • have a signed Title IV participation agreement with the U.S. Department of Education.2

Institutions providing only avocational, recreational, or remedial courses or only in-house courses for their own employees will be excluded.

      B.1.2. Student Universe

The student sampling frame will include all students who meet eligibility requirements from the participating institutions. The student (second stage) sampling frame is described below. NPSAS-eligible undergraduate and graduate students are those who were enrolled in the NPSAS institution in any term or course of instruction between July 1, 2019 and April 30, 2020 and who are:

  • enrolled in either (1) an academic program; (2) at least one course for credit that could be applied toward fulfilling the requirements for an academic degree; (3) exclusively noncredit remedial coursework that has been determined by their institution to be eligible for Title IV aid; or (4) an occupational or vocational program that requires at least 3 months or 300 clock hours of instruction to receive a degree, certificate, or other formal award; and

  • not concurrently enrolled in high school; and

  • not enrolled solely in a General Educational Development (GED®)3 or other high school completion program.

    B.2. Statistical Methodology

      B.2.1. Institution Sample

The NPSAS:20 institution sampling frame was constructed from the Integrated Postsecondary Education Data System (IPEDS) 2018-19 Institutional Characteristics Header, 2018-19 Institutional Characteristics, 2018-19 Completions, 2018-19 12-Month Enrollment, and 2017-18 Fall Enrollment files. Freshening the institution sample will not be needed because we used the most up-to-date institution frame available. Some for-profit institutions, including large chains, may have closed or been sold after the latest IPEDS data collection. We have taken this into account in the sample design by using all available resources, such as the Office of Federal Student Aid (FSA) Postsecondary Education Participants System (PEPS) website, and by conducting web searches for articles about closed institutions. When creating the sampling frame from IPEDS, we identified and excluded institutions that are still in IPEDS but are no longer eligible for NPSAS:20 due to closure. For the small number of institutions on the frame with missing enrollment information (because it is not imputed as part of IPEDS), enrollment data will be imputed using the latest IPEDS imputation procedures to guarantee complete data for the frame.4

The institution strata for NPSAS:20 are the following three sectors within each state and territory, for a total of 156 (52 x 3) sampling strata:

    • public 2-year;

    • public 4-year;5 and

    • all other institutions, including:

    • public less-than-2 year;

    • private nonprofit (all levels); and

    • private for-profit (all levels).

Institution sample sizes by the 156 institution strata are presented in table 1. The sample sizes presented in table 1 will allow us to have state-representative6 undergraduate student samples for public 2-year and public 4-year institutions as well as overall. The sample will be nationally representative for both undergraduate and graduate students.

Table 1. Number of institutions to be sampled, by control and level of institution and state

State | Public 2-year (population estimate / sample size) | Public 4-year (population estimate / sample size) | Other sectors (population estimate / sample size) | All institutions (population estimate / total sample size)
Total | 960 / 960 | 777 / 777 | 4,490 / 1,369 | 6,227 / 3,106
Alabama | 24 / 24 | 14 / 14 | 41 / 30 | 79 / 68
Alaska | 0 / 0 | 4 / 4 | 5 / 5 | 9 / 9
Arizona | 20 / 20 | 9 / 9 | 77 / 30 | 106 / 59
Arkansas | 22 / 22 | 11 / 11 | 50 / 30 | 83 / 63
California | 105 / 105 | 50 / 50 | 483 / 30 | 638 / 185
Colorado | 11 / 11 | 19 / 19 | 70 / 30 | 100 / 60
Connecticut | 14 / 14 | 10 / 10 | 46 / 30 | 70 / 54
Delaware | 0 / 0 | 3 / 3 | 14 / 14 | 17 / 17
District of Columbia | 0 / 0 | 2 / 2 | 19 / 19 | 21 / 21
Florida | 30 / 30 | 42 / 42 | 261 / 30 | 333 / 102
Georgia | 24 / 24 | 27 / 27 | 94 / 30 | 145 / 81
Hawaii | 6 / 6 | 4 / 4 | 12 / 12 | 22 / 22
Idaho | 4 / 4 | 4 / 4 | 29 / 29 | 37 / 37
Illinois | 48 / 48 | 12 / 12 | 184 / 30 | 244 / 90
Indiana | 1 / 1 | 15 / 15 | 92 / 30 | 108 / 46
Iowa | 16 / 16 | 7 / 7 | 60 / 30 | 83 / 53
Kansas | 25 / 25 | 8 / 8 | 43 / 30 | 76 / 63
Kentucky | 16 / 16 | 8 / 8 | 65 / 30 | 89 / 54
Louisiana | 15 / 15 | 17 / 17 | 82 / 30 | 114 / 62
Maine | 7 / 7 | 10 / 10 | 20 / 20 | 37 / 37
Maryland | 16 / 16 | 14 / 14 | 49 / 30 | 79 / 60
Massachusetts | 16 / 16 | 14 / 14 | 126 / 30 | 156 / 60
Michigan | 24 / 24 | 22 / 22 | 117 / 30 | 163 / 76
Minnesota | 32 / 32 | 12 / 12 | 54 / 30 | 98 / 74
Mississippi | 15 / 15 | 8 / 8 | 32 / 32 | 55 / 55
Missouri | 17 / 17 | 14 / 14 | 123 / 30 | 154 / 61
Montana | 10 / 10 | 7 / 7 | 14 / 14 | 31 / 31
Nebraska | 9 / 9 | 9 / 9 | 28 / 28 | 46 / 46
Nevada | 0 / 0 | 7 / 7 | 32 / 32 | 39 / 39
New Hampshire | 7 / 7 | 6 / 6 | 25 / 25 | 38 / 38
New Jersey | 19 / 19 | 13 / 13 | 126 / 30 | 158 / 62
New Mexico | 19 / 19 | 9 / 9 | 19 / 19 | 47 / 47
New York | 37 / 37 | 43 / 43 | 348 / 30 | 428 / 110
North Carolina | 58 / 58 | 17 / 17 | 90 / 30 | 165 / 105
North Dakota | 5 / 5 | 9 / 9 | 14 / 14 | 28 / 28
Ohio | 30 / 30 | 36 / 36 | 212 / 30 | 278 / 96
Oklahoma | 22 / 22 | 17 / 17 | 66 / 30 | 105 / 69
Oregon | 17 / 17 | 9 / 9 | 52 / 30 | 78 / 56
Pennsylvania | 18 / 18 | 45 / 45 | 273 / 30 | 336 / 93
Puerto Rico | 5 / 5 | 14 / 14 | 114 / 30 | 133 / 49
Rhode Island | 1 / 1 | 2 / 2 | 19 / 19 | 22 / 22
South Carolina | 20 / 20 | 13 / 13 | 61 / 30 | 94 / 63
South Dakota | 5 / 5 | 7 / 7 | 16 / 16 | 28 / 28
Tennessee | 39 / 39 | 10 / 10 | 101 / 30 | 150 / 79
Texas | 60 / 60 | 49 / 49 | 279 / 30 | 388 / 139
Utah | 3 / 3 | 7 / 7 | 57 / 30 | 67 / 40
Vermont | 1 / 1 | 4 / 4 | 19 / 19 | 24 / 24
Virginia | 24 / 24 | 17 / 17 | 105 / 30 | 146 / 71
Washington | 7 / 7 | 36 / 36 | 58 / 30 | 101 / 73
West Virginia | 12 / 12 | 13 / 13 | 49 / 30 | 74 / 55
Wisconsin | 17 / 17 | 17 / 17 | 63 / 30 | 97 / 64
Wyoming | 7 / 7 | 1 / 1 | 2 / 2 | 10 / 10

SOURCE: Population estimates based on IPEDS 2018-2019 data.

We selected a total of 3,106 institutions, comprising a census of all public 2-year and all public 4-year institutions and a sample of 1,369 institutions from the “all other institutions” stratum. As was achieved in NPSAS:18-AC, for NPSAS:20 we expect about a 99 percent eligibility rate, an 85 percent rate for provision of student enrollment lists, and a 93 percent rate for provision of student records among institutions providing lists. This will yield enrollment lists from approximately 2,614 institutions and student records from approximately 2,431 institutions. Within the “all other institutions” stratum, our goal was to sample at least 30 institutions per state (when possible) so that the institutions in the stratum are sufficiently represented within the state and national samples. We used the following criteria from NPSAS:18-AC to determine NPSAS:20 institution sample sizes within the “all other institutions” stratum:

  1. In states with 30 or fewer institutions in the “all other institutions” strata, we took a census of these institutions.

  2. In states with more than 30 institutions in the “all other institutions” strata and where selecting only 30 institutions would result in a very high sampling fraction, we took a census of institutions. We have arbitrarily chosen 36 institutions as the cutoff to avoid high sampling fractions. This cutoff results in taking a census of institutions in states that have between 31 and 36 institutions in the “all other institutions” strata.7

  3. In states with more than 36 institutions in the “all other institutions” strata, we sampled 30 of these institutions.

Within the “all other institutions” stratum, we selected institutions using a variation of probability proportional to size (PPS) sampling called sequential probability minimum replacement (PMR) sampling.8 This method selects institutions sequentially with probability proportional to size and with minimum replacement. Under minimum replacement, the actual number of hits for an institution can equal either the integer part of its expected number of hits or the next largest integer; that is, institutions have a chance of being selected more than once.9 To avoid the PMR algorithm selecting some institutions multiple times, prior to PMR selection we set aside, for inclusion in the sample with certainty, all institutions with a probability of being selected more than once (that is, their probability of selection was adjusted to one). The probabilities of selection for the other institutions were then adjusted accordingly, prior to PMR selection, so that the total institution sample size target was met. A composite size measure10 will be used to help achieve self-weighting samples11 for student-by-institution strata (e.g., FTBs in public 2-year institutions) and to allow flexibility to change sampling rates in selected strata without losing the self-weighting attribute of the sampling method. Institution composite measures of size will be determined using undergraduate and graduate student enrollment counts and FTB counts from the IPEDS 12-Month Enrollment and Fall Enrollment files, respectively.
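The certainty set-aside described above can be sketched as a simple iteration. The Python sketch below is illustrative only; the composite size measures and the target sample size are hypothetical placeholders, not the actual NPSAS:20 inputs.

# Illustrative sketch of the certainty set-aside prior to PMR selection.

def selection_probabilities(sizes, n):
    """Return per-institution selection probabilities, with any institution whose
    expected number of hits would reach 1 set aside as a certainty selection."""
    certain = set()
    while True:
        remaining = [k for k in sizes if k not in certain]
        remaining_n = n - len(certain)
        total = sum(sizes[k] for k in remaining)
        newly_certain = [k for k in remaining if remaining_n * sizes[k] / total >= 1]
        if not newly_certain:
            break
        certain.update(newly_certain)
    return {k: 1.0 if k in certain else remaining_n * sizes[k] / total for k in sizes}

# Example: three large institutions become certainties; the remaining
# probabilities sum to n - 3, so the total sample size target is preserved.
sizes = {"A": 50, "B": 40, "C": 30, "D": 5, "E": 5, "F": 3, "G": 2}
print(selection_probabilities(sizes, n=5))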

Within the “all other institutions” stratum, additional implicit stratification was accomplished by sorting the sampling frame by the following classifications, as appropriate:

  1. Control and level of institution;

  2. Historically Black Colleges and Universities (HBCUs) indicator;

  3. Hispanic-serving institutions (HSIs) indicator;12

  4. Carnegie classifications of postsecondary institutions;13 and

  5. The institution measure of size.

The objective of this implicit stratification is to approximate proportional representation of institutions on these measures.
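A minimal sketch of how this implicit stratification could be carried out, assuming a hypothetical frame of records with the fields listed above (the field names are illustrative, not the actual frame layout): sorting the frame on these keys before sequential PPS selection spreads the sample approximately proportionally across the sort categories.

# Hypothetical frame records; field names are illustrative assumptions.
frame = [
    {"unitid": 1, "sector": "private nonprofit 4-year non-doctorate", "hbcu": 0,
     "hsi": 1, "carnegie": "M1", "size_measure": 1200.0},
    {"unitid": 2, "sector": "private for-profit 2-year", "hbcu": 0,
     "hsi": 0, "carnegie": "SF", "size_measure": 350.0},
]

# Sort by the implicit stratifiers in the order listed above; sequential selection
# then runs down this ordering, giving approximately proportional representation.
frame.sort(key=lambda r: (r["sector"], r["hbcu"], r["hsi"], r["carnegie"], r["size_measure"]))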

Table 2 shows the approximate distribution of the sample sizes across control and level of institution:

  • Public less-than-2-year;

  • Public 2-year;

  • Public 4-year, non-doctorate-granting, primarily sub-baccalaureate;

  • Public 4-year, non-doctorate-granting, primarily baccalaureate;

  • Public 4-year, doctorate-granting;

  • Private nonprofit, less-than-4-year;

  • Private nonprofit, 4-year, non-doctorate-granting;

  • Private nonprofit, 4-year, doctorate-granting;

  • Private for-profit, less-than-2-year;

  • Private for-profit, 2-year; and

  • Private for-profit, 4-year.

Table 2. Number of institutions to be sampled, by control and level of institution

Control and level of institution | Population estimate | Sample size
Total | 6,227 | 3,106
Public less-than-2-year | 228 | 37
Public 2-year | 960 | 960
Public 4-year, non-doctorate-granting, primarily sub-baccalaureate | 155 | 155
Public 4-year, non-doctorate-granting, primarily baccalaureate | 228 | 228
Public 4-year, doctorate-granting | 394 | 394
Private nonprofit, less-than-4-year | 200 | 34
Private nonprofit, 4-year, non-doctorate-granting | 936 | 393
Private nonprofit, 4-year, doctorate-granting | 706 | 391
Private for-profit, less-than-2-year | 1,433 | 231
Private for-profit, 2-year | 618 | 160
Private for-profit, 4-year | 369 | 123

SOURCE: Population estimates based on IPEDS 2018-2019 data.

      B.2.2. Student Sample

Student Enrollment List Collection

To begin NPSAS data collection, sampled institutions will be asked to provide a list of all their NPSAS-eligible undergraduate and graduate students enrolled in the targeted academic year, covering July 1 through June 30 (methods for contacting the sampled institutions are described below in section B.3, and student list data elements are described in the previously submitted package). Since NPSAS:04, institutions have been asked to limit listed students to only those enrolled through April 30. This truncated enrollment period excludes students who first enrolled in May or June, but it allows lists to be collected earlier and, in turn, data collection to be completed in less than 12 months. Any lack of coverage resulting from the truncated enrollment period will be accounted for by the poststratification weight adjustment.

Many institutions know their enrolled students prior to April 30 and provide lists in February, March, or April. However, continuous enrollment institutions, including many of the for-profit institutions, typically cannot provide enrollment lists until mid-May, at the earliest, given that the lists include students enrolled through April 30. This results in students from these institutions having less time in data collection and potentially lower survey response rates than other students. For institutions with continuous enrollment, we will change the endpoint of enrollment from April 30 to March 31 to receive their enrollment lists earlier, allowing more time for student data collection. We conducted research using NPSAS:16 data and concluded that we will not significantly harm representation of the target population by excluding students who enroll in continuous enrollment institutions in April for the first time during the academic year. Again, any lack of coverage resulting from the truncated enrollment period will be accounted for by the poststratification weight adjustment.

Student Stratification

The student sampling strata will be:

  1. undergraduate students who are potential FTBs;

  2. other undergraduate students;

  3. graduate students who are veterans;

  4. master’s degree students in science, technology, engineering, and mathematics (STEM) programs;

  5. master’s degree students in education and business programs;

  6. master’s degree students in other programs;

  7. doctoral-research/scholarship/other students in STEM programs;

  8. doctoral-research/scholarship/other students in education and business programs;

  9. doctoral-research/scholarship/other students in other programs;

  10. doctoral-professional practice students; and

  11. other graduate students.

To be comparable to NPSAS:16 and NPSAS:18-AC, we are keeping the graduate strata similar to the sampling strata used in those studies.

If students fall into multiple strata, such as graduate students who are veterans, the ordering of the strata above will be used to prioritize the stratification.
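As an illustration of this prioritization, the sketch below assigns a student to the first applicable stratum in the order listed above. The stratum labels and input flags are hypothetical, not the actual processing specification.

# Illustrative only: assign each student to the highest-priority stratum that applies.
STRATA = [
    "potential FTB", "other undergraduate", "graduate veteran", "master's STEM",
    "master's education/business", "master's other", "doctoral-research STEM",
    "doctoral-research education/business", "doctoral-research other",
    "doctoral-professional practice", "other graduate",
]

def assign_stratum(flags):
    """'flags' maps stratum name -> bool for every stratum a student could fall
    into; return the first (highest-priority) one in the ordering above."""
    for stratum in STRATA:
        if flags.get(stratum):
            return stratum
    raise ValueError("student matched no sampling stratum")

# A graduate student who is both a veteran and a master's STEM student is
# assigned to the higher-priority "graduate veteran" stratum.
print(assign_stratum({"graduate veteran": True, "master's STEM": True}))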

Several student subgroups will be intentionally sampled at rates different from their natural occurrence in the population to meet specific analytic objectives. The following groups will be oversampled:

  • undergraduate students who are potential FTBs;

  • graduate students who are veterans;

  • master’s degree students in STEM programs;

  • doctoral-research/scholarship/other students in STEM programs; and

  • master’s degree students enrolled in for-profit institutions.

Similarly, we anticipate the following groups will be undersampled:

  • master’s degree students in education and business programs; and

  • doctoral-research/scholarship/other students in education and business programs.

Because these two groups are so large, sampling in proportion to the population would make it difficult to draw inferences about the experiences of other master’s degree and doctoral students, respectively.

As was done for NPSAS:16 and NPSAS:18-AC, we will match the student enrollment lists to two supplemental databases prior to sampling (pre-sampling matching). To identify veterans, we will match the student enrollment lists with a list of veterans from the Veterans Benefits Administration (VBA) because the veterans identified by institutions on the lists are incomplete. The information provided by the VBA match will be used with the veteran status from the enrollment lists to explicitly stratify graduate students and implicitly stratify undergraduate students. As in NPSAS:18-AC, the undergraduate students who are veterans will not be oversampled within each state because that would require too large of a total sample size. The implicit stratification will allow the sample proportions of veterans to approximately match the population within institution and student strata, which will ensure that we have enough undergraduate veterans in the sample for analytic purposes. We will also match the student lists to the National Student Loan Data System (NSLDS) data and use the financial aid data for student-implicit stratification. Within the student-explicit strata for graduate students and the veteran-implicit strata for undergraduate students, we will sort the students by federally aided/unaided, which will allow the sample proportions of aided and unaided students to approximately match the population within institution and student strata.

Identification of FTBs

As mentioned in section B.1 above, NPSAS:20 will serve as the base year for the 2020 cohort of BPS and will include a nationally representative sample of FTBs, hence the stratification described above. Accurately classifying sample members as FTBs is a continuing challenge, but it is very important because unacceptably high rates of misclassification (i.e., false positives and false negatives) can and have resulted in: (1) excessive cohort loss with too few eligible sample members to sustain the longitudinal study, (2) excessive cost to “replenish” the sample with little value added, and (3) inefficient sample design (excessive oversampling of “potential” FTBs) to compensate for anticipated misclassification error.

In NPSAS:04, the FTB false positive and false negative rates were 54 and 25 percent, respectively, because institutions tend to have difficulty identifying FTBs. In NPSAS:12 (the next NPSAS after NPSAS:04 and the last NPSAS prior to NPSAS:20 to spin off a BPS cohort), we greatly improved the identification of FTBs from what was provided on student enrollment lists, and we will take several steps early in the NPSAS:20 listing and sampling processes to similarly improve the rate at which FTBs are correctly classified for sampling. First, in addition to an FTB indicator, we will request that enrollment lists provided by institutions (or institution systems) include degree program, class level, date of birth, an indicator for dual enrollment in high school, and high school completion date. Students identified by the institution as FTBs, but also identified as in their third year or higher and/or not an undergraduate student, will not be classified as FTBs for sampling. Additionally, students who are dually enrolled at the postsecondary institution and in high school based on the enrollment in high school (or completion program) indicator and the high school graduation date will not be eligible for sampling. If the FTB indicator is not provided for a student on the list, but the student is 18 years old or younger and does not appear to be dually enrolled, the student will be classified as an FTB for sampling. Otherwise, if the FTB indicator is not provided for a student on the list and the student is over the age of 18, then the student will be sampled as an “other undergraduate,” but will be part of the BPS cohort if identified during the survey as an FTB.
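The list-based classification rules described in this paragraph can be sketched roughly as follows. The field names and the handling of missing values are illustrative assumptions, not the actual processing specification.

# Rough sketch of the list-based FTB classification described above.

def classify_for_sampling(student):
    """Return a sampling classification for a student on an enrollment list."""
    if student.get("dually_enrolled_in_high_school"):
        return "ineligible"                 # enrolled in high school or a completion program
    if student.get("level") != "undergraduate":
        return "other stratum"              # graduate students are stratified separately
    if student.get("ftb_indicator"):
        if student.get("class_level", 1) >= 3:
            return "other undergraduate"    # institution FTB flag contradicted by class level
        return "potential FTB"
    # No FTB indicator on the list: classify students 18 or younger as potential FTBs.
    if student.get("age") is not None and student["age"] <= 18:
        return "potential FTB"
    return "other undergraduate"            # may still join the BPS cohort if the survey confirms FTB status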

Second, prior to sampling, we will match all students listed as potential FTBs to NSLDS records to determine if any have a federal financial aid history pre-dating the NPSAS year (earlier than July 1, 2019). Since NSLDS maintains current records of all Title IV grant and loan funding, any students with data showing disbursements from prior years can be reliably excluded from the sampling frame of FTBs. Given that about 68 percent of FTBs receive some form of Title IV aid in their first year, this matching process will not be able to exclude all listed FTBs with prior enrollment but will significantly improve the accuracy of the listing prior to sampling, yielding fewer false positives. All potential FTBs will be sent to NSLDS because 11 percent of students 18 years of age and younger who were sampled as FTBs and surveyed in NPSAS:12 were not FTBs (false positives). In NPSAS:12, matching to NSLDS identified about 20 percent of the cases sent for matching as false positives (see table 3). NPSAS:12 showed that it is feasible to send all potential FTBs to NSLDS for matching. NSLDS has a free process to match the FTBs, and lists were usually returned to us in one day.

Third, simultaneously with NSLDS matching, we will match all potential FTBs to the Central Processing System (CPS) to identify students who, on their Free Application for Federal Student Aid (FAFSA), indicated that they had attended college previously. In NPSAS:12, we identified about 17 percent of the cases sent for CPS matching as false positives (see table 3). CPS has an automated, free matching process that we used for this purpose in NPSAS:12 and that has been used for other purposes for NPSAS sample students in the past. This matching can handle large numbers of cases and usually takes one day.

Fourth, after NSLDS and CPS matching, we will match a subset of the remaining potential FTBs to the National Student Clearinghouse (NSC) for further narrowing of FTBs based on the presence of evidence of earlier enrollment. In NPSAS:12, matching to NSC identified about 7 percent of the remaining potential FTBs, after NSLDS and CPS matching, as false positives. NSC worked with us to set up a process that can handle a large number of potential FTBs and return FTB lists to us within two or three days. There is a “charge per case matched” for NSC matching, so we plan a targeted approach to the matching. We plan to target potential FTBs over the age of 18 in the public 2-year and for-profit sectors because these sectors had high false positive rates in NPSAS:12 and have large NPSAS:20 sample sizes.

Fifth, in setting our FTB selection rates, we will take into account the false positive rates, based on the NPSAS:12 survey, as shown in table 4 (by student type) and table 5 (by control and level of institution). In NPSAS:12, of the 36,620 survey respondents sampled as potential FTBs, the survey confirmed that 28,550 were FTBs, for an unweighted false positive rate of 22 percent (100 percent minus 78 percent). Conversely, of the 48,380 survey respondents sampled as other undergraduate or graduate students, about 1,590 were FTBs, for a false negative rate of about 3 percent unweighted. With the help of the presampling matching, the NPSAS:12 overall false positive rate of 22 percent was much less than the 54 percent false positive rate in NPSAS:04, when pre-sampling matching was not conducted. The false negative rate is small, but we will also account for it when setting the FTB selection rates.

Table 3. Potential first-time beginners’ false positive rates, by source and control and level of institution: 2011–12

Control and level of institution | Total (sent for matching / false positives / percent false positive) | NSLDS (sent for matching / false positives / percent false positive) | CPS (sent for matching / false positives / percent false positive) | NSC (sent for matching / false positives / percent false positive)
Total | 2,103,620 / 571,130 / 27.1 | 2,103,620 / 417,910 / 19.9 | 2,103,620 / 364,350 / 17.3 | 719,450 / 48,220 / 6.7
Public
  Less-than-2-year | 3,690 / 2,030 / 54.9 | 3,690 / 1,720 / 46.5 | 3,690 / 1,520 / 41.2 | † / † / †
  2-year | 816,150 / 276,500 / 33.9 | 816,150 / 188,630 / 23.1 | 816,150 / 153,150 / 18.8 | 584,950 / 45,300 / 7.7
  4-year, non-doctorate-granting | 194,600 / 26,500 / 13.6 | 194,600 / 17,180 / 8.8 | 194,600 / 18,010 / 9.3 | † / † / †
  4-year, doctorate-granting | 517,380 / 53,870 / 10.4 | 517,380 / 28,000 / 5.4 | 517,380 / 42,840 / 8.3 | † / † / †
Private nonprofit
  Less-than-4-year | 2,570 / 1,020 / 39.6 | 2,570 / 750 / 29.0 | 2,570 / 640 / 24.8 | † / † / †
  4-year, non-doctorate-granting | 106,800 / 18,860 / 17.7 | 106,800 / 13,880 / 13.0 | 106,800 / 15,830 / 14.8 | † / † / †
  4-year, doctorate-granting | 152,450 / 13,940 / 9.1 | 152,450 / 8,680 / 5.7 | 152,450 / 11,850 / 7.8 | † / † / †
Private for-profit
  Less-than-2-year | 16,800 / 9,820 / 58.4 | 16,800 / 8,800 / 52.4 | 16,800 / 4,940 / 29.4 | 7,110 / 130 / 1.8
  2-year | 69,070 / 42,980 / 62.2 | 69,070 / 37,920 / 54.9 | 69,070 / 29,730 / 43.0 | 26,770 / 680 / 2.5
  4-year | 224,110 / 125,610 / 56.0 | 224,110 / 112,370 / 50.1 | 224,110 / 85,850 / 38.3 | 100,620 / 2,120 / 2.1

† Not applicable.

NOTE: NSLDS = National Student Loan Data System; CPS = Central Processing System; and NSC = National Student Clearinghouse. Detail may not sum to totals because of rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 2011–12 National Postsecondary Student Aid Study (NPSAS:12).

Table 4. First-time beginner status determination, by student type: 2011–12

Student type | Students surveyed | Confirmed FTB eligibility (number / unweighted percent)
Total | 85,000 | 30,140 / 35.5
Total undergraduate | 71,000 | 30,140 / 42.4
  Potential FTB | 36,620 | 28,550 / 78.0
    FTB in certificate program | 10,900 | 7,670 / 70.3
    Other FTB | 25,720 | 20,880 / 81.2
  Other undergraduate | 34,380 | 1,580 / 4.6
Graduate | 14,000 | 10 / #

# Rounds to zero.

NOTE: Students surveyed includes all eligible sample members who completed the survey. FTB = first-time beginner. Detail may not sum to totals because of rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 2011–12 National Postsecondary Student Aid Study (NPSAS:12).

Table 5. First-time beginner false positive rates, by control and level of institution: 2011–12

Control and level of institution | FTB false positive rate
Total | 22.0
Public less-than-2-year | 41.6
Public 2-year | 23.5
Public 4-year, non-doctorate-granting | 11.9
Public 4-year, doctorate-granting | 8.8
Private nonprofit, less-than-4-year | 24.0
Private nonprofit, 4-year, non-doctorate-granting | 11.2
Private nonprofit, 4-year, doctorate-granting | 10.0
Private for-profit, less-than-2-year | 31.2
Private for-profit, 2-year | 31.2
Private for-profit, 4-year | 27.3

NOTE: There were 10 categories of control and level of institution defined for NPSAS:12, instead of 11 in NPSAS:16 and NPSAS:20.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 2011–12 National Postsecondary Student Aid Study (NPSAS:12).

Sample Sizes and Student Sampling

NPSAS:20 will be designed to sample a total of 400,000 students. Student records and administrative data will be collected for all sample students, and a subset of 150,000 will be asked to complete a survey. Based on past cycles of NPSAS, we expect about a 95 percent eligibility rate (among all sampled students, with eligibility determined based on the survey, student records, and administrative data), a 70 percent survey response rate, and a 93 percent student records completion rate.14 This will yield approximately 100,000 surveys and 342,000 student records, with an average student sample size per institution of approximately 165 students.
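As a rough, illustrative check of these expectations, the survey yield follows from the survey subsample size and the eligibility and response rates, and the average per-institution sample size follows from the total sample and the roughly 2,431 institutions expected to provide student records (see section B.2.1).

# Back-of-envelope yield check using the rates stated above (illustrative only).
survey_subsample = 150_000
eligibility_rate = 0.95
survey_response_rate = 0.70
print(round(survey_subsample * eligibility_rate * survey_response_rate))  # ~99,750, i.e., about 100,000 surveys

total_student_sample = 400_000
institutions_providing_records = 2_431   # expected yield from the institution sample (section B.2.1)
print(round(total_student_sample / institutions_providing_records))       # ~165 students per institution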

We expect to sample 25,000 graduate students, and the remaining sample will be of undergraduate students. All sampled graduate students will be asked to complete a survey, in addition to us collecting student records and administrative data. The preliminary graduate student sample sizes by institution strata are presented in table 6.

Table 6. Preliminary graduate student sample sizes, by control and level of institution

Control and level of institution | Population estimate | Sample size
Total | 3,875,095 | 25,000
Public 4-year, non-doctorate-granting, primarily sub-baccalaureate | 1,521 | 80
Public 4-year, non-doctorate-granting, primarily baccalaureate | 166,519 | 1,510
Public 4-year, doctorate-granting | 1,627,687 | 7,040
Private nonprofit, 4-year, non-doctorate-granting | 247,948 | 2,620
Private nonprofit, 4-year, doctorate-granting | 1,411,067 | 6,490
Private for-profit, 4-year | 420,353 | 7,260

NOTE: Detail may not sum to totals because of rounding.

SOURCE: Population estimates based on IPEDS 2016-2017 data.

The undergraduate student sample of 125,000 will be both nationally representative and state-representative, overall and for public 2-year and public 4-year institutions. These students will be surveyed in addition to having their institution records collected and being matched to administrative data. The preliminary distribution of the undergraduate survey sample by control and level of institution is shown in table 7. The remaining 250,000 undergraduate students will be sampled only for collection of student records and administrative data. All 375,000 undergraduate students will be included in the state-representative sample. We propose initially dividing the undergraduate sample evenly across states (about 7,212 students per state) and proportionally within states to obtain the preliminary sample sizes for each stratum. This initial distribution is shown in table 8; the final sample sizes per state will be determined after taking the NPSAS:18-AC results into account.

As part of setting the NPSAS:20 sample sizes, we need to determine the sample size of FTBs, which will be part of both NPSAS and the BPS 2020 cohort. The BPS:20/22 sample size is planned to be about 37,000, including 30,000 FTBs who respond to the NPSAS:20 survey and confirm that they are FTBs, and 7,000 potential FTBs who do not respond to the survey. The NPSAS:20 potential FTB sample size will be approximately 55,400, assuming 95 and 70 percent NPSAS:20 eligibility and survey response rates, respectively, and a 22 percent false positive rate and a 5 percent false negative rate, as in NPSAS:12.15 The preliminary distribution of potential FTBs by control and level of institution is shown in table 9.
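The potential-FTB sample size can be approximated from these rates. The arithmetic below is illustrative only; it first ignores and then notes the offset from false negatives.

# Illustrative arithmetic for the potential-FTB sample size (not the official computation).
target_confirmed_ftbs = 30_000
eligibility_rate = 0.95
response_rate = 0.70
true_ftb_rate = 1 - 0.22          # 22 percent false positive rate among potential FTBs

needed_ignoring_false_negatives = target_confirmed_ftbs / (
    eligibility_rate * response_rate * true_ftb_rate)
print(round(needed_ignoring_false_negatives))  # roughly 57,800 potential FTBs

# FTBs recovered from students sampled as "other undergraduates" (the false negative
# rate) offset part of this requirement, bringing the planned sample to about 55,400.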

The FTB sample size may be large enough in some states to allow state-level estimates in BPS:20/22. After the NPSAS state sample sizes are finalized, we will determine which states are likely to have a sufficient FTB sample size to produce BPS state-level estimates overall and in public 2- and 4-year institutions. If the FTB sample size is not large enough to have BPS state-level estimates based on the planned NPSAS design, we will consider oversampling FTBs in some of the states (and undersampling FTBs in others) to achieve state-level representation, overall and in public 2- and 4-year institutions, as long as the effect on the NPSAS and BPS unequal weighting effects is minimal.

During the NPSAS:18-AC Technical Review Panel (TRP) meeting, panel members expressed an interest in being able to create their own groupings of institutions for analysis (e.g., institutions within specific university systems). We expect to sample at least 120 and 200 undergraduates, on average, per public 2-year and public 4-year institution, respectively. The minimum sample size will vary by institution depending on the stratum and enrollment size of the institution. Therefore, the sample size will be sufficient to allow researchers to aggregate institutions for analysis of undergraduate students in public 2-year and public 4-year institutions.

Institution-level student sampling rates will be set based on frame data and adjusted, based on NPSAS:18-AC data, to account for IPEDS data overestimating the enrollment counts for the student lists. Based on these adjusted rates, students will be sampled on a flow basis as student lists are received. Stratified systematic sampling procedures will be used. Within the graduate-student strata for veterans, the students will be sorted by master’s and doctoral to ensure that the sample will be roughly proportional to the frame. As mentioned above, undergraduate student strata will be sorted (implicitly stratified) by veteran status and all strata will be sorted by federally aided/unaided students to maintain proportionality between the sample and frame. Sample yield will be monitored by institution and student sampling strata, and the sampling rates will be adjusted early, if necessary, to achieve the desired sample yields.
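A minimal sketch of stratified systematic sampling at a fixed rate, with the stratum sorted on its implicit stratifiers before selection, is shown below; the field names and the rate are placeholders, not the actual NPSAS:20 rates.

import random

def systematic_sample(stratum_records, rate, sort_key):
    """Minimal sketch of systematic sampling within one stratum: sort on the implicit
    stratifiers, then take every (1/rate)-th record from a random start."""
    ordered = sorted(stratum_records, key=sort_key)
    interval = 1.0 / rate
    picks, position = [], random.uniform(0, interval)
    while position < len(ordered):
        picks.append(ordered[int(position)])
        position += interval
    return picks

# Example call (illustrative field names): sample undergraduates at a 10 percent rate,
# sorted by veteran status and then by federal aid status.
# sample = systematic_sample(undergrads, 0.10, lambda r: (r["veteran"], r["federally_aided"]))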

After undergraduate students are initially sampled, they will be randomly divided into two groups within student strata within each institution. While both groups will have student records and administrative data collected, only one group will be administered the survey. Of the 375,000 undergraduates sampled, the proportion of students assigned to each group will be set such that an overall survey sample of 125,000 undergraduate students, including approximately 55,400 potential FTBs, is achieved.
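The within-stratum random split into a survey group and a records-only group could look roughly like the following sketch; the survey fraction is a placeholder to be derived from the 125,000 survey-sample target.

import random

def split_survey_group(stratum_students, survey_fraction):
    """Randomly flag a within-stratum, within-institution subset for the survey; all
    sampled students keep student-record and administrative-data collection."""
    shuffled = list(stratum_students)
    random.shuffle(shuffled)
    n_survey = round(len(shuffled) * survey_fraction)
    return shuffled[:n_survey], shuffled[n_survey:]   # (survey group, records-only group)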

Table 7. Preliminary undergraduate student survey sample sizes, by control and level of institution

Control and level of institution | Population estimate | Sample size
Total | 23,030,788 | 125,000
Public less-than-2-year | 74,141 | 1,600
Public 2-year | 8,724,915 | 48,600
Public 4-year, non-doctorate-granting, primarily sub-baccalaureate | 1,748,376 | 3,975
Public 4-year, non-doctorate-granting, primarily baccalaureate | 1,207,840 | 6,654
Public 4-year, doctorate-granting | 5,828,957 | 17,706
Private nonprofit, less-than-4-year | 73,373 | 1,600
Private nonprofit, 4-year, non-doctorate-granting | 1,408,181 | 6,946
Private nonprofit, 4-year, doctorate-granting | 2,024,218 | 5,694
Private for-profit, less-than-2-year | 350,055 | 4,518
Private for-profit, 2-year | 430,138 | 10,588
Private for-profit, 4-year | 1,160,594 | 17,119

SOURCE: Population estimates based on IPEDS 2016-2017 data.

Table 8. Preliminary undergraduate student sample sizes, by control and level of institution and state

State | Public 2-year (population estimate / sample size) | Public 4-year (population estimate / sample size) | Other sectors (population estimate / sample size) | All institutions (population estimate / total sample size)
Total | 8,724,915 / 120,079 | 8,785,173 / 154,452 | 5,520,700 / 100,493 | 23,030,788 / 375,000
Alabama | 118,972 / 2,576 | 150,217 / 3,252 | 63,901 / 1,384 | 333,090 / 7,212
Alaska | 0 / 0 | 40,288 / 6,740 | 2,822 / 472 | 43,110 / 7,212
Arizona | 294,365 / 2,886 | 164,298 / 1,611 | 276,935 / 2,715 | 735,598 / 7,212
Arkansas | 67,914 / 2,688 | 92,477 / 3,660 | 21,856 / 865 | 182,247 / 7,212
California | 1,832,697 / 3,930 | 1,041,511 / 2,233 | 488,875 / 1,048 | 3,363,083 / 7,212
Colorado | 93,753 / 1,630 | 217,354 / 3,779 | 103,694 / 1,803 | 414,801 / 7,212
Connecticut | 71,492 / 2,378 | 60,184 / 2,002 | 85,144 / 2,832 | 216,820 / 7,212
Delaware | 0 / 0 | 44,705 / 5,275 | 16,419 / 1,937 | 61,124 / 7,212
District of Columbia | 0 / 0 | 5,242 / 637 | 54,063 / 6,575 | 59,305 / 7,212
Florida | 76,289 / 412 | 928,602 / 5,011 | 331,492 / 1,789 | 1,336,383 / 7,212
Georgia | 163,372 / 1,991 | 314,093 / 3,828 | 114,261 / 1,393 | 591,726 / 7,212
Hawaii | 35,522 / 3,347 | 27,163 / 2,559 | 13,856 / 1,306 | 76,541 / 7,212
Idaho | 37,681 / 1,593 | 53,548 / 2,264 | 79,382 / 3,356 | 170,611 / 7,212
Illinois | 553,121 / 4,181 | 153,534 / 1,161 | 247,370 / 1,870 | 954,025 / 7,212
Indiana | 164,851 / 2,411 | 227,092 / 3,321 | 101,183 / 1,480 | 493,126 / 7,212
Iowa | 134,204 / 3,142 | 71,199 / 1,667 | 102,617 / 2,403 | 308,020 / 7,212
Kansas | 128,435 / 3,576 | 86,688 / 2,414 | 43,881 / 1,222 | 259,004 / 7,212
Kentucky | 108,182 / 2,848 | 116,762 / 3,074 | 49,017 / 1,290 | 273,961 / 7,212
Louisiana | 92,856 / 2,393 | 138,895 / 3,580 | 48,072 / 1,239 | 279,823 / 7,212
Maine | 23,525 / 2,042 | 32,031 / 2,781 | 27,523 / 2,389 | 83,079 / 7,212
Maryland | 172,695 / 3,171 | 173,555 / 3,187 | 46,507 / 854 | 392,757 / 7,212
Massachusetts | 128,297 / 2,006 | 116,393 / 1,820 | 216,612 / 3,387 | 461,302 / 7,212
Michigan | 222,662 / 2,524 | 309,533 / 3,509 | 104,037 / 1,179 | 636,232 / 7,212
Minnesota | 171,168 / 3,064 | 132,289 / 2,368 | 99,435 / 1,780 | 402,892 / 7,212
Mississippi | 98,883 / 3,666 | 74,606 / 2,766 | 21,021 / 779 | 194,510 / 7,212
Missouri | 127,432 / 2,234 | 143,740 / 2,520 | 140,274 / 2,459 | 411,446 / 7,212
Montana | 12,015 / 1,521 | 38,989 / 4,935 | 5,979 / 757 | 56,983 / 7,212
Nebraska | 62,982 / 3,119 | 52,224 / 2,586 | 30,413 / 1,506 | 145,619 / 7,212
Nevada | 15,893 / 809 | 109,822 / 5,589 | 15,998 / 814 | 141,713 / 7,212
New Hampshire | 21,733 / 1,001 | 26,369 / 1,214 | 108,523 / 4,997 | 156,625 / 7,212
New Jersey | 217,050 / 3,241 | 172,033 / 2,569 | 93,860 / 1,402 | 482,943 / 7,212
New Mexico | 103,714 / 4,509 | 54,055 / 2,350 | 8,108 / 353 | 165,877 / 7,212
New York | 433,328 / 2,324 | 397,424 / 2,132 | 513,935 / 2,756 | 1,344,687 / 7,212
North Carolina | 317,005 / 3,625 | 203,834 / 2,331 | 109,786 / 1,256 | 630,625 / 7,212
North Dakota | 9,403 / 1,178 | 41,302 / 5,174 | 6,863 / 860 | 57,568 / 7,212
Ohio | 257,646 / 2,491 | 312,564 / 3,022 | 175,616 / 1,698 | 745,826 / 7,212
Oklahoma | 92,340 / 2,613 | 116,046 / 3,284 | 46,478 / 1,315 | 254,864 / 7,212
Oregon | 160,820 / 3,875 | 105,549 / 2,543 | 32,953 / 794 | 299,322 / 7,212
Pennsylvania | 188,102 / 1,804 | 254,964 / 2,446 | 308,825 / 2,962 | 751,891 / 7,212
Puerto Rico | 3,124 / 108 | 18,862 / 651 | 186,939 / 6,453 | 208,925 / 7,212
Rhode Island | 20,162 / 1,679 | 24,573 / 2,046 | 41,878 / 3,487 | 86,613 / 7,212
South Carolina | 118,283 / 3,126 | 99,317 / 2,624 | 55,319 / 1,462 | 272,919 / 7,212
South Dakota | 8,478 / 1,000 | 40,682 / 4,798 | 11,988 / 1,414 | 61,148 / 7,212
Tennessee | 132,194 / 2,653 | 127,126 / 2,552 | 100,008 / 2,007 | 359,328 / 7,212
Texas | 1,083,113 / 3,956 | 668,026 / 2,440 | 223,374 / 816 | 1,974,513 / 7,212
Utah | 54,097 / 1,021 | 166,950 / 3,150 | 161,236 / 3,042 | 382,283 / 7,212
Vermont | 8,626 / 1,317 | 20,109 / 3,070 | 18,498 / 2,824 | 47,233 / 7,212
Virginia | 244,220 / 2,943 | 193,123 / 2,328 | 161,045 / 1,941 | 598,388 / 7,212
Washington | 56,772 / 876 | 359,839 / 5,550 | 50,956 / 786 | 467,567 / 7,212
West Virginia | 23,505 / 950 | 62,221 / 2,514 | 92,777 / 3,748 | 178,503 / 7,212
Wisconsin | 132,721 / 2,497 | 192,297 / 3,618 | 58,293 / 1,097 | 383,311 / 7,212
Wyoming | 29,221 / 5,153 | 10,874 / 1,918 | 803 / 142 | 40,898 / 7,212

NOTE: Detail may not sum to totals because of rounding.

SOURCE: Population estimates based on IPEDS 2016-2017 data.

Table 9. Preliminary first-time beginning student (FTB) sample sizes, by control and level of institution

Control and level of institution | Sample size
Total | 55,393
Public less-than-2-year | 952
Public 2-year | 19,683
Public 4-year, non-doctorate-granting, primarily sub-baccalaureate | 1,741
Public 4-year, non-doctorate-granting, primarily baccalaureate | 2,595
Public 4-year, doctorate-granting | 5,131
Private nonprofit, less-than-4-year | 1,244
Private nonprofit, 4-year, non-doctorate-granting | 3,519
Private nonprofit, 4-year, doctorate-granting | 2,258
Private for-profit, less-than-2-year | 3,100
Private for-profit, 2-year | 4,979
Private for-profit, 4-year | 10,191

NOTE: Population estimates will be added once there is a final sampling frame.

BPS:20/22 Field Test Sample

In prior NPSAS data collections, a field test has been used to identify a sample for longitudinal follow-up, either for the Baccalaureate and Beyond (B&B) Longitudinal Study (e.g., NPSAS:16 field test) or for the Beginning Postsecondary Students (BPS) Longitudinal Study (e.g., NPSAS:12 field test), with the field test collection occurring one year earlier than the full-scale study. Because the award of NPSAS:20 did not allow time for a field test to be conducted, a field test sample for BPS:20/22 will be created by drawing potential cohort members from the NPSAS:20 full-scale sample. While the BPS:20 full-scale cohort will comprise students who first enroll in postsecondary education after high school during the 2019-20 academic year, the BPS:20 field test cohort will follow up with students whose first postsecondary enrollment after high school occurred during the 2018-19 academic year.

A field test sample of approximately 3,400 students will be identified through enrollment lists and survey responses collected during the NPSAS:20 full-scale study. We anticipate that approximately 2,200 NPSAS:20 survey respondents will be confirmed as 2018-19 FTBs through the survey and that another 1,000 survey nonrespondents will be identified through administrative data. We also expect to identify a small number of students (approximately 200) who are not currently enrolled but were enrolled as FTBs during 2018-19 and may have either completed a short-term credential or withdrawn from postsecondary education. In each prior NPSAS administration, there have been students named on institution enrollment lists whose survey responses indicate they are no longer enrolled. While these students will not be eligible for NPSAS:20, they will be given a subset of survey questions to confirm that they were FTBs in 2018-19, to notify them about the potential for follow-up in the BPS:20/22 field test, and to administer the locating section of the NPSAS:20 survey. Additional details of the BPS:20/22 field test sample will be provided in the BPS:20/22 field test clearance package, currently scheduled for submission in August 2020.

Quality Control Checks for Lists and Sampling

The number of enrollees on each institution’s NPSAS:20 student enrollment list will be checked against the latest IPEDS 12-month enrollment counts by student level: undergraduate and graduate. Based on experience with past rounds of NPSAS, an institution’s student list will be allowed to pass quality control (QC) and move on to student sampling when the student counts are within 50 percent of the most recent, non-imputed IPEDS counts. Institutions failing these criteria will be contacted to resolve the discrepancy and, when needed, a replacement list will be requested. Sampling will not proceed until we have either confirmed that the list received is correct or received a corrected list.
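The 50 percent tolerance check could be expressed as follows; the function and example values are an illustrative sketch, applied separately to the undergraduate and graduate counts.

def list_passes_qc(list_count, ipeds_count, tolerance=0.50):
    """Return True if a student list count is within 50 percent of the most recent
    non-imputed IPEDS count for the same student level (illustrative check only)."""
    if ipeds_count <= 0:
        return False          # no usable IPEDS benchmark; resolve with the institution
    return abs(list_count - ipeds_count) / ipeds_count <= tolerance

# A list of 1,300 undergraduates against an IPEDS count of 1,000 passes; a list of
# 2,000 fails and triggers follow-up with the institution.
print(list_passes_qc(1_300, 1_000), list_passes_qc(2_000, 1_000))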

All statistical procedures will undergo thorough quality control checks. The data collection contractor has a plan in place for sampling and all statistical activities, and its statisticians employ a checklist to ensure that all appropriate QC checks are completed for student sampling.

Some specific sampling QC checks include, but are not limited to, checking that the:

  • institutions and students on the sampling frames all have a known, non-zero probability of selection;

  • distribution of implicit stratification for institutions is reasonable; and

  • number of institutions and students selected match the target sample sizes.
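These checks can be expressed as simple assertions, for example (illustrative only; the record fields and structures are hypothetical, not the actual QC system):

# Minimal sketch of the sampling QC assertions listed above.
def run_sampling_qc(frame, sample, targets):
    # Every unit on the frame must have a known, non-zero probability of selection.
    assert all(unit["selection_prob"] > 0 for unit in frame)
    # Selected counts must match the target sample sizes by stratum.
    for stratum, target in targets.items():
        selected = sum(1 for unit in sample if unit["stratum"] == stratum)
        assert selected == target, f"stratum {stratum}: selected {selected}, expected {target}"
    # The distribution of the sample across the implicit strata is also compared
    # with the frame distribution and reviewed by a statistician.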

    B.3. Methods for Maximizing Response Rates

Achieving high response rates in the NPSAS:20 full-scale data collection will depend on successfully identifying and locating sample members and being able to contact them and gain their cooperation. The following sections outline methods for maximizing response to the NPSAS:20 data collection.

      B.3.1. Tracing of Sample Members

To yield the maximum number of located cases with the least expense, we designed an integrated tracing approach, with the following elements.

  • Advance tracing activities, which will occur prior to the start of data collection, include initial batch database searches, such as to the National Change of Address (NCOA) databases, for cases with sufficient contact information to be matched. To handle cases for which contact information is invalid or unavailable, additional advance tracing through proprietary interactive databases will expand on leads found.

  • Hard copy mailings, emails, and text messages will be used to maintain ongoing contact with sample members, prior to and throughout data collection. The student contacting materials are provided in Appendix E. The initial mailing to sample members will include a letter announcing the start of data collection and requesting that the sample member complete the web survey as well as a toll-free number, the study website address, a Study ID and password, and a study brochure. After the data collection announcement mailing, we will send an email message mirroring the letter.

  • Sample members will have a variety of means to provide updated contact information and contact preferences. Students can use an Update Contact Information page on the secure NPSAS:20 website to provide their contact information, including cell phone number, as well as provide contacting preferences with respect to phone calls, mailings, emails, and text messages. Help Desk calls and emails providing information about a sample member’s text message preferences will be monitored and the sample member’s data record updated as soon as the information becomes known.

  • The telephone locating and surveying stage includes calling all available telephone numbers and following up on leads provided by parents and other contacts.

  • The pre-intensive batch tracing stage consists of the LexisNexis SSN and Premium Phone batch searches that will be conducted between the telephone locating and surveying stage and the intensive tracing stage.

  • Once all known telephone numbers are exhausted, a case will move into the intensive tracing stage during which tracers will conduct interactive database searches using all known contact information for a sample member. With interactive tracing, a tracer assesses each case on an individual basis to determine which resources are most appropriate and the order in which each should be used. Sources that may be used, as appropriate, include credit database searches, such as Experian, various public websites, and other integrated database services.

  • Other locating activities will take place as needed, including a LexisNexis email search conducted for nonrespondents toward the end of data collection.

      B.3.2. Training for Data Collection Staff

Telephone data collection will be conducted at the contractor’s call center. Call center staff will include Performance Team Leaders (PTLs) and Data Collection Interviewers (DCIs). Training programs for these staff members are critical to maximizing response rates and collecting accurate and reliable data.

Performance Team Leaders, who are responsible for all supervisory tasks, will attend project-specific training for PTLs, in addition to the interviewer training. They will receive an overview of the study, background and objectives, and the data collection instrument through a question-by-question review. PTLs will also receive training in the following areas: providing direct supervision during data collection; handling refusals; monitoring interviews and maintaining records of monitoring results; problem resolution; case review; specific project procedures and protocols; reviewing reports generated from the ongoing Computer Assisted Telephone Interviewing (CATI); and monitoring data collection progress.

Training for DCIs is designed to help staff become familiar with and practice using the CATI case management system and the survey instrument, as well as to learn project procedures and requirements. Particular attention will be paid to quality control initiatives, including refusal avoidance and methods to ensure that quality data are collected. DCIs will receive project-specific training on telephone interviewing and answering questions from web participants regarding the study or related to specific items within the survey. At the conclusion of training, all NPSAS call center staff must meet certification requirements by successfully completing a certification interview. This evaluation consists of a full-length interview with project staff observing and evaluating interviewers, as well as an oral evaluation of interviewers’ knowledge of the study’s Frequently Asked Questions.

      B.3.3. Case Management System

Surveys will be conducted using a single web-based survey instrument for both web (including mobile devices) and CATI data collection. The data collection activities will be accomplished through a CATI case management system equipped with numerous capabilities, including: online access to locating information and histories of locating efforts for each case; a questionnaire administration module with full “front-end cleaning” capabilities (i.e., editing as information is obtained from respondents); a sample management module for tracking case progress and status; and an automated scheduling module that delivers cases to interviewers. The automated scheduling module incorporates the following features:

  • Automatic delivery of appointment and call-back cases at specified times. This reduces the need for tracking appointments and helps ensure the interviewer is punctual. The scheduler automatically calculates the delivery time of the case in reference to the appropriate time zone.

  • Sorting of non-appointment cases according to parameters and priorities set by project staff. For instance, priorities may be set to give first preference to cases within certain sub-samples or geographic areas; cases may be sorted to establish priorities between cases of differing status. Furthermore, the historic pattern of calling outcomes may be used to set priorities (e.g., cases with more than a certain number of unsuccessful attempts during a given time of day may be passed over until the next time period). These parameters ensure that cases are delivered to interviewers in a consistent manner according to specified project priorities.

  • Restriction on allowable interviewers. Groups of cases (or individual cases) may be designated for delivery to specific interviewers or groups of interviewers. This feature is most commonly used in filtering refusal cases, locating problems, or foreign language cases to specific interviewers with specialized skills.

  • Complete records of calls and tracking of all previous outcomes. The scheduler tracks all outcomes for each case, labeling each with type, date, and time. These are easily accessed by the interviewer upon entering the individual case, along with interviewer notes.

  • Flagging of problem cases for supervisor action or supervisor review. For example, refusal cases may be routed to supervisors for decisions about whether and when a refusal letter should be mailed, or whether another interviewer should be assigned.

  • Complete reporting capabilities. These include default reports on the aggregate status of cases and custom report generation capabilities.

The integration of these capabilities reduces the number of discrete stages required in data collection and data preparation activities and increases capabilities for immediate error reconciliation, which results in better data quality and reduced cost. Overall, the scheduler provides an efficient case assignment and delivery function by reducing supervisory and clerical time, improving execution on the part of interviewers and supervisors by automatically monitoring appointments and call-backs, and reducing variation in implementing survey priorities and objectives.

      B.3.4. Survey Instrument Design

The survey will employ a web-based instrument and deployment system, which has been in use since NPSAS:08. The system provides multimode functionality that can be used for self-administration, including on mobile devices, CATI, Computer-Assisted Personal Interview (CAPI), or data entry.

In addition to the functional capabilities of the case management system and web instruments described above, our efforts to achieve the desired response rate will include using established procedures proven effective in other large-scale studies we have completed. These include:

  • Providing multiple response modes, including mobile-friendly self-administered and interviewer-administered options.

  • Offering incentives to encourage response.

  • Assigning experienced CATI interviewers who have proven their ability to contact and obtain cooperation from a high proportion of sample members.

  • Training the interviewers thoroughly on study objectives, study population characteristics, and approaches that will help gain cooperation from sample members.

  • Maintaining a high level of monitoring and direct supervision so that interviewers who are experiencing low cooperation rates are identified quickly and corrective action is taken.

  • Making every reasonable effort to obtain an interview during the initial contact, but allowing respondent flexibility in scheduling appointments to be interviewed.

  • Thoroughly reviewing all refusal cases and making special conversion efforts whenever feasible (see next section).

  • Assurance of confidentiality procedures, including restricting the respondent’s ability to view survey responses from prior login sessions (i.e., no ability to use navigation buttons to go to “Previous” survey questions from another login session) and automatically logging out of a session after 20 minutes of inactivity.

  • Item-by-item toggling between English and Spanish languages at the discretion of the web respondent, or telephone interviewer when warranted.

      B.3.5. Refusal Aversion and Conversion

Recognizing and avoiding refusals is important to maximize the response rate. We will emphasize this and other topics related to obtaining cooperation during interviewer training. PTLs will monitor interviewers intensely during the early days of outbound calling and provide retraining as necessary. In addition, the supervisors will review daily interviewer production reports produced by the CATI system to identify and retrain any data collectors who are producing unacceptable numbers of refusals or other problems.

Refusal conversion efforts will be delayed for at least 1 week after the initial refusal to give the sample member time to reconsider. Attempts at refusal conversion will not be made with individuals who become verbally aggressive or who threaten to take legal or other action. Refusal conversion efforts will not be conducted to a degree that would constitute harassment. We will respect a sample member’s right to decide not to participate and will not infringe on this right by carrying conversion efforts beyond the bounds of propriety.

      1. Institution Contacting

Establishing and maintaining contact with sampled institutions throughout the data collection process is vital to the success of NPSAS:20. Institutional participation is required in order to collect enrollment lists and draw the student sample. The process by which institutions will be contacted is depicted in figure 1 and described below.

The data collection contractor will be responsible for contacting institutions on behalf of NCES. Each staff member will be assigned a set of institutions that is their responsibility throughout the data collection process. This allows the contractor's staff members to establish rapport with the institution staff and provides a reliable point of contact for the institution. Staff members are thoroughly trained in basic financial aid concepts and in the purposes and requirements of the study, which helps them establish credibility with the institution staff.

The first step in the process is verification of the chief administrator’s contact information using the Higher Ed Directory (https://hepinc.com/). Web searches and verification calls will be conducted as needed (e.g., for institutions not listed in the Directory) to confirm eligibility and confirm contact information obtained from the IPEDS header files before study information is mailed. Once the contact information is verified, we will prepare and send an information packet to the chief administrator of each sampled institution. A copy of the letter and brochure can be found in Appendix D1. The materials provide information about the purpose of the study and the nature of subsequent requests. In addition to the hardcopy materials, we will send an email to the chief administrator, copying the previous campus coordinator (if still at the institution), the IPEDS Keyholder, and the Director of Institutional Research to make them aware of the NPSAS:20 data collection. Several versions of the chief administrator letter will be used, tailored to the institution’s situation: (1) one letter for institutions that were sampled for NPSAS:16, NPSAS:18-AC, the student records (SR) collection for the 2012 Beginning Postsecondary Students Longitudinal Study cohort (BPS:12 SR), and/or the student records collection for the High School Longitudinal Study Second Follow-up (HSLS F2 SR), and have an identified campus coordinator; (2) one for new institutions with a campus coordinator candidate identified; and (3) another for new/prior institutions for which a campus coordinator has not been identified. For the last group, institutional contactors will conduct follow-up calls to the chief administrator to secure study participation and identify a campus coordinator. If the coordinator is not already a Postsecondary Data Portal (described below) user, they will be added as a user.

NCES and its contractor will identify relevant multi-campus systems within the sample because these systems can supply enrollment list data at the system level, minimizing burden on individual campuses. Even when it is not possible for a system to supply data from a centralized office, the system can lend support in other ways, such as by prompting institutions under its jurisdiction to participate. NCES and its contractor will undertake additional outreach activities, such as engaging state associations and agencies, networking with the higher education community at conferences and professional meetings, and reaching out to state government leaders. These activities are intended to promote the value of NPSAS to both data providers and data users, thereby increasing interest and participation in NPSAS:20.

Figure 1. Institution contacting

Once a campus coordinator has been identified for an institution, the contractor will send the coordinator study materials with a request to complete the online Registration Page as the first step. The materials include a letter, the study brochure, and a quick guide to participation in the study (see Appendix D1). The primary functions of the Registration Page are to confirm the date the institution will be able to provide the student enrollment list and to determine how they will report student records data, by term or by month. Based on the information provided, a customized timeline for collecting the enrollment list will be created for each institution.

After the Registration Page is completed, the campus coordinator will be sent a letter requesting an electronic enrollment list of all students enrolled during the academic year. The NPSAS:20 data collection includes a calibration sample, described in Supporting Statement Part B, Section 2, and the main sample. Calibration sample institutions will provide two enrollment lists, one in the fall and one in the spring, rather than a single list. The earliest enrollment lists will be due in November 2019 for the calibration sample. For the main sample, enrollment lists will be collected from January 2020 to July 2020. As described above, the lists will serve as the frame from which the student samples will be drawn. Follow-up contacts with institutions include telephone prompts, reminder emails and mailers, typically sent two weeks prior to a deadline, and touch-base emails, typically sent when 3–4 weeks have passed with no outbound contact from study staff (see Appendix D1). After enrollment lists are received and validated by the contractor for completeness and quality, the campus coordinator will be sent a “thank you” email expressing appreciation for their time and effort.

Alternate Enrollment List Submission Methods

Two alternate submission methods will be available to campus coordinators who report a lack of time or resources needed to compile the full enrollment list. The first is compiling an enrollment list with a reduced set of critical data elements (see Appendix D1, pp. D-33 to D-36, for a list of elements). The second is submitting files the institution already compiles for the National Student Clearinghouse (NSC) Enrollment Reporting service. This option will be suggested only to institutions participating in the NSC program. These alternate submission methods are designed to collect the data needed for sampling while improving response rates and decreasing burden on institutions. In the final weeks of the enrollment list data collection period, institutions that have not yet participated will be offered the option of submitting a further reduced set of data elements (First Name, Last Name, Social Security Number, Undergraduate/Graduate) to maximize response.

Spanish Contact Materials

Select contact materials have been translated into Spanish and will be sent to institution staff at institutions in Puerto Rico. The contact materials include the letters sent to the chief administrator and coordinator as well as the study brochure and the Quick Guide to NPSAS:20 (see Appendix D).

      1. Matching to Administrative Databases

Information about NPSAS:20 sampled students will be matched with their data from several administrative databases. The administrative data sources for NPSAS:20 will be NSLDS, CPS (including FAFSA data), NSC, VBA, ACT and SAT test scores, and student records obtained directly from postsecondary institutions. Further details about these matches are provided in the Supporting Statement Part A (sections A.1, A.2, A.10, and A.11) and in Appendix C. The matching methodology was approved by OMB in July 2019 (OMB #1850-0666 v.23).

      1. Postsecondary Data Portal (PDP)

NPSAS:20 institution data collection will utilize NCES’ Postsecondary Data Portal (PDP) website. The PDP is used across NCES postsecondary institution data collections; its flexible design allows it to be used for multiple studies that are in data collection at the same time, even when those studies collect different types of data. Currently, there are no plans for other postsecondary data collections to be underway in the PDP when NPSAS:20 begins collecting enrollment lists and student records.

There are two types of content on the PDP: general-purpose content and study-specific content. General-purpose pages provide overview information about NCES postsecondary studies and use of the website. These pages are identified in Appendix D1 as the “pre-login” pages. Once a user logs in, they see pages with study-specific content. These pages are identified in Appendix D1 as the “after login” content. The NPSAS:20 study-specific content includes FAQs about NPSAS:20 and instructions for providing data (Appendix D1), and the student records instrument. Institutions see study-specific PDP content only for the study or studies for which they have been sampled.

Data Security on the PDP

Because of the risks associated with transmitting protected data over the internet, current security technologies will be incorporated into the web application to ensure strict adherence to NCES confidentiality guidelines. The web server will include a Secure Sockets Layer (SSL) certificate and will be configured to force encrypted data transmission over the internet. All data-entry modules on the site require the user to log in before accessing protected data. Logging in requires an assigned ID number, a password, and two-factor authentication via a code sent by email. Through the PDP, the campus coordinator at the institution will be able to use a “Manage Users” link to add and delete users, as well as reset passwords and assign roles. Each user will have a unique user name and will be assigned to one email address. Upon account creation, the new user will be sent a temporary password by the PDP. Upon logging in for the first time, the new user will be required to create a new password. The system will automatically log the user out after 30 minutes of inactivity. Files uploaded to the secure website will be stored in a secure project folder that is accessible and visible to authorized project staff only.
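For illustration only, the sketch below shows, in Python, the general shape of the account security rules described above (password check plus an emailed one-time code, and a 30-minute inactivity timeout). The class, function names, and code lifetime are hypothetical and are not drawn from the PDP’s actual implementation.

import hashlib
import secrets
from dataclasses import dataclass
from datetime import datetime, timedelta

SESSION_TIMEOUT = timedelta(minutes=30)   # inactivity limit described above
CODE_TTL = timedelta(minutes=10)          # assumed lifetime of the emailed code

@dataclass
class User:                               # hypothetical stand-in for a portal account record
    email: str
    salt: bytes
    password_hash: bytes
    pending_code: tuple = None

def send_email(address: str, body: str) -> None:
    """Stand-in for the portal's actual mailer."""
    print(f"email to {address}: {body}")

def hash_password(password: str, salt: bytes) -> bytes:
    """One-way password hash; passwords are never stored in plain text."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def start_login(user: User, password: str) -> None:
    """Step 1: verify the password, then email a one-time code (second factor)."""
    if hash_password(password, user.salt) != user.password_hash:
        raise PermissionError("invalid credentials")
    code = f"{secrets.randbelow(10**6):06d}"
    user.pending_code = (code, datetime.utcnow() + CODE_TTL)
    send_email(user.email, f"Your verification code is {code}")

def finish_login(user: User, submitted_code: str) -> dict:
    """Step 2: check the emailed code and open a session with an inactivity clock."""
    code, expires = user.pending_code
    if datetime.utcnow() > expires or not secrets.compare_digest(code, submitted_code):
        raise PermissionError("invalid or expired verification code")
    return {"token": secrets.token_urlsafe(32), "last_activity": datetime.utcnow()}

def session_active(session: dict) -> bool:
    """Sessions expire after 30 minutes without activity."""
    return datetime.utcnow() - session["last_activity"] < SESSION_TIMEOUT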

      1. Student Records

After students are sampled from an institution’s enrollment list, the institution coordinator will receive a mailing containing a letter requesting student records data for those sampled students. Institutional contactors will follow up after the mailing to ensure receipt of the package and to answer any questions. Follow-up contacts include telephone prompts, reminder emails that are typically sent 2 weeks prior to a deadline, and touch-base emails typically sent when 3–4 weeks have passed with no outbound contact from study staff. Contact materials are included in Appendix D2. Staff will also be available by telephone and email to help when institution staff have questions or encounter problems.

As with the enrollment list collection, the student record collection will utilize the PDP. The content of the PDP specific to student records collection is included in Appendix H (the student records instrument content) and Appendix D2 (student records communication materials). The following options will be offered to institutions for collecting student records:

  • Web-based data entry interface. The web-based data entry interface allows the coordinator to enter data by student, by year.

  • Excel workbook. An Excel workbook will be created for each institution and will be preloaded with each sampled student’s ID, name, date of birth, and last four digits of SSN (if available). To facilitate simultaneous data entry by different offices within the institution, the workbook contains a separate worksheet for each of the following topic areas: Student Information, Financial Aid, Enrollment, and Budget. The user will download the Excel workbook from the PDP, enter the data, and then upload the completed file. Validation checks will occur both within Excel as data are entered and when the data are uploaded. Data will be imported into the web application so that institution staff can check their submission for quality control purposes.

  • CSV (comma separated values) file. Institutions with the means to export data from their internal database systems to a flat file may use this method of supplying student records. Institutions that select this method will be provided with detailed import specifications, and all data uploading will occur through the PDP. Like the Excel workbook option, data will be imported into the web application so that institution staff can check their submission before finalizing.
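To illustrate the kind of validation applied when an institution uploads a file, the minimal Python sketch below checks a CSV submission against a short list of required columns and allowed values. The column names and rules are hypothetical and do not reproduce the actual NPSAS:20 import specifications.

import csv

# Hypothetical required columns; the actual import specifications are provided to institutions.
REQUIRED_COLUMNS = ["student_id", "last_name", "first_name", "date_of_birth", "enrollment_status"]
VALID_STATUSES = {"full-time", "part-time", "not enrolled"}

def validate_student_records(path: str) -> list:
    """Return a list of human-readable problems found in an uploaded CSV file."""
    problems = []
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        missing = [c for c in REQUIRED_COLUMNS if c not in (reader.fieldnames or [])]
        if missing:
            return ["Missing required column(s): " + ", ".join(missing)]
        for i, row in enumerate(reader, start=2):      # row 1 is the header
            if not row["student_id"].strip():
                problems.append(f"Row {i}: blank student_id")
            if row["enrollment_status"].strip().lower() not in VALID_STATUSES:
                problems.append(f"Row {i}: unexpected enrollment_status '{row['enrollment_status']}'")
    return problems

# Problems would be reported back to the coordinator before the file is accepted, e.g.:
# for message in validate_student_records("student_records_upload.csv"):
#     print(message)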

Refusal aversion strategies. If institution staff report a lack of time or resources needed to provide student records data, the following additional accommodations will be offered:

  • reimbursement to help offset labor or staffing costs;

  • a reduced set of the most critical data elements (see data elements marked with an asterisk in Appendix D2, page D-212); and/or

  • an alternate submission format allowing staff to upload data in any format or file type that is convenient, rather than making their data conform to our template or CSV specifications.


    1. Tests of Procedures or Methods

In lieu of a field test study, which is more typically used for tests of procedures and methodologies, we will conduct limited testing with a subset of the full-scale student sample, referred to as the calibration sample. Below we describe our process for selecting the calibration sample and the tests planned.

      1. NPSAS:20 Calibration Sample

The respondent universe for the calibration sample will be the same as that for the main full-scale NPSAS:20 sample, except that the NPSAS:20 calibration sample will be drawn in advance of the main sample. Testing with the calibration sample will inform the NPSAS:20 full-scale design regarding incentive structure and nonresponse follow-up strategies: type of baseline incentive (prepaid and promised versus promised only; frontloading the promised incentive versus doubling the promised incentive at a later stage) and incentive design for the nonresponse follow-up. Because data collected from the calibration sample will be included in the final data files, the calibration cases will follow the same data collection protocol as the main sample cases once the experimental manipulation of incentives is over. The incentive experiments planned for the calibration sample are unlikely to have a direct impact on individual survey responses; rather, they should affect the overall participation decision. This is intentional: if the experiments produced large differences in survey estimates, the calibration data could not be combined with the main sample. For this reason, questionnaire wording cannot be tested with a calibration sample.

      1. NPSAS:20 Calibration Sample Integration with Main Sample

We will select the student calibration sample in December 2019 from fall enrollment lists provided by institutions selected from among the NPSAS:20 sampled institutions. The calibration institution sample size will be approximately 86 institutions, expected to yield 60 participating institutions, assuming a 70 percent calibration sample participation rate and an average of about 100 students sampled per institution from the fall lists. These 86 institutions will be selected purposively from among the full NPSAS:20 sample across the institution strata and will include both small and large institutions, as well as systems and individual institutions. A purposive subsample will allow us to target institutions with which we have a good relationship and a good sense of whether they would be willing to provide both fall and spring lists.

The calibration sample of students will be selected using the same sampling design as the main sample. The fall and spring lists for an institution will be deduplicated by SSN and, if no SSN is available, by name and date of birth. This will ensure that each student has only one chance of selection per institution. The student samples will then be selected using the same sampling rates for both the fall and spring lists for these institutions, so that there are no unequal probabilities of selection, and thus no unequal weights, within student strata in an institution. Ideally, all pre-sampling matching will occur for both the fall and spring enrollment lists to identify FTBs, veterans, and aided/unaided students. However, depending on how quickly institutions can provide fall lists, we may not have time to send data from all institutions to all sources for pre-sampling matching in time to select the calibration student sample.
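A minimal Python sketch of the deduplication and equal-rate selection logic described above follows. It assumes simplified list records (dictionaries with SSN, name, date of birth, and stratum fields) and a single illustrative sampling rate per student stratum, rather than the study’s full stratified design.

import random

def dedup_key(record):
    """Prefer SSN as the match key; fall back to name plus date of birth."""
    if record.get("ssn"):
        return ("ssn", record["ssn"])
    return ("name_dob", record["last_name"].lower(), record["first_name"].lower(),
            record["date_of_birth"])

def combine_lists(fall_list, spring_list):
    """Deduplicate the fall and spring enrollment lists so each student appears once."""
    combined = {}
    for record in fall_list + spring_list:
        combined.setdefault(dedup_key(record), record)
    return list(combined.values())

def sample_students(records, rates, seed=2020):
    """Select students at a fixed rate within each stratum, so that selection
    probabilities (and weights) are equal within strata regardless of which
    list a student came from."""
    rng = random.Random(seed)
    return [r for r in records if rng.random() < rates[r["stratum"]]]

# Illustrative (hypothetical) sampling rates by student stratum.
rates = {"FTB": 0.30, "other undergraduate": 0.10, "graduate": 0.05}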

A potential issue with requesting fall and spring enrollment lists is that some calibration sample institutions may send a fall list and then decide later not to send a spring list. Those institutions could be treated as nonresponding, with the student data sampled from their fall lists excluded from the full-scale data collection. However, we recommend instead computing an institution weight adjustment for such nonresponding institutions and including their student data in the data file.
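The general form of such an institution nonresponse weight adjustment is a weighting-class adjustment, sketched below in Python with hypothetical field names: within each adjustment class, the base weights of nonresponding institutions are redistributed to the responding institutions. This is a generic illustration, not the study’s specific adjustment.

from collections import defaultdict

def adjust_institution_weights(institutions):
    """Weighting-class adjustment: within each class,
    adjusted weight = base weight * (sum of all base weights) / (sum of respondent base weights).
    Assumes every class contains at least one responding institution."""
    totals = defaultdict(lambda: {"all": 0.0, "resp": 0.0})
    for inst in institutions:
        totals[inst["class"]]["all"] += inst["base_weight"]
        if inst["responded"]:
            totals[inst["class"]]["resp"] += inst["base_weight"]
    for inst in institutions:
        t = totals[inst["class"]]
        inst["adj_weight"] = inst["base_weight"] * t["all"] / t["resp"] if inst["responded"] else 0.0
    return institutions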

While enrollment lists would be collected twice for the calibration sample institutions, we would only collect student records data once, after students are sampled from the spring lists.

      1. Calibration Sample Data Collection

The proposed calibration sample design will include two experimental phases: Phase 1 will test the baseline incentive, and Phase 2 will test the nonresponse follow-up incentive. For consistency with the treatment of all main data collection cases, all of the remaining nonresponding calibration sample cases at the end of Phase 2 will be moved to Phases 3 and 4, where abbreviated and nonresponse instruments will be offered (see proposed main data collection design). Table 10 presents an overview of the calibration sample design.

Table 10. Calibration sample design by condition and phase of data collection


Group                           Phase 1                      Phase 2 (nonresponse follow-up)

Group 1 (n = 2,000)             $2 prepaid + $30 promised    $10 prepaid (via PayPal or check) + $20 promised

Group 2 (n = 2,000)             $2 prepaid + $15 promised    $30 promised

Group 3, Control (n = 2,000)    $0 prepaid + $30 promised    $30 promised

In Phase 1, we will test two components of our incentive plan: the effectiveness of a $2 prepaid cash incentive, sent with the invitation letter, and frontloading versus backloading of the promised incentive (offering $30 from the beginning vs. offering $15 that is doubled at a later phase). Evidence from many surveys across various modes of data collection indicates that prepaid incentives are more effective than promised incentives in increasing response rates (e.g., Singer et al., 1999; Singer, 2002; Warriner et al., 1996). An additional benefit for longitudinal studies is that prepaid incentives have also been found to help with locating sample members in subsequent waves of data collection (Kerachsky and Mallar, 1981) and to yield higher tracing and contact rates (Beydoun et al., 2006; Mann, Lynn, and Peterson, 2008). We will test the effectiveness of a $2 prepaid cash incentive on Phase 1 response rates by comparing Groups 1 and 3 in table 10.

The control group (Group 3) will receive the $30 promised incentive that has been used in NPSAS studies since 2004 and has proven to be effective. We are interested in investigating whether offering a smaller, $15 incentive from the beginning, followed by a doubled incentive later in the case of initial nonresponse, will be more effective than offering $30 initially. We will test this experimentally by comparing the Phase 1 results from Groups 1 and 2.

In Phase 2, we propose a different presentation of the same $30 incentive: for experimental Group 1, we will prepay $10 via PayPal or check and offer the remainder as a promised incentive. For Group 2, we will double the incentive offered in Phase 1 and offer $30 upon survey completion. For the control group, Group 3, we will continue to offer the same $30 promised incentive. The comparisons between Groups 1 and 2 at the end of Phase 2 will further inform whether frontloading or backloading of the incentive is better for response rates and representativeness.

We will select a calibration sample of 6,000 students, which allows for comparisons of response rates among three equal-sized experimental groups of 2,000 students each. This provides enough power to detect a difference in response rates of at least 4 percentage points, assuming 80 percent power, a Type I error rate of 5 percent, and a base response rate of 30 percent. This calculation assumes a two-sided chi-square test of the response proportions.
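A back-of-the-envelope check of this power calculation, using the normal approximation to a two-sided comparison of two proportions, is sketched below in Python. With 2,000 cases per group, a 30 percent base response rate, a 5 percent Type I error rate, and 80 percent power, the minimum detectable difference works out to roughly 4 percentage points; the function is illustrative and evaluates the variance at the base rate only.

from math import sqrt
from scipy.stats import norm

def min_detectable_difference(n_per_group, p_base, alpha=0.05, power=0.80):
    """Approximate minimum detectable difference for a two-sided comparison of two
    independent proportions with equal group sizes (normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    se = sqrt(2 * p_base * (1 - p_base) / n_per_group)  # variance evaluated at the base rate
    return (z_alpha + z_beta) * se

print(round(min_detectable_difference(2000, 0.30), 3))  # about 0.04, i.e., roughly 4 percentage points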

The experiments described above will allow us to test the following hypotheses:

  • Phase 1 outcomes

1a. There is no statistically significant difference in response rates between Group 1 and Group 3 (effect of the prepaid $2 incentive).

1b. There is no statistically significant difference in representativeness (demographic characteristics) between Group 1 and Group 3 (effect of the prepaid $2 incentive).

2a. There is no statistically significant difference in response rates between Group 1 and Group 2 (effect of frontloading the promised incentive).

2b. There is no statistically significant difference in representativeness (demographic characteristics) between Group 1 and Group 2 (effect of frontloading the promised incentive).

  • Phases 1 and 2 outcomes combined

3a. There is no statistically significant difference in response rates between Group 1 and Group 2.

3b. There is no statistically significant difference in representativeness (demographic characteristics) between Group 1 and Group 2.

4a. There is no statistically significant difference in response rates between Group 1 and Group 3.

4b. There is no statistically significant difference in representativeness (demographic characteristics) between Group 1 and Group 3.

5a. There is no statistically significant difference in response rates between Group 2 and Group 3.

5b. There is no statistically significant difference in representativeness (demographic characteristics) between Group 2 and Group 3.
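Each hypothesis above reduces to a comparison of two proportions. As an illustration, the Python sketch below tests a single response rate comparison (e.g., hypothesis 1a, Group 1 versus Group 3) with a chi-square test; the counts shown are placeholders, not study results.

from scipy.stats import chi2_contingency

def compare_response_rates(resp_a, n_a, resp_b, n_b):
    """Two-sided chi-square test of equal response rates in two groups."""
    table = [[resp_a, n_a - resp_a],
             [resp_b, n_b - resp_b]]
    chi2, p_value, dof, expected = chi2_contingency(table)
    return chi2, p_value

# Placeholder counts: 660 of 2,000 responding in Group 1 versus 600 of 2,000 in Group 3.
chi2, p = compare_response_rates(660, 2000, 600, 2000)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")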

The proposed experimental period of the calibration sample is expected to run for 10 weeks, from January 6, 2020, to March 17, 2020 (see figure 2). The final decision on which incentive combination to propose for the main data collection will be driven by the overall increase in response rates at the end of week 3 of Phase 2, rather than by which condition was the most successful in each individual phase.

Phase 1 will consist of 7–8 weeks of data collection, and Phase 2 will consist of 2–3 weeks, after which the results will be analyzed to inform the full-scale implementation. Phase 2 will continue beyond the 2–3 weeks until response rates and other monitored metrics level out (although analyses will be based on the first 2–3 weeks of Phase 2 data collection), after which cases will be switched to Phases 3 and 4. Analyses will focus on the Phase 1 outcomes, as well as Phases 1 and 2 combined, to determine the best incentive design for full-scale data collection. In addition to response rates, outcomes of interest will include any shift in overall estimates due to Phase 2 and the characteristics of respondents and sectors successfully recruited in Phase 2. Results and the proposed main sample incentive structure will be submitted to OMB for consideration as a change request in March 2020, before implementation with the main sample.

Except for Phase 1, phase duration will be based on phase capacity. Phase 1 duration will be set to 7–8 weeks because of the time required to analyze data from the calibration sample before main data collection begins. Phase capacity will be determined based on a series of individual indicators (e.g., response rate by sector, level of effort) and a summary performance measure that models the likelihood of responding over time, by sector, as a function of effort, response rate, and related factors.
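As a simplified stand-in for the summary performance measure described above, the Python sketch below monitors the recent yield per contact attempt for a sector and flags when it falls below a threshold; the full measure would model response propensity over time, and the data and threshold shown here are hypothetical.

def marginal_return(daily_completes, daily_attempts, window=7):
    """Completed surveys per contact attempt over the most recent window of days."""
    recent_completes = sum(daily_completes[-window:])
    recent_attempts = sum(daily_attempts[-window:])
    return recent_completes / recent_attempts if recent_attempts else 0.0

def phase_capacity_reached(daily_completes, daily_attempts, threshold=0.02):
    """Flag a sector as having reached phase capacity when recent yield per
    attempt falls below an (illustrative) threshold."""
    return marginal_return(daily_completes, daily_attempts) < threshold

# Example: a sector's completes and outbound attempts over the last 7 days.
completes = [20, 16, 12, 9, 7, 5, 4]
attempts = [800, 820, 790, 805, 810, 795, 800]
print(phase_capacity_reached(completes, attempts))  # True: yield per attempt has leveled off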

Figure 2. NPSAS:20 calibration timeline

NOTE: DC = Data Collection

      1. NPSAS:20 Main Data Collection

The NPSAS:20 data collection is expected to start early in April 2020 and will involve a 4-phase design. The first two phases will be informed by the experimental incentive manipulation in the calibration sample described above, while the third and fourth phases will attempt to obtain information from the remaining nonrespondents using abbreviated 15-minute and 5-minute instruments, respectively. Table 11 presents the proposed design.

Table 11. NPSAS:20 Data Collection Design

Phase number    Description

Phase 1         Successful incentive from calibration sample offered to everyone.

Phase 2         Successful incentive from calibration sample offered to remaining nonresponding cases.

Phase 3         Abbreviated survey (15 minutes) + $20 or $30 promised, depending on the data collection group.

Phase 4         Mini survey for nonresponse adjustments (5 minutes) + $5 promised.

Because Phase 1 and Phase 2 outcomes from the calibration sample cannot be considered in isolation (e.g., the propensity to respond in Phase 2 will be affected not only by what is offered in Phase 2, but also by what was previously offered in Phase 1), the successful incentive strategy for each phase of the NPSAS:20 data collection will be driven by the overall Phase 1 and 2 outcome of the calibration sample. For example, if the control condition (Group 3) outperforms the two experimental conditions (Groups 1 and 2) in week 3 of calibration Phase 2, the incentive strategy implemented in Phases 1 and 2 of the main NPSAS:20 data collection will be Group 3’s incentive design ($0 prepaid and $30 promised in Phase 1, and no change in Phase 2), regardless of whether there was a significant increase in response rates or representativeness based on the $2 prepaid incentive in Phase 1 calibration.

The duration of each phase may vary across sample release waves and will be determined based on phase capacity (monitored via various dashboard indicators of marginal returns). As in previous NPSAS data collections, we will offer an abbreviated survey once Phase 2 has reached its phase capacity. The abbreviated survey will target a 15-minute completion time and will continue the promised incentive associated with the data collection protocol selected for the main data collection (for calibration sample cases, the promised incentive offered to their data collection group). It will include questions from the enrollment section (e.g., current attendance at NPSAS institution, attendance dates, completed coursework); the FAFSA section (e.g., homelessness, age, marital status, military status, income, number of dependents, parents’ income and education); the education experiences section (e.g., remedial courses, math courses taken since high school); the financial aid section (e.g., federal and private loans); the employment section (e.g., school job, employer name, hours worked); and the background section (e.g., meal plan, frequency of skipping meals, number of days without food).

The purpose of Phase 4 is to enable nonresponse bias assessment and, depending on the response rate, to develop a nonresponse weighting adjustment. Phase 4 data collection will begin approximately a month after the main data collection ends, starting in January 2021 and lasting through the end of February 2021. Information collected in Phase 4 will not be used in any response rate calculations (Phase 4 cases will not be included in the final data files), but only for nonresponse bias assessment and, potentially, adjustment. The target completion time for the nonresponse instrument is 5 minutes. This instrument will be offered to all remaining nonrespondents, with a promised incentive of $5. The instrument will include a subset of items from the abbreviated instrument, drawn mostly from the FAFSA section.

The invitation to complete the nonresponse bias instrument will be sent to all remaining nonrespondents via mail, with a link to the web survey. The envelope will contain a sample questionnaire to demonstrate the minimal burden of the request. The email reminders will also contain a link to the website, where sample members will be able to view a sample of the questions. Following two emails, one letter/postcard, and two text message reminders, we will offer a text message survey as a final effort to collect data from nonrespondents.

      1. BPS:20/22 Field Test Panel Maintenance

For the BPS:20/22 field test panel maintenance, we will implement an experiment that examines continued use of NPSAS:20 branding, compared to the BPS:20/22-only branding that will be used throughout the BPS longitudinal study. Researchers note that positive institutional recognition on the outside of an envelope, or in the identity of the sender of an email, may increase the likelihood that sample members open the letter or email (Dillman et al. 2014), and previous research has found increased response rates for known organizations compared to unknown organizations (Groves et al. 2012; Avdeyeva and Matland 2013; Edwards et al. 2014). This leads us to hypothesize that sample members who see familiar branding (i.e., the NPSAS:20 design concept, including familiar images, colors, font, and logo) will be more likely to open and respond to the BPS:20/22 panel maintenance postcard than those who receive the panel maintenance postcard with new BPS:20/22 branding (i.e., a design concept with new images, colors, font, and logo).

The benefits of this institutional recognition may be amplified by more explicitly connecting the survey request of the follow-up study (BPS) to the request of the initial study (NPSAS). In the BPS:20/22 field test, we will continue the exposure to the different branding design concepts and compare the impact of the study branding with how we frame the request for sample members to participate in BPS:20/22. Previous literature has examined two versions of framing the survey request for a follow-up study: (1) a loss frame, in which baseline respondents are informed that the information they have already provided will be less valuable if they do not participate in the follow-up study, and (2) a gain frame, in which they are informed that the information they have already provided will be more valuable if they participate (Tourangeau and Ye 2009). Tourangeau and Ye (2009) found higher response rates to the follow-up study when the loss frame request was used (88 percent) compared to when the gain frame request was used (78 percent). However, a similar experiment asking for administrative data linkage consent, conducted by Sakshaug and Kreuter (2014), found that highlighting the benefits of the data linkage resulted in higher consent rates than more neutral wording.

The first of the BPS:20/22 branding and wording experiments involves splitting the BPS:20/22 field test sample into two random assignment groups for panel maintenance postcards; the members of one group will receive a BPS:20/22-branded postcard, and members of the other group will receive a NPSAS:20-branded postcard. The BPS:20/22-branded panel maintenance postcard would continue to include the standard ‘tree of life’ logo used by the U.S. Department of Education on the outside of the postcard, along with a unique BPS:20/22 design concept including images, colors, and font. Additionally, the inside of the postcard would continue the BPS:20/22 branding with the design concept and the BPS:20/22 logo. In contrast, the NPSAS:20-branded panel maintenance postcard would include both the tree of life logo on the outside of the postcard and the NPSAS:20 study logo (with the study name removed from the logo so as not to identify study members). To maintain the connection to the NPSAS:20 study, both the outside and inside of the postcard would incorporate the images, colors, and font from the NPSAS:20 design concept. The inside of the NPSAS:20-branded postcard would then include both the NPSAS:20 and the BPS:20/22 logos to connect the new BPS:20/22 study with the NPSAS:20 study. RTI would monitor differences in participation rates for the panel maintenance in both groups to determine whether the connection with the initial study (NPSAS:20) encourages participation in the panel maintenance for the follow-up study (BPS:20/22).

Second, in the BPS:20/22 field test survey request, we will continue sample members on the same branding treatment and randomly assign sample members within the branding treatments to receive one of two survey request wordings (loss vs. gain framing). This will allow us to examine the impact of connecting the new BPS:20/22 study with the initial NPSAS:20 study through both design recognition and wording.
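A minimal Python sketch of this two-stage random assignment (branding assigned at panel maintenance, framing then assigned within each branding group at the survey request) is shown below; the group labels, sample IDs, and seeds are illustrative.

import random

def assign_branding(sample_ids, seed=2022):
    """Stage 1: randomly split the field test sample into the two branding groups."""
    rng = random.Random(seed)
    ids = list(sample_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {sid: ("NPSAS:20 branding" if i < half else "BPS:20/22 branding")
            for i, sid in enumerate(ids)}

def assign_framing(branding, seed=2023):
    """Stage 2: within each branding group, randomly assign the loss or gain frame."""
    rng = random.Random(seed)
    framing = {}
    for group in ("NPSAS:20 branding", "BPS:20/22 branding"):
        members = [sid for sid, b in branding.items() if b == group]
        rng.shuffle(members)
        half = len(members) // 2
        for i, sid in enumerate(members):
            framing[sid] = "loss frame" if i < half else "gain frame"
    return framing

branding = assign_branding(range(1000))   # hypothetical sample IDs
framing = assign_framing(branding)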

    1. Reviewing Statisticians and Individuals Responsible for Designing and Conducting the Study

NPSAS:20 is being conducted by NCES. The following statisticians at NCES are responsible for the statistical aspects of the study: Dr. Tracy Hunt-White, Dr. David Richards, Mr. Ted Socha, Dr. Elise Christopher, and Dr. Gail Mulligan. NCES’s prime contractor for NPSAS:20 is RTI International (Contract# 91990018C0039), and subcontractors include Leonard Resource Group; HR Directions; ManTech, Inc.; Research Support Services; EurekaFacts; Strategic Communications, Inc.; and Activate Research. Dr. Anthony Jones, Dr. Sandy Baum, and Dr. Stephen Porter are consultants on the study. The following staff members at RTI are working on the statistical aspects of the study design: Dr. Jennifer Wine, Mr. Peter Siegel, Mr. Stephen Black, Mr. Darryl Cooney, Dr. T. Austin Lacy, Dr. Antje Kirchner, and Dr. Emilia Peytcheva. Principal professional RTI staff not listed above who are assigned to the study include: Ms. Ashley Wilson, Ms. Kristin Dudley, Ms. Jamie Wescott, Ms. Tiffany Mattox, Mr. Austin Caperton, Mr. Jeff Franklin, Dr. Nicole Tate, Mr. Johnathan Conzelmann, Dr. Rachel Burns, Mr. Michael Bryan, and Dr. Josh Pretlow.

  1. References

Avdeyeva, O.A., & Matland, R.E. (2013). An Experimental Test of Mail Surveys as a Tool for Social Inquiry in Russia. International Journal of Public Opinion Research, 25(2), 173–194.

Beydoun, H., Saftlas, A.F., Harland, K., and Triche, E. (2006). Combining Conditional and Unconditional Recruitment Incentives Could Facilitate Telephone Tracing in Surveys of Postpartum Women. Journal of Clinical Epidemiology, 59(7), 732–738.

Dillman, D.A., Smyth, J.D., and Christian, L.M. (2014). Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method (4th ed.). Hoboken, NJ: John Wiley & Sons.

Edwards, M.L., Dillman, D.A., and Smyth, J.D. (2014). An Experimental Test of the Effects of Survey Sponsorship on Internet and Mail Survey Response. Public Opinion Quarterly, 78(3), 734-750.

Groves, R.M., Cialdini, R., and Couper, M. (1992). Understanding the Decision to Participate in a Survey. Public Opinion Quarterly, 56(4), 475-495.

Groves, R.M., Presser, S., Tourangeau, R., West, B.T., Couper, M.P., Singer, E., and Toppe, C. (2012). Support for the Survey Sponsor and Nonresponse Bias. Public Opinion Quarterly, 76(3), 512–524.

Kerachsky, S. J., and Mallar, C.D. (1981). The Effects of Monetary Payments on Survey Responses: Experimental Evidence from a Longitudinal Study of Economically Disadvantaged Youths. In JSM proceedings (pp. 258–263). Alexandria, VA: American Statistical Association.

Mann, S.L., Lynn, D.J., and Peterson, A.V. (2008). The Downstream Effect of Token Prepaid Cash Incentives to Parents on Their Young Adult Children’s Survey Participation. Public Opinion Quarterly, 72(3), 487–501.

Sakshaug, J.W., and Kreuter, F. (2014). The Effect of Benefit Wording on Consent to Link Survey and Administrative Records in a Web Survey. Public Opinion Quarterly 78(1), 166-176.

Singer, E., Van Hoewyk, J., Gebler, N., Raghunathan, T., and McGonagle, K. (1999). The Effect of Incentives on Response Rates in Interviewer-Mediated Surveys. Journal of Official Statistics, 15, 217–230.

Tourangeau, R., and Ye, C. (2009). The Framing of the Survey Request and Panel Attrition. Public Opinion Quarterly 73(2), 338-348.

1 The U.S. service academies (the U.S. Air Force Academy, the U.S. Coast Guard Academy, the U.S. Military Academy, the U.S. Merchant Marine Academy, and the U.S. Naval Academy) are not eligible for this financial aid study because of their unique funding/tuition base.

2 A Title IV eligible institution is an institution that has a written agreement (program participation agreement) with the U.S. Secretary of Education that allows the institution to participate in any of the Title IV federal student financial assistance programs other than the State Student Incentive Grant and the National Early Intervention Scholarship and Partnership programs.

3 The GED® credential is a high school equivalency credential earned by passing the GED® test, which is administered by GED Testing Service. For more information on the GED test and credential, see https://ged.com/about_test/test_subjects/.

4 See https://nces.ed.gov/pubs2018/2018195.pdf for further detail on imputation in IPEDS.

5 The public 4-year institution stratum includes all eligible institutions that IPEDS classifies as public 4-year institutions, including those that are non–doctorate-granting, primarily sub-baccalaureate institutions.

6 From this point forward, the word “state” will refer to the 50 states, the District of Columbia, and Puerto Rico.

7 Based on the latest IPEDS data, there are only two states (Mississippi and Nevada) that have between 31 and 36 institutions in the “other” stratum and will be affected by this cutoff.

8 Chromy, J.R. (1979). Sequential Sample Selection Methods. In Proceedings of the Survey Research Methods Section of the American Statistical Association (pp. 401–406). Alexandria, VA: American Statistical Association.

10 Folsom, R.E., Potter, F.J., and Williams, S.R. (1987). Notes on a Composite Size Measure for Self-Weighting Samples in Multiple Domains. In Proceedings of the Section on Survey Research Methods of the American Statistical Association. Alexandria, VA: American Statistical Association, 792–796.

11 Self-weighting samples have equal weights within sampling domains.

12 A Hispanic-serving institution (HSI) indicator is no longer available from IPEDS, so we created an HSI proxy following the definition of HSI provided by the U.S. Department of Education (https://www2.ed.gov/programs/idueshsi/definition.html) and using IPEDS Hispanic enrollment data.

13 We will decide what, if any, collapsing is needed of the categories for the purposes of implicit stratification.

14 Response rates for the student survey and student records collection for NPSAS:12 were 69 percent and 92 percent, respectively; for NPSAS:16, these rates were 66 percent and 93 percent, respectively. The expected student records completion rate and yield for NPSAS:20 are preliminary pending the completion of NPSAS:18-AC student records collection.

15 55,400 ≈ 30,000/.95/.70/.78 – (53,125 non-FTBs*.046), where .78 = 1-.22.

