2020/25 BEGINNING POSTSECONDARY STUDENTS
(BPS:20/25) FULL-SCALE STUDY
Supporting Statement Part B
OMB # 1850-0631 v.21

Submitted by
National Center for Education Statistics
U.S. Department of Education

October 2024

Contents
B. Collection of Information Employing Statistical Methods .............................................................. 4
1. Respondent Universe ............................................................................................................................................................ 4
2. Statistical Methodology ......................................................................................................................................................... 4
a. BPS:20/25 Full-scale Sample........................................................................................................................................ 7
3. Methods for Maximizing Response Rates ......................................................................................................................... 8
a. Tracing of Sample Members ........................................................................................................................................ 8
b. Training for Data Collection Staff ............................................................................................................................... 9
c. Case Management System ........................................................................................................................................... 10
d. Survey Instrument Design .......................................................................................................................................... 10
e. Refusal Aversion and Conversion .............................................................................................................................. 11
4. BPS:20/25 Full-scale Data Collection Procedures ......................................................................................................... 11
5. Tests of Procedures or Methods ....................................................................................................................................... 16
6. Reviewing Statisticians and Individuals Responsible for Designing and Conducting the Study ............................ 17

Tables and Figures
Table 1. Counts of BPS:20/22 field test sampled and eligible students and response rates, by NPSAS:20 full-scale data collection outcomes: 2020–22...................................................................................................... 4
Table 2. BPS:20/25 sample member status, by NPSAS:20 and BPS:20/22 field test response status and
known eligibility status: 2020–25 ................................................................................................................... 5
Table 3. BPS:20/25 Field Test Expected Completes by NPSAS:20 and BPS:20/22 Field Test Data Collection
Outcome Groups ............................................................................................................................................. 6
Table 4. BPS:20/22 full-scale study eligibility and unweighted response rates by base-year outcome status:
2020-22 .............................................................................................................................................................. 6
Table 5. BPS:20/25 sample member disposition, by NPSAS:20 and BPS:20/22 response status: 2020–22 .... 7
Table 6. Expected BPS:20/25 full-scale study response rates by base-year and BPS:20/22 outcome status:
2020-22 .............................................................................................................................................................. 7
Table 7. BPS:20/25 full-scale data collection protocols by data collection phase and group assignment ....... 13


B. Collection of Information Employing Statistical Methods
This submission requests clearance for the 2020/25 Beginning Postsecondary Students Longitudinal Study
(BPS:20/25) full-scale data collection materials and procedures. BPS:20/25 is the second follow-up of sample
members drawn from the 2019-20 National Postsecondary Student Aid Study (NPSAS:20), specifically students
who began their postsecondary education during the 2018-19 (field test sample) or 2019-20 (full-scale sample)
academic year. For details on the NPSAS:20 sampling design, see the NPSAS:20 Supporting Statement Part B
(OMB# 1850-0666 v.25). Specific plans for the BPS:20/25 data collection are provided below.
1. Respondent Universe
The respondent universe for BPS:20/25 consists of all students who began their postsecondary education for the
first time during the 2018-19 (field test sample) or 2019-20 (full-scale sample) academic year at any Title IV-eligible postsecondary institution in the United States.
In prior BPS data collections, an NPSAS field test, conducted a year before the full-scale study, has been used to
identify a field test sample for the BPS longitudinal follow-up. For example, the NPSAS:12 field test was the
source of the sample for the BPS:12/14 field test. However, because NPSAS:20 did not conduct a field test, the
BPS:20/22 field test sample was drawn from the NPSAS:20 full-scale sample. The BPS:20/25 sample is created
from the BPS:20/22 sample.
While the BPS:20/25 full-scale cohort will be composed of students who first enrolled in postsecondary education
after high school during the 2019-20 academic year, the BPS:20/25 field test cohort will follow students whose
first postsecondary enrollment after high school occurred during the 2018-19 academic year. This is necessary to
test the temporal components of the survey, which ask about enrollment and experiences during academic years
4-6 after first enrollment. The BPS:20/25 field test sample will include students from the NPSAS:20 full-scale
sample who were identified as confirmed or potential first-time beginners in the 2018-19 academic year. The
BPS:20/25 full-scale sample will include students from the NPSAS:20 full-scale sample who were confirmed as
2019-20 academic year first-time beginning students based on survey, institution, or other administrative data.
2. Statistical Methodology
Field Test Design
The BPS:20/25 field test will be implemented to test procedures, methods, and systems planned for the
BPS:20/25 full-scale study in a realistic operational environment prior to implementation in the full-scale study.
The field test is designed to test and validate data collection and monitoring procedures that will obtain the most
accurate data in the least amount of time. Specific field test evaluation goals for the student survey include the
following:
• Identifying problematic data elements in the BPS:20/25 student interview
• Testing two data collection experiments aimed at reducing nonresponse error and the potential for
nonresponse bias:
o an “address confirmation” incentive experiment, in which respondents receive an additional $5 if
they confirm their phone number, mailing address, and e-mail address before the start of the survey; and
o a reminder mode experiment, which explores the implications of dropping telephone reminders
for sample members who have responded to previous survey requests.

Additionally, we will evaluate the time required to complete the survey and identify possible modifications to
reduce respondent burden in the full-scale survey, if needed. We will also conduct reliability re-interviews to
evaluate the temporal consistency of selected interview items.


BPS:20/22 Field Test Sample
The sample for the BPS:20/25 field test is built upon the sample from the prior field test data collection with
this cohort, BPS:20/22. Individuals who self-identified as first-time beginners (FTBs) in the 2018-19 academic
year (i.e., prior-year FTBs) in the NPSAS:20 survey, and those identified as potential prior-year FTBs using
administrative data, were eligible for selection for the BPS:20/22 field test. A field test sample of approximately
3,700 students was identified through enrollment lists, administrative data, and survey responses collected during
the NPSAS:20 full-scale study. The BPS:20/22 field test sample consisted of four distinct groups based on the
NPSAS:20 full-scale data collection outcomes:
1) NPSAS:20-eligible survey respondents who self-identified as FTBs between July 1, 2018 and April 30,
2019 (approximate n = 2,440).
2) NPSAS:20-ineligible survey respondents who self-identified as FTBs between July 1, 2018 and April
30, 2019 (approximate n = 180). This constituted an unusual group because they were not enrolled
during the NPSAS:20 academic year but were enrolled during the prior year and had either completed a
short-term credential, stopped out, or dropped out. Lack of enrollment during the NPSAS:20 academic year
made them ineligible for NPSAS:20. However, because a BPS first follow-up data collection included
students who were not enrolled one year after they first began postsecondary education, this group was
useful in field testing BPS:20/22 methods and procedures across different eligible populations. While
these students were not eligible for NPSAS:20, they were asked to complete a subset of survey questions
to confirm FTB status in 2018-19. If found eligible, we notified them about the potential for follow up
and administered the locating section of the NPSAS:20 survey so that they could be contacted and asked
to participate in the BPS:20/22 field test.
3) NPSAS:20 survey nonrespondents who were potential academic year 2018-19 FTBs based on
administrative data (approximate n = 540).
4) NPSAS:20 sample members who were not invited to participate in the survey (administrative-only
individuals) who were also potential academic year 2018-19 FTBs based on administrative data
(approximate n = 540).
For the first group, NPSAS:20-eligible survey respondents, rather than relying on random sampling from 2018-19 academic year FTB students in the NPSAS:20 sample, the BPS:20/22 field test used a quota sampling
approach to reduce burden on NPSAS:20 respondents. Potentially eligible BPS:20/22 field test sample members
received an additional question module targeted at 2018-19 FTB status in the NPSAS:20 instrument. This set of
survey items took approximately 10 minutes to administer. Once quotas were met, the administration of the
additional items was stopped. In contrast, a random sampling approach would have resulted in additional
NPSAS:20 respondents being administered additional items unnecessarily. Final BPS:20/22 field test sample
counts included here differ slightly from those described in the BPS:20/22 OMB package, since that package was
prepared prior to implementation of the quota sampling approach. Table 1 summarizes the BPS:20/22 sampled,
eligible, and responding individuals by their NPSAS:20 full-scale data collection outcomes. The count of eligible
students was updated after BPS:20/22 data collection concluded. A derived variable calculating month and year
of first postsecondary enrollment identified those with enrollment before the targeted academic year, thus
reducing the total eligible sample. This derived variable will also be applied to the full-scale sample. This process
allows for detailed reporting of eligibility, removing the need for an eligibility screener in the BPS:20/25 field
test and full-scale data collections.
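As a rough illustration of the quota stopping rule described above, consider the following sketch (Python; the quota cells, their sizes, and the function names are hypothetical, since the actual NPSAS:20 quota definitions are not described in this document):

```python
# Illustrative quota-sampling stopping rule: the extra 2018-19 FTB module
# is administered only while the respondent's quota cell remains open.
# Cell definitions and quota sizes here are hypothetical.
quotas = {"public_2yr": 600, "public_4yr": 900, "private_4yr": 700, "other": 240}
filled = {cell: 0 for cell in quotas}

def should_administer_ftb_module(cell: str) -> bool:
    """Administer the additional FTB items only if this cell is still open."""
    return filled[cell] < quotas[cell]

def record_ftb_complete(cell: str) -> None:
    """Count a completed FTB module toward the cell's quota."""
    filled[cell] += 1
```

Once every cell's quota is met, the module stops being administered, which is what avoids burdening additional NPSAS:20 respondents unnecessarily.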
Table 1. Counts of BPS:20/22 field test sampled and eligible students and response rates, by NPSAS:20 full-scale data collection outcomes: 2020–22

| NPSAS:20 full-scale data collection outcome | Sampled students | Eligible students¹ | Respondents | Response rate |
| Total | 3,700 | 3,280 | 2,120 | 0.65 |
| NPSAS:20-eligible survey respondent | 2,440 | 2,310 | 1,800 | 0.78 |
| NPSAS:20-ineligible survey respondent | 180 | 160 | 110 | 0.70 |
| NPSAS:20-eligible survey nonrespondent | 540 | 430 | 50 | 0.12 |
| NPSAS:20-eligible administrative only | 540 | 370 | 150 | 0.42 |

¹ Sample member eligibility was determined using the student screener combined with derived variables based on institutional records. The derived month and year of first postsecondary enrollment was used to align with the approach planned for full-scale data collection and to remove the need for an eligibility screener during BPS:20/25 field test data collection.
NOTE: Sample sizes rounded to the nearest 10. Percentages are based on the unrounded count of eligible students within the row under consideration. Detail may not sum to totals because of rounding.
SOURCE: U.S. Department of Education, National Center for Education Statistics, 2020/22 Beginning Postsecondary Students Longitudinal Study (BPS:20/22).

BPS:20/25 Field Test Sample
The BPS:20/25 field test is the second follow-up data collection, to be conducted three years after the BPS:20/22
field test. The BPS:20/25 field test sample is constructed from the BPS:20/22 field test sample. The main source of
change from the BPS:20/22 field test sample to the BPS:20/25 field test sample is the exclusion of sample
members determined not to be FTB students and therefore ineligible for BPS. Deceased individuals are also
excluded from the BPS:20/25 field test sample. Table 2 shows the distribution of the approximately 3,280
sample members determined to be eligible, by study member status and survey response status, and identifies
groups that will be fielded.
Table 2. BPS:20/25 sample member status, by NPSAS:20 and BPS:20/22 field test response status and known eligibility status: 2020–25

| Group | NPSAS:20 survey respondent | BPS:20/22 field test respondent | Known eligibility¹ | Count | Fielded in BPS:20/25 field test |
| Total |  |  |  | 3,280 |  |
| 1 | Yes | Yes | Yes | 1,910 | Yes |
| 2 | Yes | No | Yes | 550 | Yes |
| 3² | Yes | No | No | 20 | No |
| 4 | No | Yes | Yes | 200 | Yes |
| 5³ | No | No | Yes | 370 | No |
| 6³ | No | No | No | 230 | No |

¹ Eligibility remains unknown if the individual did not have enough administrative data and did not complete the BPS:20/22 field test eligibility screener.
² Group 3 is not fielded because these individuals were NPSAS:20 ineligible and do not have enough administrative data to determine whether they are FTBs.
³ Groups 5 and 6 are not fielded because they did not respond to either the NPSAS:20 survey or the BPS:20/22 field test survey.
NOTE: Sample sizes rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, 2020/22 Beginning Postsecondary Students Longitudinal Study (BPS:20/22).

While BPS:20/25 field test sample members who did not respond to either the base-year NPSAS:20 survey or
the BPS:20/22 field test survey are eligible for the BPS:20/25 field test, these sample members will not be fielded
due to low expected participation rates. This results in approximately 2,660 BPS:20/25 field test sample
members to be fielded.
Based on past second follow-up BPS field test data collections, we expect a response rate of 70 percent. Because
we are only including individuals known to be eligible for the BPS:20/25 field test sample, we have an overall
eligibility rate of 100 percent. Table 3 below displays the BPS:20/25 field test sample size by NPSAS:20 and
BPS:20/22 field test data collection outcome groups and the expected yield of completed interviews. 


Table 3. BPS:20/25 Field Test Expected Completes by NPSAS:20 and BPS:20/22 Field Test Data Collection Outcome Groups

| NPSAS:20 survey respondent | BPS:20/22 field test respondent | Sample size | Expected response rate | Expected completes |
| Total |  | 2,660 | 0.70 | 1,880 |
| Yes | Yes | 1,910 | 0.85 | 1,620 |
| Yes | No | 550 | 0.28 | 150 |
| No | Yes | 200 | 0.50 | 100 |

NOTE: Sample sizes rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, 2020/22 Beginning Postsecondary Students Longitudinal Study (BPS:20/22).

a. BPS:20/25 Full-scale Sample

The BPS:20/25 full-scale sample will be constructed from the BPS:20/22 full-scale sample. The BPS:20/22
sample was composed of students who began their postsecondary education for the first time during the 2019-20
academic year at any Title IV-eligible postsecondary institution in the United States. Approximately 26,470
respondents to the NPSAS:20 full-scale student surveys self-identified as FTBs. In addition, to achieve full
population coverage of the BPS sample, a subsample of approximately 10,860 students who were identified by
the NPSAS sample institution or administrative data as potential FTBs was selected. This subsample consisted
of a combination of NPSAS:20 survey nonrespondents and administrative data-only sample members who were
NPSAS:20 study respondents.
As shown in Table 4 below, the starting BPS:20/22 full-scale sample size was approximately 37,330, with an
eligible sample of about 34,240. Eligibility was determined using screener responses and administrative data;
deceased individuals are removed from the eligible counts. An unweighted response rate of 65 percent was
observed among the eligible sample members, which yielded approximately 22,320 responding FTBs.
Table 4. BPS:20/22 full-scale study eligibility and unweighted response rates by base-year outcome status: 2020–22

| NPSAS:20 full-scale outcome | Sampled | Eligible¹ | Eligibility rate | Respondents | Response rate² |
| Total | 37,330 | 34,240 | 92% | 22,320 | 65% |
| NPSAS survey respondents | 26,470 | 25,230 | 95% | 18,900 | 75% |
| NPSAS survey nonrespondents | 5,510 | 4,470 | 81% | 980 | 22% |
| NPSAS administrative-only students | 5,350 | 4,540 | 85% | 2,450 | 54% |

¹ Sample member eligibility was determined during the student survey or from institutional records in the absence of a student survey. Individuals found to be deceased during data collection have also been removed from the total eligible sample members.
² Unweighted response rate.
NOTE: Sample sizes rounded to the nearest 10. Percentages are based on the unrounded count of eligible students within the row under consideration. Detail may not sum to totals because of rounding.
SOURCE: U.S. Department of Education, National Center for Education Statistics, 2020/22 Beginning Postsecondary Students Longitudinal Study (BPS:20/22).

BPS:20/25 Full-scale Sample
BPS:20/25 will be the second follow-up data collection, conducted three years after BPS:20/22. The major
difference between the BPS:20/22 sample and the BPS:20/25 sample is the exclusion of sample members who
were determined not to be FTB students and are therefore ineligible for BPS. Deceased individuals are also
excluded from the BPS:20/25 sample. Table 5 shows the distribution of the 34,240 sample members determined
to be eligible for BPS:20/25 data collection, by study member status and survey response status, and identifies
groups that will be fielded (included in data collection activities with the objective of obtaining a completed
survey) in BPS:20/25. These 34,240 sample members are the same individuals identified as eligible students in
Table 4.

Table 5. BPS:20/25 sample member disposition, by NPSAS:20 and BPS:20/22 response status: 2020–22

| Group | NPSAS:20 data collection outcome | BPS:20/22 respondent | Count | Fielded in BPS:20/25 full-scale |
| Total |  |  | 34,240 |  |
| 1 | NPSAS survey respondents | Yes | 18,900 | Yes |
| 2 | NPSAS survey nonrespondents | Yes | 980 | Yes |
| 3 | NPSAS administrative-only students | Yes | 2,450 | Yes |
| 4 | NPSAS survey respondents | No | 6,330 | Yes |
| 5¹ | NPSAS survey nonrespondents | No | 3,490 | No |
| 6¹ | NPSAS administrative-only students | No | 2,100 | No |

¹ Groups 5 and 6 will not be fielded, as they did not respond to either the NPSAS:20 survey or the BPS:20/22 survey.
NOTE: Sample sizes rounded to the nearest 10. Detail may not sum to totals because of rounding.
SOURCE: U.S. Department of Education, National Center for Education Statistics, 2020/22 Beginning Postsecondary Students Longitudinal Study (BPS:20/22).

While BPS:20/25 sample members who did not respond to either the NPSAS:20 student survey or the
BPS:20/22 survey are eligible for BPS:20/25, these sample members (groups 5 and 6 in Table 5) will not be
fielded because they have responded at very low rates in previous administrations. As a result, approximately
5,590 BPS:20/25 sample members will not be fielded. Instead, they will be treated as study nonrespondents for
purposes of response rate calculation and accounted for with weight adjustments.
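The document does not detail the weighting procedure, but a standard weighting-class nonresponse adjustment conveys the idea; in this hedged sketch (all field names hypothetical, and actual BPS weighting is considerably more elaborate), the base weights of eligible nonrespondents, including the unfielded groups 5 and 6, are redistributed to respondents within each weighting class:

```python
# Sketch of a weighting-class nonresponse adjustment (illustrative only).
from collections import defaultdict

def adjust_weights(cases):
    """cases: dicts with keys 'wclass', 'base_weight', 'respondent' (bool)."""
    eligible = defaultdict(float)
    responding = defaultdict(float)
    for c in cases:
        eligible[c["wclass"]] += c["base_weight"]
        if c["respondent"]:
            responding[c["wclass"]] += c["base_weight"]
    for c in cases:
        if c["respondent"]:
            # Respondents absorb the weight of nonrespondents in their class.
            c["adj_weight"] = c["base_weight"] * eligible[c["wclass"]] / responding[c["wclass"]]
        else:
            c["adj_weight"] = 0.0  # nonrespondents carry no analysis weight
    return cases
```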
Table 6 presents the BPS:20/25 sampled and expected responding individuals, by NPSAS:20 outcome and
BPS:20/22 response status. Based on administrative data, all BPS:20/25 sample members are considered eligible.
The response rate estimates are based on the BPS:20/22 data collection and on the BPS:12 longitudinal cohort
that followed the NPSAS:12 data collection (Bryan et al. 2016).
Table 6. Expected BPS:20/25 full-scale study response rates by base-year and BPS:20/22 outcome status: 2020–22

| NPSAS:20 data collection outcome | BPS:20/22 respondent | Sample size | Expected response rate | Expected completes |
| Total |  | 34,240 | 0.59 | 20,020 |
| NPSAS survey respondents | Yes | 18,900 | 0.81 | 15,310 |
| NPSAS survey nonrespondents | Yes | 980 | 0.55 | 540 |
| NPSAS administrative-only students | Yes | 2,450 | 0.75 | 1,830 |
| NPSAS survey respondents | No | 6,330 | 0.37 | 2,340 |
| NPSAS survey nonrespondents¹ | No | 3,490 | 0.00 | 0 |
| NPSAS administrative-only students¹ | No | 2,100 | 0.00 | 0 |

¹ The expected response rate for these groups is zero because they will not be fielded for data collection.
NOTE: Sample sizes rounded to the nearest 10. Detail may not sum to totals because of rounding.
SOURCE: U.S. Department of Education, National Center for Education Statistics, 2020/22 Beginning Postsecondary Students Longitudinal Study (BPS:20/22).
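For reference, the expected-completes column in Table 6 is simple arithmetic (sample size × expected response rate). A quick check, noting that the published inputs are themselves rounded to the nearest 10, which explains the small discrepancy from the published total:

```python
# Recomputing Table 6's expected completes from its published inputs.
rows = [
    (18_900, 0.81),  # NPSAS respondents, BPS:20/22 respondents
    (   980, 0.55),  # NPSAS nonrespondents, BPS:20/22 respondents
    ( 2_450, 0.75),  # NPSAS administrative-only, BPS:20/22 respondents
    ( 6_330, 0.37),  # NPSAS respondents, BPS:20/22 nonrespondents
]
total = sum(n * rate for n, rate in rows)
print(round(total))  # 20028 from rounded inputs; the table reports 20,020
```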

3. Methods for Maximizing Response Rates
Achieving high response rates in the BPS:20/25 full-scale data collection will depend on successfully identifying
and locating sample members and being able to contact them and gain their cooperation. As in prior NCES
longitudinal studies, shortly before data collection begins we will send an initial contact mailing and e-mail to
remind sample members of their inclusion in the study.
a. Tracing of Sample Members
To yield the maximum number of located cases with the least expense, an integrated tracing approach with the
following elements will be implemented. Advance tracing activities, which will occur prior to the start of data
collection, include initial batch database searches, such as against the National Change of Address (NCOA)
databases, for cases with sufficient contact information to be matched. To handle cases for which contact
information is invalid or unavailable, additional advance tracing through proprietary interactive databases will
expand on leads found.
• Hard copy mailings, e-mails, and text messages will be used to maintain ongoing contact with sample
members, prior to and throughout data collection. The contacting materials, which will be developed
with a design appealing to sample members in 2025, are provided in Appendix C. The data collection
mailing to sample members will include a letter announcing the start of data collection, requesting that
the sample member complete the web survey, and including a toll-free number, the study website
address, a Study ID and password, and a study brochure. We will send a similar e-mail and text message
mirroring the information provided in the mailing.
• Sample members will have a variety of means to provide updated contact information and contact
preferences. Students can use an Update Contact Information page on the secure BPS:20/25 website to
provide their contact information, including cell phone number, as well as contacting preferences with
respect to phone calls, mailings, e-mails, and text messages. Help Desk calls and e-mails providing
information about a sample member's text message preferences will be monitored, and the sample
member's data record will be updated as soon as the information becomes known.
• The BPS:20/25 field test included an experiment in which a randomly selected group of sample
members was asked to confirm contacting information at the start of data collection in exchange for $5.
Once they provided the contact information, they could continue with the survey.
• The telephone locating and surveying stage includes calling all available telephone numbers and
following up on leads provided by parents and other contacts.
• The pre-intensive batch tracing stage consists of the LexisNexis SSN and Premium Phone batch
searches that will be conducted between the telephone locating and surveying stage and the intensive
tracing stage.
• Once all known telephone numbers are exhausted, a case will move into the intensive tracing stage,
during which tracers will conduct interactive database searches using all known contact information for
a sample member. With interactive tracing, a tracer assesses each case on an individual basis to
determine which resources are most appropriate and the order in which each should be used. Sources
that may be used, as appropriate, include credit database searches, such as Experian, various public
websites, and other integrated database services.
• Other locating activities will take place as needed, including a LexisNexis e-mail search conducted for
nonrespondents toward the end of data collection.
b. Training for Data Collection Staff
Telephone data collection will include supervisors and interviewers. Training programs for these staff members
are critical to maximizing response rates and collecting accurate and reliable data.
Team supervisors, who are responsible for all supervisory tasks, will attend their own project-specific training, in
addition to the interviewer training. They will receive an overview of the study, background and objectives, and
the data collection instrument through a question-by-question review. Supervisors will also receive training in the
following areas: providing direct supervision during data collection; handling refusals; monitoring interviews and
maintaining records of monitoring results; problem resolution; case review; specific project procedures and
protocols; reviewing reports generated from the ongoing Computer Assisted Telephone Interviewing (CATI);
and monitoring data collection progress.
Training for interviewers is designed to help staff become familiar with and practice using the CATI case
management system and the survey instrument, as well as to learn project procedures and requirements.
Particular attention will be paid to quality control initiatives, including refusal avoidance and methods to ensure
that quality data are collected. Interviewers will receive project-specific training on telephone interviewing and
answering questions from web participants regarding the study or related to specific items within the survey.
Bilingual interviewers will receive a supplemental training that will focus on Spanish contacting and interviewing

procedures. At the conclusion of training, all data collection staff must meet certification requirements by
successfully completing a certification interview. This evaluation consists of a full-length interview with project
staff observing and evaluating interviewers, as well as an oral evaluation of interviewers’ knowledge of the study’s
Frequently Asked Questions.
c. Case Management System
Surveys will be conducted using a single web-based survey instrument for both web (including mobile devices)
and CATI data collection. Control of data collection activities will be accomplished through a CATI case
management system, which is equipped with numerous capabilities, including: on-line access to locating
information and histories of locating efforts for each case; a questionnaire administration module with full
“front-end cleaning” capabilities (i.e., editing as information is obtained from respondents); a sample management
module for tracking case progress and status; and an automated scheduling module that delivers cases to
interviewers. The automated scheduling module incorporates the following features:
• Automatic delivery of appointment and call-back cases at specified times. This reduces the need for
tracking appointments and helps ensure the interviewer is punctual. The scheduler automatically
calculates the delivery time of the case in reference to the appropriate time zone.
• Sorting of non-appointment cases according to parameters and priorities set by project staff. For
instance, priorities may be set to give first preference to cases within certain subsamples or geographic
areas; cases may be sorted to establish priorities between cases of differing status. Furthermore, the
historic pattern of calling outcomes may be used to set priorities (e.g., cases with more than a certain
number of unsuccessful attempts during a given time of day may be passed over until the next time
period). These parameters ensure that cases are delivered to interviewers in a consistent manner
according to specified project priorities.
• Restriction on allowable interviewers. Groups of cases (or individual cases) may be designated for
delivery to specific interviewers or groups of interviewers. This feature is most commonly used in
routing refusal cases, locating problems, or foreign language cases to specific interviewers with
specialized skills.
• Complete records of calls and tracking of all previous outcomes. The scheduler tracks all outcomes for
each case, labeling each with type, date, and time. These are easily accessed by the interviewer upon
entering the individual case, along with interviewer notes.
• Flagging of problem cases for supervisor action or review. For example, refusal cases may be routed to
supervisors for decisions about whether and when a refusal letter should be mailed, or whether another
interviewer should be assigned.
• Complete reporting capabilities. These include default reports on the aggregate status of cases and
custom report generation capabilities.

The integration of these capabilities reduces the number of discrete stages required in data collection and data
preparation activities and increases capabilities for immediate error reconciliation, which results in better data
quality and reduced cost. Overall, the scheduler provides an efficient case assignment and delivery function by
reducing supervisory and clerical time, improving execution on the part of interviewers and supervisors by
automatically monitoring appointments and call-backs, and reducing variation in implementing survey priorities
and objectives.
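As a rough sketch of the delivery logic described above (this is not RTI's actual implementation; the case fields, attempt threshold, and priority rules are hypothetical), due appointment cases are released first, with remaining cases sorted by project-set priorities:

```python
# Toy model of automated scheduler delivery: due appointment/call-back
# cases first, then non-appointment cases by project-set priority, passing
# over cases with too many unsuccessful attempts in the current window.
from datetime import datetime, timezone

def next_case(cases, now=None):
    now = now or datetime.now(timezone.utc)
    # Appointment times are stored time-zone-aware, so comparing against a
    # UTC "now" respects each sample member's local appointment time.
    due = [c for c in cases if c.get("appointment") and c["appointment"] <= now]
    if due:
        return min(due, key=lambda c: c["appointment"])
    pool = [c for c in cases
            if not c.get("appointment") and c["attempts_this_window"] < 3]
    return min(pool, key=lambda c: c["priority"], default=None)
```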
d. Survey Instrument Design
The survey will employ a web-based instrument and deployment system, which has been in use since NPSAS:08.
The system provides multimode functionality that can be used for self-administration, including on mobile
devices, CATI, or data entry. The survey instrument can be found in Appendix E.
In addition to the functional capabilities of the case management system and web survey instruments described

above, our efforts to achieve the desired response rate will include using established procedures proven effective
in other large-scale studies. These include:
• Providing multiple response modes, including mobile-friendly self-administered and interviewer-administered options.
• Offering incentives to encourage response.
• Assigning experienced CATI interviewers who have proven their ability to contact and obtain
cooperation from a high proportion of sample members.
• Training the interviewers thoroughly on study objectives, study population characteristics, and
approaches that will help gain cooperation from sample members.
• Maintaining a high level of monitoring and direct supervision so that interviewers who are experiencing
low cooperation rates are identified quickly and corrective action is taken.
• Making every reasonable effort to obtain a completed interview at the initial contact, while allowing
respondents flexibility in scheduling appointments to be interviewed.
• Thoroughly reviewing all refusal cases and making special conversion efforts whenever feasible.
• Implementing, and assuring participants of, confidentiality procedures, including:
o Requiring respondents to answer security questions before obtaining and resuming access to the
survey;
o Restricting the ability of the respondent to view survey responses from prior login sessions (i.e.,
no ability to use navigation buttons to go to “Previous” survey questions from another login
session); and
o Automatically logging the survey out of a session after 10 minutes of inactivity.
• For the BPS:20/25 full-scale collection, offering the Spanish language survey to approximately 400
sample members based on their use of Spanish surveys in one or both of the prior surveys (NPSAS:20
or BPS:20/22). If a Spanish survey administration is selected, the respondent will receive the abbreviated
web survey, rendered in Spanish.
e. Refusal Aversion and Conversion

Recognizing and avoiding refusals is important to maximizing the response rate. We will emphasize this and
other topics related to obtaining cooperation during interviewer training. Supervisors will monitor interviewers
intensely during the early days of outbound calling and provide retraining as necessary. In addition, the
supervisors will review daily interviewer production reports produced by the CATI system to identify and retrain
any data collectors who are producing unacceptable numbers of refusals or other problems.
Refusal conversion efforts will be delayed for at least 1 week to give the respondent time after the initial refusal.
Attempts at refusal conversion will not be made with individuals who become verbally aggressive or who
threaten to take legal or other action. Refusal conversion efforts will not be conducted to a degree that would
constitute harassment. We will respect a sample member’s right to decide not to participate and will not impinge
on this right by carrying conversion efforts beyond the bounds of propriety.
4. BPS:20/25 Full-scale Data Collection Procedures
The BPS:20/25 field test included two data collection experiments focused on survey participation:
1) an “address confirmation” incentive experiment, in which respondents received an additional $5 if they
confirmed their phone number, mailing address, and e-mail address before the start of the survey; and
2) a reminder mode experiment, which explored the implications of eliminating telephone reminders
for sample members who had responded to NPSAS:20 and BPS:20/22.
The two experiments are described below as designed, followed by the full-scale data collection procedures;
their results are summarized in section 5. We evaluated the generalizability of the experimental results by
assessing the representativeness of the BPS:20/25 field test sample. Even limited generalizability provides
valuable insight into the potential of implementing the address confirmation incentive and reducing CATI
reminders in the BPS:20/25 full-scale study.
Appendix C provides materials designed for contacting student sample members (e.g., letters, brochures, e-mails,
text messages).
a. Experiment #1: Address Confirmation Incentive

Input from NPSAS and BPS focus groups (Appendix D), from open-ended questions in NPSAS and BPS
surveys, and from posts on social media platforms has indicated that some sample members question the
legitimacy of NPSAS/BPS survey invitations and therefore may be hesitant to complete the survey. Previous NPSAS-family
studies (e.g., NPSAS:20 calibration, NPSAS:20 full-scale, and BPS:20/22) have used prepaid incentives via
PayPal in an attempt to increase sample members’ perceived legitimacy of the survey invitation. However, in
these studies, the incentive was automatically deposited into a sample member’s PayPal account without any
action from the sample member (i.e., separate from the survey invitation itself). This may have limited a sample
member’s ability to make a connection between the incentive delivered to their PayPal account and the survey
request.
Experiment 1 built on this methodology by investigating the effects of offering sample members a monetary
incentive in exchange for verifying their contact information (i.e., phone number, mailing address, and e-mail
address) at the beginning of the survey. Immediately upon completion of this address confirmation (i.e., before
the survey started), sample members in the experimental group received a $5 incentive payment via their method
of choice (PayPal or check), while sample members in the control group received no incentive for completing
this confirmation. This immediate payment may increase the salience of the connection between the incentive
and the survey request, increasing the perceived legitimacy of the study: giving sample members $5 at the
beginning of the survey may give them more confidence that they will receive their postpaid incentive at the end
of the survey.
All BPS:20/25 field test sample members received a request to confirm their phone number, mailing address,
and e-mail address. The incentive offer for completing this address confirmation involved splitting the
BPS:20/25 field test sample into two randomly assigned groups:
• Control Group (n = 1,331): Sample members received no incentive for completing the address
confirmation.
• Experimental Group (n = 1,331): Sample members received a $5 incentive if they completed the
address confirmation.

This design allowed for a comparison of response rates among equally sized experimental groups and provided
enough power to detect a 4 percentage point difference in response rates, assuming 80 percent power, a type I
error rate of 5 percent, and a base response rate of 66 percent. This calculation assumes a 2-sided chi-square test
of response proportions.
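A minimal sketch of this kind of two-proportion power calculation follows (Python with statsmodels; the inputs come from the text, but the sample size such a calculation returns depends on the exact approximation used, so it need not reproduce the group sizes above). The same machinery applies to Experiment 2's calculation below:

```python
# Two-proportion power calculation (normal approximation to the 2-sided
# chi-square test). Inputs follow the text; results vary with the chosen
# approximation, so this is illustrative rather than the study's own math.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

base = 0.66                                   # assumed base response rate
h = proportion_effectsize(base, base - 0.04)  # Cohen's h for a 4-point drop
n_per_group = NormalIndPower().solve_power(
    effect_size=h, alpha=0.05, power=0.80, ratio=1.0, alternative="two-sided")
print(f"Cohen's h = {h:.3f}; n per group for 80% power = {n_per_group:.0f}")
```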
The experiment allowed us to test the following hypotheses:
• H1a. There is no statistically significant difference in response rates between the Control and
Experimental Groups.
• H1b. There are no statistically significant differences in representativeness (demographic characteristics)
between the Control and Experimental Groups.
• H1c. There are no statistically significant differences in timeliness of response (days to complete)
between the Control and Experimental Groups.
b. Experiment #2: CATI Reminders

Survey interviews conducted via CATI calls represent a small percentage of completes in recent NPSAS-family
studies (4 percent of completes in NPSAS:20, 5 percent of completes in the BPS:20/22 full-scale). CATI is also
the most expensive mode as it requires a human interviewer. Eliminating CATI prompting for certain groups of

sample members (e.g., sample members who have responded to survey requests in previous waves like
NPSAS:20 and BPS:20/22) may present an opportunity to reduce data collection costs in BPS:20/25 without
adversely affecting response rates or representativeness.
Experiment 2 explored this idea in BPS:20/25 by investigating the effects of eliminating CATI reminders for
sample members who were NPSAS:20 and BPS:20/22 interview respondents, excluding final partials (i.e., the
default data collection group; see the data collection procedures below for more information about this group).
Eight weeks into data collection, nonresponding sample members in the default group either received CATI
prompting as usual (control group) or received no CATI prompting (experimental group). All other data
collection activities (e.g., reminder e-mails and texts) continued throughout the experiment for both groups.
Specifically, eight weeks into data collection, nonresponding sample members in the default data collection
group were split into two randomly assigned groups:
• Control Group (n ≈ 368, assuming a 63 percent response rate at 8 weeks): Members of this group
received telephone reminders until the end of data collection, in addition to all other reminders.
• Experimental Group (n ≈ 368, assuming a 63 percent response rate at 8 weeks): Members of this group
received no telephone reminders but received all other reminders.

This design allowed for a comparison of response rates among roughly equally sized experimental groups and
provided enough power to detect a 7 percentage point difference in response rates, assuming 80 percent power,
a type I error rate of 5 percent, and a base response rate of 34 percent. This calculation assumes a 2-sided
chi-square test of response proportions.
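A minimal sketch of the week-8 random split (the identifiers and seed are hypothetical; actual assignment would occur inside the case management system):

```python
# Randomly split week-8 nonrespondents in the default group into equal
# control (CATI reminders) and experimental (no CATI reminders) arms.
import random

def assign_reminder_arms(nonrespondent_ids, seed=2025):
    rng = random.Random(seed)       # fixed seed makes the split reproducible
    ids = list(nonrespondent_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"control": ids[:half], "experimental": ids[half:]}
```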
The experiment allowed us to test the following hypotheses:
• H2a. There is no statistically significant difference in response rates between the Control and
Experimental Groups.
• H2b. There are no statistically significant differences in representativeness (demographic characteristics)
between the Control and Experimental Groups.
c. Data Collection Groups and Phases
BPS:20/25 full-scale data collection will use two distinct data collection groups and three main data collection
phases. This approach builds upon the designs implemented in other longitudinal studies, where it has
contributed to maximizing response rates and minimizing the potential for nonresponse bias (e.g., BPS:12/14,
BPS:12/17, BPS:20/22, the 2016-17 Baccalaureate and Beyond Longitudinal Study (B&B:16/17), and
B&B:08/18). In BPS:20/25, we plan to implement differential treatments based on prior-round response status,
an approach that was successfully implemented in the B&B:16/17 field test, where NPSAS:16 field test
nonrespondents received either an aggressive or a default protocol. The response rate among NPSAS:16 field
test nonrespondents who received the aggressive protocol was about 12 percentage points higher than among
the group that received the default protocol (37 percent versus 25 percent; t(2,097) = 3.52, p < .001).
For the BPS:20/25 full-scale design, we will use the following two data collection groups:
• Default Group (total n = 20,620): Sample members who responded to all NPSAS:20 and BPS:20/22
survey requests, including:
o NPSAS:20 and BPS:20/22 survey respondents, excluding BPS:20/22 final partials;
o NPSAS:20 administrative-only cases who were also BPS:20/22 survey respondents, excluding
BPS:20/22 final partials; and
o NPSAS:20-ineligible respondents who, via a screener, self-identified that they began their
postsecondary education between July 1, 2018 and April 30, 2019, excluding BPS:20/22 final partials.
• Aggressive Group (total n = 8,030): NPSAS:20 survey nonrespondents, BPS:20/22 survey
nonrespondents, or BPS:20/22 survey final partials. The goal of this treatment is to convert reluctant
sample members (i.e., those who have not responded to a previous survey request) to participate in the
study as early in data collection as possible.

Table 7 below presents the type and timing of interventions to be applied in the full-scale data collection, by
group and protocol; these are described in more detail in the next section.
Table 7. BPS:20/25 full-scale data collection protocols by data collection phase and group assignment

| | Default Group | Aggressive Group |
| Sample | NPSAS:20 and BPS:20/22 survey respondents, excluding BPS:20/22 final partials; NPSAS:20 administrative-only cases who were BPS:20/22 survey respondents, excluding final partials; NPSAS:20-ineligible screener respondents who were BPS:20/22 survey respondents, excluding final partials (total n = 20,620) | NPSAS:20 survey nonrespondents; BPS:20/22 survey nonrespondents; BPS:20/22 survey final partials (total n = 8,030) |
| Prior to data collection | Greeting card / panel maintenance | Greeting card / panel maintenance |
| Early completion phase (Weeks 1–4) | Data collection announcement mail, text, and e-mail | Data collection announcement mail, text, and e-mail; CATI begins 2 weeks after mailouts and continues throughout; postcard, e-mail, and text message reminders continue throughout |
| Production phase (Weeks 5–10) | Light outbound CATI begins for key subgroups; postcard, e-mail, and text message reminders continue throughout | CATI and other reminders continue |
| Nonresponse follow-up (Weeks 11–16+) | $10 post-paid incentive boost (at week 11); abbreviated survey (at week 20) | $20 post-paid incentive boost (at week 11); abbreviated survey (at week 16) |
| Total incentives | $30 baseline; $10 boost; maximum = $40 | $45 baseline; $20 boost; maximum = $65 |

For incentives, the baseline incentive for prior-round respondents in the default group will be $30, plus a $10
postpaid incentive boost (see the incentive boosts section below), yielding a maximum total incentive of $40 for
sample members in the default group.
The baseline incentive for sample members in the aggressive group will be $45. An experiment conducted in
BPS:12/14 showed that a $45 baseline incentive yielded the highest response rates, although it was
underpowered to detect differences from $30 in the lower-propensity response groups (Wilson et al. 2015).
Nonetheless, the $30 baseline incentive offered to these sample members in prior studies was not sufficient to
encourage response (i.e., n = 50 field test sample members did not respond to NPSAS:20 and n = 623 did not
provide complete responses to the BPS:20/22 field test), so a higher baseline incentive is warranted. Further, the
$40 BPS:20/22 field test incentive yielded a completion rate of only 23 percent among sample members in the
aggressive group, while a $45 baseline incentive for the aggressive group yielded a completion rate of 38 percent
in the BPS:20/22 full-scale study and 41 percent in the BPS:20/25 field test. The baseline incentive will be paid
in addition to a $20 postpaid incentive boost (see the incentive boosts section below), for a maximum possible
total incentive of $65 under the aggressive data collection protocol.
Beyond the baseline incentives, both survey data collection protocols employ similar interventions, although the
timing differs across groups: interventions occur sooner in the aggressive protocol.
Data Collection Protocol Design Elements:
Greeting card. The first mailing that individuals in the default and aggressive data collection protocols will
receive is a greeting card expressing gratitude for being part of the study and announcing the upcoming data
collection. Greeting cards have been shown to significantly increase response rates in longitudinal studies (Griggs
et al. 2019) and we will use this method as a precursor to the invitation letter for both groups. The greeting card
will be mailed a few weeks in advance of data collection, upon OMB approval.
Contact information confirmation incentive. A $5 contact information confirmation incentive was tested in the
BPS:20/25 field test, as described above; based on the field test results (see section 5), it will not be carried into
the full-scale collection.
Reminders. Text messaging has been shown to significantly increase response rates in different survey modes
(e.g., Callegaro et al. 2011; Schober et al. 2015). Results from the BPS:20/22 field test showed that the response
rate for sample members who received only text message reminders was not statistically significantly different
from the response rate of sample members who received only telephone reminders. Therefore, both data
collection groups will receive text message reminders. Hard copy mailings, e-mails, and text messages will be
used to maintain ongoing contact with sample members in both data collection groups, prior to and throughout
data collection.
CATI calling. Sample members in the default group will receive light outbound CATI calling beginning at
8 weeks (during the Production phase) if they are in a demographic group with a low response rate (i.e., one with
the potential to affect nonresponse bias) at the time CATI calling begins. Light CATI involves a minimal number
of phone calls, used mainly to prompt web response (as opposed to regular CATI efforts, which involve more
frequent phone attempts with the goal of locating sample members and encouraging their participation). All
cases in the aggressive group will receive earlier (beginning during the Early Completion phase) and more intense
telephone prompting than eligible cases in the default group.
Incentive boosts. Researchers have used incentive boosts as a nonresponse conversion strategy for sample
members who have implicitly or explicitly refused to complete the survey (e.g., Groves and Heeringa 2006;
Singer and Ye 2013). These boosts are especially common in large federal surveys during their nonresponse
follow-up phase (e.g., The Center for Disease Control and Prevention's National Survey of Family Growth) and
have been implemented successfully in other postsecondary education surveys (e.g., HSLS:09 second follow-up;
BPS:12/17; NPSAS:20). In NPSAS:20, a $10 incentive boost increased the overall response rate by about 3.2
percentage points above the projected response rate. Therefore, a $10 incentive boost increase to the BPS:20/25
baseline incentive is planned during the Nonresponse Follow-Up phase for all remaining nonrespondents in the
default data collection protocol. Remaining nonrespondents in the aggressive data collection protocol will be
offered a $20 incentive boost increase to the baseline incentive. This is because the $10 incentive boost in
NPSAS:20 did not show any effect on this group. If necessary, incentive boosts may be targeted only at certain
groups of nonrespondents to achieve response goals (e.g., targeting nonrespondents from certain states to
ensure representativeness, targeting aggressive group nonrespondents to reduce the potential for nonresponse
bias).


Abbreviated survey. Obtaining responses from all sample members is an important assumption of the inferential
paradigm. Leverage-saliency theory (Groves et al. 2000) and social exchange theory (Dillman et al. 2014) suggest
that an individual's participation decision is driven by different survey design factors and the perceived costs of
participating. As such, reducing the perceived burden of participating by shortening the survey may motivate
sample members to participate.
During the B&B:16/17 field test, prior round nonrespondents were randomly assigned to one of two groups: 1)
prior round nonrespondents who were offered the abbreviated survey during the production phase (i.e., before
the nonresponse conversion phase), and 2) prior round nonrespondents who were offered the abbreviated
survey during the nonresponse conversion phase (i.e., after the production phase). At the end of the production
phase, prior round nonrespondents who received the abbreviated survey had a higher overall response rate (22.7
percent) than those who were not offered the abbreviated survey during that phase (12.1 percent; t(2,097) = 3.67, p <
0.001). Further, at the end of data collection, prior round nonrespondents who were offered the abbreviated
survey during the earlier production phase had a significantly higher response rate (37 percent) than prior round
nonrespondents who were not offered the abbreviated survey until the nonresponse conversion phase (25
percent) (t(2,097) = 3.52, p = .001). These results indicate that offering an abbreviated survey to prior round
nonrespondents during the production phase (i.e., earlier in data collection) significantly increases response rates.
The B&B:08/12 and B&B:08/18 full-scale studies also demonstrated the benefit of an abbreviated survey.
Offering the abbreviated survey to prior round nonrespondents increased overall response rates of that group by
18.2 (B&B:08/12) and 8.8 (B&B:08/18) percentage points (Cominole et al. 2015). In NPSAS:20, 14.4 percent of
those offered the abbreviated survey completed it. Therefore, an abbreviated survey option will be offered to all
sample members in the BPS:20/25 full-scale study during the Nonresponse Follow-Up Phase. For the aggressive
protocol, the abbreviated survey will be offered at Week 16. For the default protocol, the abbreviated survey will
be offered as the last step in nonresponse conversion at Week 20.
Other interventions. While all BPS studies are conducted by NCES, the data collection contractor, RTI
International, has typically used a study-specific e-mail address from “@ed.gov” (or a similar e-mail address from
“@rti.org”) to contact and support sample members. Changing the e-mail sender to the NCES project officer or
the RTI project director may increase the perceived importance of the survey and help personalize the contact
materials, thereby potentially increasing relevance. Switching the sender during data collection also increases the
chance that the survey invitation is delivered to the sample member rather than to a spam filter.
5. Tests of Procedures or Methods
The BPS:20/25 field test included two data collection experiments focused on survey participation. The results
of these field test experiments are summarized below. For detailed results of the BPS:20/25 field test
experiments, see Appendix D.
The data collection experiments explored the effectiveness of:
1) an “address confirmation” incentive, where respondents received an additional $5 if they provided
their phone number and mailing or e-mail address before the start of the survey, and
2) eliminating telephone reminders for sample members who responded to NPSAS:20 and BPS:20/22.
Results from these data collection experiments provide insight in preparation for the full-scale study regarding
the effectiveness of these interventions across three data quality indicators: survey response (operationalized
using response rates), sample representativeness (assessed across gender, ethnicity, race, and institutional
control), and data collection efficiency (operationalized as the number of the days between the start of the
experiment and survey completion).
The “address confirmation” experiment investigated the effectiveness of giving respondents an additional $5
incentive (experimental group) if, at the beginning of the survey, they provided their contact information (i.e.,
phone number and mailing or e-mail address). This up-front incentive was designed to demonstrate the
authenticity of the survey request, theoretically motivating hesitant sample members to respond. The control
group was also asked to provide the same contact information but was not offered an additional incentive. At the
end of data collection, the response rate of the control group (67.2 percent) was about 5 percentage points
higher than that of the experimental $5 group (61.9 percent), a statistically significant difference (χ2 = 8.36,
p < .01). Both the experimental and control groups had similar representativeness across gender, ethnicity, race,
and institutional control. At the end of data collection, respondents in the experimental $5 group took one more
day (36.7 days) than respondents in the control group (35.9 days) to complete the survey, though the difference
was not significant (t(1,677) = 0.47, p = 0.64). Together, these results indicate that the $5 address confirmation
incentive discouraged response rather than increasing it as theorized. Therefore, an address confirmation
incentive will not be implemented in the BPS:20/25 full-scale data collection.
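These comparisons are standard two-proportion chi-square tests. As an illustration, reconstructing approximate counts from the reported rates and the roughly 1,331 members per arm roughly reproduces the reported statistic (the counts are reconstructed, so the match is only approximate):

```python
# Approximate recomputation of the address-confirmation comparison from
# the reported rates; reconstructed counts give a statistic close to,
# though not exactly, the reported chi-square of 8.36.
from scipy.stats import chi2_contingency

n = 1331                                # approximate size of each arm
control_yes = round(n * 0.672)          # control: 67.2 percent responded
experimental_yes = round(n * 0.619)     # experimental: 61.9 percent responded
table = [[control_yes, n - control_yes],
         [experimental_yes, n - experimental_yes]]
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```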
The second data collection experiment examined the implications of eliminating costly telephone reminders to
sample members who were deemed likely to respond to the BPS:20/25 field test based on their past response
behavior (i.e., sample members who responded to NPSAS:20 and BPS:20/22, known as the default data
collection group). Eight weeks into data collection, nonresponding sample members in the default data collection
group were randomly assigned to either 1) receive telephone prompting as in past BPS administrations (the
control group), or 2) not receive any telephone prompting (the experimental group). At the end of data
collection, the response rate of the control group that received telephone reminders (48.7 percent) was about 15
percentage points higher than that of the experimental “no telephone reminder” group (33.8 percent), a
statistically significant difference (χ2 = 21.2, p < .001). Both the experimental and control groups had similar
representativeness across gender, ethnicity, race, and institutional control. At the end of data collection,
respondents in the experimental “no telephone reminder” group took six more days (89.7 days) than
respondents in the control group that received telephone reminders (83.6 days) to complete the survey, a
statistically significant difference (t(1,363) = -3.83, p < 0.001). Together, these results indicate that telephone
reminders, despite their cost, are still useful for encouraging response from sample members who have
responded to NPSAS and BPS survey requests in the past. To balance the utility and cost of telephone
reminders, in the BPS:20/25 full-scale study we recommend using telephone reminders only for certain sample
members in the default data collection group; specifically, sample members in the default data collection group
that belong to demographic groups with lower response rates. Encouraging response from these groups with
telephone reminders would reduce their potential for nonresponse bias.
6. Reviewing Statisticians and Individuals Responsible for Designing and
Conducting the Study
BPS:20/25 is being conducted by NCES. The following statisticians at NCES are responsible for the statistical
aspects of the study: Dr. David Richards, Dr. Tracy Hunt-White, Dr. Sean Simone, Dr. Elise Christopher, and
Dr. Gail Mulligan. NCES's prime contractor for BPS:20/25 is RTI International (Contract# 919900-18-C-0039),
and subcontractors include Activate Research; EurekaFacts; HR Directions; Leonard Resource Group; Research
Support Services; and Strategic Communications, Inc. The following staff members at RTI are working on the
statistical aspects of the study design: Dr. Joshua Pretlow, Dr. Jennifer Wine, Dr. Nestor Ramirez, Mr. Darryl
Cooney, Mr. Michael Bryan, Dr. T. Austin Lacy, Dr. Emilia Peytcheva, Mr. Peter Siegel, and Dr. Jerry
Timbrook. Principal professional RTI staff not listed above who are assigned to the study include Ms. Ashley
Wilson, Ms. Kristin Dudley, Mr. Jeff Franklin, Ms. Chris Rasmussen, and Ms. Donna Anderson.

References  
 
Bryan, M., Cooney, D., Elliot, B., and Richards, D. (2019). 2012/17 Beginning Postsecondary Students
Longitudinal Study (BPS:12/17): Data File Documentation (NCES 2020-522). U.S. Department of Education.
Washington, DC: National Center for Education Statistics. Retrieved from https://nces.ed.gov/pubsearch
Callegaro, M., Ayhan, O., Gabler, S., Haeder, S., and Villar, A. (2011). Combining Landline and Mobile Phone
Samples: A Dual Frame Approach. Working Papers 2011/13. GESIS Leibniz Institute for the Social Sciences.
Griggs, A., Powell, R., Keeney, J., Waggy, M., Harris, K., Halpern, C., and Dean, S. (2019). Research Note: A
Prenotice Greeting Card's Impact on Response Rates and Response Time. Longitudinal and Life Course Studies,
10(4): 421-432.
Groves, R.M., and Heeringa, S.G. (2006). Responsive Design for Household Surveys: Tools for Actively
Controlling Survey Errors and Costs. Journal of the Royal Statistical Society Series A-Statistics in Society, 169(3):
439-457.
Schober, M.F., Conrad, F.G., Antoun, C., Ehlen, P., Fail, S., Hupp, A.L., Johnston, M., Vickers, L., Yan, Y.H.,
and Zhang, C. (2015). Precision and Disclosure in Text and Voice Interviews on Smartphones. PLoS ONE,
10(6): e0128337. doi:10.1371/journal.pone.0128337
Singer, E., and Ye, C. (2013). The Use and Effects of Incentives in Surveys. Annals of the American Academy of
Political and Social Science, 645(1): 112-141.


