2020/25 Beginning Postsecondary Students (BPS:20/25) Full-Scale Study
Supporting Statement Part B
OMB # 1850-0631 v.21
Submitted by
National Center for Education Statistics
U.S. Department of Education
Contents
B. Collection of Information Employing Statistical Methods
a. BPS:20/25 Full-scale Sample
3. Methods for Maximizing Response Rates
a. Tracing of Sample Members
b. Training for Data Collection Staff
e. Refusal Aversion and Conversion
4. BPS:20/25 Full-scale Data Collection Procedures
5. Tests of Procedures or Methods
6. Reviewing Statisticians and Individuals Responsible for Designing and Conducting the Study
B. Collection of Information Employing Statistical Methods
This submission requests clearance for the 2020/25 Beginning Postsecondary Students Longitudinal Study (BPS:20/25) full-scale data collection materials and procedures. BPS:20/25 is the second follow-up of a sample of students who began their postsecondary education during the 2019-20 academic year, a sample drawn from the 2019-20 National Postsecondary Student Aid Study (NPSAS:20). For details on the NPSAS:20 sampling design, see NPSAS:20 Supporting Statement Part B (OMB# 1850-0666 v.25). Specific plans for the BPS:20/25 data collection are provided below.
a. BPS:20/25 Full-scale Sample
The respondent universe for BPS:20/25 consists of all students who began their postsecondary education for the first time during the 2019-20 academic year at any Title IV-eligible postsecondary institution in the United States. The BPS:20/25 full-scale sample will include students from the NPSAS:20 full-scale sample who were confirmed as 2019-20 academic year first-time beginning students (FTBs) based on survey, institution, or other administrative data.
The BPS:20/25 full-scale sample will be constructed from the BPS:20/22 full-scale sample. The BPS:20/22 sample was composed of students who began their postsecondary education for the first time during the 2019-20 academic year at any Title IV-eligible postsecondary institution in the United States. Approximately 26,470 respondents to the NPSAS:20 full-scale student survey self-identified as FTBs. In addition, to achieve full population coverage of the BPS sample, a subsample of approximately 10,860 students identified by the NPSAS sample institution or administrative data as potential FTBs was selected. This subsample consisted of a combination of NPSAS:20 survey nonrespondents and administrative data-only sample members who were NPSAS:20 study respondents.
As shown in Table 1 below, the starting BPS:20/22 full-scale sample size was approximately 37,330, with an eligible sample of about 34,240. Eligibility was determined using screener responses and administrative data, and deceased individuals were removed from the eligible counts. An unweighted response rate of 65 percent was observed among the eligible sample members, yielding approximately 22,320 responding FTBs.
Table 1. BPS:20/22 full-scale study eligibility and unweighted response rates, by base-year outcome status: 2020–22

| NPSAS:20 full-scale outcome        | Number | Eligible¹ Number | Eligible¹ Rate | Respondent Number | Respondent Rate² |
|------------------------------------|--------|------------------|----------------|-------------------|------------------|
| Total                              | 37,330 | 34,240           | 92%            | 22,320            | 65%              |
| NPSAS survey respondents           | 26,470 | 25,230           | 95%            | 18,900            | 75%              |
| NPSAS survey nonrespondents        |  5,510 |  4,470           | 81%            |    980            | 22%              |
| NPSAS administrative-only students |  5,350 |  4,540           | 85%            |  2,450            | 54%              |

¹ Sample member eligibility was determined during the student survey or from institutional records in the absence of a student survey. Individuals found to be deceased during data collection have also been removed from the total eligible sample members.
² Unweighted response rate.
NOTE: Sample sizes rounded to the nearest 10. Percentages are based on the unrounded count of eligible students within the row under consideration. Detail may not sum to totals because of rounding.
SOURCE: U.S. Department of Education, National Center for Education Statistics, 2020/22 Beginning Postsecondary Students Longitudinal Study (BPS:20/22).
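For reference, the rates in Table 1 follow directly from the counts shown. A minimal sketch (in Python, using the rounded counts, so the results match the table only approximately) reproduces the total row:

```python
# Counts from the total row of Table 1 (rounded to the nearest 10).
sampled, eligible, respondents = 37_330, 34_240, 22_320

eligibility_rate = eligible / sampled    # eligible share of the fielded sample
response_rate = respondents / eligible   # unweighted response rate

print(f"Eligibility rate: {eligibility_rate:.0%}")        # 92%
print(f"Unweighted response rate: {response_rate:.0%}")   # 65%
```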
BPS:20/25 will be the second follow-up data collection, conducted three years after BPS:20/22. Table 2 shows the distribution of the 34,240 sample members determined to be eligible for BPS:20/25 data collection by NPSAS:20 outcome and BPS:20/22 survey response status, and identifies the groups that will be fielded (included in data collection activities with the objective of obtaining a completed survey) in BPS:20/25. These 34,240 sample members are the same individuals identified as eligible students in Table 1.
Table 2. BPS:20/25 sample member disposition, by NPSAS:20 and BPS:20/22 response status: 2020–22

| Group | NPSAS:20 data collection outcome   | BPS:20/22 respondent | Count  | Field in BPS:20/25 full-scale |
|-------|------------------------------------|----------------------|--------|-------------------------------|
| Total |                                    |                      | 34,240 |                               |
| 1     | NPSAS survey respondents           | Yes                  | 18,900 | Yes                           |
| 2     | NPSAS survey nonrespondents        | Yes                  |    980 | Yes                           |
| 3     | NPSAS administrative-only students | Yes                  |  2,450 | Yes                           |
| 4     | NPSAS survey respondents           | No                   |  6,330 | Yes                           |
| 5¹    | NPSAS survey nonrespondents        | No                   |  3,490 | No                            |
| 6¹    | NPSAS administrative-only students | No                   |  2,100 | No                            |

¹ Groups 5 and 6 will not be fielded as they did not respond to either the NPSAS:20 survey or the BPS:20/22 survey.
NOTE: Sample sizes rounded to the nearest 10. Detail may not sum to totals because of rounding.
SOURCE: U.S. Department of Education, National Center for Education Statistics, 2020/22 Beginning Postsecondary Students Longitudinal Study (BPS:20/22).
While BPS:20/25 sample members who responded to neither the NPSAS:20 student survey nor the BPS:20/22 survey are eligible for BPS:20/25, these sample members (groups 5 and 6 in Table 2) will not be fielded because they have responded at very low rates in previous administrations. As a result, approximately 5,590 BPS:20/25 sample members will not be fielded. Instead, they will be treated as study nonrespondents for purposes of response rate calculation and accounted for through weight adjustments.
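To illustrate the kind of adjustment involved, the sketch below applies a standard weighting-class nonresponse adjustment, in which respondents inherit the weight of nonrespondents in their class; the classes and base weights are hypothetical, not actual BPS:20/25 values:

```python
# Weighting-class nonresponse adjustment (illustrative only; classes and
# base weights are hypothetical, not actual BPS:20/25 values).
from collections import defaultdict

# (weighting_class, base_weight, responded) for each eligible sample member
sample = [
    ("public 4-year", 120.0, True),
    ("public 4-year", 110.0, False),   # e.g., an unfielded group 5/6 member
    ("private 2-year", 95.0, True),
    ("private 2-year", 90.0, True),
    ("private 2-year", 105.0, False),
]

eligible_wt = defaultdict(float)
respondent_wt = defaultdict(float)
for cls, w, responded in sample:
    eligible_wt[cls] += w
    if responded:
        respondent_wt[cls] += w

# Adjusted weights for respondents; each class's adjusted weights sum to
# that class's total eligible weight.
adjusted = [
    (cls, round(w * eligible_wt[cls] / respondent_wt[cls], 1))
    for cls, w, responded in sample if responded
]
print(adjusted)
```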
Table 3 presents the BPS:20/25 sampled and expected responding individuals, by NPSAS:20 outcome and BPS:20/22 response status. Based on administrative data, all BPS:20/25 sample members are considered eligible. The response rate estimates are based on the BPS:20/22 data collection and the BPS:12 longitudinal cohort that followed the NPSAS:12 data collection (Bryan et al. 2016).
Table 3. Expected BPS:20/25 full-scale study response rates, by base-year and BPS:20/22 outcome status: 2020–22

| NPSAS:20 data collection outcome    | BPS:20/22 respondent | Sample size | Expected response rate | Expected completes |
|-------------------------------------|----------------------|-------------|------------------------|--------------------|
| Total                               |                      | 34,240      | 0.59                   | 20,020             |
| NPSAS survey respondents            | Yes                  | 18,900      | 0.81                   | 15,310             |
| NPSAS survey nonrespondents         | Yes                  |    980      | 0.55                   |    540             |
| NPSAS administrative-only students  | Yes                  |  2,450      | 0.75                   |  1,830             |
| NPSAS survey respondents            | No                   |  6,330      | 0.37                   |  2,340             |
| NPSAS survey nonrespondents¹        | No                   |  3,490      | 0.00                   |      0             |
| NPSAS administrative-only students¹ | No                   |  2,100      | 0.00                   |      0             |

¹ The expected response rate for these groups is zero as they will not be fielded for data collection.
NOTE: Sample sizes rounded to the nearest 10. Detail may not sum to totals because of rounding.
SOURCE: U.S. Department of Education, National Center for Education Statistics, 2020/22 Beginning Postsecondary Students Longitudinal Study (BPS:20/22).
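Each expected-completes figure in Table 3 is simply the group's sample size multiplied by its expected response rate; a short sketch using the Table 3 inputs:

```python
# Sample sizes and expected response rates from Table 3.
groups = {
    "NPSAS respondents / BPS:20/22 respondents":    (18_900, 0.81),
    "NPSAS nonrespondents / BPS:20/22 respondents": (   980, 0.55),
    "NPSAS admin-only / BPS:20/22 respondents":     ( 2_450, 0.75),
    "NPSAS respondents / BPS:20/22 nonrespondents": ( 6_330, 0.37),
    "Unfielded groups 5 and 6":                     ( 5_590, 0.00),
}

expected = {name: round(n * rate) for name, (n, rate) in groups.items()}
# Prints 20028 here; Table 3 reports ~20,020 because its inputs are unrounded.
print(sum(expected.values()))
```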
3. Methods for Maximizing Response Rates
Achieving high response rates in the BPS:20/25 full-scale study will depend on successfully identifying and locating sample members, contacting them, and gaining their cooperation. As has been done successfully in prior NCES longitudinal studies, shortly before data collection begins we will send an initial contact mailing and e-mail to remind sample members of their inclusion in the study.
a. Tracing of Sample Members
To yield the maximum number of located cases with the least expense, an integrated tracing approach with the following elements will be implemented. Advance tracing activities, which will occur prior to the start of data collection, include initial batch database searches, such as against the National Change of Address (NCOA) database, for cases with sufficient contact information to be matched. For cases whose contact information is invalid or unavailable, additional advance tracing through proprietary interactive databases will expand on any leads found.
Hard copy mailings, e-mails, and text messages will be used to maintain ongoing contact with sample members prior to and throughout data collection. The contacting materials, which will be developed with a design appealing to sample members in 2025, are provided in Appendix C. The data collection mailing to sample members will include a letter announcing the start of data collection and requesting that the sample member complete the web survey; it will provide a toll-free number, the study website address, a Study ID and password, and a study brochure. We will send a similar e-mail and text message mirroring the information provided in the mailing.
Sample members will have a variety of means to provide updated contact information and contact preferences. Students can use an Update Contact Information page on the secure BPS:20/25 website to provide their contact information, including cell phone number, as well as provide contacting preferences with respect to phone calls, mailings, e-mails, and text messages. Help Desk calls and e-mails providing information about a sample member’s text message preferences will be monitored and the sample member’s data record updated as soon as the information becomes known.
The telephone locating and surveying stage includes calling all available telephone numbers and following up on leads provided by parents and other contacts.
The pre-intensive batch tracing stage consists of the LexisNexis SSN and Premium Phone batch searches that will be conducted between the telephone locating and surveying stage and the intensive tracing stage.
Once all known telephone numbers are exhausted, a case will move into the intensive tracing stage during which tracers will conduct interactive database searches using all known contact information for a sample member. With interactive tracing, a tracer assesses each case on an individual basis to determine which resources are most appropriate and the order in which each should be used. Sources that may be used, as appropriate, include credit database searches, such as Experian, various public websites, and other integrated database services.
Other locating activities will take place as needed, including a LexisNexis e-mail search conducted for nonrespondents toward the end of data collection.
b. Training for Data Collection Staff
Telephone data collection will be staffed by supervisors and interviewers. Training programs for these staff members are critical to maximizing response rates and collecting accurate and reliable data.
Team supervisors, who are responsible for all supervisory tasks, will attend their own project-specific training, in addition to the interviewer training. They will receive an overview of the study, background and objectives, and the data collection instrument through a question-by-question review. Supervisors will also receive training in the following areas: providing direct supervision during data collection; handling refusals; monitoring interviews and maintaining records of monitoring results; problem resolution; case review; specific project procedures and protocols; reviewing reports generated from the ongoing Computer Assisted Telephone Interviewing (CATI); and monitoring data collection progress.
Training for interviewers is designed to help staff become familiar with and practice using the CATI case management system and the survey instrument, as well as to learn project procedures and requirements. Particular attention will be paid to quality control initiatives, including refusal avoidance and methods to ensure that quality data are collected. Interviewers will receive project-specific training on telephone interviewing and answering questions from web participants regarding the study or related to specific items within the survey. Bilingual interviewers will receive a supplemental training that will focus on Spanish contacting and interviewing procedures. At the conclusion of training, all data collection staff must meet certification requirements by successfully completing a certification interview. This evaluation consists of a full-length interview with project staff observing and evaluating interviewers, as well as an oral evaluation of interviewers’ knowledge of the study’s Frequently Asked Questions.
Surveys will be conducted using a single web-based survey instrument for both web (including mobile devices) and CATI data collection. Control of data collection activities will be accomplished through a CATI case management system equipped with numerous capabilities, including: online access to locating information and histories of locating efforts for each case; a questionnaire administration module with full "front-end cleaning" capabilities (i.e., editing as information is obtained from respondents); a sample management module for tracking case progress and status; and an automated scheduling module that delivers cases to interviewers. The automated scheduling module incorporates the following features:
Automatic delivery of appointment and call-back cases at specified times. This reduces the need for tracking appointments and helps ensure the interviewer is punctual. The scheduler automatically calculates the delivery time of the case in reference to the appropriate time zone.
Sorting of non-appointment cases according to parameters and priorities set by project staff. For instance, priorities may be set to give first preference to cases within certain sub-samples or geographic areas; cases may be sorted to establish priorities between cases of differing status. Furthermore, the historic pattern of calling outcomes may be used to set priorities (e.g., cases with more than a certain number of unsuccessful attempts during a given time of day may be passed over until the next time period). These parameters ensure that cases are delivered to interviewers in a consistent manner according to specified project priorities.
Restriction on allowable interviewers. Groups of cases (or individual cases) may be designated for delivery to specific interviewers or groups of interviewers. This feature is most commonly used in filtering refusal cases, locating problems, or foreign language cases to specific interviewers with specialized skills.
Complete records of calls and tracking of all previous outcomes. The scheduler tracks all outcomes for each case, labeling each with type, date, and time. These are easily accessed by the interviewer upon entering the individual case, along with interviewer notes.
Flagging of problem cases for supervisor action or supervisor review. For example, refusal cases may be routed to supervisors for decisions about whether and when a refusal letter should be mailed, or whether another interviewer should be assigned.
Complete reporting capabilities. These include default reports on the aggregate status of cases and custom report generation capabilities.
The integration of these capabilities reduces the number of discrete stages required in data collection and data preparation activities and increases capabilities for immediate error reconciliation, which results in better data quality and reduced cost. Overall, the scheduler provides an efficient case assignment and delivery function by reducing supervisory and clerical time, improving execution on the part of interviewers and supervisors by automatically monitoring appointments and call-backs, and reducing variation in implementing survey priorities and objectives.
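To make the scheduling logic above concrete, the sketch below delivers due appointment cases first and then sorts the backlog by project-set priority while filtering restricted cases; the fields, thresholds, and data are hypothetical, not the actual CATI system's implementation:

```python
# Hypothetical sketch of automated case delivery (not the actual CATI
# case management system's implementation).
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass(order=True)
class Case:
    priority: int  # set by project staff; lower = delivered sooner
    case_id: str = field(compare=False)
    appointment: Optional[datetime] = field(compare=False, default=None)
    attempts: int = field(compare=False, default=0)
    refusal: bool = field(compare=False, default=False)

def next_cases(cases: list, now: datetime) -> list:
    """Deliver appointment cases that are due (appointments assumed stored
    in the sample member's own time zone), then the remaining backlog in
    priority order, skipping refusals (routed to supervisors) and cases
    that exceeded the attempt limit for this time period."""
    due = [c for c in cases if c.appointment is not None and c.appointment <= now]
    due_ids = {c.case_id for c in due}
    backlog = sorted(c for c in cases
                     if c.case_id not in due_ids
                     and not c.refusal
                     and c.attempts < 5)
    return due + backlog

queue = next_cases(
    [Case(2, "A-101"),
     Case(1, "A-102", attempts=6),    # passed over until the next period
     Case(1, "A-103", refusal=True),  # held for supervisor review
     Case(3, "A-104", appointment=datetime(2025, 3, 1, 14, 0))],
    now=datetime(2025, 3, 1, 14, 5),
)
print([c.case_id for c in queue])  # ['A-104', 'A-101']
```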
The survey will employ a web-based instrument and deployment system, which has been in use since NPSAS:08. The system provides multimode functionality that can be used for self-administration, including on mobile devices, CATI, or data entry. The survey instrument can be found in Appendix E.
In addition to the functional capabilities of the case management system and web survey instrument described above, our efforts to achieve the desired response rate will include established procedures proven effective in other large-scale studies. These include:
Providing multiple response modes, including mobile-friendly self-administered and interviewer-administered options.
Offering incentives to encourage response.
Assigning experienced CATI interviewers who have proven their ability to contact and obtain cooperation from a high proportion of sample members.
Training the interviewers thoroughly on study objectives, study population characteristics, and approaches that will help gain cooperation from sample members.
Maintaining a high level of monitoring and direct supervision so that interviewers who are experiencing low cooperation rates are identified quickly and corrective action is taken.
Making every reasonable effort to obtain a completed interview at the initial contact, while allowing respondents flexibility in scheduling appointments to be interviewed.
Thoroughly reviewing all refusal cases and making special conversion efforts whenever feasible.
Implementing confidentiality procedures and assuring participants of them, including:
Requiring respondents to answer security questions before obtaining and resuming access to the survey;
Restricting the ability for the respondent to view survey responses from prior log-in sessions (i.e., no ability to use navigation buttons to go to "Previous" survey questions from another log-in session); and
Automatically logging the survey out of a session after 10 minutes of inactivity.
For the BPS:20/25 full-scale collection, a Spanish-language survey will be offered to approximately 400 sample members based on their use of Spanish surveys in one or both of the prior surveys (NPSAS:20 or BPS:20/22). If Spanish survey administration is selected, the respondent will receive the abbreviated web survey rendered in Spanish.
e. Refusal Aversion and Conversion
Recognizing and avoiding refusals is important to maximizing the response rate. We will emphasize this and other topics related to obtaining cooperation during interviewer training. Supervisors will monitor interviewers closely during the early days of outbound calling and provide retraining as necessary. In addition, supervisors will review daily interviewer production reports produced by the CATI system to identify and retrain any data collectors who are producing unacceptable numbers of refusals or other problems.
Refusal conversion efforts will be delayed for at least one week to give the sample member time after the initial refusal. Attempts at refusal conversion will not be made with individuals who become verbally aggressive or who threaten to take legal or other action. Refusal conversion efforts will not be conducted to a degree that would constitute harassment. We will respect a sample member's right to decide not to participate and will not impinge on this right by carrying conversion efforts beyond the bounds of propriety.
4. BPS:20/25 Full-scale Data Collection Procedures
BPS:20/25 full-scale data collection will use two distinct data collection groups and three main data collection phases. This approach builds upon the designs implemented in other longitudinal studies, where it has contributed to maximizing response rates and minimizing the potential for nonresponse bias (e.g., BPS:12/14, BPS:12/17, BPS:20/22, the 2016-17 Baccalaureate and Beyond Longitudinal Study (B&B:16/17), and B&B:08/18). In BPS:20/25, we plan to implement differential treatments based on prior round response status, an approach that was successfully implemented in the BPS:20/25 field test.
For the BPS:20/25 full-scale design, we will use the following two data collection groups:
Default Group: Any sample member who responded to all NPSAS:20 and BPS:20/22 survey requests (total n = 20,620), including:
NPSAS:20 and BPS:20/22 survey respondents, excluding BPS:20/22 final partials
NPSAS:20 administrative-only cases who were also BPS:20/22 survey respondents, excluding BPS:20/22 final partials
Aggressive Group: NPSAS:20 survey nonrespondents, BPS:20/22 survey nonrespondents, or BPS:20/22 survey final partials (total n = 8,030). The goal of this treatment is to convert reluctant sample members (i.e., those who have not responded to a previous survey request) to participate in the study as early in data collection as possible.
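This assignment rule can be expressed compactly, as in the sketch below (hypothetical field names, applied only to fielded cases, i.e., after excluding groups 5 and 6; not the actual sample management code):

```python
# Hypothetical sketch of the BPS:20/25 data collection group assignment
# for fielded cases (groups 5 and 6 are excluded before assignment).
def assign_group(npsas20_outcome: str,
                 bps2022_respondent: bool,
                 bps2022_final_partial: bool) -> str:
    """npsas20_outcome is 'survey_respondent', 'survey_nonrespondent', or
    'admin_only' (an administrative-only NPSAS:20 study respondent)."""
    responded_to_all = (npsas20_outcome in ("survey_respondent", "admin_only")
                        and bps2022_respondent
                        and not bps2022_final_partial)
    return "default" if responded_to_all else "aggressive"

assert assign_group("survey_respondent", True, False) == "default"
assert assign_group("admin_only", True, False) == "default"
assert assign_group("survey_nonrespondent", True, False) == "aggressive"
assert assign_group("survey_respondent", True, True) == "aggressive"   # final partial
assert assign_group("survey_respondent", False, False) == "aggressive"
```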
Table 4 below presents the type and timing of interventions to be applied in the full-scale data collection, by group and data collection phase; these interventions are described in more detail in the next section.
Table 4. BPS:20/25 full-scale data collection protocols by data collection phase and group assignment

| Data collection protocols (and approximate schedule) | Default Group                                                  | Aggressive Group                                                                 |
|------------------------------------------------------|----------------------------------------------------------------|----------------------------------------------------------------------------------|
| Sample                                               | n = 20,620                                                     | n = 8,030                                                                        |
| Prior to data collection                             | Greeting card                                                  | Greeting card                                                                    |
| Early completion phase (Weeks 1-4)                   | $30 baseline incentive; mail, e-mail, and text reminders       | $45 baseline incentive; mail, e-mail, and text reminders; telephone prompting begins |
| Production phase (Weeks 5-10)                        | Light CATI calling at Week 8 for low-response demographic groups | Continued, more intense CATI calling                                             |
| Nonresponse follow-up (Weeks 11+)                    | $10 incentive boost; abbreviated survey at Week 20             | $20 incentive boost; abbreviated survey at Week 16                               |
| Total incentives                                     | Up to $40                                                      | Up to $65                                                                        |
For incentives, the baseline incentive for prior-round respondents in the default group will be $30, plus a possible $10 postpaid incentive boost (see the incentive boosts section below). This yields a maximum total incentive of $40 for sample members in the default group.
The baseline incentive for sample members in the aggressive group will be $45. An experiment conducted in BPS:12/14 showed that a $45 baseline incentive yielded the highest response rates; however, it was underpowered to detect differences from $30 in the lower response propensity groups (Wilson et al. 2015). Nonetheless, the $30 baseline incentive offered to these sample members in prior studies was not sufficient to encourage response (i.e., n = 50 NPSAS:20 field test nonrespondents and n = 623 BPS:20/22 field test nonrespondents). Therefore, we recommend implementing a higher baseline incentive, given that the $30 baseline incentive was not enough to encourage these sample members to respond in prior years. Further, the $40 BPS:20/22 field test incentive yielded a completion rate of only 23 percent among sample members in the aggressive group, while a $45 baseline incentive for the aggressive group yielded a completion rate of 38 percent in the BPS:20/22 full-scale study and 41 percent in the BPS:20/25 field test. The baseline incentive will be paid in addition to a possible $20 postpaid incentive boost (see the incentive boosts section below). The maximum possible total incentive is $65 under this aggressive data collection protocol.
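The incentive ceilings for the two protocols follow directly from these baseline and boost amounts; a minimal sketch:

```python
# Baseline and maximum boost amounts described above, in dollars.
INCENTIVES = {
    "default":    {"baseline": 30, "boost": 10},
    "aggressive": {"baseline": 45, "boost": 20},
}

for group, amounts in INCENTIVES.items():
    print(f"{group}: up to ${amounts['baseline'] + amounts['boost']}")
# default: up to $40
# aggressive: up to $65
```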
Beyond the baseline incentives, both survey data collection protocols employ similar interventions, although the timing differs across groups: interventions occur sooner in the aggressive protocol.
Data Collection Protocol Design Elements
Greeting card. The first mailing that individuals in the default and aggressive data collection protocols will receive is a greeting card expressing gratitude for being part of the study and announcing the upcoming data collection. Greeting cards have been shown to significantly increase response rates in longitudinal studies (Griggs et al. 2019), and we will use this method as a precursor to the invitation letter for both groups. The greeting card will be mailed a few weeks in advance of data collection, upon OMB approval.
Reminders. Text messaging has been shown to significantly increase response rates in different survey modes (e.g., Callegaro et al. 2011; Schober et al. 2015). Results from the BPS:20/22 field test showed that the response rate for sample members who received only text message reminders was not statistically significantly different from the response rate of sample members who received only telephone reminders. Therefore, both data collection groups will receive text message reminders. Hard copy mailings, e-mails, and text messages will be used to maintain ongoing contact with sample members in both data collection groups, prior to and throughout data collection.
CATI calling. Sample members in the default group will receive light outbound CATI calling at 8 weeks (during the Production phase) if they are in a demographic group with a low response rate (i.e., with the potential to affect nonresponse bias) at the time CATI calling begins. Light CATI involves a minimal number of phone calls, used mainly to prompt web response (as opposed to regular CATI efforts, which involve more frequent phone attempts with the goal of locating sample members and encouraging their participation). All cases in the aggressive group will receive earlier (beginning during the Early Completion phase) and more intense telephone prompting than eligible cases in the default group.
Incentive boosts. Researchers have used incentive boosts as a nonresponse conversion strategy for sample members who have implicitly or explicitly refused to complete the survey (e.g., Groves and Heeringa 2006; Singer and Ye 2013). These boosts are especially common in large federal surveys during their nonresponse follow-up phase (e.g., the Centers for Disease Control and Prevention's National Survey of Family Growth) and have been implemented successfully in other postsecondary education surveys (e.g., the HSLS:09 second follow-up; BPS:12/17; NPSAS:20). In NPSAS:20, a $10 incentive boost increased the overall response rate by about 3.2 percentage points above the projected response rate. Therefore, a $10 incentive boost to the BPS:20/25 baseline incentive is planned during the Nonresponse Follow-Up phase for all remaining nonrespondents in the default data collection protocol. Remaining nonrespondents in the aggressive data collection protocol will be offered a $20 incentive boost because the $10 incentive boost in NPSAS:20 did not show any effect for this group. If necessary, incentive boosts may be targeted only at certain groups of nonrespondents to achieve response goals (e.g., targeting nonrespondents from certain states to ensure representativeness, or targeting aggressive group nonrespondents to reduce the potential for nonresponse bias).
Abbreviated survey. Obtaining responses from all sample members is an important assumption of the inferential paradigm. Leverage-saliency theory (Groves et al. 2000) and social exchange theory (Dillman et al. 2014) suggest that an individual's decision to participate is driven by different survey design factors and the perceived cost of participating. As such, reducing the perceived burden of participation by shortening the survey may motivate sample members to participate.
During the B&B:16/17 field test, prior round nonrespondents were randomly assigned to one of two groups: 1) prior round nonrespondents who were offered the abbreviated survey during the production phase (i.e., before the nonresponse conversion phase), and 2) prior round nonrespondents who were offered the abbreviated survey during the nonresponse conversion phase (i.e., after the production phase). At the end of the production phase, prior round nonrespondents who received the abbreviated survey had a higher overall response rate (22.7 percent) than those who were not offered the abbreviated survey during that phase (12.1 percent; t(2,097) = 3.67, p < 0.001). Further, at the end of data collection, prior round nonrespondents who were offered the abbreviated survey during the earlier production phase had a significantly higher response rate (37 percent) than those who were not offered the abbreviated survey until the nonresponse conversion phase (25 percent; t(2,097) = 3.52, p = .001). These results indicate that offering an abbreviated survey to prior round nonrespondents during the production phase (i.e., earlier in data collection) significantly increases response rates. The B&B:08/12 and B&B:08/18 full-scale studies also demonstrated the benefit of an abbreviated survey: offering it to prior round nonrespondents increased the overall response rates of that group by 18.2 (B&B:08/12) and 8.8 (B&B:08/18) percentage points (Cominole et al. 2015). In NPSAS:20, 14.4 percent of those offered the abbreviated survey completed it. Therefore, an abbreviated survey option will be offered to all sample members in the BPS:20/25 full-scale study during the Nonresponse Follow-Up phase. For the aggressive protocol, the abbreviated survey will be offered at Week 16; for the default protocol, it will be offered as the last step in nonresponse conversion at Week 20.
Other interventions. While all BPS studies are conducted by NCES, the data collection contractor, RTI International, has typically used a study-specific e-mail address ending in @ed.gov (or a similar address ending in @rti.org) to contact and support sample members. Changing the e-mail sender to the NCES project officer or the RTI project director may increase the perceived importance of the survey and help personalize the contact materials, thereby potentially increasing relevance. Switching the sender during data collection also increases the chance that the survey invitation is delivered to the sample member rather than to a spam filter.
5. Tests of Procedures or Methods
The BPS:20/25 field test included two data collection experiments focused on survey participation. The results of these experiments are summarized below. For detailed results, see Appendix D.
The data collection experiments explored the effectiveness of:
1) an “address confirmation” incentive where respondents received an additional $5 if they provided their phone number, mailing or e-mail address before the start of the survey, and
2) eliminating telephone reminders for sample members who responded to NPSAS:20 and BPS:20/22.
Results from these data collection experiments provide insight, in preparation for the full-scale study, regarding the effectiveness of these interventions across three data quality indicators: survey response (operationalized using response rates), sample representativeness (assessed across gender, ethnicity, race, and institutional control), and data collection efficiency (operationalized as the number of days between the start of the experiment and survey completion).
The “address confirmation” experiment investigated the effectiveness of giving respondents an additional $5 incentive (experimental group) if, at the beginning of the survey, they provided their contact information (i.e., phone number and mailing or e-mail address). This up-front incentive was designed to demonstrate the authenticity of the survey request, theoretically motivating hesitant sample members to respond. The control group was also asked to provide the same contact information but was not offered an additional incentive. At the end of data collection, the response rate of the control group (67.2 percent) was about 5 percentage points higher than that of the experimental $5 group (61.9 percent), a statistically significant difference (χ² = 8.36, p < .01). Both the experimental and control groups had similar representativeness across gender, ethnicity, race, and institutional control. At the end of data collection, respondents in the experimental $5 group took one more day (36.7 days) than respondents in the control group (35.9 days) to complete the survey, though the difference was not significant (t(1,677) = 0.47, p = 0.64). Together, these results indicate that the $5 address confirmation incentive discouraged response rather than increasing it as theorized. Therefore, an address confirmation incentive will not be implemented in the BPS:20/25 full-scale data collection.
The second data collection experiment examined the implications of eliminating costly telephone reminders to sample members who were deemed likely to respond to the BPS:20/25 field test based on their past response behavior (i.e., sample members who responded to NPSAS:20 and BPS:20/22, known as the default data collection group). Eight weeks into data collection, nonresponding sample members in the default data collection group were randomly assigned to either 1) receive telephone prompting as in past BPS administrations (the control group), or 2) not receive any telephone prompting (the experimental group). At the end of data collection, the response rate of the control group that received telephone reminders (48.7 percent) was about 15 percentage points higher than that of the experimental "no telephone reminder" group (33.8 percent), a statistically significant difference (χ² = 21.2, p < .001). Both the experimental and control groups had similar representativeness across gender, ethnicity, race, and institutional control. At the end of data collection, respondents in the experimental "no telephone reminder" group took six more days (89.7 days) than respondents in the control group that received telephone reminders (83.6 days) to complete the survey, a statistically significant difference (t(1,363) = -3.83, p < 0.001). Together, these results indicate that telephone reminders, despite their cost, are still useful for encouraging response from sample members who have responded to NPSAS and BPS survey requests in the past. To balance the utility and cost of telephone reminders, in the BPS:20/25 full-scale study we recommend using telephone reminders only for sample members in the default data collection group who belong to demographic groups with lower response rates. Encouraging response from these groups with telephone reminders would reduce their potential for nonresponse bias.
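For reference, comparisons like those reported for both experiments can be reproduced with a chi-square test on the 2×2 table of response outcomes. The sketch below mirrors the telephone reminder experiment, but the group sizes are assumed for illustration (the field test counts are not given here), so the resulting statistic will differ from the reported χ²:

```python
# Chi-square test on a 2x2 response-outcome table (group sizes assumed;
# not the actual BPS:20/25 field test counts).
from scipy.stats import chi2_contingency

n_control, n_experimental = 700, 700               # assumed group sizes
control_resp = round(0.487 * n_control)            # 48.7% response rate
experimental_resp = round(0.338 * n_experimental)  # 33.8% response rate

table = [
    [control_resp, n_control - control_resp],
    [experimental_resp, n_experimental - experimental_resp],
]
chi2, p, dof, _expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.4g}")
```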
6. Reviewing Statisticians and Individuals Responsible for Designing and Conducting the Study
BPS:20/25 is being conducted by NCES. The following statisticians at NCES are responsible for the statistical aspects of the study: Dr. David Richards, Dr. Tracy Hunt-White, Dr. Sean Simone, Dr. Elise Christopher, and Dr. Gail Mulligan. NCES's prime contractor for BPS:20/25 is RTI International (Contract# 919900-18-C-0039), and subcontractors include Activate Research; EurekaFacts; HR Directions; Leonard Resource Group; Research Support Services; and Strategic Communications, Inc. The following staff members at RTI are working on the statistical aspects of the study design: Dr. Joshua Pretlow, Dr. Jennifer Wine, Mr. Darryl Cooney, Mr. Michael Bryan, Dr. T. Austin Lacy, Dr. Emilia Peytcheva, Mr. Peter Siegel, and Dr. Jerry Timbrook. Principal professional RTI staff not listed above who are assigned to the study include Ms. Ashley Wilson, Ms. Kristin Dudley, Mr. Jeff Franklin, Ms. Chris Rasmussen, and Ms. Donna Anderson.