83C Change Memo

BB20 Calibration Change Memo v.11.docx

2016/20 Baccalaureate and Beyond (B&B:16/20) Full-Scale Study

OMB: 1850-0926


Memorandum United States Department of Education

Institute of Education Sciences

National Center for Education Statistics


DATE: August 4, 2020


TO: Robert Sivinski, OMB


THROUGH: Carrie Clarady, OMB Liaison, NCES


FROM: Tracy Hunt-White, Team Lead, Postsecondary Longitudinal and Sample Surveys, NCES


SUBJECT: 2016/20 Baccalaureate and Beyond Longitudinal Study (B&B:16/20) Calibration Experiment Update Change Request (OMB# 1850-0926 v.11)

The 2016/20 Baccalaureate and Beyond Longitudinal Study (B&B:16/20) is conducted by the National Center for Education Statistics (NCES), within the U.S. Department of Education (ED). B&B is designed to follow a cohort of students who completed the requirements for their bachelor’s degree during the 2015-16 academic year. B&B examines students’ education and work experiences after they complete a bachelor’s degree, with a special emphasis on the experiences of schoolteachers. The request to conduct the B&B:16/20 full-scale study was approved by OMB on May 1, 2020 (1850-0926 v.9), with the last update approved on June 7, 2020 (1850-0926 v.10). Data collection started in July 2020 and is scheduled to end in March 2021.

The B&B-eligible cohort is initially identified in the National Postsecondary Student Aid Study (NPSAS). The first cohort (B&B:93) was identified in NPSAS:93 and consisted of students who received their bachelor’s degree in the 1992–93 academic year. NPSAS:93 provided the base-year data, and students were surveyed in 1994 for the initial follow-up. The B&B:93 cohort was surveyed again in 1997 and 2003. The second cohort (B&B:2000) was selected from NPSAS:2000, which became the base year for a single B&B:00/01 follow-up. The third cohort (B&B:08) was selected from NPSAS:08, which became the base year for follow-up data collections in 2009, 2012, and the final follow-up in 2018. The fourth cohort (B&B:16) was selected from NPSAS:16, which is the base year for follow-up data collections in 2017, 2020, and 2026 (anticipated). The B&B:93 and B&B:08 cohorts included transcript collections. Please note that the B&B:08/18 field test and the B&B:16/17 full-scale study were in data collection at the same time. To accommodate this overlap in timing, B&B cohorts prior to B&B:16 are approved under OMB# 1850-0729 while the B&B:16 cohort is approved under OMB# 1850-0926.

This request includes an update of the data collection design of the main B&B:16/20 study based on the results of the Calibration Experiment, which investigated the use of two different prepaid incentive forms. This request does not introduce significant changes to the estimated respondent burden or the costs to the federal government. The following revisions were made to Part A and Part B.


Modifications to Part A. Section 1a (Purpose of this Submission)


Revisions were made to the following bullet on page 2.

  • Part B describes the first calibration experiment, which compares participation rates and sample representativeness resulting from two different ways of sending a $2 prepaid incentive – mailing cash, or including an index card announcing a $2 prepaid PayPal incentive. The results of that experiment were analyzed and reported in Part B.4 (Tests of Procedures and Methods).




Modification to Part B. Section 4 (Tests of Procedures and Methods)


Revisions were made in the “B&B:16/20 Calibration Experiment (Revised May 2020; Updated August 2020)” subsection starting on page 6.


The following changes were made:


The proposed experimental period for the experiment is two weeks starting in early July 2020, after which we will analyze the results to determine which approach to recommend for the main data collection starting August 2020. Results will be submitted to OMB via a change memorandum by August 2020. The final decision will be driven by the overall difference in response rates and representativeness. If the Treatment Group yields a similar or higher response rate (not at the expense of sample representativeness), we will implement the PayPal prepaid incentive data collection for the main data collection. Alternatively, if the Control Group yields a higher response rate, while not jeopardizing sample representativeness, we will implement the cash prepaid incentive in the main data collection.


The results of the calibration experiment at the end of the experimental evaluation period are as follows.


Response Rates. Comparing the B&B:16/20 calibration sample response rates for the Control Group (cash; AAPOR RR1¹ = 22.1 percent) and the Treatment Group (PayPal; 20.3 percent) using a two-tailed z-test yields no statistically significant difference in response rates between the two groups (z = -1.26, p = 0.21). This finding is promising in that announcing the $2 prepaid PayPal incentive with an index card designed to stand out produces response rates similar to those of a $2 cash prepaid incentive.
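The comparison above is the standard pooled two-proportion z-test. A minimal sketch of that calculation follows (Python, standard library only); note that the memo does not report per-group sample sizes, so the figure of roughly 1,540 cases per group is an assumption for illustration:

```python
import math

def two_proportion_ztest(p1, n1, p2, n2):
    """Pooled two-proportion z-test; returns (z, two-tailed p-value)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)  # pooled response rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-tailed p-value: 2 * (1 - Phi(|z|)), via the error function
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Treatment (PayPal, 20.3%) vs. Control (cash, 22.1%);
# ~1,540 cases per group is an assumed, illustrative sample size.
z, p = two_proportion_ztest(0.203, 1540, 0.221, 1540)
```

With these assumed group sizes the sketch reproduces the reported pattern: |z| falls below the 1.96 critical value, so the 1.8-percentage-point gap is not significant at the 5 percent level.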


Representativeness. In addition to monitoring response rates, we conducted nonresponse bias analyses to assess the representativeness of the responding samples for the cash and PayPal groups. Table 2 displays summary measures of the demographic distributions by group for the responding sample, as well as for the overall sample including nonresponding cases. Comparing the responding sample composition with the overall sample composition shows the magnitude of nonresponse bias. For example, the overall sample in the Control Group is 57.4 percent female; at the end of the calibration evaluation period, the responding sample overrepresents females by 7.7 percentage points, at 65.1 percent.


The table shows that the two groups do not yield responding samples whose demographic composition differs from their overall sample estimates, suggesting no differential nonresponse bias except for age. A formal two-sided z-test fails to reject the null hypothesis of no difference in all instances so far except age (z = -2.38, p = 0.017). The PayPal incentive is significantly more effective among sample members aged 29 and younger than among those aged 30 and older, resulting in a significantly younger respondent sample in the Treatment Group.
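The age comparison is a two-sample z-test for a difference in means. The memo does not report standard deviations or respondent counts, so the values below (a standard deviation of about 8.6 years, and respondent counts of roughly 340 and 313 implied by the response rates) are assumptions chosen only to illustrate the calculation:

```python
import math

def mean_ztest(m1, s1, n1, m2, s2, n2):
    """Two-sample z-test for a difference in means; returns (z, two-tailed p)."""
    se = math.sqrt(s1**2 / n1 + s2**2 / n2)
    z = (m1 - m2) / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Mean respondent age: 29.8 (Treatment/PayPal) vs. 31.4 (Control/cash).
# Standard deviations (~8.6 years) and counts (~313, ~340) are assumptions.
z, p = mean_ztest(29.8, 8.6, 313, 31.4, 8.6, 340)
```

Under these assumptions the sketch lands close to the reported result: |z| exceeds 1.96, so the 1.6-year difference in mean respondent age is significant at the 5 percent level.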


Table 2: Sample composition by experimental condition

                                   Control Group   Treatment Group
                                   (Cash)          (PayPal)
Age (mean)
  Respondent Sample                     31.4            29.8
  Overall Sample (n=3,080)¹             31.6            31.3
Female (in percent)
  Respondent Sample                     65.1            61.6
  Overall Sample (n=3,060)¹             57.4            55.7
White (in percent)
  Respondent Sample                     78.1            78.3
  Overall Sample (n=3,110)¹             73.2            74.0
Hispanic (in percent)
  Respondent Sample                     14.1            12.4
  Overall Sample (n=3,060)¹             15.3            13.2
Employment (in percent)
  Respondent Sample                     92.3            93.7
  Overall Sample (n=1,920)              89.6            92.4

¹ Sample sizes for the overall sample differ due to missing data.

Note: Results exclude ineligible cases. Partial interviews are considered nonrespondents for analytic purposes.

Source: U.S. Department of Education, National Center for Education Statistics, 2016/20 Baccalaureate and Beyond (B&B:16/20)


Overall, while there is no statistically significant difference in response rates between the $2 cash prepaid incentive and the $2 PayPal prepaid incentive, there is a statistically significant difference in the resulting sample composition with respect to age: the Treatment Group yields a statistically significantly younger respondent sample. Given the differential effectiveness of the PayPal incentive among younger and older sample members, for the B&B:16/20 main data collection (Aggressive Protocol) we recommend proceeding with the Control Group incentive design ($2 cash prepaid incentive) for individuals aged 30 and older, and with the Treatment Group incentive design ($2 PayPal prepaid incentive) for individuals aged 29 and younger.



Modification to Part B. Section 4 (Tests of Procedures and Methods)


The following revisions were made in the “Data Collection Protocol Design Elements” subsection:


  • Revision #1 was made to update the data collection protocols to incorporate the results of the calibration experiment (see pages 9-10).


  • Revision #2 was made to remove reference to the original Calibration Experiment 1 which was to test envelope design (see page 11). This change should have been made in the previous change memo when the Calibration Experiment 1 was redesigned (see OMB# 1850-0926 v.10).


  • Also, with the addition of “Table 2: Sample composition by experimental condition”, the table on p. 8, “B&B:16/20 full-scale data collection protocols by data collection phase and group assignment” has been renamed to Table 3, and all references in the text have been updated to reflect the new table numbering.


Revision #1 (pages 9-10)


Prepaid incentive. Cash prepaid incentives have been shown to significantly increase response rates in both interviewer-administered and self-administered surveys and hence reduce the potential for nonresponse bias (e.g., Church 1993; Cantor et al. 2008; Goeritz 2006; Medway and Tourangeau 2015; Messer and Dillman 2011; Parsons and Manierre 2014; Singer 2002). During the Early Completion Phase in the B&B:16/17 field test, prepaid incentives ($10 via check or PayPal) in combination with telephone prompting also significantly increased response rates, by 4.4 percentage points, in the aggressive protocol implemented for prior round nonrespondents. Given these positive findings combined with general recommendations in the literature (e.g., Singer and Ye 2013), B&B:16/20 will send a small prepaid incentive of $2 in the data collection announcement letter to all sample members in the aggressive protocol (ever nonrespondents and B&B:16/17 abbreviated respondents - see also Calibration Experiment 1 discussion). Additionally, as a potential nonresponse conversion strategy, sample members in the default protocol (double respondents) may receive a prepaid incentive of $2 as a final attempt to obtain their completed full survey. This amount has been shown to effectively increase response rates at more efficient field costs compared to other prepaid incentives (e.g., Beebe et al. 2005; Millar and Dillman 2011; Tourangeau et al. 2013).


Results of the calibration experiment show that, while there is no statistically significant difference in response rates between the $2 cash prepaid incentive and the $2 PayPal prepaid incentive, there is a statistically significant difference in the resulting sample composition when it comes to age, as the Treatment Group results in a statistically significantly younger respondent sample. Therefore, individuals aged 30 and older will proceed with the incentive design from the calibration experiment control group ($2 cash prepaid incentive) and individuals aged 29 and younger will proceed with the incentive design from the calibration experiment treatment group ($2 PayPal prepaid incentive) for the B&B:16/20 main data collection in the aggressive protocol.


Sample members who receive a cash prepaid incentive and have “good” address information will receive the $2 prepaid incentive as cash included in the mailing. Sample members who are supposed to receive a cash prepaid incentive but for whom no good address information exists will receive the $2 prepaid incentive via PayPal at their best-known e-mail address (e.g., 47% of all respondents in the B&B:16/17 full-scale cohort chose to receive their incentive via PayPal, as did 46% of the B&B:08/18 full-scale cohort). PayPal was successfully used for prepaid incentives in the B&B:16/17 field test, B&B:08/18, and BPS:12/17. Once B&B:16/20 staff obtain good contact information for a sample member, a $2 cash incentive will be mailed out if the sample member has not yet claimed the $2 PayPal offer and completed the survey (similar to the B&B:08/12 full-scale responsive design experiment). All data collection announcements related to interventions will be designed to stand out (see Calibration Experiment 1 discussion).



Revision #2 (page 11)


More specifically, we propose to compare the effectiveness of a $2 prepaid incentive, sent with a reminder letter informed by the envelope design results from Calibration Experiment 1, to that of a $10 promised flash incentive², which will temporarily increase the baseline incentive from $30 to $40 if the full survey is completed within two weeks of the reminder. This experiment is conditional on the response rate achieved toward the end of Production Phase II and will be utilized if the response rate is 70% or lower (see discussion below).

  • Treatment Group 1 will receive a $2 prepaid incentive with a reminder letter designed according to Calibration Experiment 1 outcomes.


Modification to Part C. References (see page 13)


The following citation referenced in the calibration experiment results was added:


American Association for Public Opinion Research. 2016. Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys. 9th ed. Retrieved 05/07/2020: https://www.aapor.org/AAPOR_Main/media/publications/Standard-Definitions20169theditionfinal.pdf.

¹ Unless noted otherwise, all response rates reported refer to response rate 1 (RR1) as defined by the standards of the American Association for Public Opinion Research (AAPOR 2016). RR1 is the number of complete interviews (excluding partial interviews) divided by the number of complete and partial interviews plus all non-interviews (excluding confirmed ineligibles).
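The RR1 arithmetic described in this footnote can be sketched in a few lines (Python); the case counts below are hypothetical, chosen only to show how the rate is formed:

```python
# AAPOR RR1: complete interviews / (completes + partials + non-interviews),
# with confirmed-ineligible cases excluded from the denominator.
# Counts below are hypothetical, for illustration only.
completes = 340
partials = 25
non_interviews = 1175  # refusals, non-contacts, unknown eligibility

rr1 = completes / (completes + partials + non_interviews)
print(round(rr1, 3))  # 0.221
```

Partial interviews count against the rate (they sit in the denominator but not the numerator), which is why the memo treats partials as nonrespondents for analytic purposes.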

² Offering the flash incentive increased response rates among B&B:08/18 full-scale double respondents by 2.2 percentage points.
