APPENDIX K
2023 National Survey of College Graduates Text Message Experiment Report (DRAFT)
Note: The U.S. Census Bureau has reviewed this data product to ensure appropriate access, use, and disclosure avoidance protection of the confidential source data used to produce this product (Data Management System (DMS) number: P-7533594, Disclosure Review Board (DRB) approval number: CBDRB-FY25-POP001-0003).
2.4.2 Operational Follow-up Workload
4.1.1 Response Rates and Final Mode Distributions
4.1.2 Difference in Differences
4.1.3 Demographic Characteristics
4.2 Operational Follow-up Workload
4.2.1 Opt-Outs and Undeliverable Rate
4.2.2 Web Instrument Logins
Appendix B: Minimum Detectable Differences Equation and Definitions
Appendix C: Demographic Variables
Appendix D: Weighted Demographic Respondent Distributions
Appendix E: Unweighted Demographic Respondent Distributions
Table of Tables
Table 1: Sample sizes for the text message experiment
Table 2: Data collection contacts during text message period
Table 3: Weighted Response Rates
Table 4: Unweighted Response Rates
Table 5: Weighted Final Response Mode Distributions
Table 6: Unweighted Final Response Mode Distributions
Table 7: Difference-in-Differences of Response Rates
Table 8: Weighted Respondent Distributions for Age Group
Table 9: Unweighted Respondent Distributions for Age Group
Table 10: Opt-out Rates and Undeliverable Rates
Table 11: Proportion of Respondents that Did and Did Not Opt-Out
Table 12: Proportion of Cases That Logged into the Web Instrument
Table 13: Average Cost of CATI Call Per Case
Table 14: Average Cost of Mailing and Estimated Total Cost of Mailings for Weeks 18 and 23
Table 15: Disposition Codes for Eligible and Ineligible Respondents
Table 16: Demographic Variables
Table 17: Weighted Respondent Distributions for Race
Table 18: Weighted Respondent Distributions for Sex
Table 19: Weighted Respondent Distributions for Citizenship Status
Table 20: Weighted Respondent Distributions for Disability Status
Table 21: Weighted Respondent Distributions for Highest Degree
Table 22: Weighted Respondent Distributions for Hispanic Origin
Table 23: Weighted Respondent Distributions for Broad Occupation Category
Table 24: Weighted Respondent Distributions for Oversample Indicator
Table 25: Weighted Respondent Distributions for Science & Engineering Status
Table 26: Weighted Respondent Distributions for Work Status
Table 27: Unweighted Respondent Distributions for Race
Table 28: Unweighted Respondent Distributions for Sex
Table 29: Unweighted Respondent Distributions for Citizenship Status
Table 30: Unweighted Respondent Distributions for Disability Status
Table 31: Unweighted Respondent Distributions for Highest Degree
Table 32: Unweighted Respondent Distributions for Hispanic Origin
Table 33: Unweighted Respondent Distributions for Broad Occupation Category
Table 34: Unweighted Respondent Distributions for Oversample Indicator
Table 35: Unweighted Respondent Distributions for Science & Engineering Status
Table 36: Unweighted Respondent Distributions for Work Status
Executive Summary
The purpose of the 2023 National Survey of College Graduates (NSCG) text message experiment was to measure the impact of sending text message reminders on response and follow-up workload. This experiment had two conditions: a control group that did not receive text messages and a treatment group that received text messages.
Results showed that receiving text messages did not result in earlier response or a higher final response rate. Additionally, there was no incremental change in response following each text. The demographic makeup of respondents showed that texting had no effect for any demographic group of interest, including the youngest respondents.
We found that the texting operation had a relatively low impact on follow-up workload. We saw a low undeliverable rate and a low opt-out rate; this low opt-out rate is encouraging because it suggests that even after multiple text messages, we were not doing any harm by sending text messages. Using the cost data available for this analysis, there was evidence of a modest cost savings from mailing fewer Week 18 paper questionnaires to the text message experiment group. Though the text message reminders showed no difference in response for this experiment, texting could be a less expensive potential replacement for the costly Week 18 mailing.
We note that the first text message was not sent until three months of data collection had elapsed, because the original goal was to use texting to replace CATI calls. If there is interest in broadening the goals of text messaging, more experimentation of texting earlier in the cycle would be beneficial. For instance, texting before the first paper questionnaire was sent could yield response earlier, thus reducing the universe for a costly paper questionnaire mailing.
For future research, we could experiment with both the timing of the text messages and the content of the message. We found that only a small percentage of cases logged into the web instrument after 5 p.m. on the same day a text was sent, which suggests the send time is worth revisiting. Variations in the wording of the text message could also be tested, as well as how frequently to send them. For future cycles, we recommend sending text message reminders before the first paper questionnaire is mailed, to elicit response ahead of a costly mailing.
The purpose of the 2023 National Survey of College Graduates (NSCG) text message experiment was to determine if sending text message reminders could replace Computer Assisted Telephone Interview (CATI) calls. However, CATI calls were inadvertently suspended for both experimental groups, so we could not determine whether text messages could replace CATI calls. We focused our analysis to measure the impact of sending text message reminders on response and follow-up workload throughout the period of data collection in which texts were sent. This report documents the results of the 2023 text message experiment and recommendations for data collection procedures for future cycles1.
The NSCG is a repeated cross-sectional survey, conducted every two years, designed to provide data on the number and characteristics of individuals with a college degree living in the United States. The U.S. Census Bureau implements the survey on behalf of the National Center for Science and Engineering Statistics (NCSES) within the National Science Foundation (NSF).
The 2023 NSCG sample consisted of approximately 161,000 new and returning cases that had previously responded to the American Community Survey (ACS). Data collection spanned 26 weeks and used a multi-mode approach of self-administered web or paper questionnaires and Computer-Assisted Telephone Interviewing.
Currently, the NSCG contacts the sample cases by mail, phone, and email (when available). With declining response rates, particularly among the younger population, the NSCG is seeking new ways to reach sample cases. Though email is widely used by most adults, unrecognized or unwanted emails are easy to filter and delete without reading. While the younger generation still uses email, their preferred communication mode is text messaging (June, 2021). Research within the last decade shows messaging can help increase response rates when combined with other contact modes, such as email (Kanticar & Marlar, 2017; De Bruijne & Wijnant, 2014; Mavletova & Couper 2014). Research also suggests that younger and non-white individuals are more likely to consent to receive text messages, potentially addressing a hard-to-reach demographic (McGeeney & Yan, 2016). Additionally, the Census Bureau’s Household Pulse Survey found text messages have been more successful than email at eliciting response (Fields, Childs & Eggleston, 2021).
The current Census Bureau policy only permits text messages to be sent to individuals who have previously opted-in to receive them2. Therefore, a checkbox to opt into receiving text messages in future cycles was included at the end of the 2021 NSCG survey. Approximately 35 percent of respondents opted in to receive text messages in 2021 for the 2023 cycle (Bottini, Satisky & Heimel, 2022).
This 2023 text message experiment was intended to determine whether texting has promise in increasing response, both overall and for younger sample cases. If it does, texting could become a regular part of data collection, coupled with continued experimentation to find optimal text timing and wording, in conjunction with other data collection modes.
This section details the experimental and operational design, research questions, and methods that were used to answer them. The main goal was to measure the impact of text message reminders on response rates and follow-up workload.
This experiment had two conditions: a control group that did not receive text messages and a treatment group that received text messages. The experiment was limited to returning sample members who opted in to receive text messages in 2021, did not respond by CATI in 2021, and did not report a CATI preference in 2021.3
Of the 29,500 eligible sample cases, the majority were assigned to the text message group to maximize the number of cases receiving texts. A systematic random sample of approximately 27,500 cases was selected for the text message group, and 2,000 cases were selected for the control group. Table 1 below summarizes the experimental study groups with their respective sample sizes.
Table 1: Sample sizes for the text message experiment
| Text Message | Estimated Sample Size | Experimental Group |
| --- | --- | --- |
| Sent | 27,500 | Treatment |
| Not Sent | 2,000 | Control |
Source: U.S. Census Bureau, 2023 National Survey of College Graduates Text Message Experiment
The assigned sample sizes allow us to detect a minimum detectable difference (MDD) of approximately four percentage points for comparisons of response rates. The MDD calculations assume a 50 percent response rate in each group and use an alpha value of 0.10. Given the sample size of the control group, meaningful differences will not always be identified as statistically significant. We will consider and discuss meaningful differences as well as statistically significant differences. Appendix B provides the MDD equation and definitions.
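Appendix B's exact equation is not reproduced in this excerpt, but the standard two-sample proportion MDD under the stated assumptions can be sketched as follows. The 80 percent power level and the absence of a design effect are our assumptions, not values stated in the text:

```python
from statistics import NormalDist

def minimum_detectable_difference(n1, n2, p=0.50, alpha=0.10, power=0.80,
                                  one_sided=True):
    """Approximate MDD for comparing two independent proportions.

    Assumes equal true proportions p in both groups and a normal
    approximation; the official Appendix B equation may differ
    (e.g., design effects from the survey weights).
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha) if one_sided else z.inv_cdf(1 - alpha / 2)
    z_power = z.inv_cdf(power)
    se = (p * (1 - p) * (1 / n1 + 1 / n2)) ** 0.5
    return (z_alpha + z_power) * se

# Sample sizes from Table 1
mdd = minimum_detectable_difference(27_500, 2_000)
```

With these illustrative inputs the MDD comes out in the low single digits of percentage points; the approximately four-point figure in the text presumably reflects the exact Appendix B specification, such as its power or design-effect choices.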
Sample members received five mailouts and (if eligible) three email message reminders before the Week 12 mailout and accompanying email reminder, which directly preceded the first text message. Table 2 shows all the contacts during the timeframe when text messages were outgoing.
At the beginning of data collection, four text messages were planned: one per week from the 13th through the 16th week of data collection. Due to low response rates and the convenience of sending text messages, a fifth text message was sent in the 20th week.
The control and treatment groups did not receive telephone reminders over the time when the first four text messages were sent. However, the CATI operation started sending telephone reminders in September; nonresponding cases became eligible for CATI calls on September 11th. The cases that received the fifth text message on October 10th likely received telephone contacts between September 11th and October 10th, as well as the scheduled email and mail reminders.
Table 2: Data collection contacts during text message period
| Event | Date (2023) | Universe |
| --- | --- | --- |
| Week 12 mailout | August 10 | Control and Treatment |
| Week 12.5 email reminder | August 15 | Control and Treatment |
| Week 13 text message sent | August 17 | Treatment |
| Week 14 text message sent | August 24 | Treatment |
| Week 14.5 email reminder | August 29 | Neither Group |
| Week 15 text message sent | August 31 | Treatment |
| Week 16 text message sent | September 7 | Treatment |
| Week 16 mailout | September 7 | Control and Treatment |
| Week 16.5 CATI Nonresponse Follow-up | September 11 | Control and Treatment |
| Week 18 mailout* | September 21 – 25 | Control and Treatment |
| Week 18.5 email reminder | September 26 | Control and Treatment |
| Week 20 mailout | October 5 | Control and Treatment |
| Week 20.5 text message sent | October 10 | Treatment |
| Week 20.5 email reminder | October 10 | Control and Treatment+ |
| Week 23 mailout | October 24 | Control and Treatment |
Source: U.S. Census Bureau, 2023 National Survey of College Graduates Text Message Experiment
* Due to a machine issue at the National Processing Center, this mailout took multiple days.
+ The treatment group received this email reminder only if they opted out from receiving text messages. If they did not opt out, they were sent a text message.
The first four text messages contained identical content, using fills for the User ID and Password.
Reminder to complete the National Survey of College Graduates at https://respond.census.gov/nscg . Your User ID to complete the survey is [USER_ID] and password is [PASSWORD]. Call 1-888-262-5935 for help. Reply STOP to cancel. Message rates may apply.
However, the fifth text message was edited so that the telephone assistance line appeared before the survey response URL. It was hypothesized that emphasizing the telephone number could appeal to respondents who were hesitant to click a link.
Reminder to complete the National Survey of College Graduates. To complete, call 1-888-262-5935 or go online at https://respond.census.gov/nscg . Your User ID to complete the survey is [USER_ID] and password is [PASSWORD]. Reply STOP to cancel.
Qualtrics, the online survey software, was used to send the text messages. Before each scheduled text message, a file of recipients was uploaded to the Qualtrics website; the file contained the recipient’s name, cell phone number, User ID and Password, plus a control number that is used internally for case management.
Each text message was scheduled to send at 5 p.m. Eastern Time, a time chosen so that texts would arrive neither too early nor too late for sampled cases across time zones. Qualtrics sends text segments4 at a rate of three per second. Given the number of recipients and the number of segments, the entire send took over an hour to complete; Qualtrics does not record the exact time each text message was sent, so the full duration is unknown.
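The hour-plus send window is consistent with back-of-the-envelope arithmetic. Assuming, for illustration, roughly 8,500 recipients (the size of the first send reported in Table 10) and two segments per message (an assumption; segment counts depend on message length and encoding), a rate of three segments per second implies:

```python
# Rough estimate of how long a send takes at 3 segments per second.
# The recipient count and segments-per-message are illustrative assumptions.
recipients = 8_500
segments_per_message = 2      # the ~300-character reminder spans multiple segments
rate_per_second = 3

total_segments = recipients * segments_per_message
duration_minutes = total_segments / rate_per_second / 60
print(f"{duration_minutes:.0f} minutes")  # prints "94 minutes", i.e. over an hour
```

Even under these conservative assumptions, the send would take over an hour, matching the report's observation.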
When planning the analysis for this experiment, we were interested in the following research questions to measure the effect of sending text message reminders:
What was the impact on response?
Were overall response rates higher when text messages were sent?
How did the timing of response and final mode distributions compare between the treatment and control groups?
Was the demographic makeup of respondents different between the treatment and control groups?
What were the operational follow-up workload impacts throughout the text message period?
After each text message, what proportion of sample cases opted out of receiving additional text messages?5
Did the cases that opted out of receiving text messages ultimately respond to the NSCG?
What proportion of texts were undeliverable?
Did text messages lead to lower data collection costs, by reducing the number of follow-up CATI calls and mailings to nonrespondents?
After each text message, what proportion of cases logged into the web survey instrument that day?
The following section outlines the methods that were used to answer each research question. We used experimental base weights where appropriate to make inferences about the NSCG target population. Chi-square tests and t-tests were used when answering research question 1 and significance was determined with an alpha value of 0.10. For response rates, our research question aimed to determine whether sending text message reminders resulted in higher response rates, so we conducted one-sided t-tests to compare the experimental groups. All other t-tests used two-sided p-values. For testing significant differences in demographic distributions, we used a Bonferroni-adjusted alpha level to account for multiple comparisons if the demographic group yielded a significant Chi-square test p-value.
The small sample size of the control group led to some cases having very large weights. To mitigate the imbalance of weights, we report weighted and unweighted results for Research Question 1. Analyses of the components of research question 2 were unweighted6.
We verified and tested the output using double programming, a verification process in which multiple staff develop program code independently to produce results. This practice helps ensure the quality of deliverables.7
To determine if the text message reminders led to higher response rates, we calculated the final weighted response rate using Equation 1 in Appendix A. Additionally, we calculated unweighted response rates using Equation 2 in Appendix A. The difference between the two response rate formulas is that the unweighted response rate does not take into account unknown eligibility or ineligibles8. These cases are considered nonrespondents.
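Equations 1 and 2 appear in Appendix A and are not reproduced in this excerpt. As a simplified illustration of the distinction described above, the following sketch treats unknown-eligibility and ineligible cases as nonrespondents in the unweighted rate; the field names and the exact handling of eligibility are our assumptions, not the official formulas:

```python
def unweighted_response_rate(cases):
    """Respondents over all sample cases; unknown-eligibility and
    ineligible cases simply count as nonrespondents, as described in
    the text. A simplified stand-in for Equation 2."""
    return sum(1 for c in cases if c["respondent"]) / len(cases)

def weighted_response_rate(cases):
    """Weighted respondents over weighted eligible cases -- a simplified
    stand-in for Equation 1, which additionally accounts for cases of
    unknown eligibility."""
    numerator = sum(c["weight"] for c in cases if c["respondent"] and c["eligible"])
    denominator = sum(c["weight"] for c in cases if c["eligible"])
    return numerator / denominator

# Tiny illustrative sample: one respondent, one eligible nonrespondent,
# and one ineligible case.
cases = [
    {"respondent": True,  "eligible": True,  "weight": 2.0},
    {"respondent": False, "eligible": True,  "weight": 1.0},
    {"respondent": False, "eligible": False, "weight": 1.0},
]
```

Here the unweighted rate counts the ineligible case in the denominator (1 of 3), while the weighted rate drops it (2.0 of 3.0), which is the difference the text describes.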
We also looked to determine whether the group that received text messages responded earlier than the group that did not. As a baseline, we calculated response rates for each experimental group before the week 12 mailout, just before the texts were sent. Next, we calculated the response rates before the week 16 mailout, after the first four texts were sent. Finally, we calculated the response rates before week 21, after the fifth text message reminder was sent. All three time points were compared between the experimental groups to determine whether text messages led to earlier response. Additionally, we compared the final response mode distributions between the treatment and control groups.
We also conducted a difference-in-differences analysis to compare changes in response rates over time between the treatment and control group. We used the response rates before week 12, week 16, week 21, and the final response rates, and computed the difference-in-differences as follows:
Calculated the after-minus-before difference in response rates for the treatment group (TA - TB).
Calculated the after-minus-before difference in response rates for the control group (CA - CB).
Calculated the difference between these two changes. This is the difference-in-differences: DD = (TA - TB) - (CA - CB).
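Using the week 12 to week 16 weighted response rates later reported in Table 7 as a worked example, the steps above reduce to:

```python
def difference_in_differences(t_before, t_after, c_before, c_after):
    """DD = (TA - TB) - (CA - CB): the treatment group's change in
    response rate minus the control group's change."""
    return (t_after - t_before) - (c_after - c_before)

# Week 12 -> week 16 weighted response rates (percent) from Table 7
dd = difference_in_differences(t_before=66.7, t_after=72.7,
                               c_before=61.4, c_after=67.1)
print(round(dd, 1))  # prints 0.3, matching Table 7
```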
To determine if a text message reminder impacted the demographic makeup of respondents differently for certain subpopulations, we compared the demographic distributions of respondents between the control and treatment groups. As younger generations are more likely than older generations to rank text messaging as their most used communication method (Pogue, 2015), we were interested to see if the text message reminders resulted in different response rates by generation; age of the respondent was grouped into three categories (see Appendix C for all demographic variables). We performed chi-square tests on the distributions of all sample members, regardless of response status, to determine if there were any differences between experimental groups before data collection started. Next, we performed the same chi-square tests on the distributions of respondents. Significant differences in the demographic makeup of respondents were only considered if no significant differences were found between the distributions of all sample members. If the chi-square test found a significant difference in the distribution of respondents for a demographic characteristic, then the proportion of respondents in each subcategory were compared between the experimental groups using pairwise t-tests with a Bonferroni-adjusted alpha level for multiple comparisons.
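The Bonferroni guard described above can be sketched as follows; the p-values shown are the age-group pairwise comparisons later reported in Table 8, used here purely for illustration:

```python
def bonferroni_significant(pairwise_p_values, overall_alpha=0.10):
    """Flag pairwise t-test p-values that survive a Bonferroni
    adjustment: each comparison is tested at overall_alpha / k."""
    k = len(pairwise_p_values)
    adjusted_alpha = overall_alpha / k
    return {name: p < adjusted_alpha for name, p in pairwise_p_values.items()}

# Pairwise p-values by age group, from Table 8
flags = bonferroni_significant({"17-39": 0.6329, "40-54": 0.0478, "55-75": 0.0831})
```

With three comparisons the per-test threshold is 0.10 / 3 ≈ 0.033, matching the "Bonferroni-adjusted alpha level of 0.03" cited for Table 8, under which none of these p-values is significant.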
To determine the impact of sending text messages on the follow-up workload, we focused on treatment cases that were sent at least one text message. Of those cases, we used data from Qualtrics to understand how many people opted-out of receiving subsequent text messages and how many texts were not successfully delivered. We also determined whether cases that opted out of receiving texts ultimately responded.
From a cost-savings perspective, we were interested in how many fewer CATI calls and mailings were administered to treatment cases compared to control cases. To determine if sending text messages led to lower data collection costs, we calculated average cost estimates using data provided by the Associate Director for Demographic Programs (ADDP) NSCG team and the National Processing Center (NPC). Specifically, we calculated the average number of mailings and phone calls in each experimental group, multiplied those by the cost of a mailing and a call, respectively, and took the difference. This provided a measure of average cost savings.
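A sketch of that average cost comparison follows. The per-contact costs and average contact counts here are hypothetical; the actual figures come from the ADDP and NPC data summarized in Tables 13 and 14:

```python
def average_followup_cost(avg_mailings, avg_calls, cost_per_mailing, cost_per_call):
    """Average follow-up cost per case: mailings plus CATI calls."""
    return avg_mailings * cost_per_mailing + avg_calls * cost_per_call

# Hypothetical inputs for illustration only
cost_per_mailing, cost_per_call = 2.50, 5.00
control_cost = average_followup_cost(3.0, 1.2, cost_per_mailing, cost_per_call)
treatment_cost = average_followup_cost(2.7, 1.1, cost_per_mailing, cost_per_call)

# Positive value = average savings per case from sending texts
savings_per_case = control_cost - treatment_cost
```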
Finally, we used the web survey paradata to provide the proportion of cases that logged into the web survey instrument after the text message reminder was sent on that day.9 These analyses will help with operational improvements for sending text messages in future cycles.
Small sample sizes may limit the ability to identify statistically significant differences; therefore, we will consider and discuss meaningful differences as well.
In the initial experimental design, text messages were meant to replace phone calls in the treatment group. Specifically, cases in the control group were supposed to receive CATI calls during the text messaging timeframe and cases in the treatment group were not. However, CATI was inadvertently suspended for both experimental groups, meaning no CATI calls were made between weeks 12 and 16. This also resulted in the control group receiving fewer overall contacts than the treatment group.
Some aspects of text messages, such as the message content or the time of delivery, were not able to be tested as part of this experiment. Thus, this experiment is not a full assessment of the utility or possible future success of text messages in NSCG data collection.
Cost estimates were based on the data that were available and do not account for other fixed costs such as labor, programming, development, and management. Cost estimates should therefore be treated as rough generalizations.
In this section, we present the results of the experimental groups for the text message study.
To measure the impact on response, we calculated the response rates before the texting period started, after the fourth text message, after the fifth text message, and the final response rates, along with the final mode distributions. We then conducted a difference-in-differences analysis to compare changes in response rates over time between the treatment and control group. Finally, we looked at whether text message reminders impacted the demographic makeup of respondents differently for certain subpopulations.
First, we calculated the response rates before weeks 12, 16, and 21 for both experimental groups. Because our research question asked whether sending text message reminders resulted in higher response rates, we conducted one-sided t-tests to compare the experimental groups. Table 3 shows that the group that received texts had a response rate of 66.7 percent at week 12, the week before texts were sent; this rate was significantly higher than that of the group that did not receive texts. In other words, the text message group already had a significantly higher response rate before the texting period began, and it continued to yield significantly higher response rates throughout data collection, including the final response rate. The weighted response rates, standard errors, and t-test p-values for the experimental groups are in Table 3 below.
Table 3: Weighted Response Rates
| | Text Sent Response Rate (SE) | No Text Response Rate (SE) | p-value |
| --- | --- | --- | --- |
| Week 12 | 66.7 (0.6) | 61.4 (2.8) | 0.0296* |
| Week 16 | 72.7 (0.6) | 67.1 (2.8) | 0.0245* |
| Week 21 | 77.1 (0.6) | 72.7 (2.6) | 0.0466* |
| Final | 79.6 (0.5) | 75.2 (2.5) | 0.0418* |
Source: U.S. Census Bureau 2023 National Survey of College Graduates Text Message Experiment
*Statistically significant at the alpha = 0.10 level; t-tests are one-sided.
While the text message group showed significantly higher weighted response rates, we also calculated the unweighted response rates to account for the extremely high weights that existed in the group that did not receive texts. Looking at the unweighted response rates before weeks 12, 16, and 21, as well as the unweighted final response rates, we did not find any significant differences between the experimental groups. The graph of the unweighted collection rates also supports this finding. The unweighted response rates, standard errors, and t-test p-values for the experimental groups are in Table 4 below and the graph with the unweighted collection rates over the data collection period can be found in Appendix A.
Table 4: Unweighted Response Rates
| | Text Sent Response Rate (SE) | No Text Response Rate (SE) | p-value |
| --- | --- | --- | --- |
| Week 12 | 64.8 (0.3) | 65.1 (1.1) | 0.3898 |
| Week 16 | 71.4 (0.3) | 70.2 (1.0) | 0.1228 |
| Week 21 | 76.2 (0.3) | 75.7 (1.0) | 0.3230 |
| Final | 78.9 (0.2) | 78.5 (0.2) | 0.3492 |
Source: U.S. Census Bureau 2023 National Survey of College Graduates Text Message Experiment
Note: t-tests are one-sided.
We also compared the final response mode distributions between the treatment and control groups. The weighted final mode distributions produced a significant chi-square test statistic; however, the unweighted final mode distributions showed no differences between the experimental groups. The weighted and unweighted final response mode distributions are in Tables 5 and 6 below.
Table 5: Weighted Final Response Mode Distributions
| Experimental Group | Mode | Frequency | Percent (SE) |
| --- | --- | --- | --- |
| Text Sent | Mobile | 1,100 | 6.3 (0.4) |
| | CATI | 400 | 2.0 (0.2) |
| | Web | 20,000 | 91.7 (0.4) |
| No Text | Mobile | 80 | 6.6 (1.4) |
| | CATI | 30 | 5.5 (2.6) |
| | Web | 1,500 | 87.9 (2.8) |
Source: U.S. Census Bureau 2023 National Survey of College Graduates Text Message Experiment
Chi-square p-value = 0.0437
Table 6: Unweighted Final Response Mode Distributions
| Experimental Group | Mode | Frequency | Percent (SE) |
| --- | --- | --- | --- |
| Text Sent | Mobile | 1,100 | 5.0 (0.1) |
| | CATI | 400 | 1.9 (0.1) |
| | Web | 20,000 | 93.1 (0.2) |
| No Text | Mobile | 80 | 5.0 (0.6) |
| | CATI | 30 | 1.9 (0.3) |
| | Web | 1,500 | 93.0 (0.6) |
Source: U.S. Census Bureau 2023 National Survey of College Graduates Text Message Experiment
Chi-square p-value = 0.9922
We conducted a difference-in-differences analysis to compare changes in the weighted response rates over time between the treatment and control group. The difference-in-differences value was tested against zero. Table 7 below shows the difference-in-differences do not indicate that the text message reminders increased response rates throughout the texting period any more than the standard data collection activities. We also calculated the difference-in-differences between the week 12 response rates and the final response rates to examine the change in response rates from before the text messages were sent out to the end of data collection and found no significant difference (p-value = 0.6610).
Table 7: Difference-in-Differences of Response Rates
| Experimental Group | Week 12 RR | Week 16 RR | Difference | Difference-in-Differences | p-value |
| --- | --- | --- | --- | --- | --- |
| Text Sent | 66.7 (0.6) | 72.7 (0.6) | 6.0 | 0.3 (1.1) | 0.7613 |
| No Text | 61.4 (2.8) | 67.1 (2.8) | 5.7 | | |

| Experimental Group | Week 16 RR | Week 21 RR | Difference | Difference-in-Differences | p-value |
| --- | --- | --- | --- | --- | --- |
| Text Sent | 72.7 (0.6) | 77.1 (0.6) | 4.3 | -1.3 (1.9) | 0.4998 |
| No Text | 67.1 (2.8) | 72.7 (2.6) | 5.6 | | |

| Experimental Group | Week 21 RR | Final RR | Difference | Difference-in-Differences | p-value |
| --- | --- | --- | --- | --- | --- |
| Text Sent | 77.1 (0.6) | 79.6 (0.5) | 2.5 | <0.1 (0.9) | 0.9550 |
| No Text | 72.7 (2.6) | 75.2 (2.5) | 2.5 | | |
Source: U.S. Census Bureau, 2023 National Survey of College Graduates Text Message Experiment
Note: Estimates may not sum due to rounding; t-tests are two-sided.
Next, we looked at whether text message reminders impacted the demographic makeup of respondents differently for certain subpopulations. First, we performed weighted chi-square tests on the demographic distributions of all sample members, regardless of response status, to determine if any differences existed between the experimental groups before data collection started. We found no significant differences for almost all demographic variables; however, there was a meaningful difference of approximately four percentage points by sex: the group that received texts had a higher percentage of males, and the group that did not receive texts had a correspondingly higher percentage of females.
We performed the same weighted chi-square tests on the demographic distributions of respondents. After adjusting for multiple comparisons, only sex was significantly different between the experimental groups. We are reluctant to conclude that this is a true significant difference, as there was already a meaningful difference of four percentage points by sex between the experimental groups before any treatments were administered. Although age group and race showed no significant differences after applying the Bonferroni-adjusted alpha level, there were some meaningful differences larger than four percentage points between the experimental groups for these two variables: the control group had a higher percentage of respondents aged 40-54 while the text message group had a higher percentage aged 55-75 (see Table 8), and the control group had a higher percentage of Black respondents than the text message group. Tables of all weighted demographic respondent distributions can be found in Appendix D.
When looking at the unweighted demographic distributions of respondents, there is further evidence of no statistically significant differences between the demographic makeup of respondents. Tables of all unweighted demographic respondent distributions can be found in Appendix E.
We were particularly interested in whether the text message reminders resulted in different response distributions by generation. Tables 8 and 9 below provide the weighted and unweighted distributions of respondents by age group for the two experimental groups. Although the weighted chi-square test indicated a significant difference by age group between the experimental groups, no pairwise comparison was significant after applying the Bonferroni-adjusted alpha level of 0.03. Additionally, the text message reminders did not appear to increase response for the youngest age group in this experiment.
Table 8: Weighted Respondent Distributions for Age Group
| Age Group | Text Sent Frequency | Text Sent Percent (SE) | No Text Frequency | No Text Percent (SE) | p-value |
| --- | --- | --- | --- | --- | --- |
| 17-39 | 9,300 | 29.4 (0.7) | 650 | 28.2 (2.2) | 0.6329 |
| 40-54 | 5,400 | 30.1 (0.7) | 450 | 35.7 (2.7) | 0.0478 |
| 55-75 | 6,900 | 40.5 (0.7) | 500 | 36.1 (2.5) | 0.0831 |
| Total | 21,500 | 100.0 | 1,600 | 100.0 | |
Source: U.S. Census Bureau, 2023 National Survey of College Graduates Text Message Experiment
Chi-square p-value = 0.0884; t-tests are two-sided
Table 9: Unweighted Respondent Distributions for Age Group
| Age Group | Text Sent Frequency | Text Sent Percent (SE) | No Text Frequency | No Text Percent (SE) | p-value |
| --- | --- | --- | --- | --- | --- |
| 17-39 | 9,300 | 43.2 (0.3) | 650 | 42.0 (1.2) | 0.3412 |
| 40-54 | 5,400 | 24.8 (0.3) | 450 | 27.1 (1.1) | 0.0456 |
| 55-75 | 6,900 | 31.9 (0.3) | 500 | 30.8 (1.2) | 0.3664 |
| Total | 21,500 | 100.0 | 1,600 | 100.0 | |
Source: U.S. Census Bureau, 2023 National Survey of College Graduates Text Message Experiment
Chi-square p-value = 0.1218; t-tests are two-sided
In this section, we present results from Research Question 2. To determine the impact of sending text messages on the operational follow-up workload, we focused this analysis on treatment cases that were sent at least one text message. The analysis of cost includes both experimental groups, to determine if text messages lead to lower data collection costs, by reducing the number of follow-up CATI calls and mailings to nonrespondents.
We used the output from Qualtrics and found that few respondents opted out after receiving a text and that the undeliverable rate was relatively stable. Qualtrics assigns a status to each text message: ‘Message Sent’, ‘Message Failed’, or ‘Message Soft Bounce’. For this analysis, we combined ‘Message Failed’ and ‘Message Soft Bounce’ into the undeliverable rate. Table 10 provides the percent that opted out and the percent undelivered after each text message.
Table 10: Opt-out Rates and Undeliverable Rates
| Date Text Message Sent | Number of Texts Sent | Number Opted Out | Percent Opted Out | Number Undelivered | Percent Undelivered |
|---|---|---|---|---|---|
| 8/17/2023 | 8,500 | 200 | 2.1% | 400 | 5.0% |
| 8/24/2023 | 7,700 | 100 | 1.5% | 300 | 4.0% |
| 8/31/2023 | 7,300 | 100 | 1.5% | 400 | 5.4% |
| 9/7/2023 | 7,000 | 100 | 1.6% | 300 | 4.3% |
| 10/10/2023 | 5,700 | 60 | 1.1% | 250 | 4.3% |
Source: U.S. Census Bureau, 2023 National Survey of College Graduates Text Message Experiment
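The rates in Table 10 reduce to simple proportions of the per-message Qualtrics statuses. A minimal sketch (the function name and the illustrative counts are ours, not from the report; ‘Message Failed’ and ‘Message Soft Bounce’ are combined as undeliverable, as described above):

```python
from collections import Counter

def text_rates(statuses, opt_outs):
    """Percent undelivered from per-message statuses, and percent opted
    out from a count of STOP replies (inputs are illustrative)."""
    counts = Counter(statuses)
    sent = sum(counts.values())
    # 'Message Failed' and 'Message Soft Bounce' together form the
    # undeliverable count, as in the report's analysis.
    undelivered = counts["Message Failed"] + counts["Message Soft Bounce"]
    return {
        "pct_undelivered": 100 * undelivered / sent,
        "pct_opted_out": 100 * opt_outs / sent,
    }

# Illustrative counts only, not the actual experiment data.
statuses = ["Message Sent"] * 95 + ["Message Failed"] * 3 + ["Message Soft Bounce"] * 2
rates = text_rates(statuses, opt_outs=2)
```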
Additionally, we investigated whether cases that opted out of receiving text messages ultimately responded to the NSCG. Table 11 shows that approximately 21.7 percent of the sample cases that opted out after receiving a text message ultimately responded to the survey, compared with approximately 37.6 percent of the sample cases that never opted out. A two-sided t-test of this difference against zero was significant: cases that opted out of receiving text messages responded at a lower rate.
Table 11: Proportion of Respondents that Did and Did Not Opt-Out
| Opted Out | Sample Size | Number of Respondents | Percent (SE) | p-value |
|---|---|---|---|---|
| Yes | 600 | 150 | 21.7 (1.7) | <0.0001* |
| No | 7,900 | 3,000 | 37.6 (0.5) | |
Source: U.S. Census Bureau, 2023 National Survey of College Graduates Text Message Experiment
*Statistically significant at the alpha = 0.1 level; t-test is two sided.
We used the web survey paradata to calculate the proportion of cases that logged into the web instrument after 5 p.m. on the same day a text message was sent and found that only a small percentage of cases did so.
Table 12: Proportion of Cases That Logged into the Web Instrument
| Date of Text | Number of Texts Sent | Number of Web Logins after 5pm | Percent |
|---|---|---|---|
| 8/17/2023 | 8,500 | 250 | 2.7 |
| 8/24/2023 | 7,700 | 90 | 1.1 |
| 8/31/2023 | 7,300 | 70 | 0.9 |
| 9/7/2023 | 7,000 | 60 | 0.9 |
| 10/10/2023 | 5,700 | 50 | 0.9 |
Source: U.S. Census Bureau, 2023 National Survey of College Graduates Text Message Experiment
Finally, we calculated the average cost per case using the average number of phone calls and the average number of mailings for each group. We limited this analysis to cases that had not responded by the time the first text was sent on August 17.
Table 13 shows that the average CATI cost per case was $13.83 for the text message group and $14.59 for the control group. The control group, which did not receive texts, required a higher average number of calls per case and thus cost $0.75 more per case in CATI operations.
Table 13: Average Cost of CATI Call Per Case
| Experimental Group | Sample Size | Average Number of Phone Calls | Average Cost Per Case | Difference Between Average Cost Per Case |
|---|---|---|---|---|
| Text Sent | 8,900 | 2.69 | $13.83 | $0.75 |
| No Text | 650 | 2.84 | $14.59 | |
Source: U.S. Census Bureau, 2023 National Survey of College Graduates Text Message Experiment
Note: A CATI call attempt cost $5.14.
For the estimated cost of mailings, we did not include any mailings sent before the texting period began. The old cohort received the week 18 mailing and the week 23 mailing, which cost $7.40 and $0.78 per mailing, respectively. The week 18 mailing included a paper questionnaire, while the week 23 mailing included only a letter. Table 14 shows that for the week 18 mailing, the group that did not receive texts cost $0.49 more per case; for the week 23 mailing, it cost $0.02 more per case. Estimating the total cost of administering each mailing to all sample cases that had not responded by the start of the text messaging period, the group that received the texts shows an estimated cost savings of $4,651.03 for the week 18 mailing and $200.47 for the week 23 mailing.
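The per-case cost figures above follow directly from the unit costs. A minimal sketch using the published rounded figures (because the published sample sizes and averages are rounded for disclosure, some cells of Tables 13 and 14 may not reproduce exactly):

```python
# Unit costs from the report's table notes.
CATI_COST_PER_CALL = 5.14       # cost per CATI call attempt
WEEK18_COST_PER_MAILING = 7.40  # paper questionnaire mailing
WEEK23_COST_PER_MAILING = 0.78  # letter-only mailing

def avg_cati_cost(avg_calls_per_case: float) -> float:
    """Average CATI cost per case: average call attempts times unit cost."""
    return avg_calls_per_case * CATI_COST_PER_CALL

def avg_mailing_cost(n_mailings: int, sample_size: int, cost_per_mailing: float) -> float:
    """Mailing cost spread over all cases in the group."""
    return n_mailings * cost_per_mailing / sample_size

# Text-sent group: 2.69 calls per case; 6,400 week 18 mailings over 8,900 cases.
cati = round(avg_cati_cost(2.69), 2)
week18 = round(avg_mailing_cost(6400, 8900, WEEK18_COST_PER_MAILING), 2)
```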
Table 14: Average Cost of Mailing and Estimated Total Cost of Mailings for Weeks 18 and 23
| | Week 18: Text Sent | Week 18: No Text | Week 23: Text Sent | Week 23: No Text |
|---|---|---|---|---|
| Sample Size | 8,900 | 650 | 8,900 | 650 |
| Number of Mailings | 6,400 | 500 | 5,800 | 450 |
| Average Cost Per Case | $5.32 | $5.81 | $0.51 | $0.53 |
| Difference Between Average Cost | $0.49 | | $0.02 | |
| Estimated Total Cost of Mailing | $50,616.05 | $55,267.07 | $4,807.79 | $5,008.25 |
| Difference Between Total Cost of Mailing | $4,651.03 | | $200.47 | |
Source: U.S. Census Bureau, 2023 National Survey of College Graduates Text Message Experiment
Note: The week 18 mailing cost $7.40 per mailing. The week 23 mailing cost $0.78 per mailing.
It appears there could be cost savings from sending text message reminders, especially for later mailings of the paper questionnaire. When we transition to the Data Ingest for the Collection Enterprise (DICE) system in the future, costs could change significantly. The DICE Web Standards Team updates the Design Guidelines for U.S. Census Bureau Text Message Notifications annually, and modifications will be incorporated into the system as needed (DICE Web Standards Team, 2024). We are monitoring how the policy will influence future text message efforts.
Due to the effect the large weights had on the weighted final response rates, we draw conclusions from the unweighted response rate results. The unweighted results showed that the group that received text messages did not respond earlier or at a higher final rate, and there was no incremental change in response following each text. The unweighted demographic makeup of respondents showed that texting did not have an effect for any group of interest, including the youngest age group.
We found that the texting operation had a relatively low impact on follow-up workload. We saw a low undeliverable rate and a low opt-out rate; the low opt-out rate is encouraging because it suggests that even after multiple text messages, we were not frustrating recipients or doing harm. In fact, some cases still responded after opting out, suggesting they did not want additional text messages but were not put off by the operation. Using the cost data available for this analysis, there was evidence of a modest cost savings from mailing fewer week 18 paper questionnaires to the text message group, despite the low impact of text messaging on the follow-up workload.
We note that the first text message was not sent until three months of data collection had elapsed, because the original goal was to use texting to replace CATI calls. If there is interest in broadening the goals of text messaging, especially now that CATI operations might not be part of future NSCG data collection, further experimentation with texting would be beneficial. For instance, texting before the first paper questionnaire is sent could reduce the universe of a costly paper questionnaire mailing.
For future research, we could experiment with the timing or the content of the text messages. We found that only a small percentage of cases logged into the web instrument after 5 p.m. on the same day a text was sent. Recent research showed that texts sent earlier in the day can lead to higher response (Nichols, Feuer, Olmsted-Hawala & Gliozzi, 2024). Variations in the wording of the text message could also be tested, as well as how frequently to send them.
Bottini, C., Satisky, B., & Heimel, S. (2022). 2021 NSCG Contact Strategy Report.
De Bruijne, M., & Wijnant, A. (2014). Improving response rates and questionnaire design for mobile web surveys. Public Opinion Quarterly, 78(4), 951-962.
DICE Web Standards Team (2024). Design Guidelines for U.S. Census Bureau Text Message Notifications.
Fields, J., Childs, J., & Eggleston, C. (2021). Reaching a Household Sample During a Pandemic: Contact Strategies for the Experimental Rapid-Response Household Pulse Survey. Presented at the 2021 American Association for Public Opinion Research Annual Conference.
June, S. (2021). Could Gen Z Free the World From Email? The New York Times.
Kanticar, K., & Marlar, J. (2017). Novelty of Text Messages as Reminders for Web Surveys: Does it Last? Presented at the 2017 American Association for Public Opinion Research annual meeting. New Orleans, LA, May 18-21, 2017.
Mavletova, A., & Couper, M. P. (2014). Mobile web survey design: scrolling versus paging, SMS versus e-mail invitations. Journal of Survey Statistics and Methodology, 2(4), 498-518.
McGeeney, K., & Yan, H. Y. (2016). Text Message Notifications for Web Surveys. Pew Research Center.
Nichols, E., Feuer, S., Olmsted-Hawala, E., & Gliozzi, R. (2024). Using a Text-To-Web Methodology for Measuring User Satisfaction with the Online 2020 Census. U.S. Census Bureau. Accessed October 18, 2025 at https://www.census.gov/library/working-papers/2024/adrm/rsm2024-03.html.
Pogue, D. (2015, March). Is Messaging Going to Kill E-mail? Scientific American, p. 27.
Snedecor, G. W., & Cochran, W. G. (1989). Statistical Methods (8th ed.). Ames, IA: Iowa State University Press.
We calculated the overall weighted response rates10 using Equation 1:
Equation 1: Weighted Response Rate
Response Rate = ER / (ER + ENR + e(UE))
where:
ER: Eligible respondents
ENR: Eligible nonrespondents
UE: Cases with unknown eligibility
e: Estimated proportion of cases with unknown eligibility (UE) expected to be eligible
The proportion of cases with unknown eligibility expected to be eligible (e) was estimated using the following equation:
e = (ER + ENR) / (ER + ENR + IE)
where IE (ineligible cases) are cases that were eligible for the initial NSCG mailing but, after responding, were deemed ineligible for the survey.
This weighted response rate used eligible respondents in the numerator (final disposition codes between 50 and 54 in Table 15). The denominator included eligible respondents, eligible nonrespondents (final disposition codes greater than or equal to 94 in Table 15), and an estimate of the proportion of unknown eligibility cases expected to be eligible (unknown eligibility cases have final disposition codes between 80 and 89 in Table 15). This proportion was estimated as the sum of respondents and nonrespondents divided by the sum of all sampled persons with known eligibility (including those deemed ineligible, final disposition codes between 60 and 79 in Table 15), multiplied by the sum of unknown eligibility cases.
We calculated the unweighted response rates using Equation 2:
Equation 2: Unweighted Response Rate
Response Rate = ER / (ER + ENR + UE + IE)
The unweighted response rate does not take into account unknown eligibility or ineligibles. These cases are considered nonrespondents.
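The two response rates can be written as small functions of the disposition totals. A minimal sketch, assuming er, enr, ie, and ue hold the ER, ENR, IE, and UE totals defined above (weight sums for Equation 1, case counts for Equation 2):

```python
def weighted_response_rate(er: float, enr: float, ie: float, ue: float) -> float:
    """Equation 1: eligible respondents over eligibles plus the share of
    unknown-eligibility cases expected to be eligible (inputs are weight sums)."""
    # e: estimated eligibility rate among cases with known eligibility.
    e = (er + enr) / (er + enr + ie)
    return er / (er + enr + e * ue)

def unweighted_response_rate(er: int, enr: int, ie: int, ue: int) -> float:
    """Equation 2: unknown-eligibility and ineligible cases are treated
    as nonrespondents."""
    return er / (er + enr + ie + ue)

# Illustrative totals only, not the actual experiment data.
rr_weighted = weighted_response_rate(100, 50, 25, 20)
rr_unweighted = unweighted_response_rate(100, 50, 25, 20)
```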
Table 15: Disposition Codes for Eligible and Ineligible Respondents
| Status | Disposition Code | Description |
|---|---|---|
| Eligible Respondents | 50 | Eligible complete – mail |
| | 51 | Eligible complete – CATI |
| | 52 | Eligible complete – web |
| | 54 | Eligible complete – TQA incoming call interview via CATI |
| Ineligibles | 60 | Emigrant – mail |
| | 61 | Emigrant – CATI |
| | 62 | Emigrant – web |
| | 64 | Emigrant – incomplete (TQA / locating / correspondence) |
| | 65 | Temporarily institutionalized |
| | 67 | Terminally ill / permanently institutionalized |
| | 68 | Over 75 years old |
| | 69 | Deceased |
| | 70 | Degree ineligible – no baccalaureate or higher degree earned |
| | 71 | Frame ineligible – earliest degree earned after ACS interview year |
| | 78 | Duplicate |
| | 79 | Other confirmed ineligible |
| Unknown Eligibility | 80 | Unable to locate |
| | 81 | SPV failure – wrong sampled person (FINAL) |
| | 82 | Language / hearing barrier |
| | 83 | Noncontact – eligibility unknown |
| | 84 | Temporarily ill / absent and unable to confirm eligibility |
| | 85 | Final refusal and unable to confirm eligibility |
| | 86 | Congressional refusal and unable to confirm eligibility |
| | 87 | Unable to confirm eligibility and/or confirm reached correct SP |
| | 89 | Other nonresponse and unable to confirm eligibility |
| Eligible Nonrespondents | 94 | Eligible and temporarily ill / absent |
| | 95 | Eligible and final refusal – CATI |
| | 96 | Eligible and congressional refusal |
| | 97 | Eligible and missing critical complete items |
| | 99 | Other confirmed eligible nonresponse |
Source: U.S. Census Bureau, 2023 National Survey of College Graduates Text Message Experiment
Figure 1: Unweighted Cumulative Completion Rates
To calculate the minimum detectable difference between two response rates with fixed sample sizes, we used the formula from Snedecor and Cochran (1989) for determining the sample size when comparing two proportions:
d = (Za*/2 + Zb) × sqrt(D × (p1(1 − p1)/n1 + p2(1 − p2)/n2))
where:
d = minimum detectable difference
a* = alpha level adjusted for multiple comparisons
Za*/2 = critical value for the set alpha level, assuming a two-sided test
Zb = critical value for the set beta level
p1 = proportion for group 1
p2 = proportion for group 2
D = design effect due to unequal weighting
n1 = sample size for a single treatment group or control
n2 = sample size for a second treatment group or control
An alpha level of 0.10 was used in the calculations. The beta level, set to 0.10, was included in the formula to inflate the sample size and decrease the probability of committing a Type II error.
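A minimal sketch of the minimum detectable difference calculation under these definitions (the function name and example inputs are ours; the report's actual proportions and design effects are not shown here):

```python
from statistics import NormalDist

def minimum_detectable_difference(p1, p2, n1, n2, alpha=0.10, beta=0.10, deff=1.0):
    """d = (Za*/2 + Zb) * sqrt(D * (p1(1-p1)/n1 + p2(1-p2)/n2))."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided critical value at alpha
    z_beta = z.inv_cdf(1 - beta)        # critical value for power 1 - beta
    variance = deff * (p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (z_alpha + z_beta) * variance ** 0.5
```

Larger sample sizes or a smaller design effect shrink the detectable difference, which is why the small control group limits what this experiment can detect.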
Table 16: Demographic Variables
| Variable | Range | Type | Description |
|---|---|---|---|
| Age group | 1-3 | Categorical, ordinal | 1 = 17 to 39; 2 = 40 to 54; 3 = 55 to 75 |
| Race | 1-4 | Categorical, nominal | 1 = White; 2 = Black; 3 = Asian; 4 = AIAN or NHPI |
| Highest degree | 1-3 | Categorical, ordinal | 1 = Bachelor's or professional degree; 2 = Master's degree; 3 = Doctorate degree |
| Science and engineering status | 1, 2 | Categorical, binary | 1 = S&E degree or S&E occupation; 2 = no S&E degrees nor S&E occupation |
| Citizen status at birth flag | 1, 2 | Categorical, binary | 1 = U.S. citizen at birth; 2 = not a U.S. citizen at birth |
| Disability status | 1, 2 | Categorical, binary | 1 = at least moderate difficulty in at least one functional activity area; 2 = no more than slight difficulty in any functional activity area |
| Hispanic origin flag | 1, 2 | Categorical, binary | 1 = Hispanic; 2 = not Hispanic |
| Broad occupation group | 18 categories | Categorical, nominal | 11 = mathematical scientists; 12 = computer and information scientists; 20 = life scientists; 30 = physical scientists; 40 = social scientists, except psychologists; 41 = psychologists; 50 = engineers; 61 = S&E-related health occupations; 62 = S&E-related non-health occupations; 71 = postsecondary teacher in an S&E field; 72 = postsecondary teacher in a non-S&E field; 73 = secondary teacher in an S&E field; 74 = secondary teacher in a non-S&E field; 81 = non-S&E high interest occupation, S&E FOD; 82 = non-S&E low interest occupation, non-S&E FOD; 83 = non-S&E occupation, non-S&E FOD; 91 = not working, S&E FOD or S&E previous occupation; 92 = not working, non-S&E FOD and non-S&E previous occupation or never worked |
| Young graduate oversample group eligibility indicator | 1, 2 | Categorical, binary | 1 = S&E case that has earned a bachelor's or master's degree in the last five years; 2 = non-S&E case or S&E case that has not earned a bachelor's or master's degree in the last five years |
| Sex | 1, 2 | Categorical, binary | 1 = Male; 2 = Female |
| Work status | 1-3 | Categorical, nominal | 1 = Employed; 2 = Unemployed; 3 = Not in the labor force |
Table 17: Weighted Respondent Distributions for Race
| Race | Text Sent: Frequency | Text Sent: Percent (SE) | No Text: Frequency | No Text: Percent (SE) | p-value |
|---|---|---|---|---|---|
| White | 15,000 | 80.2 (0.5) | 1,100 | 76.5 (2.8) | 0.1960 |
| Black | 1,800 | 8.4 (0.4) | 150 | 13.2 (2.9) | 0.0966 |
| Asian | 4,000 | 9.0 (0.3) | 300 | 8.2 (1.1) | 0.5146 |
| AIAN/NHPI | 650 | 2.4 (0.3) | 50 | 2.1 (0.8) | 0.6868 |
| Total | 21,500 | 100.0 | 1,600 | 100.0 | |
Source: U.S. Census Bureau, 2023 National Survey of College Graduates Text Message Experiment
Chi-square p-value = 0.0848
Note: Bonferroni-adjusted alpha level = 0.025 for multiple comparisons
Table 18: Weighted Respondent Distributions for Sex
| Sex | Text Sent: Frequency | Text Sent: Percent (SE) | No Text: Frequency | No Text: Percent (SE) | p-value |
|---|---|---|---|---|---|
| Male | 12,500 | 48.6 (0.6) | 900 | 41.9 (2.8) | 0.0191* |
| Female | 9,300 | 51.4 (0.6) | 700 | 58.1 (2.8) | 0.0191* |
| Total | 21,500 | 100.0 | 1,600 | 100.0 | |
Source: U.S. Census Bureau, 2023 National Survey of College Graduates Text Message Experiment
Chi-square p-value = 0.0176
*Statistically significant at the Bonferroni-adjusted alpha = 0.05 level
Table 19: Weighted Respondent Distributions for Citizenship Status
| Citizenship Status | Text Sent: Frequency | Text Sent: Percent (SE) | No Text: Frequency | No Text: Percent (SE) |
|---|---|---|---|---|
| Born in the U.S., Puerto Rico, etc. or born abroad of a U.S. citizen parent | 16,000 | 86.7 (0.4) | 1,200 | 85.2 (2.3) |
| Naturalized or not a U.S. citizen | 5,400 | 13.3 (0.4) | 400 | 14.8 (2.3) |
| Total | 21,500 | 100.0 | 1,600 | 100.0 |
Source: U.S. Census Bureau, 2023 National Survey of College Graduates Text Message Experiment
Chi-square p-value = 0.5068
Table 20: Weighted Respondent Distributions for Disability Status
| Disability Status | Text Sent: Frequency | Text Sent: Percent (SE) | No Text: Frequency | No Text: Percent (SE) |
|---|---|---|---|---|
| At least moderate difficulty in one functional activity area | 1,400 | 5.7 (0.3) | 100 | 4.7 (1.0) |
| No more than a slight difficulty in any functional activity area | 20,000 | 94.3 (0.3) | 1,500 | 95.3 (1.0) |
| Total | 21,500 | 100.0 | 1,600 | 100.0 |
Source: U.S. Census Bureau, 2023 National Survey of College Graduates Text Message Experiment
Chi-square p-value = 0.3570
Table 21: Weighted Respondent Distributions for Highest Degree
| Highest Degree | Text Sent: Frequency | Text Sent: Percent (SE) | No Text: Frequency | No Text: Percent (SE) |
|---|---|---|---|---|
| Bachelor's or professional degree | 11,000 | 65.0 (0.6) | 800 | 65.8 (2.3) |
| Master's degree | 8,200 | 30.1 (0.6) | 600 | 29.1 (2.2) |
| Doctorate degree | 2,600 | 4.9 (0.2) | 200 | 5.0 (0.8) |
| Total | 21,500 | 100.0 | 1,600 | |
Source: U.S. Census Bureau, 2023 National Survey of College Graduates Text Message Experiment
Chi-square p-value = 0.8890
Table 22: Weighted Respondent Distributions for Hispanic Origin
| Hispanic Origin | Text Sent: Frequency | Text Sent: Percent (SE) | No Text: Frequency | No Text: Percent (SE) |
|---|---|---|---|---|
| Hispanic | 2,500 | 9.4 (0.4) | 200 | 9.4 (1.6) |
| Not Hispanic | 19,000 | 90.6 (0.4) | 1,400 | 90.6 (1.6) |
| Total | 21,500 | 100.0 | 1,600 | 100.0 |
Source: U.S. Census Bureau, 2023 National Survey of College Graduates Text Message Experiment
Chi-square p-value = 0.9648
Table 23: Weighted Respondent Distributions for Broad Occupation Category
| Occupation Category | Text Sent: Frequency | Text Sent: Percent (SE) | No Text: Frequency | No Text: Percent (SE) |
|---|---|---|---|---|
| Mathematical scientists | 400 | 0.5 (0.1) | 30 | 0.5 (0.2) |
| Computer and information sciences | 2,000 | 6.0 (0.3) | 150 | 4.5 (0.6) |
| Life scientists | 950 | 1.3 (0.1) | 70 | 1.2 (0.3) |
| Physical scientists | 600 | 0.7 (0.1) | 30 | 0.4 (0.1) |
| Social scientists, except psychologists | 450 | 0.6 (0.1) | 30 | 0.3 (0.1) |
| Psychologists | 250 | 0.4 (<0.1) | <15 | D |
| Engineers | 2,200 | 2.4 (0.1) | 150 | 3.0 (0.5) |
| S&E-related health occupations | 1,300 | 8.6 (0.4) | 90 | 11.2 (2.9) |
| S&E-related non-health occupations | 1,300 | 3.4 (0.2) | 100 | 3.1 (0.6) |
| Postsecondary teacher in an S&E field | 750 | 1.3 (0.1) | 60 | 1.0 (0.2) |
| Postsecondary teacher in a non-S&E field | 350 | 0.9 (0.1) | 30 | 0.9 (0.3) |
| Secondary teacher in an S&E field | 600 | 1.4 (0.1) | 50 | 1.8 (0.6) |
| Secondary teacher in a non-S&E field | 250 | 1.5 (0.2) | <15 | D |
| Non-S&E high interest occupation, S&E FOD | 3,500 | 12.5 (0.4) | 250 | 12.5 (1.4) |
| Non-S&E low interest occupation, non-S&E FOD | 1,400 | 6.6 (0.4) | 100 | 4.5 (0.8) |
| Non-S&E occupation, non-S&E FOD | 1,900 | 28.0 (0.7) | 150 | 27.0 (2.5) |
| Not working, S&E FOD or S&E previous occupation | 2,800 | 13.3 (0.4) | 200 | 13.9 (1.5) |
| Not working, non-S&E FOD and non-S&E previous occupation or never worked | 650 | 10.7 (0.5) | 40 | 12.9 (2.3) |
| Total | 21,500 | 100.0 | 1,600 | |
Source: U.S. Census Bureau, 2023 National Survey of College Graduates Text Message Experiment
Chi-square p-value = 0.2758
Table 24: Weighted Respondent Distributions for Oversample Indicator
| Oversample Indicator | Text Sent: Frequency | Text Sent: Percent (SE) | No Text: Frequency | No Text: Percent (SE) |
|---|---|---|---|---|
| S&E case that has earned a bachelor's or master's degree in the last five years | 1,500 | 5.7 (0.3) | 100 | 7.1 (2.8) |
| Non-S&E case, or S&E case that has not earned a bachelor's or master's degree in the last five years | 20,000 | 94.3 (0.3) | 1,500 | 92.9 (2.8) |
| Total | 21,500 | 100.0 | 1,600 | 100.0 |
Source: U.S. Census Bureau, 2023 National Survey of College Graduates Text Message Experiment
Chi-square p-value = 0.5854
Table 25: Weighted Respondent Distributions for Science & Engineering Status
| S&E Status | Text Sent: Frequency | Text Sent: Percent (SE) | No Text: Frequency | No Text: Percent (SE) |
|---|---|---|---|---|
| S&E degree or occupation | 18,500 | 58.7 (0.7) | 1,400 | 57.8 (3.0) |
| No S&E degrees or S&E occupation | 3,000 | 41.3 (0.7) | 200 | 42.2 (3.0) |
| Total | 21,500 | 100.0 | 1,600 | 100.0 |
Source: U.S. Census Bureau, 2023 National Survey of College Graduates Text Message Experiment
Chi-square p-value = 0.7807
Table 26: Weighted Respondent Distributions for Work Status
| Work Status | Text Sent: Frequency | Text Sent: Percent (SE) | No Text: Frequency | No Text: Percent (SE) |
|---|---|---|---|---|
| Employed | 19,000 | 80.6 (0.6) | 1,400 | 80.5 (2.2) |
| Unemployed | 400 | 3.1 (0.3) | 40 | 2.3 (0.8) |
| Not in the labor force | 2,000 | 16.3 (0.5) | 150 | 17.2 (2.1) |
| Total | 21,500 | 100.0 | 1,600 | |
Source: U.S. Census Bureau, 2023 National Survey of College Graduates Text Message Experiment
Chi-square p-value = 0.6611
Table 27: Unweighted Respondent Distributions for Race
| Race | Text Sent: Frequency | Text Sent: Percent (SE) | No Text: Frequency | No Text: Percent (SE) |
|---|---|---|---|---|
| White | 15,000 | 70.2 (0.3) | 1,100 | 70.6 (1.2) |
| Black | 1,800 | 8.3 (0.2) | 150 | 8.3 (0.7) |
| Asian | 4,000 | 18.6 (0.3) | 300 | 17.8 (1.0) |
| AIAN/NHPI | 650 | 3.0 (0.1) | 50 | 3.4 (0.4) |
| Total | 21,500 | 100.0 | 1,600 | 100.0 |
Source: U.S. Census Bureau, 2023 National Survey of College Graduates Text Message Experiment
Chi-square p-value = 0.6858
Table 28: Unweighted Respondent Distributions for Sex
| Sex | Text Sent: Frequency | Text Sent: Percent (SE) | No Text: Frequency | No Text: Percent (SE) |
|---|---|---|---|---|
| Male | 12,500 | 56.8 (0.3) | 900 | 56.6 (1.3) |
| Female | 9,300 | 43.2 (0.3) | 700 | 43.4 (1.3) |
| Total | 21,500 | 100.0 | 1,600 | 100.0 |
Source: U.S. Census Bureau, 2023 National Survey of College Graduates Text Message Experiment
Chi-square p-value = 0.8757
Table 29: Unweighted Respondent Distributions for Citizenship Status
| Citizenship Status | Text Sent: Frequency | Text Sent: Percent (SE) | No Text: Frequency | No Text: Percent (SE) |
|---|---|---|---|---|
| Born in the U.S., Puerto Rico, etc. or born abroad of a U.S. citizen parent | 16,000 | 75.1 (0.3) | 1,200 | 74.6 (1.1) |
| Naturalized or not a U.S. citizen | 5,400 | 24.9 (0.3) | 400 | 25.4 (1.1) |
| Total | 21,500 | 100.0 | 1,600 | 100.0 |
Source: U.S. Census Bureau, 2023 National Survey of College Graduates Text Message Experiment
Chi-square p-value = 0.7088
Table 30: Unweighted Respondent Distributions for Disability Status
| Disability Status | Text Sent: Frequency | Text Sent: Percent (SE) | No Text: Frequency | No Text: Percent (SE) |
|---|---|---|---|---|
| At least moderate difficulty in one functional activity area | 1,400 | 6.4 (0.2) | 100 | 6.6 (0.6) |
| No more than a slight difficulty in any functional activity area | 20,000 | 93.6 (0.2) | 1,500 | 93.4 (0.6) |
| Total | 21,500 | 100.0 | 1,600 | 100.0 |
Source: U.S. Census Bureau, 2023 National Survey of College Graduates Text Message Experiment
Chi-square p-value = 0.7320
Table 31: Unweighted Respondent Distributions for Highest Degree
| Highest Degree | Text Sent: Frequency | Text Sent: Percent (SE) | No Text: Frequency | No Text: Percent (SE) |
|---|---|---|---|---|
| Bachelor's or professional degree | 11,000 | 50.2 (0.3) | 800 | 50.5 (1.3) |
| Master's degree | 8,200 | 37.9 (0.3) | 600 | 37.9 (1.2) |
| Doctorate degree | 2,600 | 12.0 (0.2) | 200 | 11.6 (0.8) |
| Total | 21,500 | 100.0 | 1,600 | 100.0 |
Source: U.S. Census Bureau, 2023 National Survey of College Graduates Text Message Experiment
Chi-square p-value = 0.9067
Table 32: Unweighted Respondent Distributions for Hispanic Origin
| Hispanic Origin | Text Sent: Frequency | Text Sent: Percent (SE) | No Text: Frequency | No Text: Percent (SE) |
|---|---|---|---|---|
| Hispanic | 2,500 | 11.4 (0.2) | 200 | 11.2 (0.8) |
| Not Hispanic | 19,000 | 88.6 (0.2) | 1,400 | 88.8 (0.8) |
| Total | 21,500 | 100.0 | 1,600 | 100.0 |
Source: U.S. Census Bureau, 2023 National Survey of College Graduates Text Message Experiment
Chi-square p-value = 0.8371
Table 33: Unweighted Respondent Distributions for Broad Occupation Category
| Occupation Category | Text Sent: Frequency | Text Sent: Percent (SE) | No Text: Frequency | No Text: Percent (SE) |
|---|---|---|---|---|
| Mathematical scientists | 400 | 1.8 (0.1) | 30 | 1.7 (0.3) |
| Computer and information sciences | 2,000 | 9.2 (0.2) | 150 | 9.2 (0.7) |
| Life scientists | 950 | 4.4 (0.1) | 70 | 4.5 (0.5) |
| Physical scientists | 600 | 2.7 (0.1) | 30 | 2.2 (0.4) |
| Social scientists, except psychologists | 450 | 2.1 (0.1) | 30 | 2.0 (0.4) |
| Psychologists | 250 | 1.1 (0.1) | <15 | D |
| Engineers | 2,200 | 10.4 (0.2) | 150 | 10.9 (0.8) |
| S&E-related health occupations | 1,300 | 5.9 (0.2) | 90 | 5.7 (0.6) |
| S&E-related non-health occupations | 1,300 | 6.2 (0.2) | 100 | 6.6 (0.6) |
| Postsecondary teacher in an S&E field | 750 | 3.4 (0.1) | 60 | 3.7 (0.5) |
| Postsecondary teacher in a non-S&E field | 350 | 1.5 (0.1) | 30 | 1.7 (0.3) |
| Secondary teacher in an S&E field | 600 | 2.7 (0.1) | 50 | 2.9 (0.4) |
| Secondary teacher in a non-S&E field | 250 | 1.1 (0.1) | <15 | D |
| Non-S&E high interest occupation, S&E FOD | 3,500 | 16.4 (0.3) | 250 | 16.8 (0.9) |
| Non-S&E low interest occupation, non-S&E FOD | 1,400 | 6.6 (0.2) | 100 | 6.1 (0.6) |
| Non-S&E occupation, non-S&E FOD | 1,900 | 8.8 (0.2) | 150 | 8.4 (0.7) |
| Not working, S&E FOD or S&E previous occupation | 2,800 | 12.9 (0.2) | 200 | 13.2 (0.9) |
| Not working, non-S&E FOD and non-S&E previous occupation or never worked | 650 | 2.9 (0.1) | 40 | 2.8 (0.4) |
| Total | 21,500 | 100.0 | 1,600 | 100.0 |
Source: U.S. Census Bureau, 2023 National Survey of College Graduates Text Message Experiment
Chi-square p-value = 0.9542
Table 34: Unweighted Respondent Distributions for Oversample Indicator
| Oversample Indicator | Text Sent: Frequency | Text Sent: Percent (SE) | No Text: Frequency | No Text: Percent (SE) |
|---|---|---|---|---|
| S&E case that has earned a bachelor's or master's degree in the last five years | 1,500 | 7.1 (0.2) | 100 | 6.6 (0.6) |
| Non-S&E case, or S&E case that has not earned a bachelor's or master's degree in the last five years | 20,000 | 92.9 (0.2) | 1,500 | 93.4 (0.6) |
| Total | 21,500 | 100.0 | 1,600 | 100.0 |
Source: U.S. Census Bureau, 2023 National Survey of College Graduates Text Message Experiment
Chi-square p-value = 0.4437
Table 35: Unweighted Respondent Distributions for Science & Engineering Status
| S&E Status | Text Sent: Frequency | Text Sent: Percent (SE) | No Text: Frequency | No Text: Percent (SE) |
|---|---|---|---|---|
| S&E degree or occupation | 18,500 | 86.3 (0.2) | 1,400 | 86.5 (0.9) |
| No S&E degrees or S&E occupation | 3,000 | 13.7 (0.2) | 200 | 13.5 (0.9) |
| Total | 21,500 | 100.0 | 1,600 | 100.0 |
Source: U.S. Census Bureau, 2023 National Survey of College Graduates Text Message Experiment
Chi-square p-value = 0.8777
Table 36: Unweighted Respondent Distributions for Work Status
| Work Status | Text Sent: Frequency | Text Sent: Percent (SE) | No Text: Frequency | No Text: Percent (SE) |
|---|---|---|---|---|
| Employed | 19,000 | 89.0 (0.2) | 1,400 | 89.5 (0.8) |
| Unemployed | 400 | 1.9 (0.1) | 40 | 2.3 (0.4) |
| Not in the labor force | 2,000 | 9.1 (0.2) | 150 | 8.2 (0.7) |
| Total | 21,500 | 100.0 | 1,600 | 100.0 |
Source: U.S. Census Bureau, 2023 National Survey of College Graduates Text Message Experiment
Chi-square p-value = 0.3403
1 The U.S. Census Bureau has reviewed this data product to ensure appropriate access, use, and disclosure avoidance protection of the confidential source data used to produce this product (Data Management System (DMS) number: P-7533594, Disclosure Review Board (DRB) approval number: CBDRB-FY25-POP001-0003).
2 We used a third party to send out text messages and current laws require a prior relationship to exist with the recipient before they can be texted.
3 The cases that responded to NSCG by CATI in 2021 were scheduled to be called earlier in 2023 data collection than cases that hadn't responded by CATI. Since the original goal of this experiment was to see if we could use texts to replace phone calls, we wanted to structure the data collection as consistently as possible for all cases.
4 A text segment is generally 160 characters.
5 Sample cases could opt out of receiving additional text messages by replying STOP to the text reminder.
6 Analysis of research question 2 was limited to sample cases that received at least one text. Limiting the universe required the creation of new weights that would total the entire population, which we did not have.
7 For disclosure purposes, the SAS code used for programming and verifying results will be saved on the M drive under the DSMD Survey Methodology area folder.
8 Equation 1 requires use of all replicate weights, so Equation 2 was used to calculate unweighted response rates and standard errors.
9 We only considered cases that logged into the web instrument after 5 p.m. on the same day a text message was sent. We recognize that text messages could still have an effect if cases logged into the web instrument the morning after receiving one, but such logins are not counted in this analysis.
10 We used experimental base weights from the appropriate weight file.
11 Due to rounding rules for reporting data, distributions may not always add to reported total.
Author: Christine Bottini (CENSUS/DSMD FED)