2011 ACS Internet Tests - Results from First Test in April 2011

The American Community Survey
OMB: 0607-0810
March 28, 2012
2012 AMERICAN COMMUNITY SURVEY RESEARCH AND EVALUATION REPORT MEMORANDUM SERIES
#ACS12-RER-13-R1
DSSD 2012 AMERICAN COMMUNITY SURVEY MEMORANDUM SERIES
#ACS12-MP-01-R1

MEMORANDUM FOR:  ACS Research and Evaluation Steering Committee

From:            David C. Whitford /Signed/
                 Chief, Decennial Statistical Studies Division

Prepared by:     Jennifer Tancreto
                 Chief, ACS Data Collection Methods Branch
                 Decennial Statistical Studies Division

Subject:         Revised - 2011 American Community Survey Internet Tests: Results from
                 First Test in April 2011

Attached is the final American Community Survey Research and Evaluation report “2011 American
Community Survey Internet Tests: Results from First Test in April 2011.” The Internet tests focused on
evaluating the feasibility of providing an Internet response mode to addresses sampled for the American
Community Survey. The main objective of the tests was to determine the best way to present the
Internet mode in the ACS mailings to maximize self-response. This report summarizes the results of the
first ACS Internet Test conducted in April 2011.
In March 2012, this report was revised to make minor changes to item nonresponse rates for three
questions (Age/DOB, Hispanic Origin, and Race) provided in Table 13 on page 21. The item nonresponse
rates for these variables were lowered by 0.1 to 0.4 percentage points. The overall conclusions for this
table have not changed.
If you have any questions about this report, please contact Jennifer Tancreto at 301-763-4250.
Attachment

cc:
ACS Research and Evaluation Team
Debbie Griffin (ACSO)
Todd Hughes
Debbie Klein
Andrew Roberts
Brian Wilson
Kathy Ashenfelter (CSM)
Temika Holland
Beth Nichols
Victor Quach
Arnold Jackson (DIR)
Nancy Bates
Frank Vitrano
Mary Ann Chapin (DMD)
Justin McLaughlin
Alan Berlinger (DSCMO)
Tim Gilbert (DSD)
Tony Tersine (DSSD)
Michael Bentley
Mary Davis
Steven Hefter
Joan Hill
Rachel Horwitz
Brenna Matthews
Michelle Ruiter
Jennifer Tancreto
Mary Frances Zelenak
David Johnson (SEHSD)
Scott Boggess
Bob Kominski
Enrique Lamas (POP)
Colleen Hughes
Anne Ross
Janice Valdisera

American Community Survey Research and Evaluation Program
February 22, 2012
Revised March 28, 2012

2011 American Community
Survey Internet Tests: Results
from First Test in April 2011
FINAL REPORT

Jennifer Guarino Tancreto, Mary Frances Zelenak,
Mary Davis, Michelle Ruiter, Brenna Matthews
Decennial Statistical Studies Division

TABLE OF CONTENTS
EXECUTIVE SUMMARY ..................................................................................................................... v
1. BACKGROUND ............................................................................................................................ 1
1.1 Motivation for the April 2011 American Community Survey Internet Test ....................... 1
1.2 Previous Testing .................................................................................................................. 1
2. METHODOLOGY ........................................................................................................................ 3
2.1 Experimental Treatments ................................................................................................... 3
2.2 Stratification........................................................................................................................ 5
2.3 Research Questions ............................................................................................................ 6
2.4 Design of the ACS Internet Survey ...................................................................................... 6
2.5 Follow-up Interview ............................................................................................................ 8
2.6 Analysis Design.................................................................................................................... 8
3. LIMITATIONS .............................................................................................................................. 9
3.1 Data for Incomplete Internet Responses ............................................................................ 9
3.2 No Replacement Questionnaire Mailing to Internet Cases Considered “Sufficient
Partial Interviews” .................................................................................................................... 9
3.3 No CATI Nonresponse Follow-up for Experimental Panels ................................................ 9
3.4 Rates of Cases Failing the Automated Clerical Edit Review (Flagged for Failed Edit
Follow-up (FEFU)).................................................................................................................... 10
3.5 Analysis Universe .............................................................................................................. 10
3.6 Variability in Monthly Mailing Schedule ........................................................................... 10
3.7 Item Nonresponse Rates ................................................................................................... 11
4. RESULTS .................................................................................................................................... 11
4.1 Does offering an Internet option change the total self-administered (including mail
and Internet) response rate? .................................................................................................. 11
4.2 Are the Internet usage rates statistically different by notification strategy? .................. 15
4.3 Did the rate of accessing the Internet instrument and subsequent break-offs differ
among notification strategies? ............................................................................................... 16
4.4 How do item nonresponse rates differ between Internet and mail responses as well
as notification strategies? ..................................................................................................... 18
4.5 Are there differences in the demographics of Internet respondents and mail
respondents? Across notification strategies?........................................................................ 23
4.6 How does the speed of receiving Internet responses compare to mail responses? ....... 28
4.7 How many households returned multiple responses? ..................................................... 30
4.8 What were the perceptions of the information contained in the mail materials? .......... 31

5. Cost Effectiveness of the Notification Treatments .................................................................. 32
6. SUMMARY ................................................................................................................................ 32
7. NEXT STEPS............................................................................................................................... 32
Acknowledgements....................................................................................................................... 33
References .................................................................................................................................... 33
Appendix A: 2011 ACS Internet Test Mail Materials ................................................................... A-1
Appendix B: Item Nonresponse Rates by Mode and Treatment (excluding Internet break-offs
that were not “sufficiently complete”) ........................................................................................ B-1

LIST OF TABLES
Table 1. Timing and Content of ACS Internet Test Mailings ........................................................... 4
Table 2. Sample Sizes (addresses) for the ACS Internet Test Notification Strategies Test ............. 6
Table 3. Comparisons Across Treatments (for each stratum) ........................................................ 9
Table 4. Self-Administered Response Rates and Internet Response Rates by Notification
Strategy and Stratum (through April 28, 2011) ............................................................................ 12
Table 5. Differences in Self-Administered Response Rates by Notification Strategy and Stratum
(through April 28, 2011)................................................................................................................ 12
Table 6. Self-Administered Response Rates and Internet Response Rates (excluding Internet
break-offs that were insufficient partials) by Notification Strategy and Stratum (through April
28, 2011) ....................................................................................................................................... 14
Table 7. Differences in Self-Administered Response Rates (excluding Internet break-offs that
were insufficient partials) by Notification Strategy and Stratum (through April 28, 2011) ......... 14
Table 8. Internet Usage Rates by Notification Strategy and Stratum (through April 28, 2011)... 15
Table 9. Differences in Internet Usage Rates by Notification Strategy and Stratum
(through April 28, 2011)................................................................................................................ 15
Table 10. Internet Access Rates, Break-off Rates, and Percent of Break-offs that Returned a Mail
Form by Notification Strategy and Stratum (through May 31, 2011) .......................................... 17
Table 11. Differences in Internet Access Rates, Break-off Rates, and Percent of Break-offs that
Returned a Mail Form by Notification Strategy and Stratum (through May 31, 2011) ............... 17
Table 12. Item Nonresponse Rates for Selected Questions by Mode and Stratum
(for Households that Responded by April 28, 2011) .................................................................... 19
Table 13. Item Nonresponse Rates for Selected Questions by Notification Strategy
(for Households that Responded by April 28, 2011) .................................................................... 21
Table 14. Item Nonresponse Rates for Selected Questions by Notification Strategy (excluding
Internet break-offs that were insufficient partials) (for Households that Responded by April 28,
2011) ............................................................................................................................................. 22
Table 15. Demographic Characteristics for Respondent (Person 1) for Internet and Mail Returns
(excluding Control) in Targeted Stratum (for Households that Responded by April 28, 2011) ... 24
Table 16. Demographic Characteristics for Respondent (Person 1) for Internet and
Mail Returns (excluding Control) in Not Targeted Stratum (for Households that Responded by
April 28, 2011) ............................................................................................................................... 25
Table 17. Demographic Characteristics of Responding Households by Notification Strategy in
Targeted Stratum (for Households that Responded by April 28, 2011) ....................................... 27
Table 18. Demographic Characteristics of Responding Households by Notification Strategy in
Not Targeted Stratum (for Households that Responded by April 28, 2011) ................................ 28
Table 19. Multiple Return Rates by Notification Strategy and Stratum (through May 31, 2011) 30
Table 20. Differences in Multiple Return Rates by Notification Strategy and Stratum (through
May 31, 2011) ............................................................................................................................... 31
Table B-1. Self-Administered Response Rates and Internet Response Rates by Notification
Strategy and Stratum (through May 31, 2011)............................................................................ B-1
Table B-2. Differences in Self-Administered Response Rates by Notification Strategy and
Stratum (through May 31, 2011) ................................................................................................. B-1

LIST OF FIGURES
Figure 1. Example of the Web Design Features for a Screen in the ACS Internet Survey .............. 8
Figure 2. Graph of cumulative daily check-in rates for Targeted Stratum .................................. 29
Figure 3. Graph of cumulative daily check-in rates for Not Targeted Stratum ........................... 30

EXECUTIVE SUMMARY
Test Objective
Currently, the American Community Survey (ACS) collects data using three modes: mailout/mailback of
a paper questionnaire, Computer-Assisted Telephone Interview and Computer-Assisted Personal
Interview. Sampled addresses receive the mail questionnaire first and are later contacted by Computer-Assisted Telephone Interview and then Computer-Assisted Personal Interview as part of nonresponse
follow-up operations. The United States Census Bureau conducted two ACS Internet tests in 2011, one
in April and one in November, to evaluate the feasibility of providing a fourth response mode, an
Internet mode, to addresses selected for the ACS. The main objective of the tests was to determine the
best way to present the Internet mode in the ACS mailings to maximize self-response. This report
discusses the results from the first (April) test. The results from the second test will be available in the
spring of 2012.

Methodology
The test studied “Choice” and “Push” strategies for notifying sampled addresses about the Internet
mode. Households in the Choice strategy received a survey questionnaire and could choose between
mail and Internet to respond. We tested two Choice strategies – a Prominent Choice and a Not
Prominent Choice. In the Prominent Choice, the web option was noticeably advertised in all mailings as
an alternative to the paper questionnaire. In the Not Prominent Choice, the web option appeared only
in an inconspicuous place on the front of the paper questionnaire for those specifically looking for it.
The Not Prominent Choice treatment was designed to combat response decreases seen in other studies,
including the 2000 ACS Internet test (Griffin et al., 2001), when two response mode options were
provided.
The Push strategy directed households to use the Internet first before later providing the paper
questionnaire in a nonresponse follow-up mailing. We experimented with the length of time between
sending the request to respond online and sending the nonresponse follow-up paper questionnaire: three weeks (Push Regular) versus two weeks (Push Accelerated).
The Control group was the April 2011 ACS production sample. These cases only received a paper
questionnaire and did not have the opportunity to respond online.
We stratified the sample for this test so we could compare the effectiveness of the notification
strategies among different segments of the population. We stratified tracts into two groups, Targeted
and Not Targeted. The Targeted group consisted of tracts containing households that we expected to
use the Internet at a higher rate. The remaining tracts were in the Not Targeted group.

Research Questions and Results
Does offering an Internet option change the total self-administered response rate?
As of the end of the first month of data collection (when we normally identify the Computer-Assisted
Telephone Interview nonresponse follow-up workload):

- The Push Accelerated treatment produced the highest self-administered response rate among the notification strategies, achieving a 2.6 percentage point increase over the Control in the Targeted stratum.
- The Prominent Choice treatment obtained the nominally highest response rate in the Not Targeted stratum, but it was not significantly higher than Push Accelerated, Not Prominent Choice, or Control.
- These findings held when we excluded Internet break-offs that provided an insufficient amount of data from the pool of respondents.
Are the Internet usage rates statistically different by notification strategy?
- As expected, significantly more households responded by Internet in the Push treatments than in the Choice treatments in both strata.
- Significantly more cases used the Internet in the Prominent Choice treatment than in the Not Prominent Choice in both strata, reflecting how much more prominently the Internet offer was displayed in the former.
Did the rate of accessing the Internet instrument and subsequent break-offs differ among notification
strategies?
- Significantly more households accessed the online survey in the Push treatments than in the Choice treatments in both strata. More households also accessed the online survey in the Prominent Choice than in the Not Prominent Choice.
- Significantly more households broke off from the online survey in the Push treatments than in the Choice treatments in both strata.
How do item nonresponse rates differ between Internet and mail responses as well as notification
strategies?
- Internet break-offs negatively affected item nonresponse measures for Internet returns, particularly for the questions in the later part of the survey (the detailed person section).
- Item nonresponse in the detailed person section was most affected by Internet break-offs in the treatments with the highest concentration of Internet responses, Push Regular and Push Accelerated.
- The differences we observed in item nonresponse rates due to Internet break-offs have prompted further consideration of how we should handle Internet break-offs, specifically whether we should treat them as respondents or nonrespondents. Removing Internet break-offs that were insufficiently complete helps reduce the item nonresponse rates for the Push treatments, but the rates still suffer for the questions in the detailed person section.

Are there differences in the demographics of Internet respondents and mail respondents? Across
notification strategies?
- In both strata, Internet respondents were more likely than mail respondents to be younger, Asian, non-Black, of “other” race, more highly educated, and living in larger households. They were also more likely to speak a language other than English at home.
- The characteristics of responding households in the Prominent and Not Prominent Choice strategies were similar to those in the Control. The characteristics of Push Accelerated households were mostly in line with those in the Choice and Control treatments, except that they tended to be younger and more educated.
How does the speed of receiving Internet responses compare to mail responses?
- In both strata, Internet returns far surpassed mail returns early in the data collection period, giving the Push treatments a response rate advantage for the first two weeks of data collection.
- Once mail returns started accumulating, the Choice and Control treatments surpassed response in the Push treatments for a short period of time. However, the early mailing of the paper questionnaire in the Push Accelerated treatment allowed its response to catch up to the other treatments (and, in the Targeted stratum, eventually surpass them).
How many households returned multiple responses?
- Very few households (1 percent or less) responded more than once across all notification strategies. There were no significant differences across the treatments.
What were the perceptions of the information contained in the mail materials?
- We conducted a telephone follow-up interview to measure why respondents chose the mode they used to respond, or why some chose not to respond at all. Nichols (forthcoming) found that not all ACS respondents in the notification strategies knew about the mode choice. Not knowing about the other mode option was cited more often by mail respondents than by Internet respondents.
- Otherwise, no messages specific to the mailings or strategies appeared to motivate respondents to choose one mode over the other. Most choices of mail over web were driven by an inability to access the Internet, computer issues, or simply a preference for the paper questionnaire.

1. BACKGROUND
1.1 Motivation for the April 2011 American Community Survey Internet Test
There are many Federal mandates and initiatives that promote the use of electronic data collection. The
Paperwork Reduction Act of 1995 seeks to minimize the paperwork burden on individuals, businesses,
institutions, and governments resulting from information collected by or for the Federal government.
The Government Paperwork Reduction Act of 1998 requires Federal agencies to provide individuals or
organizations the option to submit information electronically, when feasible. Moreover, the U.S.
President’s Management Agenda for fiscal year 2002 listed “expanded electronic government” as one of
five government-wide initiatives to make it easier for citizens and businesses to interact with the Federal
government. In fact, the E-Government Act of 2002 promotes the use of “web-based Internet applications… to:
(1) enhance the access to and delivery of Government information and services; or (2) bring about
improvements in Government operations.” (http://www.whitehouse.gov/omb/e-gov/)
Even in the absence of these mandates and initiatives, using the Internet to collect survey data seems to
make good business sense given the promises of efficiency it offers. Other than the one-time
development cost, the cost of an Internet survey is low compared to a mail survey where there are
printing, postage and data capture costs. Moreover, web survey responses are generally available
quicker than responses from a mail survey without the lag time from mailing back the questionnaire and
capturing the responses (Brady et al., 2004).
Additionally, the Internet offers technological advantages over mail data collection that may improve
data quality, such as real time edit checks, automated navigation through skip patterns, and tailored
name fills. Lastly, Internet use has become more common as people use it for shopping, financial
transactions, gathering information, and communicating. In fact, Internet penetration in the home
reached 71 percent in the United States in 2010 (U.S. Department of Commerce, 2011).
Since Internet access is not universal, and there are known demographic differences between those who
do and do not have access (U.S. Department of Commerce, 2010; U.S. Department of Commerce, 2011),
Internet data collection is currently better suited as part of a mixed mode design for general population
surveys rather than as a sole mode of data collection. Mixed mode data collection is an increasingly
popular way to achieve high levels of response in a cost effective manner (de Leeuw, 2005). Previous
research has indicated that respondents have mode preferences (Groves et al., 1979), and thus, a mixed
mode design would seemingly accommodate those preferences (Dillman et al., 1988). Given this theory,
coupled with the movement towards using the web for everyday activities, it seems reasonable to
believe that a web option would improve, or at least maintain, survey response rates in a mixed mode
design.

1.2 Previous Testing
Internet mode experiments have shown mixed results with respect to response rates, even within the
Census Bureau alone. Offering a concurrent choice of response modes as part of a test for the decennial
census has shown promise. In the Census 2000 Response Mode and Incentive Experiment, the
offer of Internet as an alternative response mode boosted response by more than two percentage
points over households that were not offered a response mode alternative (Schneider et al., 2005).

However, in the 2003 National Census Test, providing a choice of response modes had no impact on
response compared to offering the paper questionnaire alone (Brady et al., 2004). In the presence of a
choice, some households that would have typically responded by mail simply shifted their response to
either Internet or Interactive Voice Response.
Just months after the Census 2000 Internet experiment, the Census Bureau tested introducing an
Internet option for the ACS as an alternative to the mail questionnaire. Unlike the response boost
observed in the Census 2000 experiment, the ACS study found that offering the Internet as a response
option actually decreased the overall response rate by 5.8 percentage points (Griffin et al., 2001).
Many recent studies outside of the Census Bureau show similar findings to the ACS study: simultaneous
response mode choices lead to a decrease in response. Smyth et al. (2010) and Gentry et al. (2008) saw
a decrease in response rates as a result of offering respondents a choice between responding by mail or
Internet in studies of small towns and communities and a radio listening diary, respectively. Lesser
(2010) also found lower response rates for a multiple mode option, as compared to mail only, for two
surveys covering the population of Oregon as well as a study covering boat owners in Oregon. It
appears the level of Internet literacy of the survey population is not a factor, as Millar et al. (2011)
observed a decrease in response from a mode choice among college undergraduates. Thus, while the
web may be cheaper and more attractive to some respondents, introducing a concurrent web option
into a mixed mode survey has not generally proved beneficial from a response rate perspective (Couper
et al., 2008).
This pattern of decreasing response in the presence of mode choices is puzzling. While we might expect
that providing more choices gives respondents the opportunity to choose their preferred mode, the
mode choice adds complexity to the response process, which may divert attention away from the task at
hand ultimately leading to nonresponse (Dhar, 1997). Additionally, the transition from a mail survey
invitation to an Internet response might require people to place the invitation aside until they are
online, and ultimately they forget about the task.
Some studies have also examined the possibility of providing an Internet option as the first mode in a
sequential multi-mode design. In the 2003 National Census Test, households that were pushed to use
electronic modes (Internet or Interactive Voice Response) first were significantly less likely to respond
compared to the households that could only respond by paper (Brady et al., 2004). Similarly, in the
2005 National Census Test, households that were pushed to use the Internet at a nonresponse follow-up
mailing were significantly less likely to respond (by about 3.7 percentage points) than those that
received the paper questionnaire (Bentley et al., 2006). Those studies were implemented several years ago, however, and Internet access and usage have expanded since then, so the methodology remains worth considering given its substantial cost savings potential. In fact, Millar et al. (2011) found among college
undergraduates that pushing to the web in initial mailings resulted in lower response compared to mail
only, but once a mail questionnaire was offered, response rates were not significantly different from
those where mail was the only mode offered.

2. METHODOLOGY
Currently, the ACS collects data using three modes across a three-month period: mailout/mailback of a
paper questionnaire, Computer-Assisted Telephone Interview (CATI), and Computer-Assisted Personal
Interview (CAPI). Sampled addresses receive the mail questionnaire first (month 1) and are later
contacted via CATI (month 2) and then CAPI[1] (month 3) as part of nonresponse follow-up to mail.

[1] Mail and CATI nonrespondents and cases ineligible for the mail and CATI modes are subsampled prior to inclusion in the CAPI operation.
The April 2011 ACS Internet Test is one of two ACS Internet tests conducted in 2011 that were designed
to evaluate the feasibility of providing a fourth response mode, Internet, to addresses sampled for the
ACS. The main objective of these two tests was to determine the best way to present the Internet
response mode in the ACS mailing pieces to maximize self-response. The results of this first test aided in
the design of the second test, the November 2011 ACS Internet Test, and the results from that test will
help make the ultimate decision of what method will go into ACS production.
The April 2011 ACS Internet test took place in April and May 2011, and was designed to test introducing
a web response option in the mail month of data collection for the April ACS production sample. Thus,
most metrics presented in this report are based on responses received by the end of the first month
(April), which is the mail data collection month.

2.1 Experimental Treatments
We tested different strategies for notifying sampled households about the Internet response mode
using combinations of the five ACS mailing pieces (pre-notice letter, initial questionnaire mailing,
reminder postcard, and for nonrespondents only, replacement questionnaire mailing and additional
reminder postcard). We describe each notification strategy in detail below and Table 1 shows the timing
and content of the mailings. Two of the notification strategies involved providing a concurrent choice
between a paper questionnaire and Internet survey. Additionally, two strategies pushed households to
use the Internet by removing the paper questionnaire in the first mailing. The “Push” strategies could
potentially introduce cost savings. If successful in maintaining or increasing response, these strategies
could save costs associated with printing the questionnaire, postage, data capture of paper
questionnaires, and reduced volume of replacement mailings due to faster and higher levels of
response. See Appendix A for examples of the materials for each strategy.
Prominent Choice -- Sampled addresses received survey questionnaires and households were given a
concurrent choice of completing the ACS on paper or the Internet. The Internet option was prominently
displayed in both the cover letter and questionnaire in the initial mailing package, as well as on the
reminder postcard, in the replacement questionnaire mailing and on the additional reminder postcard.
This strategy also included a new Internet instruction card in both the initial and replacement
questionnaire packages that provided the choice of response modes (paper and Internet).
Not Prominent Choice -- These sample addresses also received a survey questionnaire but the Internet
response option appeared only in a non-prominent place on the front of the questionnaire. No other
mail materials mentioned the online option, and the Internet instruction card was not provided. The
purpose of testing this strategy was to provide the Internet option to those who were looking for it while attempting to alleviate a respondent’s tendency to do nothing when offered response mode choices, as seen in previous studies (Millar et al., 2011; Griffin et al., 2001).
Push Internet on Regular Mailing Schedule -- In the two Choice treatments, sampled addresses received a paper questionnaire in the initial questionnaire mailing. In the Push Internet strategy, sampled addresses received only a letter and an instruction card on how to complete the ACS on the Internet. The
letter mentioned the benefits of using the Internet to respond, and the instruction card provided all of
the information they would need to access the survey. Sampled addresses did not receive a paper
questionnaire until the replacement questionnaire mailing (sent to nonrespondents only) about three
weeks later. The paper questionnaire included the same prominent display of the Internet option on
the form and in the cover letter that was used in the Prominent Choice (described above). The mailing
sequence followed the same timing as ACS production.
Push Internet on Accelerated Mailing Schedule -- This strategy used the same concept as the previous
Push strategy except that the replacement questionnaire was mailed earlier (about two weeks after the
initial mailing compared to about three weeks in the regular schedule) to give nonrespondents a mail
questionnaire option sooner than the regular schedule.
Control (Mail only) -- The Control was the April 2011 ACS production sample panel. They received a
paper questionnaire, and there was no Internet option for the Control cases.
Table 1. Timing and Content of ACS Internet Test Mailings

All treatments received the same pre-notice letter, mailed 3/24/2011. The replacement mailing (mailed 4/21/2011, except 4/14/2011 for Push Accelerated) and the additional reminder postcard (mailed 5/5/2011, to households for which we had no phone number for CATI follow-up) went to nonrespondents only.

Treatment            | Initial Mailing (3/28/2011)     | Reminder Postcard (3/31/2011)   | Replacement Mailing                          | Additional Reminder Postcard
---------------------|---------------------------------|---------------------------------|----------------------------------------------|---------------------------------
Prominent Choice     | Paper and Internet offer        | Reminder for paper and Internet | Paper and Internet offer                     | Reminder for paper and Internet
Not Prominent Choice | Paper and subtle Internet offer | Reminder for paper              | Paper and subtle Internet offer              | Reminder for paper
Push Regular         | Internet only                   | Reminder for Internet           | Paper and Internet offer                     | Reminder for paper and Internet
Push Accelerated     | Internet only                   | Reminder for Internet           | Paper and Internet offer (mailed 4/14/2011)  | Reminder for paper and Internet
Control (Mail only)  | Paper only                      | Reminder for paper              | Paper only                                   | Reminder for paper

2.2 Stratification
From previous research, we suspect that the likelihood of using the Internet will differ by the
characteristics of the housing units (Lugtig et al., 2011; Guarino, 2001; U.S. Department of Commerce,
2010). Therefore, we aimed to study the effect of the notification strategies among households that we
expected to be more/less likely to use the Internet. We stratified the sample for this test so we could
consider targeting the notification strategies in ACS production to different segments of the population
if we found one treatment to be more successful in a specific stratum. To accomplish this goal, we
stratified census tracts into two strata: Targeted and Not Targeted. The Targeted stratum consisted of
tracts containing households that we expected to use the Internet at a higher rate based on past
research. The remaining tracts were in the Not Targeted stratum. About one-third of the ACS universe
fell in the Targeted stratum, while two-thirds fell in the Not Targeted stratum.
The Targeted stratum was created based on research conducted for the Census Integrated
Communications Plan in preparation for the 2010 Census (U.S. Census Bureau, 2008) and results from
the Census Barriers, Attitudes and Motivators Survey (CBAMS) (Johnson, 2009). The CBAMS provided
information to evaluate the knowledge of and attitudes toward the decennial census and social issues as
well as media usage (including Internet).
The tracts in the Targeted stratum were characterized as having either a large proportion of advantaged
homeowners or single, unattached, mobile people. These tracts contained people who were, in general,
highly educated, stable, married homeowners living in single-unit houses or single, mobile renters with
higher than average education living in urban multi-units. We selected these tracts for the Targeted
stratum for two reasons. First, Internet usage statistics suggest that younger, college-educated households with an annual income greater than $75,000 that own their homes in urban areas comprise the group of
individuals most likely to use the Internet (Couper, 2000; Brady et al., 2004; U.S. Department of
Commerce, 2010). Second, this group had the highest levels of Internet subscriptions, usage and
preference (U.S. Census Bureau, 2008).
The Not Targeted stratum received the balance of the tracts. The people that resided in these tracts
were believed to be at least as racially diverse as the national average, to have the same or less
education than the national average, and have the same or lower income than the national average
(Bates et al., 2007). Moreover, these areas have lower levels of Internet subscriptions, usage and
preference (U.S. Census Bureau, 2008).
We crossed the four experimental notification strategies listed above with the two strata to create eight
experimental treatment panels as shown in Table 2. We also stratified the Control (Mail only) group, the
April 2011 ACS production sample panel, for a total of ten treatments. Each experimental treatment
group had a sample of 15,000 addresses resulting in a total of 120,000 sample addresses selected
specifically for the experiment and roughly 230,000 mailable sample addresses from ACS production for
the control. The experimental treatment samples were equally allocated to the two strata, resulting in
an oversample of addresses for the Targeted stratum. The Control (Mail only) contained a proportional
allocation to the two strata, as it is fully representative of the sample universe.

Table 2. Sample Sizes (addresses) for the ACS Internet Test Notification Strategies Test

Notification Strategy                                Targeted   Not Targeted
Control (Mail only) - ACS April Production Sample      71,585        161,683
Experimental Treatments
  Choice
    Prominent Choice                                   15,000         15,000
    Not Prominent Choice                               15,000         15,000
  Push Internet
    Regular Mailing Schedule (3 weeks)                 15,000         15,000
    Accelerated Mailing Schedule (2 weeks)             15,000         15,000
Subtotal of Experimental Treatments                    60,000         60,000

This test was designed to simulate a typical one-month mail data collection period in the ACS. There
were no CATI or CAPI nonresponse follow-up operations for the experimental treatments, but the
Control included nonresponse follow-up since it was the ACS production sample. We decided to keep
the online survey available beyond the first month so we could see whether we would get more visits or
return visits from the experimental treatment cases after we typically would have started nonresponse
follow-up by CATI. Most of the analysis in this study is limited to the first month of data collection,
before the Control cases were sent to CATI nonresponse follow-up, since we do not know what the
effect of the CATI operation would have been on the experimental treatment cases.

2.3 Research Questions
In advance of the test, we identified a series of research questions to help assess the success of the
various notification strategy treatments. We list the research questions here, and provide answers to
these questions in Section 4 of this report. The analysis for each of these research questions was
conducted separately for the Targeted and Not Targeted strata.
- Does offering an Internet option change the total self-administered response rate?
- Are the Internet usage rates statistically different by notification strategy?
- Did the rate of accessing the Internet instrument and subsequent break-offs differ among notification strategies?
- How do item nonresponse rates differ between Internet and mail responses as well as notification strategies?
- Are there differences in the demographics of Internet respondents and mail respondents? Across notification strategies?
- How does the speed of receiving Internet responses compare to mail responses?
- How many households returned multiple responses?
- What were the perceptions of the information contained in the mail materials?

2.4 Design of the ACS Internet Survey
The goal in designing the online survey was to enable even novice Internet users to complete the survey.
We reviewed web survey research and consulted external web survey experts while designing the
instrument. We also conducted five rounds of usability testing on survey prototypes to improve the
design, flow and question presentation of the online survey. See Ashenfelter et al. (2011a), Ashenfelter
et al. (2011b) and Leeman et al. (forthcoming) for results of usability testing. Findings from usability
testing were incorporated into the final Internet survey design.

The Internet survey presented the questions in a manner similar to the other ACS data collection modes
to minimize mode effects, while taking advantage of the technology to improve data quality. This
means the survey had three sections of questions: the first section asked basic demographic questions
for all persons in the household; the second section, the housing section, asked questions about the
household; and the third section asked detailed questions about each person in the household. The
survey was available in both English and Spanish. The Internet survey's design maintained the self-administered nature of the ACS paper questionnaire while incorporating automated advantages similar to those of the CATI and CAPI modes.
Like other federal agencies, the Census Bureau has strict information technology security to protect the
privacy and confidentiality of survey respondents. The challenge for the ACS online survey was to find a
way to meet the security requirements in a manner that was also user-friendly. Households were
provided a randomly generated 10-digit User ID on the address label of the mail materials to enter the
survey. After confirming the address for their household, respondents received a four-digit Personal
Identification Number (PIN). Respondents needed to use this PIN along with their User ID if they wished
to return to the survey at a later time. When respondents were given their PIN, we stressed the importance of retaining it because we could not retrieve a lost PIN; this restriction protects the information provided during previous visits to the survey. If respondents lost their PIN and wanted to use the Internet to complete the survey, they had to start the survey over from scratch after we reset their case.
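To make the access-control flow concrete, the sketch below mirrors the User ID and PIN rules described above. It is illustrative only, not the production implementation: the function names and the session store are hypothetical; only the ID/PIN lengths, the single issuance of the PIN, and the no-retrieval/reset behavior come from the text.

```python
import secrets

def issue_user_id() -> str:
    """Random 10-digit User ID, as printed on the mailing label."""
    return "".join(secrets.choice("0123456789") for _ in range(10))

def issue_pin() -> str:
    """Random 4-digit PIN, shown once after the respondent confirms the address."""
    return "".join(secrets.choice("0123456789") for _ in range(4))

# Hypothetical session store mapping User ID -> PIN for an in-progress survey.
sessions: dict[str, str] = {}

def start_survey(user_id: str) -> str:
    """Open a session and return the PIN; per the report, a lost PIN cannot be retrieved."""
    pin = issue_pin()
    sessions[user_id] = pin
    return pin

def resume_survey(user_id: str, pin: str) -> bool:
    """Re-entry requires both the User ID and the PIN issued at the first visit."""
    return sessions.get(user_id) == pin

def reset_survey(user_id: str) -> None:
    """Lost PIN: discard prior answers so the case can start over from scratch."""
    sessions.pop(user_id, None)
```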
The ACS online survey maintained the look and feel of the ACS mailing pieces. Figure 1 highlights some
of the design features. The screen background was the same light green color as the mail questionnaire,
and the banner image came from a brochure in the survey mailings. The survey displayed one question
per screen to facilitate skip patterns and to keep page content short to avoid scrolling.
The online survey provided several features intended to improve data quality. Critical survey questions
were subject to soft error messages when left blank or when respondents provided inconsistent or
invalid values. The respondent could either change the response or bypass the error using the
navigation buttons to continue in the survey. Furthermore, the online survey provided topic-specific
help by a link immediately following the question, where applicable. Finally, at the end of the survey,
the respondent had the option of reviewing responses or submitting the survey without reviewing. If
respondents chose to review, they could simply review the questions and answers or they could change
their responses.
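As an illustration of the soft-edit behavior just described, the sketch below shows one such check. The specific validation rule and warning wording are assumptions, not the production instrument's; the report specifies only that critical questions triggered a bypassable warning when left blank or given inconsistent or invalid values.

```python
def soft_edit_age(answer: str) -> str | None:
    """Return a warning for a blank or implausible age, or None to accept.

    The respondent may correct the answer or bypass the warning with the
    navigation buttons, which is what makes this a 'soft' edit.
    """
    if answer.strip() == "":
        return "Please provide an answer to the age question before continuing."
    try:
        age = int(answer)
    except ValueError:
        return "Please enter age as a whole number."
    if not 0 <= age <= 115:
        return f"An age of {age} is outside the expected range. Please verify."
    return None  # No warning: accept the answer and show the next screen.

warning = soft_edit_age("")
if warning:
    print(warning)  # Respondent can revise, or press Next again to bypass.
```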
Although the focus of this test was on the effect of the notification strategies on self-response, we
analyzed paradata (i.e., data about the Internet response process) to assess the effectiveness of certain
design features, such as the error messages and help. Horwitz et al. (forthcoming) provides results from
the analysis of these paradata.

Figure 1. Example of the Web Design Features for a Screen in the ACS Internet Survey

For more information about the design of the Internet survey, please see Tancreto et al. (forthcoming).

2.5 Follow-up Interview
A sample of Internet respondents, mail respondents, and nonrespondents from this test were
interviewed in a CATI follow-up to collect qualitative feedback about the mailing pieces, and re-asked
certain questions to enable the study of response error for respondents. For each group, we asked a
series of qualitative questions to determine what they remembered about the mailing pieces, their
thoughts about the effectiveness of the mailing pieces, and the reasoning behind their selection of
mode (or nonresponse). We also asked if there were any privacy concerns in using the Internet.

2.6 Analysis Design
We used a three-step method for comparing the notification treatments, described in Table 3, to
maximize the testing power for each research question. In Step 1, we compared the two Choice
strategies (Not Prominent and Prominent) to each other, and the two Push strategies (Regular and
Accelerated schedule) to each other. In Step 2, we compared the Choice strategy winner to the Push
strategy winner from Step 1. In Step 3, the winner between Push and Choice was compared to the
Control. Note that the winners were determined based on specific evaluation measures for each
research question. In the event that the treatments were not significantly different at any step in the
process, the treatment with the most desirable rate was selected as the winner. At times, we extended
the statistical testing to make comparisons between the Control and another treatment of interest as
noted in the report.
All analyses used t-tests for the comparisons where the family-wise error rate was adjusted for multiple
comparisons using the Bonferroni-Holm Multiple Comparison Procedure. All results are weighted to
reflect their probability of selection into the sample.
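For reference, below is a minimal sketch of the Bonferroni-Holm step-down adjustment named above; the p-values in the example are placeholders, not results from this test.

```python
def holm_adjust(p_values: list[float]) -> list[float]:
    """Bonferroni-Holm step-down adjustment for a family of comparisons.

    Returns adjusted p-values in the original order; a null hypothesis is
    rejected when its adjusted p-value falls below the family-wise error
    rate (alpha = 0.10 in this report).
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        # The (rank+1)-th smallest p-value is scaled by (m - rank); enforce monotonicity.
        running_max = max(running_max, min(1.0, (m - rank) * p_values[i]))
        adjusted[i] = running_max
    return adjusted

# Placeholder p-values for one stratum's comparisons (not values from the test).
print(holm_adjust([0.002, 0.04, 0.20]))  # [0.006, 0.08, 0.2]
```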

Table 3. Comparisons Across Treatments (for each stratum)

Step 1: Compare the two Choice strategies; compare the two Push strategies
Step 2: Compare the Choice winner to the Push winner
Step 3: Compare the winner of Step 2 to the Control

Details about the calculation of the evaluation measures are provided in the results section of this
report.

3. LIMITATIONS
3.1 Data for Incomplete Internet Responses
Internet respondents who did not complete their survey in their first session had the option of returning
at a later time. The partial data provided for those cases were not processed until the end of the data
collection period so we could keep those cases open for respondents to re-enter. We did not keep interim records of the data provided at each visit. The only data record we had for these cases was the data that were provided by the end of the data collection period (May 31, 2011). Thus, analysis of the data for these cases does not necessarily reflect data received at the end of the first month of data collection (at
the time we created the response rates and other evaluation measures). Because we know the dates
when cases returned to the survey, we know that this issue impacts only about one percent of Internet
cases, and thus, we do not feel that this is a major limitation for this analysis.

3.2 No Replacement Questionnaire Mailing to Internet Cases Considered “Sufficient Partial
Interviews”
We intended to send the nonresponse follow-up paper questionnaire mailing to all households that had
started the online survey, but had not completed it. Unfortunately, households that provided enough
information in the online survey to be considered sufficiently complete were mistakenly not included in
that mailing. As a result, we have no way to assess the impact that mailing would have had on their
responses. This limitation impacts about 11 percent of Internet responses.

3.3 No CATI Nonresponse Follow-up for Experimental Panels
The control was the ACS production sample panel for the month of April. This panel followed the ACS
protocol of mail data collection in month one, followed by nonresponse follow-up by CATI in month two.
The experimental notification strategy treatments did not go into the CATI nonresponse follow-up
operation in month two. Thus, comparisons between the experimental treatments and the control
panel are valid only for the first month of data collection since CATI calls are known to elicit mail
response, which would affect response rate comparisons.

3.4 Rates of Cases Failing the Automated Clerical Edit Review (Flagged for Failed Edit Follow-up (FEFU))
Failed Edit Follow-up (FEFU) is an operation in the ACS where telephone interviewers contact
households that returned a paper questionnaire that requires follow-up for various reasons, including
collection of data for large households (more than five people) and households with missing data for
critical items (U.S. Census Bureau, 2009). All incoming questionnaires are run through an Automated
Clerical Edit that identifies cases that require FEFU. A significant increase in the FEFU workload for
Internet returns would add cost to ACS operations.
We did not send cases to the FEFU operation in this test, but we intended to compare the percent of
returns that would require FEFU across strategies to see if any of the notification strategies caused an
increase in the FEFU rate. Unfortunately, we became aware of inaccuracies in how the Automated
Clerical Edit was applied to Internet cases. The rates we computed were questionable at best. Thus, we
elected not to provide the rates for this test.

3.5 Analysis Universe
Most of the analyses in this report focus on responses received in the first month of data collection,
which reflects the timing when the ACS typically transitions to nonresponse follow-up by CATI. We use
the first month for most analyses because we do not know what the impact of introducing the transition
to CATI would have been on the experimental cases. Also, this test was designed to study the impact of
the Internet mode in the first month of data collection under the assumption that we would maintain
the current ACS operational design.

3.6 Variability in Monthly Mailing Schedule
The ACS mailing schedule is based on timing rules rather than calendar dates. For instance, we generally
send the initial survey questionnaire on the last Monday of the month prior to the data collection
month. We identify nonrespondents for the replacement questionnaire on the Monday three weeks
after the initial questionnaire mailing, and send the replacement questionnaire on Thursday of that
week. We start the CATI nonresponse follow-up operation on the first day of the following month.
The way in which this schedule worked for the month of April 2011 compressed the amount of time for
response before the start of the CATI operation. The CATI operation started on a Sunday (May 1, 2011),
which means the response rates for the mail month (and the nonresponse universe identification for
CATI) correspond to the last business day before the start of CATI (April 29, 2011) using all of the
responses that were returned and checked-in by the night before (April 28, 2011). This effectively
reduced the amount of time for respondents to return a paper form, and most affected the Push
Regular treatment, which had only one week between the mailing of the paper questionnaire to
nonrespondents (April 21, 2011), and the date by which they had to have the form returned and
acknowledged.
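The timing rules above can be made concrete with a short date calculation. This sketch encodes the rules as we read them from this section and reproduces the April 2011 dates cited in the text; the helper function is ours, not an ACS system.

```python
from datetime import date, timedelta

def last_monday_of(year: int, month: int) -> date:
    """Last Monday of a month: take the month's final day, back up to its Monday."""
    first_of_next = date(year + (month == 12), month % 12 + 1, 1)
    last_day = first_of_next - timedelta(days=1)
    return last_day - timedelta(days=last_day.weekday())  # Monday == weekday 0

initial_mailing = last_monday_of(2011, 3)                       # 2011-03-28
replacement_universe = initial_mailing + timedelta(weeks=3)     # Monday, 2011-04-18
replacement_mailing = replacement_universe + timedelta(days=3)  # Thursday, 2011-04-21
checkin_cutoff = date(2011, 4, 28)  # night before the last business day pre-CATI
cati_start = date(2011, 5, 1)       # first day of the following month

# The compressed window discussed above: only 7 days between the nonrespondent
# paper mailing and the response check-in cutoff for the April 2011 panel.
print((checkin_cutoff - replacement_mailing).days)  # 7
```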

3.7 Item Nonresponse Rates
We used unedited, raw data to compute the evaluation measures in this report. We used raw data
because we did not want edits and imputation to mask any potential problems with the data. As such,
we cannot assess the impact of the edits and imputation on the final item nonresponse rates that would
be used in ACS production.
Also, in calculating the item nonresponse rates, we looked at the presence of an answer, not at the
validity of that answer. This may give an unfair advantage to the item nonresponse rates for Internet
cases because the data we used from the mail responses had been keyed, which in many cases means
that an invalid answer (e.g., “N/A”, “Don’t Know”, “None of your business”) for a particular question
was turned into a blank response for that question. That same invalid answer in an Internet case was
not turned into a blank response, and therefore, was counted as a response. Also, when multiple
responses were marked for certain questions requiring a single response on the mail form, the
responses are blanked because we do not know the true answer. However, the Internet instrument was
programmed to allow only one answer for those questions, potentially leading to lower item
nonresponse for those items.
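A minimal sketch of the presence-of-an-answer convention described above follows; the record layout, field names, and skip-pattern sentinel are hypothetical.

```python
def item_nonresponse_rate(returns: list[dict], item: str) -> float:
    """Percent of eligible returns with no answer captured for `item`.

    Mirrors the convention described above: only the presence of an answer is
    checked, never its validity, and the data are unedited (no imputation).
    A skip pattern that bypasses the item is marked with the sentinel "SKIPPED".
    """
    eligible = [r for r in returns if r.get(item) != "SKIPPED"]
    missing = sum(1 for r in eligible if r.get(item) in (None, ""))
    return 100.0 * missing / len(eligible) if eligible else float("nan")

sample_returns = [
    {"tenure": "Owned"},    # answered
    {"tenure": ""},         # blank -> item nonresponse
    {"tenure": "SKIPPED"},  # not eligible for the item
    {"tenure": "N/A"},      # invalid but present -> counts as answered on Internet
]
print(item_nonresponse_rate(sample_returns, "tenure"))  # 33.33...
```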

4. RESULTS
While any test of an Internet response option presents numerous items for analysis, our main focus in
this test was the effect of providing an Internet response option on the overall self-administered
response rates. Besides these rates, we looked at the related items to get an overall picture of the
effects of the new response mode and to gauge potential cost savings: Internet usage rates, Internet
access rates and Internet break-off rates, item nonresponse rates, demographic profiles of respondents
by mode and treatment, speed of responses, and amount of multiple returns. Again, we conducted the
analyses separately for each stratum to determine which notification strategy treatment performed best
in each stratum.

4.1 Does offering an Internet response option change the total self-administered (including
mail and Internet) response rate?
The self-administered response rate is the percent of all sampled addresses[2] that provided a non-blank mail, Internet or Telephone Questionnaire Assistance (TQA)[3] response. Current ACS operations
consider a form to be non-blank (and eligible for FEFU) even if there is only minimal information
provided, specifically, a phone number or name of a household member. Thus, some Internet cases
that broke off before completing the survey are still considered responses in these rates.
Also, both mail and Internet responses may ultimately be deemed not complete enough to be
processed, so these rates may be slightly inflated, but the rates of this are very low (about 0.1 percent)
(U.S. Census Bureau, 2010).

[2] The sample was selected only from mailable cases.
[3] The TQA process allows respondents to call a toll-free number to receive help or complete the survey. TQA responses are included with mail responses because they usually occur during the mail data collection month.
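As a sketch of how this headline measure might be computed from a case-level file: the weighted share of sampled (mailable) addresses with a non-blank return checked in by the cutoff. The record layout is hypothetical; the non-blank rule and the weighting come from the text above and Section 2.6.

```python
from datetime import date

def self_admin_response_rate(cases: list[dict], cutoff: date) -> float:
    """Weighted percent of sampled addresses with a non-blank mail, Internet,
    or TQA return checked in by `cutoff`; weights reflect the probability of
    selection into the sample, per Section 2.6."""
    total_weight = sum(c["weight"] for c in cases)
    responded_weight = sum(
        c["weight"]
        for c in cases
        if c.get("checkin_date") is not None
        and c["checkin_date"] <= cutoff
        and c.get("nonblank", False)  # at least a phone number or a person's name
    )
    return 100.0 * responded_weight / total_weight

cases = [
    {"weight": 40.0, "checkin_date": date(2011, 4, 10), "nonblank": True},
    {"weight": 40.0, "checkin_date": None},                               # nonrespondent
    {"weight": 40.0, "checkin_date": date(2011, 5, 3), "nonblank": True},  # after cutoff
]
print(self_admin_response_rate(cases, date(2011, 4, 28)))  # 33.33...
```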

The rates presented in this report are different from the mode-specific and overall survey response rates
that ACS publishes since we do not know the eligibility status of the addresses in the sample without
personal visit follow-up, and thus we cannot remove vacant or nonexistent units from the denominator.
Table 4 contains the self-administered response rates for each treatment and Control by strata. These
rates indicate the amount of self-response received at the time when we would normally transition to
nonresponse follow-up by CATI, after the first month of data collection (April 28, 2011). The table also
includes the percent of sampled cases that responded by Internet. Table 5 contains statistical testing of
the total self-administered response rate according to the three-step process identified in Section 2.6 for
both strata for the same time period.
Table 4. Self-Administered Response Rates and Internet Response Rates by Notification Strategy and Stratum (through April 28, 2011)

                            Control       Prominent    Not Prominent   Push         Push
Stratum                     (Mail only)   Choice       Choice          Regular      Accelerated
Targeted
  Response Rate (SE)        38.1 (0.2)    38.3 (0.4)   37.6 (0.4)      31.1 (0.3)   40.6 (0.4)
  INT Response Rate (SE)    N/A           9.8 (0.2)    3.5 (0.2)       28.6 (0.3)   28.1 (0.4)
Not Targeted
  Response Rate (SE)        29.7 (0.2)    30.4 (0.4)   29.8 (0.3)      19.8 (0.4)   29.8 (0.4)
  INT Response Rate (SE)    N/A           6.3 (0.2)    2.0 (0.1)       17.1 (0.3)   17.3 (0.3)

Source: U.S. Census Bureau, 2011 ACS Internet Test, April to May 2011

Table 5. Differences in Self-Administered Response Rates by Notification Strategy and Stratum (through April 28, 2011)

                                          Targeted            Not Targeted
Compare Choice Strategies
  Difference (Prom - Not Prom) (SE)       0.7 (0.5)           0.6 (0.5)
  Best                                    Prominent Choice    Prominent Choice
Compare Push Strategies
  Difference (Regular - Accel) (SE)       -9.5* (0.5)         -10.1* (0.5)
  Best                                    Push Accelerated    Push Accelerated
Compare Best Choice and Best Push
  Difference (Choice - Push) (SE)         -2.3* (0.6)         0.5 (0.6)
  Best                                    Push Accelerated    Prominent Choice
Compare Best Strategy and Control
  Difference (Best - Control) (SE)        2.6* (0.5)          0.7 (0.4)
  Best                                    Push Accelerated    Prominent Choice

Source: U.S. Census Bureau, 2011 ACS Internet Test, April to May 2011
* Indicates statistical significance at α<0.1, controlling for multiple comparisons.

Offering the choice between Internet and mail, regardless of how prominently that choice was
advertised, achieved self-response rates that tracked closely to offering mail only, in both strata. This
result is very positive considering the substantial decrease in self-response we experienced when we
provided a choice between modes in the 2000 ACS Internet test (Griffin et al., 2001). As expected, more
cases responded by Internet in the Prominent Choice compared to the Not Prominent Choice.

Surprisingly, in Targeted areas, self-response rates for the Push Accelerated strategy were better than those for the Prominent Choice (by 2.3 percentage points) and the Control (by 2.6 percentage points). This is the first test in which the Census Bureau has seen a push strategy perform well in a household survey. Moreover, the majority of respondents in the Push Accelerated treatment used the Internet.
Perhaps the most unexpected finding was the strong performance of the Push Accelerated strategy in Not Targeted areas. Self-response rates were not significantly different from the rates for the Choice strategies or the Control.[4] As in Targeted areas, the majority of response in Push Accelerated came from the Internet.
Comparing the two Push strategies clearly shows that moving the mailing of the paper questionnaire to nonrespondents up by one week was the key to the success of this strategy in both strata. Moving this mailing up allowed more time for mail returns to be received before we typically begin the next stage of data collection (nonresponse follow-up by CATI). As mentioned in the limitations, the regular ACS operational schedule (as implemented in April 2011) provided only a seven-day window between the mailout of the paper form to nonrespondents and the time when we typically begin CATI nonresponse follow-up. This is not enough time for households receiving the paper form for the first time to return a response. In fact, if we look at response rates for Push Regular and Push Accelerated 14 days after we mailed the paper questionnaire to nonrespondents (May 5 and April 28, respectively), the rates are in the range we would expect. Thus, the Push Regular treatment was simply at a disadvantage because of the ACS operational schedule for the month of April 2011.
As we will discuss in Section 4.3, we observed a fair number of Internet break-offs (cases that did not get to the last screen of the survey) in this test. Most of these break-offs had enough data to be considered non-blank, so they were included as responses in the rates in Table 4. However, we had some concerns about whether Internet break-offs should be considered responses. While we include partially completed mail returns as responses, mail respondents signify that they have provided as much information as they are willing to by the sheer act of sending back the form. On the Internet, we do not know whether households that started but did not complete their survey intended to come back and finish it later. The decision on whether to treat Internet break-offs as responses affects the response rate, so we wanted to study the impact on response rates of removing some Internet break-offs from the respondent pool.
First, we classified Internet break-offs by how far the respondent made it through the survey. The survey has three main sections: basic demographic questions (age/date of birth, relationship, sex, race, and Hispanic origin) for each person in the household, housing questions, and detailed questions about each person. A response was deemed a "sufficient partial" when the respondent got to the first question in the detailed questions section for the first person in the household, which is the same criterion used for CATI/CAPI. An "insufficient partial" response did not get far enough into the survey to become a sufficient partial.
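A sketch of this classification logic (the screen-progress flags are hypothetical paradata fields, not the instrument's actual names):

    # Illustrative sketch -- field names are hypothetical.
    def classify_internet_return(case):
        """Classify an Internet return by how far the respondent progressed."""
        if case["reached_last_screen"]:
            return "complete"
        if case["reached_first_detailed_question_person1"]:
            # Same criterion used for CATI/CAPI sufficient partials.
            return "sufficient partial"
        return "insufficient partial"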
We then recalculated the response rates in Table 4 after removing the Internet insufficient partials
(Tables 6 and 7).

[4] Though not reflected in Table 5, the Push Accelerated strategy was tested against the Control in the Not Targeted stratum, and the difference was not statistically significant.

Table 6. Self-Administered Response Rates and Internet Response Rates (excluding Internet break-offs that were insufficient partials) by Notification Strategy and Stratum (through April 28, 2011; standard errors in parentheses)

Stratum                  Control       Prominent    Not Prominent   Push          Push
                         (Mail only)   Choice       Choice          Regular       Accelerated
Targeted
  Response Rate          38.1 (0.2)    38.1 (0.4)   37.5 (0.4)      29.9 (0.3)    39.6 (0.4)
  INT Response Rate      N/A           9.6 (0.2)    3.4 (0.2)       27.5 (0.3)    27.0 (0.4)
Not Targeted
  Response Rate          29.7 (0.2)    30.2 (0.4)   29.7 (0.3)      19.0 (0.4)    29.3 (0.4)
  INT Response Rate      N/A           6.1 (0.2)    2.0 (0.1)       16.4 (0.3)    16.7 (0.3)
Source: U.S. Census Bureau, 2011 ACS Internet Test, April to May 2011

Table 7. Differences in Self-Administered Response Rates (excluding Internet break-offs that were insufficient partials) by Notification Strategy and Stratum (through April 28, 2011; standard errors in parentheses)

Targeted
  Compare Choice Strategies (Prom - Not Prom):          0.6 (0.5)    Best: Prom
  Compare Push Strategies (Reg - Accel):               -9.6* (0.6)   Best: Push Accel
  Compare Best Choice and Best Push (Choice - Push):   -1.5* (0.6)   Best: Push Accel
  Compare Best Strategy and Control (Best - Control):   1.5* (0.5)   Best: Push Accel
Not Targeted
  Compare Choice Strategies (Prom - Not Prom):          0.65 (0.5)   Best: Prom
  Compare Push Strategies (Reg - Accel):              -10.2* (0.5)   Best: Push Accel
  Compare Best Choice and Best Push (Choice - Push):    0.9 (0.6)    Best: Prom Choice
  Compare Best Strategy and Control (Best - Control):   0.5 (0.4)    Best: Prom Choice
Source: U.S. Census Bureau, 2011 ACS Internet Test, April to May 2011
* Indicates statistical significance at α<0.1, controlling for multiple comparisons.

The removal of these cases affected the Push treatments most because Internet usage was very high in those treatments. In the Push Accelerated treatment, the response rates were reduced by 0.5 to 1.0 percentage points (compared to Table 4), while the rates in Push Regular were reduced by 0.8 to 1.2 percentage points. Nonetheless, the overall conclusion is still the same: Push Accelerated still has the highest response rate in the Targeted stratum and is not different from the Prominent Choice treatment or the Control in the Not Targeted stratum.
We did not conduct CATI nonresponse follow-up on cases in the experimental treatments in this test (Control cases were included in CATI starting May 1, 2011). However, we did send the fifth mailing piece, the additional reminder postcard, to households that did not respond by mail or Internet and for which we could not find a phone number.[5] These cases typically receive the postcard instead of a CATI call early in the second month of data collection (for this test, May 5, 2011).
[5] Households that accessed the Internet but did not provide enough data to be considered a sufficiently complete response were mailed the additional postcard. Internet respondents who provided a sufficiently complete response were mistakenly excluded from this postcard mailing.

There were no remaining self-response rate differences among the strategies in the Targeted stratum at the end of the second month of data collection.[6] The Prominent Choice treatment had significantly higher self-response at the end of the data collection period than the Push Accelerated treatment in the Not Targeted stratum. Again, these rates do not simulate the rates we would expect if the treatment cases had gone to CATI nonresponse follow-up (see Appendix B).
[6] The self-response rate for the Control (mail only) at the end of the data collection period was significantly higher than the rates for the experimental treatments because CATI nonresponse follow-up calls resulted in some mail returns (treatment cases did not go to CATI). We removed the Control from Tables B-1 and B-2 since this is an unfair comparison.
The remaining analyses in this report are based on all responses, including Internet break-offs that were insufficient partials, unless otherwise noted.

4.2 Are the Internet usage rates statistically different by notification strategy?
In Tables 4 and 6 above, we displayed the percent of sampled households that used the Internet to respond. The Internet usage rate is a related measure that shows the percent of all responses that came from the Internet by the end of the first month of data collection (Table 8). We expected that the Prominent Choice treatment would have more Internet response than the Not Prominent Choice since the message about the mode choice was featured prominently in that treatment. We also anticipated that the Push treatments would gain more Internet response than the Choice treatments because we did not provide a paper questionnaire until a few weeks into the data collection period. We compared the percent of responses that came from the Internet across the treatments in Table 9.
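Computationally, the usage rate simply changes the denominator from sampled cases to responses (a sketch with hypothetical field names):

    # Internet usage rate: percent of all responses (not of all sampled
    # cases) that arrived via the Internet.
    def internet_usage_rate(responses):
        internet = sum(1 for r in responses if r["mode"] == "internet")
        return 100.0 * internet / len(responses)

For example, Push Regular in the Targeted stratum has an Internet response rate of 28.6 against a total response rate of 31.1, and 28.6 / 31.1 is approximately 92.0 percent, consistent with Tables 4 and 8.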
Table 8. Internet Usage Rates by Notification Strategy and Stratum (through April 28, 2011; standard errors in parentheses)

Stratum               Prominent    Not Prominent   Push          Push
                      Choice       Choice          Regular       Accelerated
Targeted
  INT Usage Rate      25.7 (0.6)   9.4 (0.4)       92.0 (0.4)    69.1 (0.6)
Not Targeted
  INT Usage Rate      20.6 (0.6)   6.9 (0.4)       86.5 (0.6)    57.9 (0.7)
Source: U.S. Census Bureau, 2011 ACS Internet Test, April to May 2011

Table 9. Differences in Internet Usage Rates by Notification Strategy and Stratum (through April 28, 2011; standard errors in parentheses)

Targeted
  Compare Choice Strategies (Prom - Not Prom):         16.3* (0.7)   Best: Prom
  Compare Push Strategies (Reg - Accel):               22.9* (0.8)   Best: Push Reg
  Compare Best Choice and Best Push (Choice - Push):  -66.4* (0.8)   Best: Push Reg
Not Targeted
  Compare Choice Strategies (Prom - Not Prom):         13.8* (0.7)   Best: Prom
  Compare Push Strategies (Reg - Accel):               28.6* (0.9)   Best: Push Reg
  Compare Best Choice and Best Push (Choice - Push):  -65.8* (0.9)   Best: Push Reg
Source: U.S. Census Bureau, 2011 ACS Internet Test, April to May 2011
* Indicates statistical significance at α<0.1, controlling for multiple comparisons.

As expected, there were significantly more Internet responses in the Prominent Choice treatment than in the Not Prominent Choice treatment in both strata. In fact, the Internet usage rate for Prominent Choice was almost three times the rate for Not Prominent Choice. Although the difference in Internet usage between the Choice treatments is large, it is encouraging that seven to nine percent of responses came from the Internet in the Not Prominent treatment, since we advertised the online option only in a subtle fashion on the paper questionnaire. We chose to advertise on the questionnaire because we have observed in cognitive testing that respondents tend to focus on the questionnaire and disregard the other materials in the mailing.
We also found that significantly more responses came from the Internet in the Push treatments than in the Choice treatments in both strata, by as much as 40 to 65 percentage points. In fact, the majority of responses in both Push treatments came from the Internet in both strata. The motivation behind the Push treatments was to drive response to the Internet to the extent possible, and the Push approach was certainly successful in doing that.
The Push Regular treatment appears to have a greater proportion of Internet response than Push Accelerated at the time we would identify the CATI nonresponse follow-up universe, but this difference is confounded by the fact that overall response is much lower in the Push Regular treatment (due to the lack of mail returns). By the end of the second month of data collection, Internet usage was marginally significantly higher in Push Regular than in Push Accelerated in the Targeted stratum (tables not shown).

4.3 Did the rate of accessing the Internet instrument and subsequent break-offs differ among notification strategies?
We wanted to study response behavior surrounding the online survey. To do this, we computed the following three measures (sketched in the code below):
- the percent of sampled units in each treatment that accessed the online survey by the end of the second month of data collection (May 2011);
- the percent of those that accessed the survey but never reached the end of the survey (break-offs); and
- the percent of those that broke off the online survey who ultimately returned a paper questionnaire.
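Note that the denominators differ across the three measures (a sketch with hypothetical field names):

    # Illustrative sketch of the three measures -- field names hypothetical.
    def access_and_breakoff_measures(sample):
        accessed = [c for c in sample if c["accessed_online_survey"]]
        breakoffs = [c for c in accessed if not c["reached_last_screen"]]
        breakoff_mail = [c for c in breakoffs if c["returned_mail_form"]]
        return {
            # percent of sampled units that accessed the online survey
            "access_rate": 100.0 * len(accessed) / len(sample),
            # percent of accessors who never reached the last screen
            "breakoff_rate": 100.0 * len(breakoffs) / len(accessed),
            # percent of break-offs that later returned a paper form
            "breakoff_mail_return_rate": 100.0 * len(breakoff_mail) / len(breakoffs),
        }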
Table 10 contains the access and break-off rates by treatment and stratum, as well as the percent of break-offs that returned a mail form, and Table 11 contains significance testing of these rates.

Table 10. Internet Access Rates, Break-off Rates, and Percent of Break-offs that Returned a Mail Form by Notification Strategy and Stratum (through May 31, 2011; standard errors in parentheses)

Stratum                          Prominent    Not Prominent   Push          Push
                                 Choice       Choice          Regular       Accelerated
Targeted
  Accessed                       12.4 (0.3)   4.4 (0.2)       32.3 (0.3)    30.9 (0.4)
  Break-off                      12.3 (0.7)   10.2 (1.1)      17.0 (0.5)    16.9 (0.6)
  Break-offs with mail return    12.7 (2.3)   20.9 (5.0)      11.7 (1.1)    10.2 (1.1)
Not Targeted
  Accessed                       7.9 (0.2)    2.5 (0.1)       19.6 (0.3)    19.0 (0.3)
  Break-off                      13.0 (0.9)   12.8 (1.7)      17.6 (0.7)    16.9 (0.7)
  Break-offs with mail return    11.1 (2.4)   12.5 (4.9)      15.2 (1.3)    13.1 (1.5)
Source: U.S. Census Bureau, 2011 ACS Internet Test, April to May 2011

Table 11. Differences in Internet Access Rates, Break-off Rates, and Percent of Break-offs that Returned a Mail Form by Notification Strategy and Stratum (through May 31, 2011; standard errors in parentheses)

Targeted
  Accessed
    Compare Choice Strategies (Prom - Not Prom):         8.0* (0.3)    Best: Prom
    Compare Push Strategies (Reg - Accel):               1.4* (0.6)    Best: Push Reg
    Compare Best Choice and Best Push (Choice - Push): -19.9* (0.5)    Best: Push Reg
  Break-off
    Compare Choice Strategies (Prom - Not Prom):         2.1 (1.3)     Best: Not Prom
    Compare Push Strategies (Reg - Accel):               0.1 (0.7)     Best: Push Accel
    Compare Best Choice and Best Push (Choice - Push):  -6.7* (1.2)    Best: Not Prom
  Break-offs with mail return
    Compare Choice Strategies (Prom - Not Prom):        -8.2 (5.4)     Best: Not Prom
    Compare Push Strategies (Reg - Accel):               1.5 (1.4)     Best: Push Reg
    Compare Best Choice and Best Push (Choice - Push):   9.2 (5.2)     Best: Not Prom
Not Targeted
  Accessed
    Compare Choice Strategies (Prom - Not Prom):         5.4* (0.3)    Best: Prom
    Compare Push Strategies (Reg - Accel):               0.6 (0.4)     Best: Push Reg
    Compare Best Choice and Best Push (Choice - Push): -11.8* (0.4)    Best: Push Reg
  Break-off
    Compare Choice Strategies (Prom - Not Prom):         0.1 (2.0)     Best: Not Prom
    Compare Push Strategies (Reg - Accel):               0.8 (1.0)     Best: Push Accel
    Compare Best Choice and Best Push (Choice - Push):  -4.0* (1.8)    Best: Not Prom
  Break-offs with mail return
    Compare Choice Strategies (Prom - Not Prom):        -1.4 (5.3)     Best: Not Prom
    Compare Push Strategies (Reg - Accel):               2.1 (2.0)     Best: Push Reg
    Compare Best Choice and Best Push (Choice - Push):  -2.7 (4.9)     Best: Push Reg
Source: U.S. Census Bureau, 2011 ACS Internet Test, April to May 2011
* Indicates statistical significance at α<0.1, controlling for multiple comparisons.

As expected, significantly more households accessed the online survey in the Prominent Choice treatment than in the Not Prominent Choice treatment, owing to the differences in how we advertised the Internet option. Similar to the Internet usage rates presented in Table 8, we also found that a much higher percent of households accessed the Internet survey in the Push treatments than in the Choice treatments in both strata. The Push Regular treatment had a marginally significantly higher access rate than Push Accelerated in the Targeted stratum.


Next, we turned our attention to the break-off rates. The rates are within the range of what we have seen in other studies (Peytchev, 2009; Griffin et al., 2001; Bentley et al., 2011). We did not observe any differences in break-off rates between the two Choice treatments or between the two Push treatments in either stratum. We did find, however, that significantly more households broke off in the Push treatments than in the Choice treatments. We were not surprised by this finding. Most households that were pushed to use the Internet did not see the paper questionnaire before starting the online survey,[7] so they may not have anticipated the length or content of the survey when attempting to respond. It is also possible that respondents whom we pushed toward using the Internet were not comfortable using the technology, which may also have contributed to the increased break-off rates.
Looking across treatments, approximately 10 to 20 percent of the Internet break-offs ended up returning a mail form. We plan to look at these cases more closely in future research to determine what factors caused them to abandon the Internet survey and eventually respond by mail. There were no significant differences in the rate of break-offs returning a mail form across the treatments.

4.4 How do item nonresponse rates differ between Internet and mail responses as well as
notification strategies?
The purpose of this analysis was to study question-level response behavior between the two data
collection modes and notification strategies. We first explored item nonresponse across mail and
Internet returns to compare the completeness of the returns by mode. These rates were computed on
raw, pre-edited data, so they do not reflect final ACS item nonresponse rates.
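As a simplified sketch of this computation (hypothetical field names; the production edit rules are not reproduced here):

    # An item counts as nonresponse when a return should have answered it
    # (per the skip patterns) but left it blank; computed on raw returns.
    def item_nonresponse_rate(returns, item):
        eligible = [r for r in returns if item in r["required_items"]]
        missing = sum(1 for r in eligible if r["answers"].get(item) is None)
        return 100.0 * missing / len(eligible)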
We found that the questions in the later part of the questionnaire (the detailed person section) were much more likely to suffer from item nonresponse on the Internet than by mail. In fact, item nonresponse rates for topics that appear in that section, such as place of birth, educational attainment, language spoken at home, and disability, were almost double the rates for mail responses. We did find, however, that Internet item nonresponse rates were similar to (and in some cases better than) the rates for mail responses in the earlier sections of the questionnaire (basic demographic and housing questions).
Because Internet item nonresponse was worse in the detailed person section toward the end of the survey, we suspected that Internet break-offs were to blame. To confirm this theory, we re-computed the item nonresponse rates in Table 12 (see the "Internet (excl. Insuff. Partials)" columns) after removing the Internet break-offs that did not provide enough data to be considered sufficiently complete.

[7] Most Internet response in the Push treatments came in before the paper questionnaire was mailed to nonresponding households.

Table 12. Item Nonresponse Rates for Selected Questions by Mode and Stratum (for Households that Responded by April 28, 2011; standard errors in parentheses)

                                          Targeted                                     Not Targeted
Variable                    Internet      Internet        Mail         Internet      Internet        Mail
                                          (excl. Insuff.                             (excl. Insuff.
                                          Partials)                                  Partials)
Basic Demographic Questions
  Age/DOB                   1.8* (0.1)    0.7 (0.1)       0.9 (0.1)    1.6* (0.1)    0.5** (0.1)     1.1 (0.1)
  Sex                       0.4** (0.1)   0.1** (0.0)     2.2 (0.1)    0.5** (0.1)   0.2** (0.0)     2.6 (0.1)
  Relationship              0.2** (0.1)   0.0** (0.0)     0.6 (0.1)    0.2** (0.0)   0.0** (0.0)     0.8 (0.1)
  Hispanic Origin           1.6** (0.1)   0.4** (0.1)     4.6 (0.2)    1.4** (0.2)   0.3** (0.1)     6.6 (0.3)
  Race                      1.6 (0.1)     0.4** (0.1)     1.9 (0.1)    1.5** (0.2)   0.3** (0.1)     2.6 (0.2)
Housing Questions
  Type of Building          1.6* (0.1)    0.1** (0.0)     1.2 (0.1)    1.5** (0.2)   0.0** (0.0)     2.0 (0.2)
  Number of Rooms           2.5 (0.1)     0.6** (0.1)     2.1 (0.1)    2.3** (0.2)   0.4** (0.1)     3.2 (0.2)
  Number of Vehicles        2.6* (0.1)    0.8** (0.1)     1.5 (0.1)    2.6* (0.2)    0.8** (0.1)     2.0 (0.2)
  Food Stamps               2.9* (0.2)    0.7** (0.1)     1.8 (0.1)    3.0 (0.2)     0.8** (0.1)     2.6 (0.2)
  Tenure                    2.8** (0.2)   0.6** (0.1)     3.5 (0.2)    2.9** (0.2)   0.7** (0.1)     4.7 (0.2)
Detailed Person Questions
  Place of Birth            11.6* (0.4)   8.7* (0.3)      4.0 (0.2)    11.9* (0.4)   9.1* (0.4)      5.7 (0.3)
  Educational Attainment    10.3* (0.3)   8.4* (0.3)      5.5 (0.2)    10.8* (0.4)   8.9 (0.4)       8.0 (0.3)
  Speak Another Language    10.6* (0.3)   8.6* (0.3)      4.9 (0.2)    10.9* (0.4)   9.0* (0.4)      6.9 (0.3)
  Health Insurance          12.6* (0.4)   9.8* (0.3)      4.6 (0.2)    13.0* (0.5)   10.1* (0.4)     6.5 (0.3)
  Difficulty Hearing        12.5* (0.4)   9.7* (0.3)      4.5 (0.2)    12.9* (0.4)   10.1* (0.4)     6.3 (0.3)
  Work Last Week            10.0* (0.3)   8.1* (0.3)      5.6 (0.2)    10.4* (0.4)   8.5* (0.3)      7.5 (0.3)
Source: U.S. Census Bureau, 2011 ACS Internet Test, April to May 2011
* Indicates that mail is statistically significantly lower than Internet at α<0.1.
** Indicates that Internet is statistically significantly lower than mail at α<0.1.

When we excluded Internet break-offs that were insufficient partial responses from the rates, we saw some improvement in the item nonresponse rates for the Internet, but they were still higher than the rates for mail for the detailed person questions. For the demographic and housing sections, however, we saw substantial improvements in the Internet item nonresponse rates. In fact, with those break-offs removed, item nonresponse rates were mostly lower for Internet returns than for mail returns in the demographic and housing sections.


One other interesting observation from Table 12 is that the Internet achieves item nonresponse rates that are in the same range in the Targeted and Not Targeted strata. Mail cases, on the other hand, trend toward more item nonresponse in the Not Targeted stratum than in the Targeted stratum. This may suggest that using the Internet has some benefit for item nonresponse in the Not Targeted stratum.
Our focus thus far has been on comparing Internet and mail responses, but we also wanted to study the item nonresponse rates for the treatments since they contain a blend of Internet and mail responses. Table 13 contains item nonresponse rates for each treatment when we included all Internet break-offs. Table 14 displays item nonresponse rates when we excluded the Internet break-offs that were insufficient partial responses.
As Table 13 shows, item nonresponse rates for each treatment, particularly for the detailed person questions, are affected by the amount of Internet response in that treatment. Ninety-two percent of responses in Push Regular (in the Targeted stratum) came from the Internet, so the item nonresponse rates for that treatment are most affected by the Internet break-offs, followed by Push Accelerated (69 percent Internet response in Targeted). The Not Prominent Choice treatment, where Internet response is only nine percent in Targeted, was least affected by the Internet break-offs.


Table 13. Item Nonresponse Rates for Selected Questions by Notification Strategy (for Households that Responded by April 28, 2011; standard errors in parentheses)

Targeted stratum
Variable                    Control       Prominent    Not Prominent   Push          Push
                            (mail only)   Choice       Choice          Regular       Accelerated
Basic Demographic Questions
  Age/DOB                   0.8 (0.1)     0.7 (0.1)    1.0 (0.1)       1.9 (0.2)     1.7 (0.2)
  Sex                       2.2 (0.1)     1.9 (0.1)    1.7 (0.1)       0.6 (0.1)     0.9 (0.1)
  Relationship              0.6 (0.0)     0.5 (0.1)    0.5 (0.1)       0.2 (0.1)     0.3 (0.1)
  Hispanic Origin           4.1 (0.1)     3.6 (0.2)    3.6 (0.2)       2.0 (0.2)     2.8 (0.2)
  Race                      1.9 (0.1)     1.6 (0.2)    1.6 (0.2)       1.9 (0.2)     1.8 (0.2)
Housing Questions
  Type of Building          1.4 (0.1)     0.9 (0.1)    1.1 (0.1)       1.8 (0.2)     1.8 (0.2)
  Number of Rooms           2.3 (0.1)     1.8 (0.2)    1.9 (0.2)       2.9 (0.2)     2.7 (0.2)
  Number of Vehicles        1.7 (0.1)     1.2 (0.2)    1.8 (0.2)       2.9 (0.2)     2.4 (0.2)
  Food Stamps               1.7 (0.1)     1.7 (0.2)    1.7 (0.2)       3.4 (0.2)     2.7 (0.2)
  Tenure                    3.3 (0.1)     2.9 (0.3)    3.2 (0.2)       3.3 (0.2)     3.2 (0.2)
Detailed Person Questions
  Place of Birth            3.2 (0.1)     3.8 (0.3)    5.5 (0.3)       12.4 (0.5)    10.6 (0.5)
  Educational Attainment    4.7 (0.1)     5.0 (0.3)    6.4 (0.3)       11.1 (0.5)    9.9 (0.4)
  Speak Another Language    4.0 (0.1)     4.4 (0.3)    6.0 (0.3)       11.2 (0.5)    10.0 (0.4)
  Health Insurance          3.7 (0.1)     4.4 (0.3)    6.2 (0.3)       13.5 (0.5)    11.5 (0.5)
  Difficulty Hearing        3.7 (0.1)     4.4 (0.2)    6.1 (0.3)       13.4 (0.6)    11.3 (0.5)
  Work Last Week            4.7 (0.1)     4.7 (0.2)    6.4 (0.3)       10.8 (0.4)    9.7 (0.4)

Not Targeted stratum
Variable                    Control       Prominent    Not Prominent   Push          Push
                            (mail only)   Choice       Choice          Regular       Accelerated
Basic Demographic Questions
  Age/DOB                   1.1 (0.0)     0.9 (0.1)    1.1 (0.2)       1.9 (0.4)     1.4 (0.2)
  Sex                       2.5 (0.1)     2.4 (0.2)    1.9 (0.2)       0.8 (0.2)     1.3 (0.1)
  Relationship              0.8 (0.0)     0.8 (0.1)    0.6 (0.1)       0.3 (0.1)     0.3 (0.1)
  Hispanic Origin           5.9 (0.1)     5.6 (0.3)    5.4 (0.4)       2.2 (0.4)     3.5 (0.3)
  Race                      2.5 (0.1)     2.4 (0.2)    2.4 (0.2)       1.8 (0.4)     1.8 (0.2)
Housing Questions
  Type of Building          2.4 (0.1)     1.4 (0.2)    1.9 (0.2)       1.9 (0.3)     2.1 (0.3)
  Number of Rooms           3.3 (0.1)     2.8 (0.3)    2.8 (0.3)       2.6 (0.3)     3.1 (0.3)
  Number of Vehicles        2.4 (0.1)     1.6 (0.2)    2.1 (0.2)       3.0 (0.3)     2.5 (0.3)
  Food Stamps               2.5 (0.1)     2.3 (0.2)    2.5 (0.2)       3.5 (0.3)     2.9 (0.3)
  Tenure                    4.7 (0.1)     4.2 (0.3)    4.2 (0.3)       3.7 (0.4)     3.7 (0.3)
Detailed Person Questions
  Place of Birth            5.2 (0.1)     5.7 (0.4)    6.4 (0.4)       13.0 (0.6)    10.3 (0.6)
  Educational Attainment    7.5 (0.1)     7.7 (0.4)    8.0 (0.4)       11.8 (0.7)    10.3 (0.5)
  Speak Another Language    6.4 (0.1)     6.5 (0.3)    7.3 (0.4)       11.8 (0.6)    10.1 (0.6)
  Health Insurance          5.9 (0.1)     6.4 (0.4)    7.1 (0.4)       14.0 (0.7)    11.4 (0.6)
  Difficulty Hearing        5.8 (0.1)     5.9 (0.4)    7.2 (0.4)       13.9 (0.7)    11.3 (0.6)
  Work Last Week            7.0 (0.2)     7.2 (0.4)    7.5 (0.5)       11.4 (0.6)    9.8 (0.5)
Source: U.S. Census Bureau, 2011 ACS Internet Test, April to May 2011

When we removed the Internet break-offs that were insufficient partial responses (Table 14), we saw
some improvement in the item nonresponse rates, particularly for the treatments heaviest in Internet
returns.


Table 14. Item Nonresponse Rates for Selected Questions by Notification Strategy (excluding Internet break-offs that were insufficient partials) (for Households that Responded by April 28, 2011; standard errors in parentheses)

Targeted stratum
Variable                    Control       Prominent    Not Prominent   Push          Push
                            (mail only)   Choice       Choice          Regular       Accelerated
Basic Demographic Questions
  Age/DOB                   0.8 (0.1)     0.7 (0.1)    0.8 (0.1)       0.7 (0.1)     0.9 (0.1)
  Sex                       2.2 (0.1)     1.9 (0.1)    1.6 (0.1)       0.2 (0.0)     0.8 (0.1)
  Relationship              0.6 (0.0)     0.5 (0.1)    0.5 (0.1)       0.0 (0.0)     0.3 (0.1)
  Hispanic Origin           4.1 (0.1)     3.5 (0.2)    3.5 (0.2)       0.7 (0.1)     1.9 (0.2)
  Race                      1.9 (0.1)     1.6 (0.1)    1.5 (0.1)       0.5 (0.1)     0.8 (0.1)
Housing Questions
  Type of Building          1.4 (0.1)     0.8 (0.1)    0.9 (0.1)       0.2 (0.1)     0.6 (0.1)
  Number of Rooms           2.3 (0.1)     1.7 (0.2)    1.7 (0.2)       0.9 (0.1)     1.2 (0.2)
  Number of Vehicles        1.7 (0.1)     1.1 (0.1)    1.6 (0.2)       0.8 (0.1)     1.0 (0.1)
  Food Stamps               1.7 (0.1)     1.6 (0.2)    1.4 (0.2)       0.9 (0.1)     1.0 (0.1)
  Tenure                    3.3 (0.1)     2.9 (0.2)    3.0 (0.2)       0.8 (0.1)     1.5 (0.2)
Detailed Person Questions
  Place of Birth            3.2 (0.1)     3.6 (0.2)    5.0 (0.3)       9.1 (0.4)     8.2 (0.4)
  Educational Attainment    4.7 (0.1)     4.9 (0.3)    6.1 (0.3)       8.9 (0.4)     8.4 (0.4)
  Speak Another Language    4.0 (0.1)     4.3 (0.3)    5.7 (0.3)       9.0 (0.4)     8.5 (0.4)
  Health Insurance          3.7 (0.1)     4.3 (0.3)    5.7 (0.3)       10.3 (0.4)    9.2 (0.4)
  Difficulty Hearing        3.7 (0.1)     4.2 (0.2)    5.7 (0.3)       10.1 (0.5)    9.0 (0.4)
  Work Last Week            4.7 (0.1)     4.7 (0.2)    6.1 (0.3)       8.7 (0.4)     8.2 (0.4)

Not Targeted stratum
Variable                    Control       Prominent    Not Prominent   Push          Push
                            (mail only)   Choice       Choice          Regular       Accelerated
Basic Demographic Questions
  Age/DOB                   1.1 (0.0)     0.9 (0.1)    0.9 (0.1)       0.6 (0.2)     0.8 (0.1)
  Sex                       2.5 (0.1)     2.4 (0.2)    1.9 (0.2)       0.4 (0.1)     1.1 (0.1)
  Relationship              0.8 (0.0)     0.8 (0.1)    0.6 (0.1)       0.1 (0.0)     0.3 (0.0)
  Hispanic Origin           5.9 (0.1)     5.6 (0.3)    5.2 (0.3)       0.9 (0.2)     2.8 (0.3)
  Race                      2.5 (0.1)     2.4 (0.2)    2.1 (0.2)       0.4 (0.1)     1.1 (0.1)
Housing Questions
  Type of Building          2.4 (0.1)     1.3 (0.2)    1.6 (0.2)       0.4 (0.1)     1.4 (0.2)
  Number of Rooms           3.3 (0.1)     2.8 (0.3)    2.5 (0.2)       0.6 (0.1)     2.1 (0.3)
  Number of Vehicles        2.4 (0.1)     1.6 (0.2)    1.8 (0.2)       1.0 (0.2)     1.4 (0.2)
  Food Stamps               2.5 (0.1)     2.3 (0.2)    2.1 (0.2)       1.1 (0.2)     1.7 (0.2)
  Tenure                    4.7 (0.1)     4.1 (0.3)    3.9 (0.3)       1.3 (0.2)     2.5 (0.2)
Detailed Person Questions
  Place of Birth            5.2 (0.1)     5.5 (0.3)    5.8 (0.3)       9.5 (0.5)     8.7 (0.5)
  Educational Attainment    7.5 (0.1)     7.6 (0.4)    7.6 (0.4)       9.5 (0.6)     9.3 (0.5)
  Speak Another Language    6.4 (0.1)     6.4 (0.3)    6.9 (0.4)       9.5 (0.6)     9.1 (0.5)
  Health Insurance          5.9 (0.1)     6.2 (0.4)    6.5 (0.4)       10.6 (0.6)    9.8 (0.5)
  Difficulty Hearing        5.8 (0.1)     5.8 (0.3)    6.6 (0.4)       10.5 (0.6)    9.8 (0.5)
  Work Last Week            7.0 (0.2)     7.1 (0.4)    7.2 (0.5)       9.1 (0.5)     8.8 (0.5)
Source: U.S. Census Bureau, 2011 ACS Internet Test, April to May 2011

As mentioned in the limitations section, we failed to send the nonresponse mailings to Internet break-offs that were considered sufficient partial responses, so we expect that sending that mailing will help reduce item nonresponse. It is hard to say to what extent it will help, but the second ACS Internet follow-up test will shed light on this issue.


4.5 Are there differences in the demographics of Internet respondents and mail
respondents? Across notification strategies?
Previous studies have shown that the characteristics of Internet respondents differ from those of mail respondents (Brady et al., 2004; Guarino, 2001; Lesser, 2010). We wanted to see if there were differences in the demographic characteristics of Internet respondents and mail respondents that suggested differences in self-selection into response modes.
For each stratum, we grouped together all Internet respondents regardless of notification strategy. We did the same for mail respondents across strategies (excluding Control panel production cases since they did not have the option to use the Internet). We then statistically compared selected demographic characteristics between Internet respondents and mail respondents to see if there were differences that may be due to respondents' self-selection into a mode. For the person-level items, we used the characteristics of the first person listed in the household roster (Person 1) to classify the household, although we know from past studies that Person 1 is not always the respondent (Hill et al., 2008; DeMaio et al., 1990).
As shown in Tables 15 and 16, compared to mail respondents, Internet respondents in both strata were more likely to be younger, female, Asian, or of some other race, to have more education, and to speak a language other than English at home. We also found that Internet respondents were less likely to be Black. Some of these demographic trends are evident in previous studies as well; in particular, age and education have often been correlated with Internet use (Lugtig et al., 2011; Guarino, 2001). We also saw that Internet respondents tend to live in larger households than mail respondents. While this may be related to differences in how we gather household size and the roster between the online survey and the mail form, we have also seen this trend in the 2003 and 2005 National Census Tests (Brady et al., 2004; Zajac et al., 2007).
With respect to the finding that females were more likely to respond by Internet than by mail, we have evidence that this finding may be a product of the assumption that Person 1 is the respondent. Studies of the mail form have suggested that married females sometimes list their husbands as Person 1 in the roster, even though they are completing the survey for the household (Hill et al., 2008; DeMaio et al., 1990). On the Internet, we asked for the name of the person completing the survey. If the respondent indicated that they lived in the household about which we were asking, they were automatically listed as Person 1. We believe these differences in how Person 1 is identified are driving the finding that the Internet has more female respondents than mail.
In addition to the differences between Internet and mail respondents mentioned above, we found further differences specific to the Targeted stratum: there, Internet respondents were more likely than mail respondents to be non-White and Hispanic.


Table 15. Demographic Characteristics for the Respondent (Person 1) for Internet and Mail Returns (excluding Control) in Targeted Stratum (for Households that Responded by April 28, 2011; standard errors in parentheses)

Characteristic              Internet       Mail           Internet – Mail
Age (mean)                  48.7 (0.1)     57.6 (0.2)     -8.9* (0.2)
Female                      48.7 (0.5)     40.8 (0.5)      7.9* (0.6)
Race
  White                     86.1 (0.3)     89.5 (0.3)     -3.3* (0.4)
  Black                     3.7 (0.2)      4.1 (0.2)      -0.4* (0.3)
  Am Ind/AK Native          0.2 (0.0)      0.3 (0.1)      -0.0 (0.1)
  Asian                     6.2 (0.2)      3.8 (0.2)       2.4* (0.3)
  Hawaiian/OPI              0.1 (0.0)      0.1 (0.0)       0.1 (0.0)
  Other                     1.6 (0.1)      0.7 (0.1)       0.9* (0.1)
  Multiple Races            2.1 (0.2)      1.6 (0.1)       0.5* (0.2)
Hispanic                    4.9 (0.2)      4.2 (0.2)       0.7* (0.3)
Education
  Less than High School     1.8 (0.1)      5.8 (0.2)      -4.0* (0.3)
  High School Graduate      11.9 (0.3)     23.4 (0.5)    -11.5* (0.6)
  More than High School     86.3 (0.4)     70.7 (0.5)     15.5* (0.6)
Household Size              2.66 (0.01)    2.28 (0.01)     0.39* (0.02)
Renter                      18.1 (0.4)     17.0 (0.4)      1.0 (0.5)
Only Speaks English         88.1 (0.4)     89.8 (0.3)     -1.7* (0.4)
Source: U.S. Census Bureau, 2011 ACS Internet Test, April to May 2011
* Indicates statistical significance at α<0.1.

Table 16. Demographic Characteristics for Respondent (Person 1) for Internet and Mail Returns (excluding Control) in Not Targeted Stratum (for Households that Responded by April 28, 2011; standard errors in parentheses)

Characteristic              Internet       Mail           Internet – Mail
Age (mean)                  48.2 (0.2)     58.3 (0.2)    -10.1* (0.3)
Female                      52.4 (0.6)     45.6 (0.5)      6.8* (0.8)
Race
  White                     84.7 (0.5)     85.6 (0.4)     -0.9 (0.6)
  Black                     5.8 (0.3)      8.3 (0.3)      -2.5* (0.4)
  Am Ind/AK Native          0.3 (0.1)      0.4 (0.1)      -0.1 (0.1)
  Asian                     5.0 (0.3)      2.4 (0.1)       2.6* (0.3)
  Hawaiian/OPI              0.1 (0.0)      0.1 (0.0)      -0.0 (0.0)
  Other                     1.7 (0.2)      1.1 (0.1)       0.7* (0.2)
  Multiple Races            2.3 (0.2)      2.1 (0.2)       0.2 (0.3)
Hispanic                    6.7 (0.3)      6.4 (0.2)       0.3 (0.3)
Education
  Less than High School     3.8 (0.3)      12.2 (0.4)     -8.4* (0.5)
  High School Graduate      16.3 (0.4)     30.3 (0.5)    -14.1* (0.6)
  More than High School     79.9 (0.5)     57.5 (0.5)     22.5* (0.7)
Household Size              2.55 (0.02)    2.11 (0.01)     0.44* (0.02)
Renter                      23.8 (0.6)     25.3 (0.5)     -1.5 (0.8)
Only Speaks English         88.1 (0.4)     89.2 (0.3)     -1.1* (0.5)
Source: U.S. Census Bureau, 2011 ACS Internet Test, April to May 2011
* Indicates statistical significance at α<0.1.

Tables 17 and 18 display the demographic profiles of responding households across the notification treatments. We included all persons within the responding households for this analysis. The intention of this analysis was to see if there was any impact of using the Internet on the characteristics of responding households. We did not do any significance testing between estimates since we were trying to identify trends rather than measure specific differences.
The first trend we observed was the impact of the lower response rate (with largely Internet returns) on the characteristics of those in the Push Regular treatment. Their characteristics looked out of sync with those in the other strategies on some dimensions, particularly age, education, race (White, Asian, and multiple races), and Hispanic origin. This was mostly because these results were generated at the end of the first data collection month, before most mail returns were received for this treatment. Because this treatment was out of sync due to its low response rate, we focused on the remaining treatments.


The characteristics of households in the two Choice treatments (Prominent and Not Prominent Choice) and the Control appear to be close in range. The Push Accelerated characteristics are in line with those of the Choice and Control treatments, except that Push Accelerated responding households (like those in Push Regular) appear to be younger and more educated, likely due to heavy Internet use in that treatment. We know that ACS mail respondents tend to be older on average than respondents in the CATI and CAPI modes (Joshipura, 2008), so moving the average age lower might be a benefit of using the Internet.
In the Targeted stratum, Push Accelerated responding households may have fewer White members, and perhaps a few more of "other" race, than those in the Choice and Control treatments. In the Not Targeted stratum, Push Accelerated households may have fewer females, Hispanics, and renters than the households in the Choice and Control treatments.
While we observed some demographic trends, we are not overly concerned about the impact of the Internet mode on the respondent pool at this stage in the data collection. First, mail data collection alone, while it is the basis for these comparisons, does not provide an accurate representation of the characteristics of ACS survey respondents (Joshipura, 2008). Second, we still have nonresponse follow-up operations in CATI and CAPI to help ensure proper demographic representation.


Table 17. Demographic Characteristics of Responding Households by Notification Strategy in Targeted Stratum (for Households that Responded by April 28, 2011; standard errors in parentheses)

Characteristic              Control       Prominent     Not Prominent   Push          Push
                            (Mail only)   Choice        Choice          Regular       Accelerated
Age (mean)                  43.5 (0.1)    42.5 (0.3)    42.4 (0.4)      38.6 (0.3)    40.9 (0.2)
Female                      51.3 (0.2)    51.2 (0.3)    51.8 (0.3)      50.7 (0.3)    51.0 (0.3)
Race
  White                     86.7 (0.3)    86.4 (0.5)    86.5 (0.5)      83.2 (0.6)    84.9 (0.5)
  Black                     4.0 (0.2)     3.8 (0.3)     3.7 (0.2)       3.9 (0.3)     3.8 (0.3)
  Am Ind/AK Native          0.3 (0.0)     0.3 (0.1)     0.3 (0.1)       0.2 (0.1)     0.2 (0.1)
  Asian                     5.5 (0.2)     5.4 (0.3)     5.7 (0.3)       7.1 (0.4)     6.2 (0.4)
  Hawaiian/OPI              0.1 (0.0)     0.1 (0.0)     0.0 (0.0)       0.1 (0.0)     0.0 (0.0)
  Other                     0.9 (0.1)     1.3 (0.2)     1.0 (0.2)       2.0 (0.2)     1.8 (0.2)
  Multiple Races            2.6 (0.1)     2.6 (0.2)     2.6 (0.2)       3.6 (0.3)     3.0 (0.2)
Hispanic                    5.6 (0.2)     5.8 (0.3)     5.5 (0.3)       6.6 (0.4)     6.1 (0.4)
Education
  Less than High School     21.8 (0.2)    22.7 (0.5)    22.4 (0.5)      22.6 (0.5)    22.1 (0.4)
  High School Graduate      19.0 (0.2)    17.8 (0.4)    18.5 (0.4)      14.3 (0.4)    17.3 (0.4)
  More than High School     59.2 (0.2)    59.5 (0.5)    59.1 (0.5)      63.0 (0.5)    60.6 (0.5)
Household Size              2.34 (0.01)   2.39 (0.02)   2.40 (0.02)     2.60 (0.02)   2.49 (0.02)
Renter                      16.7 (0.2)    18.0 (0.5)    16.9 (0.6)      17.4 (0.6)    17.9 (0.6)
Only Speaks English         87.0 (0.2)    87.8 (0.4)    87.8 (0.4)      88.5 (0.5)    87.5 (0.4)
Source: U.S. Census Bureau, 2011 ACS Internet Test, April to May 2011


Table 18. Demographic Characteristics of Responding Households by Notification Strategy in Not Targeted Stratum (for Households that Responded by April 28, 2011; standard errors in parentheses)

Characteristic              Control       Prominent     Not Prominent   Push          Push
                            (Mail only)   Choice        Choice          Regular       Accelerated
Age (mean)                  44.8 (0.1)    43.6 (0.3)    44.3 (0.3)      38.7 (0.4)    42.5 (0.3)
Female                      52.7 (0.1)    53.0 (0.4)    52.4 (0.4)      51.6 (0.5)    51.5 (0.4)
Race
  White                     82.5 (0.2)    81.4 (0.7)    83.2 (0.6)      83.7 (0.8)    82.5 (0.8)
  Black                     8.0 (0.2)     7.8 (0.5)     7.6 (0.4)       5.6 (0.5)     7.2 (0.6)
  Am Ind/AK Native          0.6 (0.0)     0.4 (0.1)     0.5 (0.1)       0.4 (0.1)     0.5 (0.1)
  Asian                     4.3 (0.1)     4.6 (0.4)     3.7 (0.3)       4.6 (0.5)     4.8 (0.5)
  Hawaiian/OPI              0.1 (0.0)     0.2 (0.1)     0.1 (0.0)       0.1 (0.1)     0.1 (0.0)
  Other                     1.6 (0.1)     2.1 (0.2)     1.8 (0.2)       2.6 (0.3)     2.0 (0.3)
  Multiple Races            3.0 (0.1)     3.4 (0.3)     3.1 (0.3)       3.1 (0.3)     3.0 (0.2)
Hispanic                    8.6 (0.2)     9.3 (0.5)     9.8 (0.5)       9.8 (0.7)     7.4 (0.5)
Education
  Less than High School     24.2 (0.2)    25.5 (0.6)    23.8 (0.5)      24.1 (0.6)    23.5 (0.5)
  High School Graduate      24.8 (0.2)    23.8 (0.5)    25.6 (0.5)      18.3 (0.5)    23.0 (0.5)
  More than High School     51.0 (0.2)    50.7 (0.7)    50.6 (0.6)      57.6 (0.8)    53.5 (0.6)
Household Size              2.18 (0.01)   2.24 (0.02)   2.18 (0.02)     2.47 (0.03)   2.33 (0.02)
Renter                      25.0 (0.2)    25.5 (0.7)    25.9 (0.8)      23.2 (0.8)    23.6 (0.7)
Only Speaks English         86.0 (0.2)    85.9 (0.6)    85.4 (0.6)      87.3 (0.6)    88.4 (0.6)
Source: U.S. Census Bureau, 2011 ACS Internet Test, April to May 2011

4.6 How does the speed of receiving Internet responses compare to mail responses?
Previous studies have shown that an Internet mode leads to faster response (Brady et al., 2004). Faster response can reduce the volume of replacement questionnaires mailed, ultimately reducing associated costs.
We studied the timing of responses for each notification strategy treatment. Figure 2 displays the daily cumulative check-in rates by notification strategy for the Targeted stratum, and Figure 3 contains the rates for the Not Targeted stratum. As expected, Internet responses[8] came in much more quickly than mail responses: check-in rates for the Push treatments in the Targeted stratum were much higher a week after the initial mailing than for the Control (mail only). Two weeks after the initial mailing, the Push treatments begin to lag behind the other treatments as mail returns accumulate. Moving up the paper questionnaire mailing in the Push Accelerated treatment by one week provides extra time for mail returns, allowing the check-in rate to catch up with the Choice and mail-only treatments by the end of the first month of data collection. The lower check-in rate we observed for the Push Regular treatment is due to the fact that the timing does not allow adequate time for households to return the paper form.
[8] For comparability between mail and Internet, the check-in rates include non-blank mail responses and complete and sufficient partial Internet responses.
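A sketch of how a cumulative check-in curve of this kind can be computed (hypothetical field names):

    # Cumulative daily check-in rate for one treatment. Per footnote 8, a
    # case counts once it is a non-blank mail return or a complete or
    # sufficient partial Internet return.
    from collections import Counter

    def cumulative_checkin_rates(sample):
        daily = Counter(c["checkin_date"] for c in sample if c["counts_for_checkin"])
        total, cumulative, curve = len(sample), 0, {}
        for day in sorted(daily):
            cumulative += daily[day]
            curve[day] = 100.0 * cumulative / total  # percent of sampled cases
        return curve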
Figure 2. Graph of cumulative daily check-in rates for Targeted Stratum
[Figure not reproduced in this text version. The graph plots daily cumulative check-in rates (0 to 50 percent) for Production ACS, Prominent Choice, Not Prominent Choice, Push (Regular), and Push (Accelerated), with mailing events marked: pre-notice; initial questionnaire or push letter; reminder postcard; replacement questionnaire (push accelerated); and replacement questionnaire (other treatments).]
Source: U.S. Census Bureau, 2011 ACS Internet Test, April to May 2011

Figure 3. Graph of cumulative daily check-in rates for Not Targeted Stratum
[Figure not reproduced in this text version. The graph plots daily cumulative check-in rates (0 to 50 percent) for Production ACS, Prominent Choice, Not Prominent Choice, Push (Regular), and Push (Accelerated), with mailing events marked: pre-notice; initial questionnaire or push letter; reminder postcard; replacement questionnaire (push accelerated); and replacement questionnaire (other treatments).]
Source: U.S. Census Bureau, 2011 ACS Internet Test, April to May 2011

4.7 How many households returned multiple responses?
For the various notification strategy treatments, multiple responses could be received in the following combinations: Mail/Mail, Internet/Mail, and Internet/Mail/Mail. Respondents who completed the survey online could not submit more than one Internet return. We also counted as multiple responses cases where the respondent started the survey online, did not complete it, and then returned a paper questionnaire instead. The purpose of this analysis was to see if introducing the Internet response mode in the various notification strategies affects the number of households that respond to the ACS more than once.
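A sketch of the counting involved (hypothetical structures; the report does not spell out the denominator for Table 19, so treating responding households as the base is an assumption):

    # Percent of responding households with more than one checked-in return.
    def multiple_return_rate(returns_by_household):
        """returns_by_household: household ID -> list of checked-in returns."""
        responders = [r for r in returns_by_household.values() if len(r) >= 1]
        multiple = sum(1 for r in responders if len(r) >= 2)
        return 100.0 * multiple / len(responders)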
Very few households (one percent or less) responded more than once across all notification strategies
(Table 19). There were no significant differences in multiple return rates across the notification
strategies (Table 20).
Table 19. Multiple Return Rates by Notification Strategy and Stratum (through May 31, 2011; standard errors in parentheses)

Stratum         Control       Prominent    Not Prominent   Push          Push
                (Mail only)   Choice       Choice          (Regular)     (Accelerated)
Targeted        0.8 (0.1)     0.9 (0.1)    0.6 (0.1)       1.0 (0.1)     0.9 (0.1)
Not Targeted    0.9 (0.0)     0.9 (0.1)    0.9 (0.1)       0.9 (0.1)     0.9 (0.1)
Source: U.S. Census Bureau, 2011 ACS Internet Test, April to May 2011

Table 20. Differences in Multiple Return Rates by Notification Strategy and Stratum (through May 31, 2011; standard errors in parentheses)

Targeted
  Compare Choice Strategies (Prom - Not Prom):          0.3 (0.1)    Best: Not Prom
  Compare Push Strategies (Reg - Accel):                0.1 (0.2)    Best: Push Accel
  Compare Best Choice and Best Push (Choice - Push):   -0.2 (0.1)    Best: Not Prom Choice
  Compare Best Strategy and Control (Best - Control):   0.1 (0.1)    Best: Not Prom Choice
Not Targeted
  Compare Choice Strategies (Prom - Not Prom):          0.0 (0.2)    Best: Not Prom
  Compare Push Strategies (Reg - Accel):                0.0 (0.2)    Best: Push Accel
  Compare Best Choice and Best Push (Choice - Push):    0.0 (0.2)    Best: Push Accel
  Compare Best Strategy and Control (Best - Control):   0.0 (0.1)    Best: Tie
Source: U.S. Census Bureau, 2011 ACS Internet Test, April to May 2011
* Indicates statistical significance at α<0.1, controlling for multiple comparisons.

4.8 What were the perceptions of the information contained in the mail materials?
We instituted a telephone follow-up interview, the Attitudes and Behavior Study (ABS), to measure why respondents chose the Internet or paper to respond to the April 2011 ACS Internet Test and whether anything specific in the mailing materials pushed respondents toward one mode over the other. Second, the ABS attempted to measure why some households did not respond to the ACS at all, and whether that nonresponse had to do with the multiple mode offers in the test.
The ABS data showed that not all ACS respondents in the notification strategies knew about the reporting mode choice, even in the treatments where the mode choice was explained in multiple places. Not knowing about the other mode option appears to be a factor for more mail respondents (37 to 47 percent across notification strategies did not know about the Internet option) than for those who chose the Internet (only 10 to 26 percent did not know about the paper form).
No specific messages in the mailing materials or motivation strategies appeared to drive respondents to choose one mode over the other. Rather, about 33 percent of Push Accelerated respondents who chose the paper form said they did so either because they did not have Internet access or because they had computer problems. Less than 18 percent of mail respondents in the Prominent and Not Prominent Choice treatments mentioned those reasons for choosing paper. This difference suggests that respondents in the Push treatments considered the Internet reporting option more than respondents in the other treatments did, and that the reason for choosing one mode over the other had more to do with the inability to complete an Internet form than with any message or preference. For the mail respondents in the Not Prominent and Prominent Choice treatments who indicated they knew about the Internet option, preference for the paper form was the reason cited most often.
From the ABS data, we did not find indications that mode paralysis (when, offered two modes, respondents choose neither) was a reason for nonresponse. Instead, the main drivers of nonresponse were a lack of knowledge of the ACS mail package and the busy schedules of the potential respondents. No more than 5 to 15 percent of nonresponse can be attributed to the mode choice, because that is the proportion of nonrespondents who knew about both modes. For more information, please see Nichols (forthcoming).


5. Cost Effectiveness of the Notification Treatments
An Internet response option is part of the future of ACS data collection. Several factors from this experiment will help determine the cost-effectiveness of the Internet response mode, including the response rate, the speed of response as it affects follow-up contacts, and differences in material costs. The American Community Survey Office (ACSO), which oversees production operations for the ACS, is developing a cost model that will use estimates from this test to determine the most cost-effective strategy. Cost analyses were ongoing at the time we published this report.

6. SUMMARY

We evaluated the various Internet notification treatments across a variety of measures, and now we summarize the cumulative results to determine which treatment provides the most advantages. Among the treatments tested, the Push Accelerated strategy seems to provide several benefits. First, it increased the response rate by 2.6 percentage points over the Control in the Targeted stratum, and it maintained the response rate in the Not Targeted stratum, at the time we would normally cut the universe for nonresponse follow-up by CATI.
In both strata, most of the response in Push Accelerated came from Internet returns. We know Internet returns come in more quickly than mail returns. However, we also found that Internet break-offs are harmful to the item nonresponse rates, particularly in the detailed person section of the questionnaire.
We observed some demographic trends among responding households in the Push Accelerated treatment relative to the Control and Choice treatments, namely that they appear to be younger and more educated. We expect that using CATI and CAPI for nonresponse follow-up will help ensure proper representation among demographic groups, similar to how CATI and CAPI now compensate for the limitations of mail data collection.
We have to reconsider the best way to handle the cases that broke off in the Internet instrument. The Failed Edit Follow-up (FEFU) operation will help correct for some of the missing data, but we also need to consider alternative ways to deal with cases that broke off. Should we deviate from the way we handle mail returns and send these cases to nonresponse follow-up? How would that affect the associated costs? Are there other ways to get these cases to complete the Internet survey or respond to the mail questionnaire? We hope to see some improvement in break-offs in our second Internet test (results forthcoming), in which we sent the nonresponse follow-up paper mailing to all Internet break-offs.

7. NEXT STEPS
We fielded a follow-up ACS Internet test in November 2011 based solely on the response rate results from this test. The goal of the November test was to evaluate enhancements to the two strategies with the highest response rates, the Prominent Choice and Push Accelerated. The results from the follow-up test will help determine which notification strategy we will use when we introduce an Internet response option in ACS production, starting in January 2013. The results of the November test will be available in spring 2012.
Internet break-offs are problematic because they cause higher item nonresponse rates for questions that appear later in the survey. We need to find a way to encourage people to complete the survey, particularly those who started it online. We plan to explore alternative ways of contacting these households, including email or text reminders to come back and complete the survey. We are also using the paradata to explore where the break-offs occur to see if we can identify and remedy issues with the questions or design that are driving break-offs (Horwitz et al., forthcoming).
The paradata will also help evaluate the effectiveness of the design of the online survey. These results can pinpoint potentially problematic questions or features. We can then use laboratory testing to drill down into the nature of an issue and test potential resolutions in the Internet survey.

Acknowledgements
We would like to thank the following Census Bureau staff for their valuable contributions and assistance
to the development and analysis of these projects: Rachel Horwitz, Megha Joshipura, Debbie Klein,
Andrew Roberts, Brian Wilson, Todd Hughes, Tony Tersine, John Studds, Chris Butler, Joe Misticelli, Brian
Ridgeway, Anne Ross, Colleen Hughes, Steve Hefter, Don Keathley, Gail Denby, Michael Coan, Kathy
Ashenfelter, Temika Holland, and Victor Quach. We would also like to thank Mick Couper and Roger
Tourangeau for their expertise in designing the ACS Internet survey.

References
Ashenfelter, K., Holland, T., Quach, V., Nichols, E., and Lakhe, S. (2011a), “ACS Internet 2011 Project:
Report for Rounds 1 and 2 of ACS Wireframe Usability Testing and Round 1 of ACS Internet Experiment
Mailing Materials Cognitive Testing,” Census Bureau Report, Survey Methodology #2012-01,
http://www.census.gov/srd/papers/pdf/ssm2012-01.pdf
Ashenfelter, K., Holland, T., Quach, V., and Nichols, E. (2011b), “Final Report for the Usability Evaluation
of ACS Online Instrument Rounds 4a and 4b,” Census Bureau draft report.
Bates, N., and Mulry, M. (2007), "Segmenting the Population for the 2010 Census Integrated Communications Program," October 22. http://2010.census.gov/partners/pdf/C2POMemoNo_1_10-2408.pdf
Bentley, M., and Tancreto, J. (2006), "2005 National Census Test: Self-Response Options Analysis," 2010 Census Test Memorandum Series: 2005 National Census Test, No. 26, U.S. Census Bureau.
Bentley, M., Hill, J., Reiser, C., Stokes, S., and Meier, A. (2011), "2010 Census Quality Survey," DSSD 2010 CPEX Memorandum Series #A-02, U.S. Census Bureau.
Brady, S., Stapleton, C.N., and Bouffard, J. (2004), "2003 National Census Test: Response Mode Analysis," DSSD 2003 Memorandum Series #B-02, U.S. Census Bureau.
Couper, M.P., and Miller, P.V. (2008), "Web Survey Methods: Introduction to the Special Issue of POQ on Web Survey Methods," Public Opinion Quarterly, Vol. 72(5), pp. 831-835. http://poq.oxfordjournals.org/content/vol72/issue5/#ARTICLES
Couper, M. (2000), "Web Surveys: A Review of Issues and Approaches," Public Opinion Quarterly, Vol. 64(4), pp. 464-494.


De Leeuw, E.D. (2005), "To Mix or Not to Mix Data Collection Modes in Surveys," Journal of Official Statistics, Vol. 21, pp. 233-255. http://www.jos.nu/Articles/abstract.asp?article=212233
DeMaio, T.J., and Bates, N.A. (1990), "Who Fills Out the Census Form?" Proceedings of the Survey Research Methods Section of the American Statistical Association.
Dhar, R. (1997), "Consumer Preference for a No-Choice Option," Journal of Consumer Research, Vol. 24(2), September.
Dillman, D.A., and Tarnai, J. (1988), "Administrative Issues in Mixed Mode Surveys," in Groves, R.M., et al., Telephone Survey Methodology, New York: Wiley.
Gentry, R. and Good, C. (2008), “Offering Respondents a Choice of Survey Mode: Use Patterns of an
Internet Response Option in a Mail Survey,” Presentation at the Annual Conference of the American
Association for Public Opinion Research, May 15-18.
Griffin, D., Fischer, D., and Morgan, M. (2001), “Testing an Internet Response Option for the American
Community Survey,” Paper Presented at the Annual Conference of the American Association for Public
Opinion Research, May 17-20. http://www.census.gov/acs/www/Downloads/library/2001/Paper29.pdf
Groves, R., and Kahn, R. (1979), Surveys by Telephone: A National Comparison With Personal Interviews,
New York, Academic Press.
Guarino, J. (2001), “Assessing the Impact of Differential Incentives and Alternative Data Collection
Modes on Census Response,” Census 2000 Testing and Experimentation Program, July 10.
http://www.census.gov/pred/www/rpts/RMIE%20Nonresponse%20Phase.pdf
Hill, J., Lestina, F., Machowski, J., Rothhaas, C., and Roye, K. (2008), “Study of Respondents Who List
Themselves as Person 1,” Decennial Statistical Studies Division 2008 MEMORANDUM SERIES # G-09,
September 28.
Horwitz, R., Tancreto, J., and Zelenak, M.F. (forthcoming), “Use of Paradata to Assess the Quality and
Functionality of the American Community Survey Internet Instrument,” U.S. Census Bureau Report.
Johnson, K. (2009), “Census Barriers, Attitudes, and Motivators Survey Methodology Report,” C2PO
2010 Census Integrated Communications Research Memoranda Series No.8, January 6.
http://2010.census.gov/partners/pdf/C2POMemoNo8.pdf
Joshipura, M. (2008), “2005 American Community Survey Respondent Characteristics Evaluation,” DSSD
American Community Survey Research and Evaluation Memorandum Series Chapter #ACS-RE-2,
September 16. http://www.census.gov/acs/www/Downloads/library/2008/2008_Joshipura_01.pdf
Leeman, J., Fond, M., and Ashenfelter, K. (forthcoming), “Final Report of Cognitive and Usability
Pretesting of the Online Version of the Puerto Rico Community Survey in Spanish and English,” Census
Bureau draft report.


Lesser, V. (2010), “Does Providing a Choice of Survey Modes Influence Response?” Paper Presented at
the Annual Conference of the American Association for Public Opinion Research, May 13-16.
Lugtig, P., Lensvelt-Mulders, G., Frerichs, R., and Greven, A. (2011), "Estimating Nonresponse Bias and Mode Effects in a Mixed-Mode Survey," International Journal of Market Research, Vol. 53(5).
Millar, M., and Dillman, D. (2011), "Improving Response to Web and Mixed-Mode Surveys," Public Opinion Quarterly, Vol. 75(2).
Nichols, E. (forthcoming), "The 2011 American Community Survey Internet Test: Attitudes and Behavior Study Follow up," Census Bureau Report.
Peytchev, A. (2009), "Survey Breakoff," Public Opinion Quarterly, Vol. 73(1).
Schneider, S., Cantor, D., Malakhoff, L., Arieira, C., Segel, P., Nguyen, K., and Tancreto, J. (2005), "Telephone, Internet, and Paper Data Collection Modes for the Census 2000 Short Form," Journal of Official Statistics, Vol. 21(1).
Smyth, J.D., Dillman, D., Christian L.M. and O'Neill, A. (2010), “Using the Internet to Survey Small Towns
and Communities: Limitations and Possibilities in the Early 21st Century,” American Behavioral Scientist,
Vol. 53(9).
Tancreto, J. G., Davis, M.C., and Zelenak, M.F. (forthcoming), “Developing an Internet Response Mode
for the American Community Survey (ACS),” Paper presented at the American Association for Public
Opinion Research Conference, May 2011.
U.S. Census Bureau (2009), “Design and Methodology, American Community Survey,” 7-5, April 2009.
http://www.census.gov/acs/www/Downloads/survey_methodology/acs_design_methodology.pdf
U.S. Census Bureau (2008), “2010 Census Integrated Communications Campaign Plan,” August 2008.
http://2010.census.gov/partners/pdf/2010_ICC_Plan_Final_Edited.pdf
U.S. Census Bureau (2010), “Response Rates and Reasons for Noninterviews (in percent) – Housing
Units”, January 2012. http://www.census.gov/acs/www/methodology/response_rates_data/index.php
U. S. Department of Commerce (2010), “Exploring the Digital Nation: Home Broadband Internet
Adoption in the United States,” November 2010.
U.S. Department of Commerce (2011), “Digital Nation: Expanding Internet Usage,” National
Telecommunications and Information Administration, February 2011.
Zajac, K., Allmang, K., and Barth, J. (2007), “2005 National Census Test: Response Mode Analysis,” 2010
Census Test Memoranda Series, Chapter: 2005 National Census Test, No. 28, March 30.


Appendix A: 2011 ACS Internet Test Mail Materials

I. Prominent Internet Offer (Choice)
   1. Pre-Notice Letter
   2. Initial Mailing Package
      a. Letter
      b. Instruction Card (Front Side – English)
      c. Instruction Card (Reverse Side – Spanish)
      d. Questionnaire Cover
   3. Reminder Postcard
   4. Second (Replacement) Mailing Package Letter
   5. Additional Reminder Postcard
NOTE: The Prominent Internet Offer (Choice) Instruction Card and Questionnaire from the First Mailing Package were included in the Second (Replacement) Mailing Package.

II. Not Prominent Internet Offer
   1. Pre-Notice Letter
   2. Initial Mailing Package
      a. Letter
      b. Questionnaire Cover
   3. Reminder Postcard
   4. Second (Replacement) Mailing Package Letter
   5. Additional Reminder Postcard
NOTE: The Not Prominent Internet Offer Questionnaire from the First Mailing Package was included in the Second (Replacement) Mailing Package.

III. Push Internet
   1. Initial Mailing Package
      a. Letter
      b. Instruction Card (Front Side – English)
      c. Instruction Card (Reverse Side – Spanish)
   2. Reminder Postcard
   3. Regular Schedule – Second (Replacement) Mailing Package Letter
   4. Modified Schedule – Second (Replacement) Mailing Package Letter
   5. Additional Reminder Postcard
NOTE: The Push Internet Pre-Notice Letter is the same as the Prominent Internet Offer (Choice) Pre-Notice Letter. Also, the Prominent Internet Offer (Choice) Instruction Card and Questionnaire from the First Mailing Package were included in the Push Internet Second (Replacement) Mailing Packages.

[The mail-material facsimiles from original report pages A-2 to A-20 are not reproduced in this text version; their captions are retained below.]

Prominent Internet Offer (Choice): Pre-Notice Letter
Prominent Internet Offer (Choice): First Mailing Package Letter
Prominent Internet Offer (Choice): First Mailing Package Instruction Card (Front Side – English)
Prominent Internet Offer (Choice): First Mailing Package Instruction Card (Reverse Side – Spanish)
Prominent Internet Offer (Choice): Questionnaire Cover
Prominent Internet Offer (Choice): Reminder Postcard
Prominent Internet Offer (Choice): Second (Replacement) Mailing Package Letter
Prominent Internet Offer (Choice): Additional Reminder Postcard
Not Prominent Internet Offer: Pre-Notice Letter
Not Prominent Internet Offer: First Mailing Package Letter
Not Prominent Internet Offer: Questionnaire Cover
Not Prominent Internet Offer: Reminder Postcard
Not Prominent Internet Offer: Second (Replacement) Mailing Package Letter
Not Prominent Internet Offer: Additional Reminder Postcard
Push Internet: First Mailing Package Letter
Push Internet: First Mailing Package Instruction Card (Front Side – English)
Push Internet: First Mailing Package Instruction Card (Reverse Side – Spanish)
Push Internet: Reminder Postcard
Push Internet: Regular Schedule – Second (Replacement) Mailing Package Letter
Push Internet: Modified Schedule – Second (Replacement) Mailing Package Letter
Push Internet: Additional Reminder Postcard

Appendix B: Self-Administered Response Rates and Internet Response Rates by Notification Strategy and Stratum (through May 31, 2011)

Table B-1. Self-Administered Response Rates and Internet Response Rates by Notification Strategy and Stratum (through May 31, 2011; standard errors in parentheses)

Stratum                Prominent    Not Prominent   Push          Push
                       Choice       Choice          Regular       Accelerated
Targeted
  Response Rate        51.6 (0.4)   51.5 (0.4)      50.8 (0.4)    51.0 (0.4)
  INT Response Rate    12.1 (0.3)   4.3 (0.2)       31.7 (0.3)    30.4 (0.4)
Not Targeted
  Response Rate        41.6 (0.5)   41.3 (0.3)      38.9 (0.4)    39.2 (0.4)
  INT Response Rate    7.7 (0.2)    2.4 (0.1)       19.1 (0.3)    18.6 (0.3)
Source: U.S. Census Bureau, 2011 ACS Internet Test, April to May 2011

Table B-2. Differences in Self-Administered Response Rates by Notification Strategy and Stratum (through May 31, 2011; standard errors in parentheses)

Targeted
  Compare Choice Strategies (Prom - Not Prom):         0.2 (0.6)    Best: Prom
  Compare Push Strategies (Reg - Accel):              -0.2 (0.6)    Best: Push Accel
  Compare Best Choice and Best Push (Choice - Push):   0.6 (0.6)    Best: Prom Choice
Not Targeted
  Compare Choice Strategies (Prom - Not Prom):         0.3 (0.6)    Best: Prom
  Compare Push Strategies (Reg - Accel):              -0.4 (0.5)    Best: Push Accel
  Compare Best Choice and Best Push (Choice - Push):   2.4* (0.6)   Best: Prom Choice
Source: U.S. Census Bureau, 2011 ACS Internet Test, April to May 2011
* Indicates statistical significance at α<0.1, controlling for multiple comparisons.

