Online Communications: Improving Survey Response Campaign
Study Plan
Introduction
In line with the Census Bureau’s goal to increase survey response rates through communications, the Census Bureau plans to launch a test of a targeted digital-advertising campaign. Outside of decennial years, traditional broad-based advertising methods are cost-prohibitive because of the relatively small sample size of most Census surveys compared with the size of the general population. With the advent of digital advertising strategies, however, the Census Bureau now has the opportunity to deliver promotional messages only to households within a survey sample, reducing the overall costs associated with advertising to those households. The American Community Survey (ACS) offers a large enough national sample to field a test of digital advertising and determine whether it improves response rates. This test will use the February and March 2017 ACS production samples, dividing the sample into a control group and two experimental treatments: one treatment will receive more advertisements, the other fewer, and the control group will receive no advertisements. The purpose of this test is to study the impact of digital advertising on self-response behavior, overall and by sub-group, and to assess any potential cost savings.
Research Questions
The following research questions are intended to assess the effect of delivering targeted digital advertisements to sampled households.
Using existing digital advertising tools, what percent of households can be linked with address-specific advertising? Are there demographic differences between households that can be linked with address-specific advertising and those that cannot be linked?
Using data from the 2016 ACS Content Test, a test of new and revised questions for the ACS conducted in the spring of 2016, we will compare households that can be linked to a digital profile with those that cannot.1 We will measure differences in response propensity between the two groups and use ACS response data to compare select characteristics such as race/ethnicity composition, age, tenure, and educational attainment.
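To make the comparison concrete, the following minimal Python sketch uses entirely hypothetical counts (not Content Test data) to compute the linkable share of the sample and apply a standard two-proportion z-test to the linked and unlinked response rates; the actual analysis methodology may differ:

    # Minimal sketch (hypothetical counts, not Content Test results):
    # share of sample addresses linkable to a digital profile, and a
    # two-proportion z-test of whether linked and unlinked households
    # respond at different rates.
    from statsmodels.stats.proportion import proportions_ztest

    linked, unlinked = 42_000, 28_000   # addresses in each group (hypothetical)
    responded = [26_000, 15_500]        # responders: linked, unlinked (hypothetical)

    print(f"linkable share: {linked / (linked + unlinked):.1%}")
    stat, p = proportions_ztest(responded, [linked, unlinked])
    print(f"linked rate {responded[0] / linked:.3f} vs "
          f"unlinked rate {responded[1] / unlinked:.3f}: z = {stat:.2f}, p = {p:.4f}")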
Do digital advertisements improve survey response rates overall and/or with sub-groups?
We will compare survey response rates (the number of addresses responding to the survey divided by the number of addresses eligible to respond) between the high- and low-advertisement treatment groups to determine whether additional advertising has a positive impact on response. We will also compare each treatment with the control group. These comparisons will also be made for sub-groups of the population to understand how digital advertising may affect different groups of people (for example, addresses in areas with certain characteristics known at the time of sample selection).
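A sketch of these pairwise rate comparisons, again with hypothetical counts and a standard two-proportion z-test standing in for whatever test procedure is ultimately used:

    # Pairwise response-rate comparisons among control, low-ad, and
    # high-ad groups. All counts are hypothetical placeholders.
    from itertools import combinations
    from statsmodels.stats.proportion import proportions_ztest

    groups = {                       # (responding addresses, eligible addresses)
        "control": (4_800, 10_000),
        "low-ad":  (4_950, 10_000),
        "high-ad": (5_100, 10_000),
    }

    for a, b in combinations(groups, 2):
        stat, p = proportions_ztest([groups[a][0], groups[b][0]],
                                    [groups[a][1], groups[b][1]])
        print(f"{a} vs {b}: z = {stat:.2f}, p = {p:.4f}")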
Based on the response differences, can we estimate cost savings from reduced workloads and/or mail expenses?
To assess cost impacts, we will compare the self-response return rate (the number of addresses responding to the survey via Internet or mail divided by the total number of addresses) between the control and each treatment at different points in the data collection cycle. We will also factor in the cost of placing the advertisements.
Costs and cost savings will vary depending on when we receive self-responses. We will consider the self-response return rates at the cut-off date for the paper questionnaire mailing and prior to the start of computer-assisted telephone interviewing (CATI). An increase in the number of households that self-respond will decrease the costs of more expensive follow-up operations, including telephone interviewing and personal visits.
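The cost logic reduces to simple arithmetic. The sketch below uses hypothetical unit costs, sample sizes, and return rates; none of these figures are actual ACS costs:

    # Rough net-savings calculation under assumed unit costs.
    SAMPLE = 10_000            # addresses in the treatment (hypothetical)
    AD_COST_PER_ADDR = 0.50    # advertising spend per address (hypothetical)
    FOLLOWUP_COST = 30.00      # avg. cost per follow-up case avoided (hypothetical)

    control_return = 0.480     # self-response return rate, control (hypothetical)
    treatment_return = 0.495   # self-response return rate, treatment (hypothetical)

    extra_self_responses = (treatment_return - control_return) * SAMPLE
    followup_savings = extra_self_responses * FOLLOWUP_COST
    ad_spend = AD_COST_PER_ADDR * SAMPLE

    print(f"follow-up savings: ${followup_savings:,.2f}")
    print(f"advertising cost:  ${ad_spend:,.2f}")
    print(f"net savings:       ${followup_savings - ad_spend:,.2f}")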
Do response distributions for select demographic characteristics differ among the control and two test treatments (self-response only)?
To determine whether response distributions are affected by the test treatments, we will calculate the distributions of all non-blank responses for select demographic characteristics. The distributions will be based only on self-responses. We will compare distributions between the treatment groups and between each treatment and the control. If there are significant differences in the distributions, we will compare each category of the distribution (controlling for multiple comparisons using the Bonferroni-Holm or Hochberg adjustments) to better understand the change in a particular characteristic. For example, an increase in the percentage of respondents aged 25-44 may produce a decrease in the percentage aged 65 and older even if the number of respondents aged 65 and older remains constant.
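A sketch of this two-step procedure, with hypothetical counts for a single age characteristic: a chi-square test of the overall distribution, followed by per-category proportion tests adjusted with the Holm method (a Hochberg-style adjustment is also available in the same library):

    # Compare one demographic distribution (here, age) between control and
    # a treatment, then test each category with a Holm adjustment.
    # All counts are hypothetical placeholders.
    import numpy as np
    from scipy.stats import chi2_contingency
    from statsmodels.stats.proportion import proportions_ztest
    from statsmodels.stats.multitest import multipletests

    categories = ["<25", "25-44", "45-64", "65+"]
    control   = np.array([900, 2_600, 2_900, 2_400])   # non-blank self-responses
    treatment = np.array([950, 2_800, 2_850, 2_300])

    # Overall test: do the two distributions differ?
    chi2, p, dof, _ = chi2_contingency(np.vstack([control, treatment]))
    print(f"chi-square = {chi2:.2f}, p = {p:.4f}")

    # Per-category tests, Holm-adjusted (method="simes-hochberg" also works).
    pvals = [proportions_ztest([control[i], treatment[i]],
                               [control.sum(), treatment.sum()])[1]
             for i in range(len(categories))]
    reject, p_adj, _, _ = multipletests(pvals, method="holm")
    for cat, pa, r in zip(categories, p_adj, reject):
        print(f"{cat}: adjusted p = {pa:.4f}, significant = {r}")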
Which digital advertisements maximize viewer engagement (viewing or clicking)?
The following aggregate digital advertising metrics will be reported:
impressions (i.e., the number of times an ad was displayed)
engagement metrics including counts of clicks, video ad interactions, video view length, and social engagements
If there are any changes to the advertising placement strategy during the test based on engagement metrics, the changes will be documented. During the advertising campaign, engagements as a percentage of impressions will be used to evaluate the performance of various ad messages and will inform adjustments to how often and what types of ads (for example, banner or social media) are used.
In addition, we will obtain measures of landing page2 engagement from the ad campaign. These metrics include landing page arrivals, actions taken on the landing page, average time on page, and bounce rate (the percentage of visitors who view one page and then leave the site). These metrics are available only in aggregate form, so we will not have any information about specific households. We will evaluate them to understand web behavior on Census.gov as it relates to the advertising campaign.
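Both sets of metrics reduce to simple ratios; the sketch below, with hypothetical counts, shows how engagement rate and bounce rate would be computed:

    # Aggregate ad metrics and landing-page metrics. All counts are
    # hypothetical placeholders for illustration.
    impressions = 1_200_000
    clicks, video_interactions, social_engagements = 3_600, 15_000, 2_400

    engagements = clicks + video_interactions + social_engagements
    print(f"engagement rate: {engagements / impressions:.2%}")

    # Bounce rate: single-page sessions as a share of all landing-page sessions.
    sessions, single_page_sessions = 3_000, 1_800
    print(f"bounce rate: {single_page_sessions / sessions:.2%}")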
We will also examine whether there are notable changes in contacts to the call center, the types of calls received, and interviews completed, as well as refusals and other interview information from computer-assisted personal interviewing (CAPI), and we will document any noticeable changes. Additionally, we will review comments received through the Tell Us What You Think email comment form.
Potential Actions
The key research question is whether the data indicate that the cost savings from increased response are greater than the cost of advertising, for the whole sample or possibly for some subgroups. If they are not, then even if response increases, the indication would be that digital advertising does not save the ACS program money at this time.
If we find the potential for cost savings, we would identify further research and questions to determine how we could implement digital advertising in ACS production. The results from the other research questions would assist in that effort. For example, we would know which advertisements drew the most attention from viewers, which groups respond more to digital advertising, and the characteristics of the households we were able to advertise to. We would also share the results with other Census Bureau demographic surveys where digital advertising might show potential.
1 We will use data from the Content Test, rather than from the February and March 2017 production months, so that we can measure the differences without having to account for the effects of the test treatments on the cases we are advertising to. There was no advertising for the Content Test.
2 When an advertisement is clicked, the user will be directed to a Census.gov web landing page featuring general information about the value of Census’ work and a link to a page with more detailed information about being in a survey, such as https://www.census.gov/programs-surveys/are-you-in-a-survey.html