U.S. Census Bureau
2010 Census Integrated Communications Program Evaluation
OMB Control Number 0607-XXXX
Census 2000 was the first decennial census to use a paid advertising campaign. The campaign featured use of print and broadcast media, as well as outdoor advertising, to emphasize the importance of responding to the census. Five advertising agencies were used: one to create the core message, and the others to tailor it to specific audiences. The Census Bureau also established partnerships with many diverse groups at all levels of government, both to publicize the census and to encourage participation. Numerous promotions and special events were held across the country. The available evidence suggests that the Census 2000 Partnership and Marketing Program, along with other efforts aimed at improving census participation, succeeded in reversing a long-term decline in mail response rates (especially in traditionally hard-to-enumerate groups), and may also have improved cooperation with Census Bureau enumerators, helping to shorten and reduce the costs of Nonresponse Followup (NRFU) efforts.
The 2010 Census Integrated Communications Campaign (2010 Census ICC) is intended to build on the success of the Census 2000 Partnership and Marketing Program. For 2010, the Census Bureau will use an approach that integrates a mix of mass media advertising, targeted media outreach to specific populations, national and local partnerships, grassroots marketing, school-based programs, and special events. By integrating these elements with each other and with the Census Bureau’s 2010 operations, the campaign’s goal is to more effectively help ensure that everyone, especially the hard-to-enumerate, is reached.
The Census Bureau will use an independent evaluation of the 2010 Census ICC to determine if the campaign is achieving its goals. The purpose of the evaluation is to assess the impact of the entire campaign for paid media/advertising, partnerships, the Census in Schools program, and other campaign activities. The evaluation will allow stakeholders to determine if the significant investment in the 2010 Census ICC was justified by such outcomes as reduced NRFU burden, reduced differential undercount, and increased cooperation with enumerators. The 2010 Census Integrated Communications Program Evaluation (2010 Census ICP Evaluation) is designed as a multi-method approach that will increase the depth and breadth of the evidence available for the assessment and will support valid, robust, and actionable conclusions about the impact of the 2010 Census ICC. The Census Bureau has contracted with the National Opinion Research Center (NORC) at the University of Chicago to design, conduct, and analyze the 2010 Census ICP Evaluation.
A key feature of the NORC evaluation is the Paid Advertising Heavy-Up Experiment (PAHUE). For this experiment, pairs of Designated Market Areas (DMAs) will be matched on indicators such as hard-to-count scores, mail return rates in Census 2000, race/ethnic populations, poverty rates, urban/rural composition, linguistic isolation population, and number of households. Once the DMAs are identified, one-half of each pair will be randomly assigned to receive a 50 percent “heavy-up” of paid advertising. Being able to exploit experimental variation in paid media exposure greatly improves the evaluation’s potential for describing the contribution of campaign components to the outcomes of interest.
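To make the matched-pair logic concrete, the sketch below (purely illustrative) pairs hypothetical DMAs by similarity on the matching indicators and then randomizes the heavy-up treatment within each pair. All DMA names and indicator values are invented; the actual matching and assignment procedures are NORC's.

```python
# Illustrative sketch (not the evaluation's actual matching code): pair DMAs
# by similarity on the matching indicators, then randomly assign one DMA in
# each pair to the 50 percent heavy-up treatment.
import random

# Hypothetical DMA profiles: (name, hard-to-count score, Census 2000 mail
# return rate, percent minority, percent in poverty) -- all values invented.
dmas = [
    ("DMA-A", 62, 0.61, 0.48, 0.22),
    ("DMA-B", 60, 0.63, 0.45, 0.21),
    ("DMA-C", 35, 0.74, 0.20, 0.10),
    ("DMA-D", 37, 0.72, 0.23, 0.11),
]

def distance(x, y):
    """Euclidean distance over the matching indicators (ignores the name)."""
    return sum((a - b) ** 2 for a, b in zip(x[1:], y[1:])) ** 0.5

# Greedy pairing: repeatedly match the two closest remaining DMAs.
unpaired = list(dmas)
pairs = []
while len(unpaired) >= 2:
    base = unpaired.pop(0)
    partner = min(unpaired, key=lambda d: distance(base, d))
    unpaired.remove(partner)
    pairs.append((base, partner))

# Randomize treatment within each matched pair.
random.seed(2010)
for a, b in pairs:
    treated, control = random.sample([a, b], 2)
    print(f"heavy-up: {treated[0]}   control: {control[0]}")
```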
Title 13, United States Code, Section 141, directs the Secretary of Commerce to conduct a decennial census of the population, and Section 193 authorizes the Secretary to conduct surveys to gather supplementary information relating to the census.
The 2010 Census ICC contract is a major public expenditure and has great potential to affect the quality and overall cost of the 2010 Census. For these reasons, a rigorous and independent evaluation of the 2010 Census ICC is essential for assessing the success of the 2010 Census and planning for the 2020 Census.
The 2010 Census ICP Evaluation must answer the critically important questions the Census Bureau has posed about effective communications for Census success, and must do so in a statistically rigorous manner that is defensible to all stakeholders and concerned parties: the Census Bureau; the U.S. Congress, whose membership, policies, and plans depend on the outcome of the decennial census; other levels and entities of government; the Census Advisory Committees; and the research community. Specifically, the evaluation must determine whether the 2010 Census ICC achieved its three primary goals: (1) increasing the mail response rate; (2) improving the overall accuracy of the 2010 Census by reducing differential undercounting of population by race/ethnicity; and (3) improving cooperation with census enumerators, all by directly and indirectly influencing public awareness, attitudes, intentions, and ultimate behaviors.
The 2010 Census ICP Evaluation approach is based partly on the Census 2000 Partnership and Marketing Program Evaluation, also conducted by NORC. That experience demonstrated the strengths of a traditional time-series survey design for measuring the impact of an integrated communications program on critical indicators of exposure, awareness, attitudes, and other predictors of census response behavior. It also revealed significant weaknesses of time-series survey data to assess the impact of an integrated communications campaign, pointing to the prospective benefits of a multi-method approach to evaluate the effectiveness of a complex, multidimensional effort to influence public participation in one of the most important civic activities that support American society and democracy.
NORC’s approach to the 2010 Census ICP Evaluation offers a number of major advantages compared to the methods employed in the Census 2000 Partnership and Marketing Program Evaluation, including: the involvement of a team of national leaders in research on the impact of communications on public behavior; employing a surveillance/sampling model for this project; using a hybrid survey design that combines both cross-sectional and longitudinal samples; incorporating external data sources to triangulate with survey data; and incorporating multiple quantitative and qualitative measurement and analysis methods.
Information quality is an integral part of the preliminary review of the information to be disseminated by the Census Bureau. Information quality is also integral to the information collections conducted by the Census Bureau and is incorporated into the clearance process required by the Paperwork Reduction Act.
As noted above, the PAHUE will be conducted in pairs of DMAs matched on indicators such as hard-to-count scores, mail return rates in Census 2000, race/ethnic populations, poverty rates, urban/rural composition, linguistic isolation population, and number of households. Once the DMAs are identified, one-half of each pair will be randomly assigned to receive a 50 percent “heavy-up” of paid advertising.
Study design
NORC’s proposed evaluation design involves probability-based surveys of the general public that combine time-series samples with longitudinal samples to maximize the statistical power of cross-sectional estimates that will change over time and in response to 2010 Census ICC efforts. NORC will conduct hybrid (cross-sectional/longitudinal) surveys with probability samples of United States households, oversampling minority populations and other target segments. NORC will conduct its survey at three points in time. Wave 1 occurs during the earliest phases of partnership activity, in August through October 2009, to assess baseline levels of all measures of public attention and intentions that will be the focus of the 2010 Census ICC. Wave 2 occurs during the expected peak of paid media and partnership activity, from January through April 2010. Wave 3 occurs during the post-mailout period, from May through August 2010, when both paid advertising and partnership activities are motivating people to respond to the census and encouraging cooperation with enumerators.
Exposure to components of the 2010 Census ICC will be estimated using several data sources in addition to survey data. These data sources will permit exploration of relationships between intensity of campaign activity and changes in awareness, attitudes, and intentions among the general public and key race/ethnic populations. Data sources will include ratings and impressions data for the paid advertising campaign, and data from the Census Bureau’s Integrated Partner Contact Database (IPCD) for measuring partnership activity. NORC also plans to merge Census Bureau data on household participation in the 2010 Census with the survey records of households in the 2010 Census ICP Evaluation survey sample in a secure environment for a more detailed and accurate record of households’ census participation, including mailback status, cooperation with enumerators, and other indicators of census actions regarding these households’ 2010 participation. Each of these alternative data sources will be essential in corroborating, triangulating with, or providing alternatives to the self-reported campaign exposure measures collected through surveys.
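The following is a minimal sketch of the kind of record linkage described above, assuming hypothetical field names ("hh_id", "mail_return", "enumerator_interview"); the actual linkage keys, file layouts, and secure-environment procedures are determined by the Census Bureau.

```python
# Minimal sketch: link survey records to Census participation records and
# compare self-reports against the official record.  All data are invented.
import pandas as pd

survey = pd.DataFrame({
    "hh_id": [101, 102, 103],
    "wave": [3, 3, 3],
    "reported_mailed_form": [1, 0, 1],   # self-reported participation
})

census_records = pd.DataFrame({
    "hh_id": [101, 102, 103],
    "mail_return": [1, 0, 0],            # mailback status from 2010 Census
    "enumerator_interview": [0, 1, 0],   # completed during NRFU
})

linked = survey.merge(census_records, on="hh_id", how="left")

# Flag households whose self-report disagrees with the Census record.
linked["self_report_matches_record"] = (
    linked["reported_mailed_form"] == linked["mail_return"]
)
print(linked)
```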
NORC will collaborate with Census Bureau staff to compile aggregate-level data on 2010 Census ICC effort and Census results to analyze the relationships between measures of planned and actual 2010 Census ICC activity (by component) and aggregate Census participation results (mail response, enumerator response, non-response) in 1990, 2000, and 2010 to identify trends over time in target segments and for hard-to-count areas.
To improve the ability of the NORC design to detect a relationship between campaign exposure and response, there will also be the PAHUE. In this experiment, additional cases will be added in eight DMAs selected as matched pairs.
Combining and interpreting results from multiple analytical approaches will improve the capacity of the design to answer the key evaluation questions concerning the impact of the 2010 Census ICC and its components, and the return on investment of 2010 Census ICC resources with respect to the three primary outcomes of interest. As needed, qualitative data collection may further inform or illuminate puzzles within the analysis activity.
Evaluation objectives
The main objectives of the evaluation are to assess the extent to which the 2010 Census ICC achieved a variety of specific goals related to increased mail returns, improved accuracy through reduced differential undercount, and improved cooperation with enumerators. Specific analysis questions to be addressed by the 2010 Census ICP Evaluation include:
How effective was the overall communications strategy at contributing to improvements in response, accuracy, and cooperation with enumerators?
Which elements of the paid media/advertising were reported/recalled both least and most often?
Which elements of the Partnership Data Services (PDS) Program (National and Regional) were reported/recalled both least often and most often?
How effective was the paid media/advertising campaign in changing positive and negative attitudes and beliefs about the Census?
How effective was the Partnership Data Services (PDS) Program in changing positive and negative attitudes and beliefs about the Census?
How effective was the “Census in Schools” Program in changing positive and negative attitudes and beliefs about the Census?
What impact did the 2010 Census Integrated Communications Campaign as a whole have on the likelihood of returning a Census form?
Were awareness, knowledge, and attitudes measured before, during, and after the 2010 Census Integrated Communications Campaign significantly different from those measured before, during, and after the 2000 Advertising Campaign?
What advertisements, programs and events (including breaking news events) outside of the 2010 Census Integrated Communications Campaign had an effect on respondent attitudes and behaviors?
What return on investment can be estimated for the 2010 Census Integrated Communications Campaign?
Analysis Overview
Perhaps the central objective of the Evaluation is to assess the relationship between campaign exposure and the outcomes of primary interest, as well as a variety of intermediate factors. Exposure to the campaign will be measured using survey data as well as supplementary data sources:
Exposure to ICP components:
The 2010 Census ICP Evaluation analyses will require measurement of 2010 Census ICC exposure or campaign ‘dosage’ as experienced by sampled households. The 2010 Census ICP Evaluation design calls for two types of exposure measurement.
Awareness/Self-reported Exposure: the 2010 Census ICP Evaluation questionnaires include a variety of questions asking whether the sampled respondents have been exposed to such campaign components as partnership activities, paid media, earned media, or Census in Schools activities. Most components of campaign activity are also associated with a question that asks how much exposure the respondent has had to that component. These measures will be aggregated across components and combined with magnitude questions to construct self-reported measures of campaign exposure. Actual construction rules will depend on the data themselves (including correlation between components and variation in magnitudes reported). Especially with regard to paid media exposure, the questionnaires include items that are associated with higher levels of accuracy in reporting, such as ‘confirmed awareness’ of advertisements.
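The sketch below illustrates one plausible way such an index could be constructed from awareness and magnitude items. The component names, weights, and scoring rule are invented for illustration; the evaluation's actual construction rules will depend on the observed data.

```python
# Illustrative self-reported exposure index: a yes/no awareness item per
# campaign component, weighted by a reported-magnitude item.  Names, weights,
# and the scoring rule are invented for illustration only.
def exposure_index(responses):
    """responses: dict of component -> (aware (0/1), magnitude (0-3))."""
    score = 0.0
    for component, (aware, magnitude) in responses.items():
        if aware:
            score += 1.0 + 0.5 * magnitude  # awareness plus dosage credit
    return score

respondent = {
    "paid_media":        (1, 3),  # saw ads, high frequency
    "partnership_event": (1, 1),  # attended one event
    "earned_media":      (0, 0),  # no recalled news coverage
    "census_in_schools": (1, 0),  # child mentioned it once
}
print(exposure_index(respondent))  # -> 5.0
```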
Objective measures of Exposure: Self-reported data can be subject to response error that is systematically related to attitudes toward the campaign. The 2010 Census ICP Evaluation design calls for using (external) objective measures of campaign exposure in addition to the self-reports collected in the 2010 Census ICP Evaluation questionnaires. For example, paid media exposure will be measured through gross ratings points for advertisements shown in sampled households’ geographic areas. These ratings will be collected for as many media as possible, and may be tailored to the demographic characteristics of the survey respondents. The Census Bureau has retained a contractor who tracks earned media exposure for the Bureau, developing quantitative metrics for campaign coverage exposure, among other topics. These data will provide measures for exposure to earned media. As appropriate, similar metrics of web traffic and social networking discussions of the campaign may also be incorporated. For exposure to the partnership component of the 2010 Census ICC, the design calls for use of key summary variables from the IPCD, an operational tool being used by Bureau staff to track contacts and activities with Census Partners. Each of these data sources will provide a measure of the level of campaign activity in the sampled households’ geographic area.
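As a hedged illustration of combining these objective sources into an area-level measure, the sketch below standardizes hypothetical GRP totals and IPCD activity counts by DMA and averages them into a single 'dose' score. All figures and field names are invented.

```python
# Illustrative area-level "dose" measure: standardize gross ratings points
# (paid media) and IPCD partnership event counts by geography, then average.
# All numbers are invented placeholders.
import statistics

grps = {"DMA-A": 850, "DMA-B": 1275, "DMA-C": 400}      # paid media GRPs
ipcd_events = {"DMA-A": 32, "DMA-B": 48, "DMA-C": 11}   # partnership activity

def zscores(values):
    """Standardize a dict of area -> measure to mean 0, sd 1."""
    mean = statistics.mean(values.values())
    sd = statistics.stdev(values.values())
    return {k: (v - mean) / sd for k, v in values.items()}

grp_z, ipcd_z = zscores(grps), zscores(ipcd_events)
dose = {dma: (grp_z[dma] + ipcd_z[dma]) / 2 for dma in grps}

for dma, d in sorted(dose.items(), key=lambda kv: -kv[1]):
    print(f"{dma}: dose = {d:+.2f}")
```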
The 2010 Census ICP Evaluation design will not support estimation of a ‘total’ effect of the campaign. Rather, the evaluation analyses will seek to determine the relationship, if any, between increased ‘dosage’ of campaign exposure and changes in interim or final outcomes such as knowledge, attitude, beliefs, or mailback behavior.
Figure 1 (last page) visually depicts relationships between campaign exposure, the outcomes of primary interest, and other relevant factors. The figure provides a context within which to consider the 10 objectives for the 2010 Census ICP Evaluation as stated by the Census Bureau and revised by NORC.
Additional methodological investigations:
The data collection and analysis approach proposed for the 2010 Census ICP Evaluation includes many relatively innovative elements. In addition to the core objectives of the 2010 Census ICP Evaluation as stated by the Census Bureau, a number of additional analyses will be valuable in assessing the robustness of 2010 Census ICP Evaluation results and in understanding the methodologies employed. Such analyses will include:
- assessing the concordance between self-reported and externally constructed campaign exposure (see the sketch following this list)
- assessing the relationship between intent to participate in the Census, reported participation in the Census, and participation in the Census as recorded by the Bureau, and
- estimating conditioning effects among panel respondents in the 2010 Census ICP Evaluation sample
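For the first item above, a simple illustration of a concordance check appears below: it computes a rank correlation between a self-reported exposure index and an objective area-level dose measure. The data are invented, and a rank correlation is only one plausible concordance metric.

```python
# Illustrative concordance check: Spearman rank correlation between
# self-reported exposure scores and an objective dose measure (no ties).
def rank(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order):
        ranks[i] = r + 1
    return ranks

def spearman(x, y):
    rx, ry = rank(x), rank(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n**2 - 1))

self_reported = [5.0, 2.5, 4.0, 1.0, 3.5]    # survey-based exposure index
objective     = [1.2, -0.3, 0.1, -1.1, 0.8]  # area dose from external data
print(f"Spearman rho = {spearman(self_reported, objective):.2f}")  # 0.90
```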
One key comparison will be between awareness and behavior rates for each of the six race/ethnicity groups as the 2010 Census campaign progresses toward Census Day. Comparing Wave 1 and Wave 3 data for a race/ethnicity group (500 interviews in each wave) leads to a minimum detectable effect size of 14% on a binary variable (p=.50 with a design effect of 1.5). This means that if 14% is the true (unobservable) difference, we will have 80% power using alpha = .05 to obtain a statistically significant difference. This result is conservative because it assumes no correlation among the panel cases (half of the Wave 3 cases will also have completed Wave 1 interviews; this panel aspect increases the power of our analyses).
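The sketch below reproduces the general form of this calculation using a standard two-proportion approximation. Note that the exact minimum detectable difference depends on the approximation and assumptions used, so this sketch yields a figure somewhat below the 14 percent quoted above.

```python
# Sketch of a minimum-detectable-difference calculation for comparing a
# binary outcome (p ~ .50) between two waves of 500 interviews each, with a
# design effect of 1.5, two-sided alpha = .05, and 80 percent power.  The
# result differs modestly from the 14 percent quoted in the text, which may
# reflect additional assumptions not stated there.
from math import sqrt

z_alpha = 1.96    # two-sided alpha = .05
z_beta = 0.8416   # power = .80
p, deff, n_per_wave = 0.50, 1.5, 500

n_eff = n_per_wave / deff  # effective sample size per wave after deff
mde = (z_alpha + z_beta) * sqrt(2 * p * (1 - p) / n_eff)
print(f"minimum detectable difference ~= {mde:.1%}")  # ~10.9% here
```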
The main objective for the PAHUE is to compare behavior (mail return rates) between the test and control DMAs, as well as attitudes, opinions, and self-reported advertising exposure between test and control markets. The experiment will also assess the impact of the extra media on hard-to-count clusters.
NORC plans to use primarily computer-assisted telephone interviewing (CATI) for administration of the questionnaire. CATI reduces respondent burden and produces data that can be prepared for release and analysis faster and more accurately than is the case with pencil-and-paper interviews. NORC’s dialing system supports industry-standard dialing modes as well as “hybrid dialing,” a methodological and technological advance that combines traditional preview and predictive dialing approaches to maximize cost efficiencies while maintaining quality interviewer-respondent interaction. A central benefit of this system is its ability to read and respond to a call history. Cases are dialed predictively until contact is made with a human; cases that have call notes or are otherwise likely to have had contact with a human or an answering machine are forwarded directly to an interviewer for preview before the dial is initiated. An interviewer working in the NORC system may receive either a newly connected number to interview or a case with a call history to review prior to dialing. The system increases interviewer engagement, because interviewers deal with fewer unproductive calls. The increases in productivity range up to 25 percent, while maintaining, and in some cases actually increasing, survey response rates. Respondents also bear less burden, since fewer unnecessary calls must be made.
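The routing rule at the heart of this approach can be sketched as follows; this is a simplified illustration, not NORC's production dialer logic, and the field names are invented.

```python
# Simplified sketch of hybrid dialing case routing: fresh numbers are dialed
# predictively; cases with call history go to an interviewer for preview.
from dataclasses import dataclass, field

@dataclass
class Case:
    phone: str
    call_notes: list = field(default_factory=list)
    likely_prior_contact: bool = False

def routing_mode(case: Case) -> str:
    """Choose a dialing mode based on the case's call history."""
    if case.call_notes or case.likely_prior_contact:
        return "preview"     # interviewer reviews history before dialing
    return "predictive"      # dialer calls; connect on human contact

queue = [
    Case("555-0101"),
    Case("555-0102", call_notes=["answering machine 6/12"]),
    Case("555-0103", likely_prior_contact=True),
]
for c in queue:
    print(c.phone, "->", routing_mode(c))
```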
For cases that CATI is unable to complete, in-person interviews will be conducted using paper-and-pencil questionnaires administered by trained field interviewers. These forms will be electronically scanned rather than manually keyed. NORC also plans to offer a web questionnaire to respondents who do not complete the telephone interview.
The Census Bureau is conducting the 2010 Census ICP Evaluation as part of the 2010 Census Program for Evaluations and Experiments (CPEX), the purpose of which is to evaluate the current census and to build a foundation on which to make early and informed decisions for planning the next census in 2020. Program planners designed the 2010 CPEX to measure the effectiveness of the 2010 Census design (including operations, systems, and processes), in addition to determining how the design impacts data quality. The 2010 Census ICP Evaluation is the only evaluation of the full 2010 Census ICC within the 2010 CPEX.
The proposed data collection involves households only. No small organizations will be asked to participate in any study activities.
6. Consequences of Less Frequent Data Collection
Survey data collection will take place at three points: [1] in fall 2009, during the early phases of partnership activity, to assess baseline levels of all measures of public attention and intentions that will be the focus of the 2010 Census ICC; [2] in winter/spring of 2010, during the peak of the paid media campaign, when partnership activities have been designed to motivate response; and [3] during the Nonresponse Followup period from May through August 2010, when both paid advertising and partnership activities are motivating people to respond to the census and encouraging cooperation with enumerators. Survey data would ultimately be linked to Census Bureau records indicating the actual participation status of the household (mail return vs. enumerator interview vs. nonresponse, as well as specific household characteristics).
NORC’s time-series and longitudinal studies will document changes over time in exposure to each component of the 2010 Census ICC and will reveal whether increases (for individuals and groups) in reported exposure are accompanied by changes in reported awareness, attitudes, and motivation compared to baseline levels measured prior to the start of the 2010 Census ICC, and ultimately to census response behavior. Multivariate analyses of survey data based on carefully specified regression models will control for key covariates that are also likely to influence census participation. Less frequent data collection would fundamentally limit our ability to reach statistically significant conclusions regarding the effectiveness of the 2010 Census ICC.
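As a hedged sketch of this kind of multivariate analysis, the code below fits a logistic regression of mail return on a campaign-exposure measure with illustrative covariates, using simulated data. The variable names and model specification are invented; the evaluation's actual models will be richer.

```python
# Illustrative logistic regression of mail return on campaign exposure with
# covariates, on simulated data.  Not the evaluation's actual specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "exposure": rng.normal(0, 1, n),          # standardized campaign dose
    "hard_to_count": rng.integers(0, 2, n),   # illustrative HTC flag
    "owns_home": rng.integers(0, 2, n),       # illustrative covariate
})

# Simulate mail return with a true positive exposure effect.
logit_p = -0.2 + 0.4 * df.exposure - 0.6 * df.hard_to_count + 0.5 * df.owns_home
df["mail_return"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("mail_return ~ exposure + hard_to_count + owns_home",
                  data=df).fit()
print(model.summary())  # exposure coefficient estimates the dose effect
```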
No special circumstances apply.
8. Consultations Outside the Agency
The pre-submission notice for public comment, entitled “2010 Census Integrated Communication Program Evaluation,” was published in the Federal Register on March 3, 2009 (Vol. 74, No. 40, pp. 9214-9217).

Comments on the pre-submission notice were received from Cynthia Taeuber, who requested copies of the questionnaires for the three waves of data collection. Based on her review of the questionnaires, she responded with several questions, including: are there any estimates of the final response rate to this questionnaire; is there a resolution to cell-phone-only households and the related bias; her impression is that cell-phone-only rates are relatively higher within the priority race/ethnic communities, and is this correct; will the sub-sample be sufficiently large to obtain the desired data for the target groups of interest; are there data one way or the other; and are the results at the national level? Ms. Taeuber’s questions are included as Appendix O, and the Census Bureau’s response is included as Appendix P.

Comments were also received from Andrew Reamer of the Metropolitan Policy Program of the Brookings Institution regarding the 2010 Census ICP Evaluation. Mr. Reamer stated that the Metropolitan Policy Program strongly supports the Census Bureau’s plan to independently evaluate the efficacy of the 2010 Census ICC and agrees with the proposed methodology. Mr. Reamer did note that no mention was made of the expected final response rate, and hoped that the sample size would be sufficient for gathering the expected detailed information. He also noted that no mention was made of the expected percentage of cell-phone-only households among priority race/ethnic households, and expressed concern that an unacceptable level of bias could be introduced as a result. He further questioned whether the sub-sample for personal visits would be sufficient.

In response to Mr. Reamer’s comments, the Census Bureau estimates that the targeted weighted response rates under the current sampling strategies generally range from 65 percent to 68 percent: 65 percent for the American Indian and Alaska Native sample, 66 percent for the Asian sample, 67 percent for the Native Hawaiian and Other Pacific Islander sample, and 68 percent for the core sample that includes Hispanics, non-Hispanic Blacks, and non-Hispanic Others. Weighting the responses is necessary because of the subsampling for face-to-face interviewing and to correct potential coverage problems arising from nonresponse. For cell-phone-only households, we expect the telephone matching rate could be lower for some race/ethnicity groups (especially for the American Indian and Alaska Native sample on reservations) due to higher cell-phone-only rates and/or non-telephone household rates. Personal interviews with all six race/ethnicity groups will be conducted for a 20 percent subsample of addresses that could not be matched to a telephone number or were not interviewed by telephone despite a telephone match. For groups with a lower telephone matching rate, the percentage of interviews completed by telephone will be lower, while the percentage completed in person will be higher. As to whether the sub-sample for personal visits is sufficient: we have chosen our overall sample sizes (of housing units) based on the eligibility rates for the census tracts we have selected for each of the samples (the core sample and the three supplemental oversamples). If our assumed rates for eligibility, screening, or interviewing turn out to be too high, we will have additional replicates to provide a safety margin, allowing us to meet the target sample sizes for each race/ethnicity group. Mr. Reamer’s letter is attached as Appendix Q.
In addition to the presubmission notice, other consultations are itemized below.
The design of the 2010 CPEX was presented to the Panel on the Design of the 2010 CPEX of the National Academy of Sciences (NAS) on April 30, 2007. The NAS panel provided no comments or suggestions regarding this evaluation in their interim report dated December 7, 2007. The Census Advisory Committee of Professional Associations, including the American Marketing Association, provided comments on NORC’s Draft Study Design on January 29, 2009.
9. Paying Respondents
The main objectives of the evaluation are to assess the extent to which the 2010 Census ICC achieved a variety of specific goals related to increased mail returns, improved accuracy and reduced differential undercount, and improved cooperation with enumerators. To achieve these primary objectives, collecting observations related to survey participation, respondent receptiveness, and overall data quality is critical to understanding how respondent behavior corresponds to actual census participation. The use of targeted incentives can help characterize the degree of difficulty in securing respondent participation and cooperation, as well as further illuminate the impact incentives may have on participation across different sample groups. Within the different samples, modes, and time periods featured in the 2010 Census ICP Evaluation study design, the contractor has identified a number of unique opportunities for experimentation and measurement of the use of incentives.
Three objectives have been identified for evaluating the use of incentives in the 2010 Census ICP Evaluation: (i) evaluate the use of incentives as a conversion tool; (ii) evaluate the impact of incentives offered to potential panel members across study waves; and (iii) assess the costs and benefits associated with incentive use in general within the study, particularly in relation to face-to-face interviewing modes. Given the multiple modes of contact the 2010 Census ICP Evaluation will attempt, the contractor plans two experiments involving monetary incentives.
First, the 2010 Census ICP Evaluation plans to conduct an experiment with an incentive directed towards telephone respondents after a second CATI refusal. Cases eligible for this treatment would be cases that have completed the household screener, are known to meet the interview eligibility criteria for a given sample group, and, after being identified as eligible, have twice refused to complete the interview (e.g., refused by saying “not interested,” “don’t have time,” etc.). Due to schedule constraints and the need to subsample households for face-to-face interviews after two weeks of dialing, the experiment will not be in effect until after the subsampling has occurred. This will eliminate any confusion caused by incentives being offered to households in more than one mode. The respondents identified in this experiment are extremely important to the success of the survey because they represent the target for a given sample group, especially if eligible in one of the three targeted hard-to-count oversamples, where the overall eligibility rate within the oversample group may be quite low.
The purpose of evaluating incentive use with telephone refusals is to assess the potential nonresponse bias that would be caused by failing to recruit those sample members into the respondent group by phone. The two-refusal CATI incentive experiment will explore whether the offer of a small token incentive has an impact on averting potential refusals, or whether those still resistant differ in significant ways from the others.
For the first experiment, 70 percent of the sample will be randomly flagged as eligible for an incentive (should they refuse twice in CATI), while the remaining 30 percent of the sample will be used as a control group. In the experimental treatment group, after the second CATI refusal, the CATI interviewer will read a script offering a $10 token of appreciation upon completion of the questionnaire. The control group refusals will receive additional calls without an incentive offered. Control group respondents who complete the interview will not receive an incentive. The 2010 Census ICP Evaluation project team will evaluate the results of the Wave 1 experiment to determine whether a modification to the incentive strategies for Waves 2 and 3 would benefit participation rates or more effectively reach targeted oversample groups.
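A minimal sketch of the 70/30 random flagging is shown below; case identifiers are invented, and the actual assignment procedure is the contractor's.

```python
# Illustrative 70/30 assignment for the CATI refusal incentive experiment.
# Case IDs are invented; flagged cases receive the $10 offer only after a
# second CATI refusal.
import random

random.seed(42)
cases = [f"case-{i:04d}" for i in range(1, 11)]
assignment = {
    case: "incentive-eligible" if random.random() < 0.70 else "control"
    for case in cases
}
for case, group in assignment.items():
    print(case, group)
```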
Second, the 2010 Census ICP Evaluation design includes an incentive experiment directed towards eligible panel participants. During Wave 2 and Wave 3 data collection, the 2010 Census ICP Evaluation plans to capitalize on initial contacts with panel sample members, as each participant has been previously identified with an associated address, reducing the need to screen for eligibility prior to completing the interview. The initial mailing will include an invitation to complete a self-administered questionnaire (SAQ), either via paper and pencil or the web. Contact attempts within the current wave will not have begun with these sample members before this packet is mailed. The purpose of the second incentive experiment is to understand the impact of incentives on response rates for the SAQ approach, and whether any positive gains can be justified on cost/benefit grounds.
Mail surveys are more burdensome on respondents than interviewer-administered questionnaires; they also tend to suffer relatively low response rates. Mail surveys require additional motivation and effort on the part of the respondent in order to comprehend, complete, and return the instrument.1 Therefore, in order to achieve acceptable response rates from mailed SAQs, the contractor believes it is important to explore the use of incentives for those asked to complete the survey via mail SAQ.
For the second experiment, 70 percent of the sample will be randomly flagged to receive the incentive while the remaining 30 percent of the sample will be used as a control group. Those in the experimental treatment group who are mailed an SAQ will receive a small $2 incentive enclosed with the SAQ mailing along with a promise of an additional $10 upon return of the completed SAQ. The panel cases will not be eligible for the CATI refusal experiment. The use of a promised incentive for completion in the experimental mode is directed towards maintaining the viability of the panel sample across waves. If the analysis of Wave 2 SAQ completions is favorable, the project recommends expanding the use of the promised incentives in Wave 3.
In addition to the stated incentive experiments, the contractor’s data collection strategies identify the use of incentives across all sample groups for operational cost efficiencies and as a potential refusal-aversion mechanism when addressing potential respondents. The current operational design looks to employ incentives primarily in face-to-face data collection and screening of oversample groups, offering a $20 token of appreciation. It is believed that incentives will be most beneficial in these circumstances and will reduce the overall costs associated with fielding sample in the face-to-face environment. Current projections target approximately 900 completed interviews collected face-to-face in Wave 1, or roughly 30 percent of the overall completed cases.
The contractor believes the use of incentives in the face-to-face environment will reduce overall burden of this survey. Given the relatively low survey eligibility within the oversample groups, each completed interview requires a substantial number of contacts be made at ineligible households. Offering incentives to potential respondents will likely increase the possibility that an eligible household will complete the survey and decrease the overall number of households needed for further contact.
Finally, the contractor believes the use of incentives will reduce overall data collection costs. Again, given the relatively low eligibility rates within the oversample groups, each completed interview requires a substantial number of contacts, each with an associated cost. Particularly in the face-to-face environment, where the household receiving the incentive will be known to be eligible after extensive screening, averting a potential refusal may reduce substantial data collection effort.
The use of incentives and the incentive experiments will allow for a series of comparisons across the samples. Within the incentive experiments, the treatment (incentive) and control (non-incentive) groups will be compared on a variety of characteristics:
Response, refusal, and conversion rates: Do incentives increase cooperation?
Key questionnaire measures: Do respondents who complete the survey after receiving an incentive differ (demographically, socioeconomically, on key survey measures, etc.) from those in the similar groups who complete the survey without an incentive?
To what extent does experimental status correlate with later census behavior (among survey participants as well as non-responders who were sampled as eligible but never completed)?
The address-based sampling (ABS) model includes plans to complete face-to-face interviews with phone non-responders from all categories (including those offered and not offered an incentive); it will thus be possible to examine the characteristics of original non-responders in the two groups to assess the effectiveness of the incentive approach.
Given the results of the control and treatment groups, what is the expected cost reduction or increase of implementing the incentive treatment for all cases versus offering incentives to no cases? Using rate and cost information from the control and treatment groups, it will be possible to calculate the total cost of the survey had the incentive treatment been applied to all cases versus applied to none.
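The sketch below illustrates this cost comparison, combining assumed completion rates, call counts, and per-call and incentive costs for the two scenarios. All rates and dollar figures are invented placeholders to be replaced with observed treatment and control group data.

```python
# Illustrative projection: total survey cost if the incentive treatment were
# applied to all cases versus none.  All inputs are invented placeholders.
def projected_cost(n_cases, completion_rate, calls_per_case,
                   cost_per_call, incentive_per_complete):
    completes = n_cases * completion_rate
    dialing_cost = n_cases * calls_per_case * cost_per_call
    incentive_cost = completes * incentive_per_complete
    return dialing_cost + incentive_cost, completes

N = 10_000
cost_all, completes_all = projected_cost(
    N, completion_rate=0.55, calls_per_case=5, cost_per_call=4.0,
    incentive_per_complete=10.0)
cost_none, completes_none = projected_cost(
    N, completion_rate=0.48, calls_per_case=7, cost_per_call=4.0,
    incentive_per_complete=0.0)

print(f"all incentivized:  ${cost_all:,.0f} "
      f"(${cost_all / completes_all:,.2f} per complete)")
print(f"none incentivized: ${cost_none:,.0f} "
      f"(${cost_none / completes_none:,.2f} per complete)")
```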
All of the data collection for the 2010 Census ICP Evaluation is being conducted under Title 13, United States Code. All respondents will be informed of the confidentiality of their responses, and that their survey responses will be linked to their 2010 Census data. The 2010 Census ICP Evaluation is in compliance with the requirements of the Privacy Act of 1974 and the Paperwork Reduction Act. All persons involved with the collection or analysis of data, or who may have incidental access to the data (including contractor staff), will receive Special Sworn Status. All census data will be kept in a secure environment, formally approved by the Census Bureau IT Security Office.
Respondents will receive information about privacy and confidentiality protections when they consent to participate in the study. Information about privacy and confidentiality will be repeated in the introductory comments of interviewers. All interviewers will be knowledgeable about privacy and confidentiality procedures and will be prepared to describe them in detail or to answer any related questions raised by respondents.
NORC has carefully crafted consent language that explains in simple, direct language the steps that will be taken to protect the privacy and confidentiality of the information each sample member provides. All respondents will be notified that their responses will be reported only as part of aggregate statistics across all participating sample members.
NORC’s safeguards for the security of data include: storage of printed survey documents in locked space at NORC, and protection of computer files at NORC and its subcontractors against access by unauthorized individuals and groups. Protection of the privacy and confidentiality of individuals is accomplished through multiple steps. Oral permission for the interview is obtained from all respondents, after the interviewer ensures that the respondent has been provided with a copy of the appropriate Census Bureau privacy and confidentiality information and understands that participation is voluntary. Information identifying respondents is separated from the questionnaire and placed into a separate database.
The 2010 Census ICP Evaluation includes potentially sensitive questions, such as the frequency of voting in elections, whether or not the respondent was born in the U.S., and household income. All three of these are important correlates of Census participation, and the latter two are key characteristics on the basis of which the 2010 media campaign is targeting individuals. It is necessary to collect and analyze these variables in order to adequately assess the change effected by the 2010 Census ICC, and the effectiveness of particular strategies within the campaign.
12. Estimation of Hour Burden
Table 1. Number of Respondents and Average Response Time
Instrument | Number of Respondents | Number of Responses per Respondent | Average Burden Hours per Response | Estimated Annual Burden Hours
Screening for Wave 1 eligibility | 13,065 | 1 | .083 | 1,089
Wave 1 Questionnaire | 3,000 | 1 | .5 | 1,500
Screening for Wave 2 eligibility | 6,533 | 1 | .083 | 544
Wave 2 Questionnaire | 3,000 | 1 | .5 | 1,500
Screening for Wave 3 eligibility | 6,533 | 1 | .083 | 544
Wave 3 Questionnaire | 3,000 | 1 | .5 | 1,500
All Waves – Screened Households Ineligible for Interview | 26,131 | 1 | .083 | 2,177
All Waves – Respondents Completing Interviews | 6,000 | Average: 1.42 | Average: 0.71 | 4,250
Table 2. Number of Respondents and Average Response Time, Paid Advertising Heavy-Up Experiment (PAHUE)
Instrument | Number of Respondents | Number of Responses per Respondent | Average Burden Hours per Response | Estimated Annual Burden Hours
Screening for Wave 1 eligibility | 2,608 | 1 | .083 | 216
Wave 1 Questionnaire | 2,000 | 1 | .5 | 1,000
Screening for Wave 3 eligibility | 2,608 | 1 | .083 | 216
Wave 3 Questionnaire | 2,000 | 1 | .5 | 1,000
All Waves – Screened Households Ineligible for Interview | 5,216 | 1 | .083 | 532
All Waves – Respondents Completing Interviews | 4,000 | Average: 1 | Average: 0.5 | 2,000
13. Estimate of Cost Burden
Respondents for this survey will not incur any capital, start-up, operation and maintenance, or purchase of service costs.
14. Cost to Federal Government
The Census Bureau has contracted with the National Opinion Research Center (NORC) at the University of Chicago to design, conduct, and analyze the 2010 Census ICP Evaluation at an estimated annual cost of $2.65 million, for a total cost of $8 million over three years. This cost includes survey management, data collection, cleaning and preparation of data files for evaluation, and data analysis and report writing.
There are no previous requests to compare with the current request.
The time schedule for data collection, tabulation and report delivery to the Census Bureau is listed below.
Questionnaire Development October 2008 – April 2009
Cognitive Testing March 2009 – April 2009
Study Design October 2008 – April 2009
Sample Selection March 2009 – May 2009
Data Collection
Wave 1 August 2009 – October 2009
Wave 2 January 2010 – April 2010
Wave 3 May 2010 – August 2010
Data Processing August 2009 – September 2010
Data Analysis and Report Writing October 2009 – June 2011
Final Research Report June 2011
We will incorporate the OMB number and expiration date into the questionnaire script for the interviewers to follow.
18. Exceptions to the Certification
There are no exceptions.
Figure 1. Conceptual Model Guiding Analysis
1 Church, A. H. (1993). “Estimating the Effect of Incentives on Mail Survey Response Rates: A Meta-Analysis.” Public Opinion Quarterly, 57:62-79.