(OMB Control No. ####-New)
The ASPA COVID-19 Public Education Campaign will administer two surveys to collect data that will inform the outcome evaluation: the COVID-19 Attitudes and Beliefs Survey (CABS), a longitudinal survey, and the Monthly Outcome Survey (MOS), a cross-sectional survey.
The respondent universe for both surveys is American adults (ages 18 and older). Both surveys will be fielded using online probability-based panels designed to be nationally representative of the U.S. population. The CABS will be fielded using the NORC AmeriSpeak panel, and the MOS will be fielded using the Ipsos KnowledgePanel. More information on the recruitment and sampling methods for each panel is provided below.
NORC AmeriSpeak:
To provide a nationally representative sample, NORC’s AmeriSpeak Panel utilizes the NORC National Sample Frame, which is constructed by NORC and represents 97% of U.S. households. NORC recruits participants in a two-stage process: initial recruitment using less expensive methods, followed by nonresponse follow-up using personal interviewers to recruit a more representative sample. For the initial panel recruitment, sample units are invited to join AmeriSpeak online by visiting the panel website (www.amerispeak.org) or by telephone. Both English and Spanish are supported for online and telephone recruitment. Panel invitations are communicated via a 6” x 9” prenotification postcard; a USPS recruitment package in a 9” x 12” envelope containing a cover letter, a study brochure, and a non-contingent incentive; one follow-up postcard; and follow-up calls from NORC’s telephone research center for matched sample units. The second stage of panel recruitment targets a stratified random subsample of the nonrespondents from the initial recruitment. Stratification is based on internal consumer panel data and stratification variables from the initial recruitment stage to increase sample representation of young adults, non-Hispanic African Americans, and Hispanics.
NORC obtains and documents research subjects’ informed consent and their agreement to the panel’s Privacy Policy and Terms and Conditions during the registration process. After registration is completed, AmeriSpeak Panel members first complete an introductory survey of about 15 minutes, by web or by telephone, asking about the household’s composition and the person’s background and interests. The introductory survey provides an initial profile of the panelist and the household. Upon completion of the registration process and introductory survey, the respondent is an “active” AmeriSpeak panelist eligible for client studies and additional NORC-conducted profile surveys. All members of the panel have provided consent as part of their registration into the panel. We will also inform all participants, via web screens, about their privacy rights and the intended use of their data, and ask for their consent before they begin each survey.
The CABS survey will utilize a longitudinal panel design, fielding six waves with four months between the starts of successive waves. NORC will select a subsample of potential respondents from its panel for the first wave of the survey. NORC has extensive information on how sample member demographics affect response rates and panel attrition rates. With this information, NORC will develop a sampling plan aimed at a final set of respondents at the sixth wave that is representative of the adult population of the United States.
Ipsos KnowledgePanel:
The Ipsos KnowledgePanel contains approximately 60,000 panel members and is nationally representative. Members of the KnowledgePanel were invited to participate in the panel through address-based sampling (ABS) of U.S. households. The KnowledgePanel includes hard-to-reach populations and households without landline telephones, without internet access, or whose dominant language is Spanish. Ipsos weights the entire KnowledgePanel according to benchmarks from the March 2020 U.S. Census Bureau Current Population Survey (CPS) so that the weighted distribution of the KnowledgePanel reflects the U.S. adult population. The geodemographic dimensions used for weighting the KnowledgePanel include, but are not limited to, age, race/ethnicity, education, census region, household income, homeownership status, metropolitan area, Hispanic origin, and language dominance (with respect to the Spanish language).
Statistical Methodology, Estimation, and Degree of Accuracy
The first wave’s sample for the CABS survey will be selected using a stratified random sample from the NORC AmeriSpeak panel. There will be 48 strata based on age, level of education, race/ethnicity, gender, and Census region. The sample size in each stratum will be determined based on the expected response rate for that stratum, as well as the anticipated attrition rates over the life of the study. Groups with lower response rates and higher anticipated attrition will be oversampled so that the final set of respondents in wave six is expected to be representative of the United States adult population. Subsequent waves will be a census of all respondents from the prior wave.
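To illustrate how oversampling for nonresponse and attrition can work, the following is a minimal sketch in Python. The function name, example rates, and the assumption of a constant per-wave retention rate are hypothetical and are not drawn from NORC’s actual sampling plan.

import math

def wave_one_sample_size(target_wave6, response_rate, retention_rate, n_followup_waves=5):
    """Invitations needed in wave one so that, in expectation,
    `target_wave6` respondents remain after wave six."""
    expected_yield = response_rate * retention_rate ** n_followup_waves
    return math.ceil(target_wave6 / expected_yield)

# Example: a stratum needing 100 wave-six respondents, with a 70% wave-one
# cooperation rate and 85% per-wave retention, requires about 322 invitations.
print(wave_one_sample_size(100, 0.70, 0.85))  # 322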
Each wave will be weighted to account for the probability of inclusion in the AmeriSpeak panel, probability of selection into the CABS, and probability of responding to the survey. Weights will then be calibrated to Census benchmarks for key demographics. Auxiliary data on COVID-19 prevalence in respondents’ local communities will also be considered for inclusion in weighting as it may have a relationship with respondents’ probabilities of response and key survey measures.
Each wave of the MOS survey will be selected using a stratified random sample from the Ipsos KnowledgePanel. Strata will be based on key demographic attributes. The sample size in each stratum will be determined based on the expected response rate for that stratum, with the goal of having a final dataset that is representative of the United States adult population.
The MOS will be weighted to account for the probability of inclusion in the KnowledgePanel, probability of selection into the MOS, and probability of responding to the survey. Weights will then be calibrated to Census benchmarks for key demographics. Auxiliary data on COVID-19 prevalence in respondents’ local communities will also be considered for inclusion in weighting as it may have a relationship with respondents’ probabilities of response and key survey measures.
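As an illustrative sketch of this weighting approach for both surveys, the following Python code computes base weights as the inverse of the product of the three probabilities described above and then calibrates them to categorical benchmark margins via iterative proportional fitting (raking). The function names, data layout, and convergence settings are hypothetical and do not represent the vendors’ production weighting systems; the sketch assumes every benchmark category is observed in the sample.

import numpy as np

def base_weights(p_panel, p_select, p_resp):
    # Inverse of the product of the panel-inclusion, sample-selection,
    # and survey-response probabilities for each respondent.
    return 1.0 / (p_panel * p_select * p_resp)

def rake(weights, categories, targets, max_iter=100, tol=1e-8):
    """Iterative proportional fitting (raking).
    categories: dict mapping a variable name to an integer-coded array
        (one category code per respondent).
    targets: dict mapping the same names to benchmark totals per code.
    """
    w = np.asarray(weights, dtype=float).copy()
    for _ in range(max_iter):
        max_change = 0.0
        for name, codes in categories.items():
            current = np.bincount(codes, weights=w, minlength=len(targets[name]))
            factors = targets[name] / current
            w *= factors[codes]
            max_change = max(max_change, np.abs(factors - 1.0).max())
        if max_change < tol:
            break
    return w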
Both proposed surveys will be self-administered web surveys. The web surveys will be optimized to ensure that survey participants can complete the questionnaire on mobile devices (e.g., cell phones and tablets). All survey questionnaires and materials will be available in Spanish.
For the CABS, potential respondents in the first wave will receive an email invitation and three separate email reminders inviting them to complete the survey, for a total of up to four communications. In waves two through six, respondents who completed the previous wave will first receive a pre-invitation postcard in the mail, followed by an email invitation and three separate email reminders, for a total of up to five communications per wave. Respondents who complete a survey will receive no further reminders until the next survey wave, and respondents who ask to be removed from the study will receive no further communications in any wave. Fielding will last three weeks for the first wave and up to six weeks for subsequent waves; the longer fielding periods in waves two through six are intended to maximize panel retention.
For the MOS, participants will be recruited from the Ipsos KnowledgePanel every month. Ipsos currently fields an online omnibus survey (KnowledgePanel 5K Omni) to 5,000 U.S. adults in its panel every month. Beginning in January 2021, the panel vendor will incorporate the Monthly Outcome Survey into the KnowledgePanel 5K Omni. During survey fielding, Ipsos will send email communications in English and Spanish to invite panel members to participate in the KnowledgePanel 5K Omni and to remind them to complete it. The study will recruit 5,000 participants for each survey wave, totaling around 120,000 completed surveys and up to 60,000 unique participants over approximately two years. Reminders will be sent as needed to reach the targeted 5,000 completed surveys. Fielding will last one week.
NORC’s AmeriSpeak panel recruitment response rate is 34%, and the cooperation rate for the initial wave is expected to be 70%. During fielding of the first wave, each sample member will receive an email invitation and up to three separate email reminders inviting them to complete the survey to increase survey response rates. Email invitations have been customized to stress the importance of COVID-19 research while avoiding priming participants about the campaign, to mitigate response bias. Subsequent waves will sample the respondents of the previous wave and are expected to have retention rates in the range of 80% to 90%. Fielding for subsequent waves will begin with a pre-invitation postcard in the mail and then, like the first wave, continue with one email invitation and up to three email reminders. Incentives will play an important role in encouraging cooperation and panel retention. Respondents who complete the survey will receive $10 in the initial wave. Subsequent waves will have an $18 incentive to encourage retention.
The panel recruitment response rate for Ipsos’ KnowledgePanel is 12%, and the cooperation rate for each survey wave is expected to be between 55% and 60%. Sample members will receive an initial email invitation, and email reminders will be sent to those who do not respond. Sample members who complete the survey will receive a $1 incentive to encourage response.
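As an illustration of how these rates combine (the arithmetic below is ours, offered for context rather than taken from the vendors), the cumulative response rate for a panel survey is approximately the panel recruitment rate multiplied by the survey cooperation rate: roughly 0.34 × 0.70 ≈ 24% for the first wave of the CABS, and roughly 0.12 × 0.55 to 0.12 × 0.60 ≈ 7% for each wave of the MOS. With per-wave retention of 80% to 90%, the expected cumulative CABS rate at wave six would be on the order of 0.24 × 0.80^5 ≈ 8% to 0.24 × 0.90^5 ≈ 14%.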
Both the CABS and MOS surveys are expected to have cooperation rates exceeding 50%. Factors impacting nonresponse will be assessed after each administration and addressed in survey weighting to reduce nonresponse bias.
For the CABS survey, the primary concern related to nonresponse is panel attrition. FMG and NORC will make extensive efforts to minimize panel attrition. Mitigation measures will include a pre-invitation postcard sent via mail, followed by an email invitation and up to three email reminders if the panelist has not yet completed the survey. Incentive levels will also be higher for waves two through six to encourage panelists to continue to respond. During fielding of all waves, response rates will be monitored for all sampling strata. If response rates for certain strata fall below expectations, further interventions will be considered.
Both surveys will utilize the same basic process to ensure the reliability of the survey data. The panel vendor will program the survey and check all aspects of the programming. This will include verifying that the text of the instructions, questions, and response options is correct and that the skip patterns and other aspects of survey logic are programmed appropriately. FMG will independently verify these points as well to ensure the questionnaire is programmed correctly.
During fielding, both vendors will eliminate sample members who provide poor-quality data by speeding through the survey, “straight-lining,” or providing incoherent responses. FMG will perform an independent evaluation to confirm the decisions to exclude respondents due to poor data quality. After data collection, the vendor will provide FMG with the raw data as collected from respondents as well as the final data after any necessary editing has been completed. FMG will have two analysts independently verify that data edits are correct and that the final data are as expected.
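As a minimal illustration of the kinds of checks involved (the column names, grid items, and thresholds below are hypothetical, not the vendors’ actual rules), a speeding and straight-lining screen might look like the following Python sketch:

import pandas as pd

def flag_low_quality(df, grid_cols, duration_col="duration_sec", min_seconds=120):
    """Return a boolean Series flagging likely low-quality completes.
    Speeders: finished faster than `min_seconds`.
    Straight-liners: gave an identical response to every item in a grid.
    """
    speeders = df[duration_col] < min_seconds
    straightliners = df[grid_cols].nunique(axis=1) == 1
    return speeders | straightliners

# Hypothetical usage: drop flagged cases before weighting.
# clean = df[~flag_low_quality(df, grid_cols=["q5a", "q5b", "q5c"])]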
Our evaluation of the ASPA COVID-19 Public Education Campaign has multiple research goals, including determining:
Behavioral Factor Change: What is the change in campaign-targeted, mutable behavioral factors including COVID-19 vaccine readiness, face mask wearing, social distancing, and handwashing?
Campaign Recall: Do audiences recall exposure to recent COVID-19 campaign content?
Campaign Receptivity: To what extent do audiences trust the campaign, feel the campaign is relevant to them, and feel that the campaign is attention-grabbing?
HHS Trust/Campaign Brand Equity: To what extent do audiences trust HHS as a source of information on COVID-19? How does this compare to other information sources? Are audiences aware of, using, and receptive to the HHS website?
Knowledge: To what extent do audiences understand the vaccine development and testing process, vaccine effectiveness, and vaccine safety? To what extent can audiences accurately identify COVID-19 misinformation and rumors?
Sharing: Do audiences report sharing campaign content with others?
The CABS and MOS will enable us to assess the extent to which the ASPA COVID-19 Public Education Campaign is recognized and the extent to which the information it provides is reported as trusted over the course of the campaign. This information will be crucial for understanding for whom, and in which geographic areas, the campaign is succeeding in producing attitude and behavior change surrounding COVID-19 preventive behavior. Perhaps more importantly, these surveys will provide a mechanism for alerting campaign leadership to areas where the information provided is not recognized or where it is recognized but not trusted. Such data will make the best use of government funds by permitting course corrections in messaging and message placement in real time to better reach diverse subpopulations and disparate geographic regions of the United States.
In addition, both surveys will serve as a measure of COVID-19 prevention behaviors, and change in those behaviors, among the U.S. population as a whole as well as among key subpopulations and in different geographic regions. Measures of prevention behaviors offer both a way to ascertain campaign success in terms of actual change in targeted outcomes that are associated with fewer COVID-19 infections and a way to point campaign leadership to areas in greater need of adjusted messaging content, delivery, or coverage.
Finally, the CABS survey provides a methodologically rigorous and extensive examination of key attitude positions, health vulnerabilities, and trust in experts that can be evaluated over time among the American populace to show change over the course of the campaign on a person-by-person level. The evaluation of a person’s changing attitudes over time is a key component of strong inference about the impact of the campaign: each person serves as their own comparison point to past values, which provides higher confidence that observed changes are “true” changes and, when cross-referenced with campaign interventions and expenditures, that the Public Education Campaign was the source of the change.
Cognitive testing was conducted on both evaluation surveys prior to finalizing the survey instruments. The cognitive testing was used to refine the survey questions and improve utility. A total of seven individuals participated in the testing and debriefs for the CABS instrument, and six individuals participated in the testing and debriefs for the MOS instrument. Participants were recruited to ensure diversification by age, gender, race/ethnicity, and educational attainment.
A trained moderator conducted the cognitive interviews virtually using the Zoom platform. Participants used screen sharing to scroll through a mock online survey and were asked to use the think-aloud method to describe their thought process and response selection as they proceeded through the test survey. After completing the survey instrument, the interviewer asked participants a series of debriefing questions that focused on the survey overall and specific survey questions. The debriefing questions focused on items that participants expressed questions or confusion with during completion of the mock survey, as well as specific items and wording that the researchers had identified in advance.
All interviews were recorded to capture participant comments on the survey and any questions that required revisions. The data collection did not focus on the actual survey instrument responses but rather on how well the questions were understood and how well they conveyed the research intentions. All identifiable information was kept private to the extent allowable by law and was destroyed after all interviews concluded. Revisions to the questionnaire were made based on the results of the survey instrument testing.
The following persons outside of HHS contributed to, reviewed, and/or approved the design, instrumentation and sampling plan:
Name | Affiliation | Telephone Number
Leah Hoffman, MPH | Fors Marsh Group | 571-444-1694
Joseph Luchman, PhD | Fors Marsh Group | 319-621-7109
Colin Macfarlane | Fors Marsh Group | 571-858-3810
Marcus Maher | Fors Marsh Group | 773-629-5497