Memo


Generic Clearance for Internet Nonprobability Panel Pretesting

OMB: 0607-0978

Online Testing of Respondent Messaging

Background


In late 2015 and early 2016, researchers from the Center for Survey Measurement (CSM) conducted research on conveying legally required information to respondents in a way that is clear and easy to understand. This research explored various ways of communicating the required description of access to data collected under Title 13, as well as other language required by the Paperwork Reduction Act (PRA). The research consisted of two stages: a large qualitative data collection with our opt-in affinity panel that explored many possible options for this language, followed by a smaller-scale cognitive test of the options that seemed most viable and reliable based on findings from the larger study.


The data collection with the opt-in affinity panel gave us valuable feedback of the kind cognitive testing yields, suggesting that this method has potential as a component of a broader pretesting strategy. It may be particularly useful for evaluating a large number of messages when doing so with in-person cognitive testing is not possible.


Purpose of Research


This submission builds on that prior research by replicating the same qualitative data collection using two different sample sources: a sample from the Census Bureau Contact Frame and a sample from Amazon Mechanical Turk. Previous research has shown that feedback may vary by sample source in this type of online unmoderated cognitive testing (e.g., Edgar, 2013; Fowler et al., 2015; Cook et al., 2015). Using different sample sources may also allow us to obtain reactions from a larger and more diverse respondent pool. We will add to this literature by comparing the feedback we receive on this messaging with the feedback received during the prior submission.


Timeline and Location

In September 2016, staff from the Demographic Statistical Methods Division (DSMD) and the Center for Survey Measurement (CSM) will collect responses from two samples over a two-week period.

Sample and Recruitment

Two samples will be used for this research: a sample from the Census Bureau Contact Frame and a sample from Amazon Mechanical Turk. Mechanical Turk has been used for similar research by researchers at the Bureau of Labor Statistics, among other organizations.

Staff from the Center for Administrative Records Research and Applications (CARRA) will sample email addresses from 10,000 MAFIDs in the Contact Frame. We expect to achieve a 2% response rate, consistent with recent studies, for a goal of 200 completed surveys; this will allow approximately 100 responses per message (a sketch of this arithmetic follows the email schedule below). CSM staff will send up to three emails through GovDelivery for this sample:

  • An initial email on a Monday,

  • A reminder email on the following Thursday (if they have not yet clicked on the link to the survey), and

  • A final reminder email on the following Monday (if they have not yet clicked on the link to the survey) with the survey closing the following Friday.

Copies of these emails are included in Attachment A. The text is identical to the text used in the prior submission.
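
A minimal sketch of that expected-yield arithmetic, in Python. The nine-message figure comes from the Methodology section below; the comment about rotating messages across respondents is an inference rather than a confirmed design detail:

```python
# Rough check of the expected yield from the Contact Frame sample.
invited = 10_000        # email addresses sampled from 10,000 MAFIDs
response_rate = 0.02    # 2% expected response rate, per recent studies

expected_completes = int(invited * response_rate)
print(f"Expected completed surveys: {expected_completes}")    # 200

# Each respondent evaluates nine messages (see Methodology). The goal of
# roughly 100 responses per message suggests, as an inference rather than
# a confirmed design detail, that messages are rotated across respondents.
message_evaluations = expected_completes * 9
print(f"Total message evaluations: {message_evaluations:,}")  # 1,800
```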

Up to 200 respondents from the Amazon Mechanical Turk opt-in panel will have the opportunity to complete the survey over the same two-week period as the Contact Frame sample. Respondents who are interested in the task will volunteer to respond; they will therefore not be scientifically selected. Respondents will be restricted to people living in the United States. The invitation for these respondents is shown in Attachment B.

CSM will host the survey for both samples through Survey Monkey and will collect no personally identifiable data. The questions asked and messages shown to respondents in the Survey Monkey instrument will be identical to those used in the prior OMB submission and are shown in Attachments C and D.

Methodology

Respondents in both samples will be shown nine messages from Attachment D. After each message, respondents will be asked, “In your own words, what is this message telling you?” After responding to each of these, respondents will be asked whether they have any further comments about things they liked or didn’t like about the messages they saw. Then they will be asked some general demographic questions, shown in Attachment C.



Respondents from Amazon Mechanical Turk only will be asked to create a five-digit code at the end of the survey and then enter the same code on the Mechanical Turk website. This prevents Mechanical Turk respondents from claiming that they completed the survey in order to receive the incentive when they did not actually do so. The code will not be used to link respondent answers in the Survey Monkey survey to respondent identity.
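
As a hypothetical illustration of this verification step, the sketch below matches codes from a survey export against codes entered on Mechanical Turk. The file names and field names are assumptions for illustration, not actual export formats:

```python
# Hypothetical sketch of the completion-code check described above.
# Assumes two exports: survey responses (with the code each respondent
# created) and Mechanical Turk submissions (with the code each worker
# entered). Neither export contains personally identifiable information.
import csv

def load_codes(path, field):
    """Collect the valid five-digit codes found in one export file."""
    codes = set()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            code = (row.get(field) or "").strip()
            if len(code) == 5 and code.isdigit():
                codes.add(code)
    return codes

# Illustrative file and field names, not actual export formats.
survey_codes = load_codes("survey_monkey_export.csv", "completion_code")
mturk_codes = load_codes("mturk_submissions.csv", "entered_code")

# Approve only submissions whose entered code also appears among the
# codes created in the survey instrument; flag the rest for review.
verified = mturk_codes & survey_codes
unmatched = mturk_codes - survey_codes
print(f"{len(verified)} verified submissions, {len(unmatched)} unmatched")
```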

We will compare respondent demographics across sample sources. We will also analyze respondents’ open-ended answers and identify whether any of the messages were more or less likely to be understood.
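
As a sketch of how message comprehension could be tabulated once the open-ended answers have been coded, assuming a simple understood/not-understood coding scheme and invented placeholder records:

```python
# Illustrative tabulation of coded comprehension outcomes by message and
# sample source. The records below are invented placeholders, assuming
# each open-ended answer has been coded as understood (1) or not (0).
from collections import defaultdict

coded_answers = [
    # (sample_source, message_id, understood)
    ("contact_frame", "message_1", 1),
    ("contact_frame", "message_2", 0),
    ("mturk", "message_1", 1),
    ("mturk", "message_2", 1),
]

tallies = defaultdict(lambda: [0, 0])  # (source, message) -> [understood, shown]
for source, message, understood in coded_answers:
    tallies[(source, message)][0] += understood
    tallies[(source, message)][1] += 1

for (source, message), (yes, n) in sorted(tallies.items()):
    print(f"{source:>13}  {message}: {yes}/{n} understood ({yes / n:.0%})")
```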



Consent and Incentive


Participants will be informed of the OMB number and the voluntary nature of the study:

Thank you for agreeing to participate in this survey. On the next screens, you will be asked to read nine short excerpts about the U.S. Census Bureau. After each excerpt, you will be asked a follow-up question. Please answer each question to the best of your abilities. Remember that your answers will be kept confidential. This survey will take 10 minutes or less to complete.

Your responses will not be shared with anyone in a way that could personally identify you. Your participation in this study is voluntary. The legal authority under which this information is being collected is Title 13 U.S.C. Chapter 5 Sections 141 and 193. This data collection is approved under OMB No. 0607-0978, and the approval expires 04/30/17. This data collection uses a third-party website to collect data. This survey does not collect personally identifiable information. The results from this survey will be used to conduct primary research to enhance planning efforts for current and future surveys and censuses.



Respondents in the Contact Frame sample will not receive compensation for their participation. Respondents in the Amazon Mechanical Turk sample will receive $0.50 for their participation; fielding a study on Mechanical Turk requires payment of an incentive.


The estimated respondent burden is calculated as follows:



Sample             Screened   Screening Min.   Screening Burden   Participants   Collection Min.   Collection Burden   Total Burden
Mechanical Turk         200                2                400            200                10               2,000          2,400
Contact Frame        10,000                2             20,000            200                10               2,000         22,000

Note: “Screening Min.” and “Collection Min.” are minutes per participant; all burden figures are in minutes. “Total Burden” is screening plus collection.

Total Burden: 24,400 minutes (406.67 hours)
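
A minimal check of the burden arithmetic in the table above:

```python
# Recompute the respondent burden from the counts and per-participant
# minutes shown in the table above.
samples = {
    "Mechanical Turk": {"screened": 200,    "screen_min": 2,
                        "participants": 200, "collect_min": 10},
    "Contact Frame":   {"screened": 10_000, "screen_min": 2,
                        "participants": 200, "collect_min": 10},
}

total = 0
for name, s in samples.items():
    burden = s["screened"] * s["screen_min"] + s["participants"] * s["collect_min"]
    total += burden
    print(f"{name}: {burden:,} minutes")   # 2,400 and 22,000 minutes

print(f"Total burden: {total:,} minutes ({total / 60:.2f} hours)")  # 24,400 (406.67)
```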



In order to meet project deadlines for this work, we need to receive OMB approval no later than September 9, 2016.


The contact person for questions regarding data collection and study design is:


Jessica Holzberg
Demographic Statistical Methods Division

U.S. Census Bureau

Room 7H160E

Washington, D.C. 20233

(301) 763-2298

[email protected]



