February 10, 2015
NOTE TO THE REVIEWER OF: OMB CLEARANCE 1220-0141 “Cognitive and Psychological Research”
FROM: Erica Yu, Office of Survey Methods Research
SUBJECT: Submission of Materials for “Measuring Respondent Burden”
Please accept the enclosed materials for approval under the OMB clearance package 1220-0141 “Cognitive and Psychological Research.” In accordance with our agreement with OMB, we are submitting a brief description of the study.
The total estimated respondent burden for this study is 188 hours.
If there are any questions regarding this project, please contact Erica Yu at 202-691-7924.
Introduction
A common approach that survey designers take to reduce respondent burden is to reduce its objective features, such as the number of questions in the survey. However, this approach treats respondent burden and data quality as rival ends of a trade-off, whereby reductions in burden must come at the cost of reductions in data quantity and quality.
Decision to Participate in Future Surveys
Recent research has proposed that subjective perceptions of burden may also play a role in respondent ratings of burden and associated survey outcomes (Yan, Fricker, & Tsai, 2014). Subjective factors are grounded in cognition and need not be tied to objective factors, giving survey designers an opportunity to explore alternative ways to reduce respondent burden. One alternative, tested by Yu, Fricker, and Kopp (2015), is to change the respondent’s reference set: their results showed that a respondent’s judgment of burden may change when his or her reference set changes. This finding is supported by studies from cognitive psychology showing that human perceptions of physical and psychological phenomena are judged relative to a reference set rather than absolutely. When judgments are made relative to a reference set, it follows that different reference sets for the same stimuli may lead to different judgments. But does this difference in burden ratings translate to survey outcomes, such as a change in willingness to participate in future surveys?
Willingness to participate in a survey may be related to the respondent’s past experiences relevant to the survey request. Bem (1967) and others have hypothesized that people may retrieve information about past behavior to form such judgments. For example, when a person is deciding whether to see a new movie by a particular director, she may think back to whether she liked others of the director’s movies that she had seen. Likewise, a respondent may retrieve a memory of a past experience of participating in a survey to determine whether he wants to participate in a new survey.
Furthermore, the literature also suggests that retrieval of information about a past episode is a summary of that experience rather than a memory of each individual moment. For a movie, one would more likely remember that it received a four-star rating, rather than retrieve each moment of dialogue and action (Redelmeier & Kahneman, 1996). Likewise, respondents may refer to a summary judgment about a previous survey rather than remembering each individual question that was asked.
By integrating these separate findings regarding burden, attitude formation, and memory for past experiences, we aim to test whether it is possible to influence a respondent’s decision to respond to a future survey through a prompt for a summary judgment of the survey that does not affect the survey content itself.
Additionally, this study will explore the factors that relate to burden and how those factors affect respondent perceptions of burden. The findings from this study may support a renewed emphasis on relieving perceived burden as an alternative to reducing survey length to minimize respondent burden.
Exploring Respondent Burden
The literature on respondent burden has primarily focused on understanding the burden associated with elements of the global survey experience, such as interest in the interview, difficulty of the questions, worthwhileness of time spent, effort, and length of interview (Sharp & Frankel, 1983); interest, trust, concern, sensitivity, motivation, effort, task difficulty, and cooperativeness (Fricker, Tan, & Tsai, 2014); or length, effort, frequency of interview, and sensitivity (Bradburn, 1978). There has been little to no examination of the burden of the response process. Tourangeau, Rips, and Rasinski (TRR; 2000) proposed a process that all respondents use when responding to survey questions: comprehension, retrieval, judgment, and response. In the current research, we assume that each of these four stages can impose burden on respondents. Most research conducted at this level focuses on cognitive sources of error, such as heuristics and biases. However, a study of burden at this level should be of interest to survey designers because, as the literature on survey satisficing suggests, respondents who find individual questions burdensome may not answer those questions optimally and may satisfice instead. Research into burden at the question level can be used to inform improvements to question wording and design as well as understand potential sources of measurement error.
Methodology
Survey Content
The content of the main survey will draw upon the framework for studying the response process proposed by TRR (2000). Multiple survey items will emphasize each of the four stages of the response process: comprehension, retrieval, judgment, and response. For each stage, some items will be designed to induce high levels of burden while other items will be designed to induce low levels. In this design, each participant will see both burdensome and non-burdensome items. This comparison will be probed during the debriefing in the second survey, when participants will be asked to indicate which of a pair of questions was more burdensome to answer and why. Comparisons of interest include the relative burden of comprehension versus response, comprehension versus retrieval, and judgment versus retrieval. In addition, two forms of retrieval will be compared (retrieval of “usual” behavior and retrieval of behavior specifically from “yesterday”).
The full list of items for Survey 1 is included in Attachment A.
Summary Judgment and Reference Points
This study is designed to test whether a summary judgment about a survey affects the decision to cooperate in a future similar survey. Participants’ objective experiences of burden will be held constant; in other words, all participants will be given the same survey. There will be one experimentally manipulated difference between participants: the final item, which will be used to direct the participant’s process of forming a summarizing judgment about his or her overall experience. The aim is to create two groups whose perceptions of overall survey burden are judged relative to different reference points, resulting in differences in perceptions and, ultimately, in willingness to participate in a future similar survey. In effect, the manipulation creates context effects that influence participants’ ratings.
- Group 1: Participants are told that the survey designers tried to make their experience less burdensome, that this meant some questions were rewritten to be simpler, and that the respondent’s ease was their highest priority. This information serves to re-frame the survey experience as easier than the original, compared to no framing.
- Group 2: Participants are told that the survey designers tried to maximize data quality, that this meant some questions were rewritten to require more effort from respondents, and that data quality was their highest priority. This information serves to re-frame the survey experience as more difficult than the original, compared to no framing.
- Group 3: Participants are asked to rate their survey experience without an explicit reference point. This serves as a control group.
This design is summarized in Table 1. We do not hypothesize that a difference in reported ratings between groups is necessarily indicative of a change in a participant’s “true” rating; there are multiple causes for a change in rating, including socially desirable reporting, that are simply modifications in the reporting of a true value rather than a change to the true value. Rather, we are interested in compelling a change in rating such that the final reported summarizing judgment is changed. Our hypothesis is that, regardless of the context or reason for the rating, the memory of the final rating will be retrieved and used when deciding whether to respond to the next survey request. In other words, participants may not remember precisely the content of the survey, just their summary of the overall experience.
At this time, it is not known whether this experimental manipulation of summary judgment is strong enough to result in detectable differences between groups. In a prior study using a similar design with neutral content only, ratings of burden were very low (1.68 average on a rating scale ranging from 1 to 5). The present study has been designed to elicit a broader range of burden ratings; however, modifications may be needed, based on the results of a pre-testing sample of participants. Although findings from pre-testing may be used to improve the instrument and ensure the effectiveness of the manipulations, the goals of the testing and overall design will remain the same. This approach will ensure that the design that maximizes the usefulness of the collected data is used.
Table 1. Study Design and Summary Judgment Wording

Group 1: The survey that you are taking right now was specifically designed with you, the respondent, in mind. Sometimes we hear from respondents that our surveys should be easier to complete and so we have rewritten some questions to be simpler for you. We are studying ways to reduce the feelings of burden that we impose on our respondents, and we hope that by simplifying some questions, your experience was an improvement over what past respondents may have felt.

Group 2: The survey that you are taking right now was designed to improve data quality. Sometimes the data we get from respondents is not very good and so we have rewritten some questions to be more demanding and require more effort from you. We are studying how best to write survey questions to ensure that we get accurate data from our respondents, and we hope that by writing some more difficult questions, the data we receive from you is improved over what we got from respondents in the past.

Group 3: [no framing]

All participants: Please take a moment now to rate your experience taking this survey. (Response options: Very positive / Somewhat positive / A little bit positive / Neither positive nor negative / A little bit negative / Somewhat negative / Very negative)
The full protocol for Survey 1 is shown in Attachment A.
Definition and Measurement of Respondent Burden
After the main survey, all participants of Survey 1 will be asked debriefing questions. Explicit ratings of burden as well as measures of the components of burden will be collected. Additional objective and subjective measures of burden will be collected, including length of time spent on survey participation as measured through the web survey instrument.
The protocol for the debriefing questions of Survey 1 is shown in Attachment A.
A proxy for effort in the main survey will be collected as time spent on each page of the main survey instrument. Questions will be presented in sets of four per screen. The question order will be randomized within each page and the order of the screens will also be randomized. The minimum amount of time needed to read the questions will be calculated for each page based on the number of words, and this threshold will be used to identify participants who sped through the main survey. This variable will be used in analysis of those participants’ subsequent ratings of burden.
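The speeding check described above can be sketched as follows. This is a minimal illustration rather than the actual instrument logic: the assumed reading rate, word counts, and timing data are hypothetical.

```python
# Hypothetical sketch of the speeder-detection rule: flag any participant
# who spent less time on a page than the minimum needed to read it,
# where the minimum is derived from the page's word count.

WORDS_PER_SECOND = 4.0  # assumed reading rate (about 240 words per minute)

def min_reading_time(word_count: int) -> float:
    """Minimum seconds needed to read a page of `word_count` words."""
    return word_count / WORDS_PER_SECOND

def flag_speeders(page_word_counts, page_times_by_participant):
    """Return IDs of participants who sped through any page.

    page_word_counts: list of word counts, one per page.
    page_times_by_participant: dict mapping participant ID to a list of
    seconds spent on each page (same order as page_word_counts).
    """
    thresholds = [min_reading_time(w) for w in page_word_counts]
    speeders = set()
    for pid, times in page_times_by_participant.items():
        if any(t < thr for t, thr in zip(times, thresholds)):
            speeders.add(pid)
    return speeders

# Example with made-up data: participant "B" spends 2 s on a 40-word page.
pages = [40, 60]                                 # words per page
times = {"A": [15.0, 20.0], "B": [2.0, 18.0]}    # seconds per page
print(flag_speeders(pages, times))               # {'B'}
```

The flag would then be carried into the analysis as a covariate for those participants’ subsequent burden ratings, as described above.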
The content of Survey 2, which will be administered approximately one week after Survey 1, focuses on exploring conceptualizations of burden. Participants will be asked to provide open-ended responses describing their beliefs about burden.
The full protocol for Survey 2 is shown in Attachment B.
Decision to Cooperate in a Future Survey
A primary objective of this study is to test whether a summary judgment of the survey experience affects decisions to cooperate in a future survey. To test this, we will invite all participants from the first survey to participate in a second survey, which will be described as being very similar to the first survey. The survey request will be sent by email to the address provided by the participant to Amazon Mechanical Turk, approximately one week after the first study is fielded. This is the same address used by Amazon for notifications about payments through the Mechanical Turk platform, which we expect to be a valid email address. The rate at which participants begin Survey 2 will be compared across conditions and an analysis of non-response based on the demographics and debriefing ratings from Survey 1 will be conducted.
The script for the survey request is in Attachment C.
Participants
Participants will be recruited from Amazon Mechanical Turk as a convenience sample of adult U.S. citizens (18 years and older); this study is focused on internal validity rather than representativeness of any population. The design requires a sample of 600 participants to sufficiently explore the range of variables of interest. These participants will be randomly assigned to the three groups described above (a single-factor design with 200 participants per group). A prior study of respondent burden that successfully detected main effects used a sample of 480 participants (80 per group). The sample size has been increased to account for the possibility that the effect size in the present study is smaller, to allow for additional subgroup analyses based on participants’ burden ratings, and to allow for attrition between Survey 1 and Survey 2.
An additional 20 participants will be recruited for an initial pilot test on Amazon Mechanical Turk. These pilot participants will be asked to complete Survey 1, to confirm that questions are worded clearly and experimental manipulations result in sufficiently different perceptions of the survey experience.
Burden Hours
For the first survey, we anticipate that the task (a “HIT,” or human intelligence task, in Amazon Mechanical Turk terminology) will take no longer than 10 minutes, totaling 103 burden hours across the 600 participants and 20 pilot-test participants. The 600 participants who complete Survey 1 will be invited to participate in Survey 2. We expect that no more than 85% of them (510 participants) will participate in Survey 2, which will also take no longer than 10 minutes, totaling 85 burden hours. In total, the study will require 188 burden hours. The surveys will be administered entirely online at the time and location of the participant’s choosing.
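For reference, the burden-hour totals above follow from simple arithmetic (10 minutes per survey, with participant counts taken from the study design):

```python
# Arithmetic behind the burden-hour totals: 10 minutes per survey,
# 620 Survey 1 participants (600 main sample + 20 pilot), and an
# expected 510 Survey 2 participants (85% of 600).

MINUTES_PER_SURVEY = 10

survey1_participants = 600 + 20   # main sample plus pilot test
survey2_participants = 510        # 85% of 600 expected to return

survey1_hours = survey1_participants * MINUTES_PER_SURVEY / 60  # ~103.3
survey2_hours = survey2_participants * MINUTES_PER_SURVEY / 60  # 85.0

print(round(survey1_hours), round(survey2_hours),
      round(survey1_hours) + round(survey2_hours))  # 103 85 188
```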
Payment to Participants
We will recruit 620 participants from the Amazon Mechanical Turk database (20 participants for a pilot test and 600 participants for Survey 1). Participants will be compensated $0.75 for participating in the study, a typical rate provided by Mechanical Turk for similar tasks; a total of $465.00 will be paid directly to Amazon Mechanical Turk for participant fees for Survey 1. Participants in Survey 2 will receive the same compensation; an estimated $382.50 based on an 85% response rate for Survey 2. The estimated total for participant fees is $847.50.
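The fee estimates above can be verified with the same per-participant arithmetic:

```python
# Arithmetic behind the participant-fee estimates: $0.75 per completed
# survey, 620 Survey 1 participants, and 510 expected Survey 2 participants.

RATE = 0.75                  # payment per completed survey, in dollars

survey1_fees = 620 * RATE    # 600 main participants + 20 pilot
survey2_fees = 510 * RATE    # 85% expected response to Survey 2

print(survey1_fees, survey2_fees, survey1_fees + survey2_fees)
# 465.0 382.5 847.5
```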
Data Confidentiality
Recruiting of participants will be handled by Amazon Mechanical Turk. Once participants are recruited into the study, they will be given a link to the survey, which is hosted by Qualtrics. The data collected as part of this study will be stored on Qualtrics servers. Using the language shown below, participants will be informed of the voluntary nature of the study and they will not be given a pledge of confidentiality.
This voluntary study is being collected by the Bureau of Labor Statistics under OMB No. 1220-0141. This survey will take approximately 10 minutes to complete. Your participation is voluntary, and you have the right to stop at any time. This survey is being administered by Qualtrics and resides on a server outside of the BLS Domain. The BLS cannot guarantee the protection of survey responses and advises against the inclusion of sensitive personal information in any response. By proceeding with this study, you give your consent to participate in this study.
Attachments
Attachment A: Survey 1 content
Attachment B: Survey 2 content
Attachment C: Survey 2 request email
Attachment A: Survey 1 content
Welcome! Thanks for your interest in this study. On the following pages, you’ll be asked to respond to a survey that will ask you a variety of types of questions. Please consider the questions carefully. You will also be asked about your experience answering those questions.
This HIT is part of our research into how to improve surveys. We’re interested in your opinions and there are no right or wrong answers. We are not collecting any personally identifiable information in the survey questions. The study should take about 10 minutes.
Please do not use your browser's back button.
This voluntary study is being collected by the Bureau of Labor Statistics under OMB No. 1220-0141. This survey will take approximately 10 minutes to complete. Your participation is voluntary, and you have the right to stop at any time. This survey is being administered by Qualtrics and resides on a server outside of the BLS domain. The BLS cannot guarantee the protection of survey responses and advises against the inclusion of sensitive personal information in any response. By proceeding, you give your consent to participate in this study.
---page break---
When looking for work, people sometimes do not accept the first job that they are offered. What would you say is the more likely reason that someone would not accept the first job they are offered?
The job doesn’t offer enough hours
The security protocols are too strict
A study of native born residents in Newland found that two-thirds of the children developed considerable levels of nearsightedness after starting school, while their illiterate parents and grandparents, who had no opportunity for formal schooling, showed no signs of this disability.
If the above statements are true, which of the following conclusions is most strongly supported by them?
(A) Only people who have the opportunity for formal schooling develop nearsightedness.
(B) People who are illiterate do not suffer from nearsightedness.
(C) The nearsightedness in the children is caused by the visual stress required by reading and other class work.
(D) Only literate people are nearsighted.
(E) One-third of the children are illiterate.
Below are several statements that may or may not represent your own beliefs or attitudes. Please consider them carefully and choose the one that you think best represents you. If more than one seems to fit, then choose the one that best fits. If none seem to fit well, please still choose one that fits best.
I am very familiar with the arguments for increases in national security
I am somewhat familiar with the arguments for increases in national security
I am not familiar with the arguments for increases in national security
Choose the option that best represents you:
I am very interested in the economy
I am somewhat interested in the economy
I am not interested in the economy
---page break---
Please think about this situation. Two scientists want to know if a certain drug is effective against high blood pressure. The first scientist wants to give the drug to one thousand people with high blood pressure and see how many of them experience lower blood pressure levels. The second scientist wants to give the drug to five hundred people with high blood pressure, and not give the drug to another five hundred people with high blood pressure, and see how many in both groups experience lower blood pressure levels. Which is the better way to test this drug?
All 1000 get the drug
500 get the drug, 500 don't
Please think about this situation. A school teacher has 20 students in her class. Half of the students go out to the playground for recess. How many students are left in the class?
None
5
10
15
How do you prefer to answer surveys: on a desktop computer or a mobile smartphone?
Desktop computer
Mobile smartphone
Thinking of movies that were released in 2010, what was your favorite?
[open text entry]
---page break---
How many hours per day do you usually spend at a computer (not including your phone)?
[numeric text entry]
Yesterday, how many times did you check your phone?
[numeric text entry]
Thinking about the last seven days, how many hours total did you spend watching television?
[numeric text entry]
In what month did you purchase the computer that you are currently using?
[choice of months or not applicable]
---page break---
Think back to a large purchase that you made – a car, a TV, or a computer, perhaps. Now, please think of another expense that you made at around the same time. What was that expense?
[open text entry]
Think back to the most recent purchase that you made, of anything of any size – groceries, gas, or something online, perhaps. Was your expense more or less than $20?
The recent purchase was more than $20
The recent purchase was less than $20
How many people, including children, live in your neighborhood?
0-50
51-100
101-150
… [continue in groups of 50]
1001 or more
About what percentage of households in your neighborhood own at least one vehicle?
0-10%
11-90%
91-100%
---page break---
Lycopene, glutathione, and glutamine are powerful antioxidants that neutralize the free radicals that are produced in the body as a result of routine bodily processes. Do these antioxidants cause aging?
Yes
No
The liver, lungs, and heart are all organs of the human body. Are these organs all necessary for regular bodily functioning?
Yes
No
Do you think that people under the age of 14 suffer any ill effects from watching programs with violence in them? By ill effects I mean increased aggression in school or at home, increased nightmares, inability to concentrate on routine chores, and so on. By violence, I mean graphic depictions of individuals inflicting physical injuries on others or on themselves, depictions of individuals wantonly damaging property or possessions, abusive behavior and language to others, and so on.
Yes
No
Do you think that teenagers spend a lot of their time using social media?
Yes
No
---page break---
Thanks for answering those questions.
Now, please think back to how you felt as you were answering the questions you just finished answering before you came to this screen – during the main survey.
How burdensome was participating in this survey?
Extremely burdensome
Very burdensome
Somewhat burdensome
A little burdensome
Not at all burdensome
---page break---
We would like to understand how your experience participating in this survey relates to other experiences in real life.
Name an activity from real life that you would give a rating of "Extremely burdensome".
[open text entry]
And please explain what makes that activity extremely burdensome:
[open text entry]
Name an activity from real life that you would give a rating of “Not at all burdensome”.
[open text entry]
And please explain what makes that activity not at all burdensome:
[open text entry]
Name an activity from real life that you would give a middle rating of “Somewhat burdensome”.
[open text entry]
And please explain what makes that activity somewhat burdensome:
[open text entry]
---page break---
Please think back to how you felt as you answered questions during the main survey.
How short or long did you feel the survey was?
Very short
Somewhat short
Neither short nor long
Somewhat long
Very long
How easy or difficult did you feel it was to come up with your answers?
Very easy
Somewhat easy
Neither easy nor difficult
Somewhat difficult
Very difficult
How much mental effort did you put into answering the questions?
Very high effort
Somewhat high effort
Neither high nor low effort
Somewhat low effort
Very low effort
How long would you guess that it took to complete the survey? Consider the time starting from the instructions screen until the thank-you screen.
_____ minutes
How important to your community did you feel this survey was?
Extremely important
Very important
Somewhat important
Slightly important
Not at all important
How helpful for your community did you feel this survey was?
Extremely helpful
Very helpful
Somewhat helpful
Slightly helpful
Not at all helpful
How interesting did you feel the topics covered by this survey were?
Extremely interesting
Very interesting
Somewhat interesting
Slightly interesting
Not at all interesting
How sensitive did you feel the topics covered by this survey were?
Extremely sensitive
Very sensitive
Somewhat sensitive
Slightly sensitive
Not at all sensitive
How intrusive did you feel the topics covered by this survey were?
Extremely intrusive
Very intrusive
Somewhat intrusive
Slightly intrusive
Not at all intrusive
How private did you feel the topics covered by this survey were?
Extremely private
Very private
Somewhat private
Slightly private
Not at all private
---page break---
And now just a few final questions about yourself.
In what year were you born?
[open text box; numeric]
What is your sex?
Male
Female
Are you Hispanic or Latino?
Yes
No
What is your race? (Please select one or more)
American Indian or Alaska native
Asian
Black or African American
Native Hawaiian or Pacific Islander
White
What is the highest degree you have received?
No schooling completed
Elementary school diploma
High school diploma or the equivalent (GED)
Associate degree
Bachelor’s degree
Master’s degree
Professional or doctorate degree
---page break---
And one last question.
Group 1:
The survey that you are taking right now was specifically designed with you, the respondent, in mind. Sometimes we hear from respondents that our surveys should be easier to complete and so we have rewritten some questions to be simpler for you. We are studying ways to reduce the feelings of burden that we impose on our respondents, and we hope that by simplifying some questions, your experience was an improvement over what past respondents may have felt.
Please take a moment now to rate your experience taking this survey.
Very positive
Somewhat positive
A little bit positive
Neither positive nor negative
A little bit negative
Somewhat negative
Very negative
Group 2:
The survey that you are taking right now was designed to improve data quality. Sometimes the data we get from respondents is not very good and so we have rewritten some questions to be more demanding and require more effort from you. We are studying how best to write survey questions to ensure that we get accurate data from our respondents, and we hope that by writing some more difficult questions, the data we received from you is improved over what we got from respondents in the past.
Please take a moment now to rate your experience taking this survey.
Very positive
Somewhat positive
A little bit positive
Neither positive nor negative
A little bit negative
Somewhat negative
Very negative
Group 3:
Please take a moment now to rate your experience taking this survey.
Very positive
Somewhat positive
A little bit positive
Neither positive nor negative
A little bit negative
Somewhat negative
Very negative
---page break---
We have recorded your answer as [FILL: Very positive/Somewhat positive/A little bit positive/Neither positive nor negative/A little bit negative/Somewhat negative/Very negative]. Your responses will help us to investigate the impact on the survey experience of questions like those that you were asked today.
Here is your completion code. Please paste this code into the HIT to verify your participation.
[random numeric code]
Thank you for your participation!
Attachment B: Survey 2 Content
Welcome back!
On the following pages, you’ll be asked to tell us about what it’s like to participate in surveys. We are specifically interested in your experience participating in the last survey that you completed for us. There are no right or wrong answers. We are not collecting any personally identifiable information in the survey questions. Your responses will be used to understand the survey experience so that we can learn what it is like to answer questions like these. Our objective with this study is to improve the survey experience for respondents.
The study will take less than 10 minutes to complete. Please do not use your browser's back button.
Please enter the survey invitation code that you received by email:
[open text entry]
This voluntary study is being collected by the Bureau of Labor Statistics under OMB No. 1220-0141. This survey will take approximately 10 minutes to complete. Your participation is voluntary, and you have the right to stop at any time. This survey is being administered by Qualtrics and resides on a server outside of the BLS domain. The BLS cannot guarantee the protection of survey responses and advises against the inclusion of sensitive personal information in any response. By proceeding, you give your consent to participate in this study.
---page break---
First, three questions about the last survey:
What do you remember about the 1st survey that you took for us?
[long open text entry]
How did you decide whether to participate in this survey right now?
[long open text entry]
Do you remember what rating you gave that 1st survey? If you do remember, please indicate which rating you chose. If not, that’s ok! – just mark that option below.
Very positive
Somewhat positive
A little bit positive
Neither positive nor negative
A little bit negative
Somewhat negative
Very negative
I don’t remember what rating I chose
---page break---
Now, we’d like to ask you about your thoughts about participating in surveys. These questions are important because they will help us to learn how to design better surveys in the future. Please answer honestly; your responses will not affect your earnings in any way.
In your own words, what does it mean for participating in a survey to be “burdensome”?
[long open text entry]
In your own words, what does it mean for a survey question to be “burdensome”?
[long open text entry]
Write a survey question that you would consider “extremely burdensome”. Do not include your response.
[long open text entry]
---page break---
In the first survey that you took for us last week, we asked you a range of questions – you may have noticed that some were more burdensome than others to answer. We would like to learn which ones you thought were more burdensome to answer. On the following pages, you will see pairs of screenshots of questions that you saw. Please choose which one you thought was more burdensome to answer and describe what made you feel that way.
Do you think that people under the age of 14 suffer any ill effects from watching programs with violence in them? By ill effects I mean increased aggression in school or at home, increased nightmares, inability to concentrate on routine chores, and so on. By violence, I mean graphic depictions of individuals inflicting physical injuries on others or on themselves, depictions of individuals wantonly damaging property or possessions, abusive behavior and language to others, and so on. Yes No |
How many people, including children, live in your neighborhood? 0-50 51-100 101-150 … [continue] 1001 or more |
And please briefly explain why that question was more burdensome to answer:
[open text entry]
How do you prefer to answer surveys: on a desktop computer or a mobile smartphone? Desktop computer Mobile smartphone |
Do you think that teenagers spend a lot of their time using social media? Yes No |
And please briefly explain why that question was more burdensome to answer:
[open text entry]
How many hours per day do you usually spend at a computer (not including your phone)? [numeric text entry] |
Yesterday, how many times did you check your phone? [numeric text entry] |
And please briefly explain why that question was more burdensome to answer:
[open text entry]
When looking for work, people sometimes do not accept the first job that they are offered. What would you say is the more likely reason that someone would not accept the first job they are offered? The job doesn’t offer enough hours The security protocols are too strict |
Think back to a large purchase that you made – a car, a TV, or a computer, perhaps. Now, please think of another expense that you made at around the same time. What was that expense? [open text entry] |
And please briefly explain why that question was more burdensome to answer:
[open text entry]
---page break---
Which of the following would make a survey feel burdensome to you? (Select all that apply)
Long
Boring
Unimportant
Useless
Annoying
Intrusive
Sensitive
Embarrassing
Tedious
Difficult
No benefit to me
Disorganized
Unethical
Other ___________
[RANDOMIZE ORDER OF THE RESPONSE OPTIONS]
---page break---
That’s all the questions we have for you.
Thank you for participating in this research. The purpose of this study was to explore perceptions of burden of survey respondents, and how survey designers may be able to improve the survey experience. We apologize if participating in this survey caused you any confusion or frustration. It is our hope that this study will help to reduce the burden that surveys impose on respondents in the future by informing our understanding of how it feels to take burdensome surveys.
Please do not discuss the content or purpose of the study with anyone else for the next few days while we are still collecting data. Thanks for your help in keeping this study valid. If you have any other comments on this topic, please leave them here:
[open text entry]
Attachment C: Survey 2 request email
Dear Amazon Mechanical Turk Worker,
You are invited to complete a HIT for the Bureau of Labor Statistics, Office of Survey Methods Research (Requester name: Bill Mockovak) under OMB Control Number 1220-0141, based on your previous HIT participation for us. This HIT is open only to workers who participated in our first survey, which you took between [fill date range of 1st study field period].
This survey is similar to the first one you took for us, in which you were asked questions about what it is like to take the survey. Perhaps you remember the image of a cartoon pencil and paper at the top? And the survey ended with a question about whether your experience with the survey was positive or negative.
This new survey is similar to the last one. It will last about 10 minutes and you will be paid $0.75. Please click on the link below to participate.
[Link to Survey 2 (or to the mTurk page where they will have to search for the study)]
Enter this passcode on the first screen to verify that you did participate in our first survey:
TAKEASURVEY
Worker IDs will also be cross-checked against the first survey before payment is given.
Thank you for your consideration.