
April 16, 2015




NOTE TO THE

REVIEWER OF:

OMB CLEARANCE 1220-0141

“Cognitive and Psychological Research”


FROM:

Erica Yu

Research Psychologist

Office of Survey Methods Research


SUBJECT:

Submission of Materials for the ‘Using Framing Instructions to Reduce Respondent Burden’ Study




Please accept the enclosed materials for approval under the OMB clearance package 1220-0141 “Cognitive and Psychological Research.” In accordance with our agreement with OMB, we are submitting a brief description of the study.


The total estimated respondent burden for this study is 88 hours.


If there are any questions regarding this project, please contact Erica Yu at 202-691-7924.


1. Introduction and Purpose

A common approach that survey designers take to reduce respondent burden is to reduce objective factors of burden, such as the number of questions in the survey. However, this approach treats respondent burden and data quality as competing ends of a trade-off, in which reductions in burden come at the cost of reductions in data quantity and quality.


Recent research has proposed that subjective perceptions of burden may also play a role in respondent ratings of burden and associated survey outcomes (Yan, Fricker, & Tsai, 2014). Unlike objective factors, subjective factors are grounded in cognition and need not be tied to the objective features of the survey. Reductions in perceived burden may therefore not be subject to the same trade-offs in data quality.


Studies from cognitive psychology have shown that human perceptions of physical and psychological phenomena are judged relative to a reference set rather than absolutely. When judgments are made relative to a reference set, it follows that different reference sets for the same stimulus may lead to different judgments. In other words, a respondent’s judgment of burden may change if his or her reference set changes. The reference set may be changed at the time of judgment, for example by asking the respondent to rate their survey experience compared to preparing taxes (making the survey feel less burdensome) or ordering a pizza (making it feel more burdensome). Or, the reference set may be changed during the experience, such as by notifying the respondent that the length of the survey will be decreased (less burdensome) or increased (more burdensome). Changes in perceived burden during the survey experience may also affect data quality, as respondents dynamically adjust their effort in reaction to those changes.

If survey designers could control the reference set by which respondents judge their burden, they may be able to reduce feelings of burden without lowering data quality. The present study will test whether managing the reference set during the survey experience affects ratings of burden and measures of data quality. The findings from this study may support a renewed emphasis on relieving perceived burden as an alternative to reducing survey length to minimize respondent burden.


2. Research Design

To manipulate respondent perceptions of burden, this study will vary both objective factors of burden (controlled by the length of the survey) and subjective factors (controlled by survey instructions about the outcome of a screener, which affects the perceived length of the survey) to create different reference sets by which respondents can compare their survey experience. Two factors will be manipulated in a between-groups design: actual number of questions asked (a short and a long version; see the table below) and screening outcome (screened in to a survey section, screened out of a survey section, no screener). A group of participants will receive no screening information and serve as a control group for understanding the perceived burden of the surveys without the impact of the experimental instructions. All other aspects of the survey experience that are not being controlled as experimental factors will be the same for all groups.
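
As a rough illustration of the assignment scheme (not the actual survey-platform implementation), the sketch below shows how participants could be allocated evenly across the six cells of the 3x2 design; the condition labels and function name are placeholders for illustration only.

```python
import random

# Minimal sketch of balanced random assignment to the 3 (screening outcome)
# x 2 (actual length) between-groups design; labels are illustrative only.
SCREENING = ["screened_in", "screened_out", "no_screener"]
LENGTH = ["short", "long"]

def balanced_assignments(n_per_cell, seed=0):
    """Return a shuffled assignment list with exactly n_per_cell per cell."""
    cells = [(s, l) for s in SCREENING for l in LENGTH]  # 6 cells
    assignments = cells * n_per_cell                     # equal cell sizes
    random.Random(seed).shuffle(assignments)
    return assignments

# Example: 80 participants per cell yields 480 assignments in random order.
plan = balanced_assignments(80)
print(len(plan), plan[:2])
```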


At this time, it is not yet known whether these experimental manipulations of actual survey length and perceived survey length are strong enough to result in detectable differences between groups. An adaptive design will be used whereby data collection will proceed iteratively with increasing strengths of manipulations until sufficient strength is reached. This approach will ensure that the design that minimizes the burden placed on participants is used. All of the manipulations that may be used are included here; only one version will be used at any one time.


Actual Survey Length

The survey length will be manipulated by controlling the number of questions asked. The iterative versions to be tested are included in the table below; the first version to be tested will compare a survey that is 18 questions long to a survey that is 30 questions long.



              Actual Short Length     Actual Long Length

Version 1     18 questions            30 questions
Version 2     18 questions            36 questions
Version 3     18 questions            42 questions


Perceived Survey Length

Perceived survey length will be manipulated by controlling a screening outcome for all participants except those in the Control condition. Participants will be randomly assigned to be “screened in” or “screened out”; the corresponding script will appear after Item 6 in the survey.



Version 1

Screen-in framing:
Based on your answers, we will now ask you some extra questions.
Compared to other respondents, you will be asked more questions, and this may mean that the survey will take more of your time than originally estimated. We appreciate your participation in the survey.

Screen-out framing:
Based on your answers, we will now skip past some questions.
Compared to other respondents, you will be asked fewer questions, and this may mean that the survey will take less of your time than originally estimated. We appreciate your participation in the survey.

Version 2

Screen-in framing:
Based on your answers, we will now ask you to go through additional survey sections and answer extra questions.
Unfortunately, you will have to answer more questions than other respondents do. The survey will take more of your time and effort than originally estimated. We appreciate your participation in the survey.

Screen-out framing:
Based on your answers, we will now skip past some survey sections.
Fortunately, you don’t have to answer those questions that other respondents have to answer. The survey will take less of your time and effort than originally estimated. We appreciate your participation in the survey.

Version 3

Screen-in framing:
Based on your answers, we will now ask you to complete the long version of our survey.
Unfortunately, you will have to answer more questions than other respondents do. The survey will take more of your time and effort than originally estimated. We appreciate your participation in the survey.

Screen-out framing:
Based on your answers, we will now ask you to complete the short version of our survey.
Fortunately, you don’t have to answer those questions that other respondents have to answer. The survey will take less of your time and effort than originally estimated. We appreciate your participation in the survey.


The intended effect of each factor is described in the table below.



                            Actual Short Survey         Actual Long Survey
                            (18 questions)              (30 questions)

Perceived Short Survey      Experience 18 questions     Experience 30 questions
(screened out)              and feel it was a short     and feel it was a short
                            survey                      survey

Perceived Long Survey       Experience 18 questions     Experience 30 questions
(screened in)               and feel it was a long      and feel it was a long
                            survey                      survey

Control                     Experience 18 questions     Experience 30 questions
(no screener)


After the experimental protocol, participants will be asked debriefing questions. These questions will be clearly separated from the main experimental survey sections through the use of instructions. Explicit ratings of burden, as well as measures of the components of burden, will be collected immediately after the experimental protocol. Additional objective and subjective measures of burden will also be collected, including the length of time spent on the survey as measured by the web survey instrument and self-reported interest in the survey topic.


We will also indirectly evaluate whether perceived burden affects levels of effort by measuring item non-response and straight-lining, the tendency for less motivated respondents to select the same response option for every item when response options are displayed in a grid format. We have made several design choices to encourage these satisficing behaviors across all participants. The content of the survey focuses on detailed use of BLS data products (see Appendix A), a topic that is not of interest to most people and may encourage item non-response and feelings of burden due to the perceived irrelevance of the survey. Also, the survey questions will be presented in a grid format that has previously been shown to be sensitive to straight-lining. We will analyze these behaviors across groups after the screening outcome is revealed, with comparisons to the control conditions.
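
A minimal sketch of how these two data-quality indicators could be computed from one participant's responses to a 6-question grid is shown below; the data layout and function names are assumptions made for illustration, not the study's actual analysis code.

```python
# Illustrative only: responses to one 6-item grid, with None marking item
# non-response. The data layout and function names are assumed for the sketch.
def item_nonresponse_rate(responses):
    """Share of items in the grid left unanswered."""
    return sum(r is None for r in responses) / len(responses)

def is_straightlined(responses):
    """True if every answered item uses the same response option."""
    answered = [r for r in responses if r is not None]
    return len(answered) > 1 and len(set(answered)) == 1

grid = [3, 3, 3, None, 3, 3]        # example grid with one skipped item
print(item_nonresponse_rate(grid))  # ~0.17 (1 of 6 items skipped)
print(is_straightlined(grid))       # True (all answered items identical)
```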


The content of the survey is summarized in Appendix A. The debriefing questions are included in Appendix B.



3. Participants

Participants will be a convenience sample of adult U.S. citizens (18 years and older) recruited from Amazon Mechanical Turk; this study is focused on internal validity rather than representativeness of any population. The research design requires a sample of 480 participants in order to sufficiently explore the range of variables of interest. These participants will be randomly assigned to the 6 groups described above (a 3x2 design with 80 participants per group).


The primary goal of the proposed research is to explore the effects of survey length and screener outcome on perceived burden. We have a priori hypotheses about the direction of effects and expect a medium to large effect size, given the leading language of the screener outcome scripting. Given these considerations (effect size = 0.4, power = 0.8, and significance level = 0.05 one-tailed), a sample size of 78 participants in each group is expected to be sufficient.
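
For reference, the per-group sample size implied by these assumptions can be reproduced with a standard power calculation for an independent-samples t test; the sketch below uses statsmodels, which is not specified in the study and is shown only as one possible tool.

```python
from statsmodels.stats.power import TTestIndPower

# Power calculation under the stated assumptions: Cohen's d = 0.4,
# one-tailed alpha = 0.05, power = 0.80, independent-samples t test.
n_per_group = TTestIndPower().solve_power(
    effect_size=0.4,
    alpha=0.05,
    power=0.8,
    alternative="larger",   # one-tailed (directional) test
)
print(round(n_per_group))   # approximately 78 participants per group
```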


An additional 2 participants will be recruited from TryMyUI for an initial pilot test, along with up to 2 federal employee volunteers. These pilot participants will be asked to think aloud while completing the task and to answer questions about the experience afterwards, which will help confirm that questions are worded clearly and that the experimental manipulations result in sufficiently different perceptions of burden. After initial pilot testing is complete, a second round of pilot testing will be conducted with 40 participants. These pilot participants will be assigned to the 4 main experimental conditions (excluding the 2 control conditions), resulting in 10 participants per condition. These data will be examined to check that the manipulations are having their intended effect.


4. Burden Hours

Our goal is to obtain feedback from 520 participants. We anticipate that the task will take no longer than 10 minutes, totaling 86.67 burden hours. The 4 initial pilot participants are expected to spend 20 minutes each on the task, for an additional 1.33 burden hours. Total burden is expected to be no more than 88 hours. The survey will be administered completely online at the time and location of the participant’s choosing.
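
The burden-hour totals above follow directly from the per-participant time estimates, as the quick check below shows.

```python
# Quick arithmetic check of the burden-hour totals stated above.
main_hours = 520 * 10 / 60   # 520 participants x 10 minutes each
pilot_hours = 4 * 20 / 60    # 4 initial pilot participants x 20 minutes each
print(round(main_hours, 2))              # 86.67
print(round(pilot_hours, 2))             # 1.33
print(round(main_hours + pilot_hours))   # 88
```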

5. Payment to Participants

We will recruit 520 participants from the Amazon Mechanical Turk database. Participants will be compensated $0.75 for participating in the study, a typical rate on Mechanical Turk for similar tasks; a total of $390 will be paid directly to Amazon Mechanical Turk for participant fees. An additional 2 participants will be recruited from TryMyUI and each paid $20 for their participation, for a total of $40. The 2 federal employee volunteers will not be paid.


6. Data Confidentiality

Recruiting of participants will be handled by Amazon Mechanical Turk and TryMyUI. Once participants are recruited into the study, they will be sent a link to the survey, which is hosted by Qualtrics. The data collected as part of this study will be stored on Qualtrics servers. Using the language shown below, participants will be informed of the voluntary nature of the study; they will not be given a pledge of confidentiality.

This voluntary study is being collected by the Bureau of Labor Statistics under OMB No. 1220-0141. We will use the information you provide for statistical purposes only. Your participation is voluntary, and you have the right to stop at any time. This survey is being administered by Qualtrics and resides on a server outside of the BLS Domain. The BLS cannot guarantee the protection of survey responses and advises against the inclusion of sensitive personal information in any response. By proceeding with this study, you give your consent to participate in this study.



Appendix A: Survey questions

The questions will be presented in “sections” of 6 questions at a time; the 6 questions within a section will be presented as a grid using the same response scale. The 6 questions in Section A will be presented first for all participants, in a randomized order. The remaining sections to be presented (Sections B1-B6) will be randomly selected to match the survey length specified by the experimental design, and the order of questions within a section will be randomized.
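
As an illustration of this selection logic (the actual survey flow will be implemented in Qualtrics), the sketch below assembles Section A plus randomly chosen B-sections to match an assigned length; the section identifiers follow Appendix A, and the function name is hypothetical.

```python
import random

# Sketch of section selection: Section A is always first, and B-sections are
# sampled at random until the assigned question count is reached. Each section
# holds 6 questions, so 18 questions = A + 2 B-sections and 30 = A + 4.
# (Within-section question order would also be randomized; omitted here.)
B_SECTIONS = ["B1", "B2", "B3", "B4", "B5", "B6"]
QUESTIONS_PER_SECTION = 6

def build_section_order(total_questions, seed=None):
    """Return the ordered list of sections for one participant."""
    rng = random.Random(seed)
    n_b = total_questions // QUESTIONS_PER_SECTION - 1  # minus Section A
    return ["A"] + rng.sample(B_SECTIONS, n_b)

print(build_section_order(18))  # e.g., ['A', 'B3', 'B6']
print(build_section_order(30))  # e.g., ['A', 'B2', 'B5', 'B1', 'B4']
```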


Section A (agree-disagree scale)

  1. I think high unemployment is a good thing for the American economy.

  2. I think the health of the economy is relevant to everyone’s well being.

  3. I am knowledgeable about the economy.

  4. I like to think about what the future will be like for the average American.

  5. I am a “numbers person”.

  6. I think that it is getting easier to find a job these days compared to last year.


Section B1 (frequency scale)

  1. Watch more than 2 hours of news programming during a day. (Likely LOW)

  2. Share on social media a news story related to the inflation rate. (Likely LOW)

  3. Visit BLS.gov to learn about the economy. (Likely LOW)

  4. Watch or read transcripts of the BLS Commissioner testifying before Congress. (Likely LOW)

  5. Read or hear about the Department of Labor in the news. (Likely LOW)

  6. Read or hear about the White House or the President of the United States in the news. (Likely HIGH)


Section B2 (agree-disagree)

  1. I am willing to pay for the Producer Price Index to cover more services. (Likely DISAGREE)

  2. The lag in publishing the Producer Price Index affects me negatively. (Likely DISAGREE)

  3. I find the Producer Price Index to be useful. (Likely DISAGREE)

  4. I would be interested in accessing historical Producer Price Index data. (Likely DISAGREE)

  5. The Producer Price Index is biased by tax cutting policies. (Likely DISAGREE)

  6. I am not familiar with how data for the Producer Price Index are collected. (Likely AGREE)


Section B3 (frequency scale)

  1. Use employment projections to speculate on the economy. (Likely LOW)

  2. Download employment and wage BLS tables for my own use. (Likely LOW)

  3. Compare wages over time for my state. (Likely LOW)

  4. Purchase third-party data to supplement BLS data. (Likely LOW)

  5. Look for the Employment Situation report on the first Friday of the month. (Likely LOW)

  6. Hear about the unemployment rate from television or Internet news sources. (Likely HIGH)


Section B4 (agree-disagree scale)

  1. I do not use the data tools on BLS.gov. (Likely AGREE)

  2. I do not use the data dictionaries to look up unfamiliar variables. (Likely AGREE)

  3. The Handbook of Methods is useful to some people for understanding how BLS data are collected. (Likely AGREE)

  4. Visualizations of data on BLS.gov would be helpful to most users. (Likely AGREE)

  5. I typically start at Google when I need to find information. (Likely AGREE)

  6. I use the SAS statistical program every day. (Likely DISAGREE)


Section B5 (frequency)

  1. Access multifactor productivity news releases. (Likely LOW)

  2. Use quarterly indexes of labor productivity. (Likely LOW)

  3. Track national hourly compensation, output per hour, or unit labor costs. (Likely LOW)

  4. Look up how productivity is calculated. (Likely LOW)

  5. Access productivity data produced by any organization other than BLS. (Likely LOW)

  6. Estimate my own labor productivity. (Likely HIGH)


Section B6 (agree-disagree)

  1. It would be a good use of the BLS budget to redesign the regional BLS webpage for the Mid-Atlantic. (Likely DISAGREE)

  2. It would be a good use of the BLS budget to redesign the regional BLS webpage for the Southeast. (Likely DISAGREE)

  3. It would be a good use of the BLS budget to redesign the regional BLS webpage for the Midwest. (Likely DISAGREE)

  4. It would be a good use of the BLS budget to redesign the regional BLS webpage for the Southwest. (Likely DISAGREE)

  5. It would be a good use of the BLS budget to redesign its office space. (Likely DISAGREE)

  6. Websites should be accessible to people with low vision or limited mobility. (Likely AGREE)






Appendix B: Respondent debriefing questions

The questions will be presented after the experimental protocol portion of the survey, following a separation page explaining to participants that they will now be asked questions about the experience of taking the survey they just completed. These questions will appear in the same fixed order for all participants.



Thank you for participating in our survey. We’d like to ask you about what it was like to participate. These questions are important because they will help us to learn how to design better surveys in the future. Please answer honestly; your responses will not affect your earnings in any way.

  1. Overall, how burdensome was participating in this survey?

Extremely burdensome

Very burdensome

Somewhat burdensome

Slightly burdensome

Not at all burdensome

---page break---

  2. We would like to understand how your experience participating in this survey relates to other experiences in your life.

    a. What experience from real life would you give a rating of “Extremely burdensome”?

[open text entry]

    b. What experience from real life would you give a rating of “Not at all burdensome”?

[open text entry]

    c. What experience from real life would you give a middle rating of “Somewhat burdensome”?

[open text entry]

---page break---

  3. As you completed the survey, how quickly or slowly did it feel time was passing?

Very slowly

Slowly

Somewhat slowly

Neither slowly nor quickly

Somewhat quickly

Quickly

Very quickly



  4. How short or long did you feel the survey was?

Very short

Short

Somewhat short

Neither short nor long

Somewhat long

Long

Very long



  5. How easy or difficult did you feel the survey was to complete?

Very easy

Easy

Somewhat easy

Neither easy nor difficult

Somewhat difficult

Difficult

Very difficult



  6. How important did you feel this survey was?

Very important

Important

Somewhat important

Neither important nor unimportant

Somewhat unimportant

Unimportant

Very unimportant



  7. How interesting did you feel the topic of the survey was?

Very interesting

Interesting

Somewhat interesting

Neither interesting nor uninteresting

Somewhat uninteresting

Uninteresting

Very uninteresting



  8. How much effort did you put into completing the survey?

Much more than was needed

More than was needed

Somewhat more than was needed

About the right amount

Somewhat less than was needed

Less than was needed

Much less than was needed



  9. How trustworthy would you say the institution conducting this research is for safeguarding the information that you have provided?

Extremely trustworthy

Very trustworthy

Somewhat trustworthy

Slightly trustworthy

Not at all trustworthy



  10. How willing would you be to participate in another survey like this one in the future?

Extremely willing

Very willing

Somewhat willing

Slightly willing

Not at all willing

---page break---

  11. What is your sex?

Male

Female


  12. In what year were you born?

[open text box; numeric]



  13. Are you Hispanic or Latino?

Yes

No



  14. What is your race? (Please select one or more)

American Indian or Alaska native

Asian

Black or African American

Native Hawaiian or Pacific Islander

White


  15. What is the highest degree you received?

No schooling completed

Elementary school diploma

High school diploma or the equivalent (GED)

Associate degree

Bachelor’s degree

Master’s degree

Professional or doctorate degree

---page break---

Thank you for participating in this research. The purpose of this study was to explore perceptions of burden of survey respondents, and how survey designers may be able to improve the survey experience. We apologize if participating in this survey caused you any confusion or frustration. It is our hope that this study will help to reduce the burden that surveys impose on respondents in the future by informing our understanding of how it feels to take burdensome surveys.

Please do not discuss the content or purpose of the study with anyone else for the next few days while we are still collecting data. Thanks for your help in keeping this study valid.

If you have any comments about your survey experience, please leave them in the box below. Please know that we appreciate your participation today.

Thank you!

Appendix C: Survey introduction

All participants will see the same introduction to the survey.



Welcome! Thanks for your interest in our study. You’re here because we have asked you to participate in our research. We are asking you and hundreds of other people to tell us what you think.


Unlike some surveys or online tasks you may be familiar with, we ask that you complete this survey all at one time and that you only start once you are in a quiet place where you won't be disturbed. The survey takes about 10 minutes, on average. Only share information you're comfortable with - nothing too personal - but please be honest and follow the instructions.

 Please do not use your browser's back button. 

This voluntary study is being collected by the Bureau of Labor Statistics under OMB No. 1220-0141. We will use the information you provide for statistical purposes only. Your participation is voluntary, and you have the right to stop at any time. This survey is being administered by Qualtrics and resides on a server outside of the BLS domain. The BLS cannot guarantee the protection of survey responses and advises against the inclusion of sensitive personal information in any response. By proceeding, you give your consent to participate in this study.

---page break---


On the following pages, you’ll be asked to answer questions about your attitude toward a range of topics related to the Bureau of Labor Statistics (BLS). We’re interested in your opinion and there are no right or wrong answers.


Let’s get started!


---page break---




