Note to Reviewer - Online Testing of BLS Surveys


OMB Control Number: 1220-0141

Expiration Date: 03/31/2021

1/14/2021

NOTE TO THE REVIEWER OF:


OMB CLEARANCE 1220-0141

“Cognitive and Psychological Research”

FROM:

Robin Kaplan and Erica Yu

Office of Survey Methods Research

SUBJECT:

Submission of Materials for Online Testing of BLS surveys

Please accept the enclosed materials for approval under the OMB clearance package 1220-0141 “Cognitive and Psychological Research.” In accordance with our agreement with OMB, we are submitting a brief description of the study.

The total estimated respondent burden for this study is 827 hours.

If there are any questions regarding this project, please contact Robin Kaplan at 202-691-7383.



I. Introduction

With several BLS programs (including the Current Population Survey, American Time Use Survey, and the Consumer Expenditure Survey) interested in turning to online modes of data collection to supplement current interviewer-administered production data collection, we have identified a need to begin testing basic survey features to ensure data collected in the self-administered online mode meets data quality standards. While each BLS survey has its own unique features, many methodological issues apply across programs and early, coordinated online testing and a streamlined research program will benefit BLS programs as they move toward mixed-mode data collection. The Office of Survey Methods Research plans to study the following topics in the future to understand the transition from interviewer-administered to mixed mode and self-administered surveys: how to encourage web mode responses, how to design for mobile response, and how to implement features of interviewer-administered data collection.

The current study explores two features of interviewer-administered data collection: giving “don’t know” or “prefer not to say” answers and open-ended style questions.


  1. How to reduce item non-response when offering ‘don’t know’ or ‘prefer not to say’ response options in self-administered online surveys


Previous research has shown that web respondents will select ‘Don’t know’ (DK) or ‘Prefer not to say’ (i.e., a soft refusal) options more often when they are offered explicitly on the screen than when they are hidden (that is, only visible after skipping an item). However, hiding response options might not be advisable in cases where ‘Don’t know’ could be a valid response; research has shown that omitting DK options leads to higher break-off rates (e.g., Lemcke et al., 2019; McGee et al., 2019). In many surveys, especially those using proxy response where one household member reports for others, DK is a valid response option. Thus, interviewer-administered surveys always have an implicit DK response option (not read aloud, but one the interviewer can record). To make comparisons between interviewer-administered and online, self-administered surveys, an explicit DK option would be necessary both to collect valid DK responses and to differentiate them from refusals.


The literature has also shown that DK and ‘Prefer not to say’ options represent different response types, where DK maps onto knowledge issues (e.g., not knowing how much income someone in the household earns), whereas refusals map onto issues of sensitivity (e.g., not wanting to disclose someone’s disability status; Cobb, 2018; Lee et al., 2004; Tourangeau, 1984; Bickart et al., 1990). Thus, including a refusal option (i.e., ‘prefer not to say’) makes it possible to differentiate questions that elicited knowledge challenges or difficulty arriving at an answer from those that elicited sensitivity issues, which may help in designing probes tailored to individual questions in the future. Further, it allows for comparisons of distributions of item non-response when a refusal option is offered explicitly versus when participants must skip the question altogether as an implied refusal. This could provide insight into participants’ propensity to select the refusal option or to skip items they prefer not to answer; for example, it is sometimes difficult to know whether participants skipped a question because they wanted to “speed” through the survey or because they did not want to answer that particular item.


How should an online survey be designed to encourage accurate, substantive responses even when DK and ‘prefer not to say’ options are offered? To test ways to reduce item non-response when offering the options, we propose an experiment where participants who initially select DK, ‘Prefer not to say,’ or skip items will be shown a probe an interviewer might use in the field to encourage a substantive response, such as politely asking respondents to provide data or explaining the importance of the response. This mirrors what actual interviewers say to survey respondents; for instance, interviewers will often preface a survey interview by telling respondents that they do not have to answer any questions that they don’t want to, and provide reminders of this option when asking sensitive questions even though DK and ‘Prefer not to answer’ are not explicit response options (Kaplan & Yu, 2020). These types of interviewer probing strategies may help increase responses to questions that typically have high item non-response in both online and interviewer-administered surveys, such as those about income or disability. See Attachment A for a summary of research using similar methodologies with web surveys, the types of probes used, and the main finding of each study.


  2. How to administer open-ended and field-coded questions in self-administered online surveys


Field-coded and open-ended questions require responses to be coded, a process often done by interviewers. One example is the Current Population Survey (CPS) questions about job search strategies, which distinguish between active and passive job search. In addition to this coding, interviewers probe respondents when initial responses do not match the response categories. There is little research on how to collect data using this style of question in a self-administered mode without interviewers. To test ways to accurately collect these data in self-administered modes, the current research assesses how different closed-ended approaches can be used to prompt respondents to select codes.


Taking the example of the CPS, interviewers ask respondents the following question:

“What are all of the things you have done to find work during the last 4 weeks?”


1 Contacted employer directly/interview

2 Contacted public employment agency

3 Contacted private employment agency

4 Contacted friends or relatives

5 Contacted school/university employment center

6 Sent out resumes/filled out applications

7 Checked union/professional registers

8 Placed or answered ads

9 Other active

10 Looked at ads

11 Attended job training programs/courses

12 Other passive

13 Nothing


This is an open-ended question in that the response categories are not read to respondents, and interviewers must field-code answers into the above categories during the interview. These response options are not part of an existing, standardized framework but rather approximate bottom-up categories describing common job search activities. Given that no well-known structure exists for classifying job search activities, an open-ended format is likely too messy to be useful for auto-coding. For example, respondents may not understand the level of detail required to accurately classify the job search, such as responding that they “used the computer,” which does not distinguish between active and passive search. This is currently observed in interviewer-administered surveys and is handled with interviewer probing. Thus, the current research proposes embedding an experiment using this CPS question to assess two closed-ended approaches: asking a series of closed-ended questions that cover all possible response options, or prompting respondents to select a category themselves.
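For reference, the active/passive grouping implied by the code list above could be represented as a simple lookup. This is a sketch for illustration only, assuming codes 1–9 are active and 10–12 are passive as their labels suggest; it is not the production coding logic.

```python
# Grouping implied by the CPS code list above: 1-9 active, 10-12 passive, 13 nothing.
ACTIVE_CODES = set(range(1, 10))
PASSIVE_CODES = {10, 11, 12}

def classify_search(selected_codes):
    """Classify a respondent's job-search activities as active, passive, both, or none."""
    active = bool(ACTIVE_CODES & set(selected_codes))
    passive = bool(PASSIVE_CODES & set(selected_codes))
    if active and passive:
        return "both active and passive"
    if active:
        return "active"
    if passive:
        return "passive"
    return "none"

print(classify_search({6, 10}))  # sent out resumes (active) + looked at ads (passive) -> both
```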



II. Methodology

The Office of Survey Methods Research (OSMR) has used online panels to pretest BLS survey questions and conduct small-scale survey methodological research (see this White Paper for more information and details). In previous work, online testing has allowed for early and rapid testing of questionnaire design options to identify “showstoppers”, iterative testing to make improvements, and a broad range of research participants. With the current study’s focus on the online mode, recruiting participants from online panels is expected to provide these same benefits. This study will use participants from Amazon Mechanical Turk (MTurk; Berinsky et al., 2012; Paolacci & Chandler, 2014). MTurk is an online marketplace where individuals can sign up to participate in short online research tasks for nominal compensation.

The two research objectives will be investigated independently in this study using the same set of participants. The survey items used to explore the first research objective will be distinct from those used to explore the second research objective. Below, we include a description of the items to be included in the survey instrument. The items were selected to represent questions common to the BLS household surveys of interest, including those from the CPS and CE. Because both of these surveys also ask for proxy response and include some household-level questions, participants will answer the questions on behalf of themselves and up to one additional household member (a very similar protocol was used and received approval in a Previous study examining proxy responses and a Respondent burden study). Taken together, these questions are representative of the types of questions BLS respondents encounter in production surveys. A comprehensive list of all questions included in the instrument and their sources can be found in Attachment B (survey instrument).


Question type: Demographics
Examples: Age, sex, education, race, ethnicity
Justification: Demographic questions are common to all household surveys, and yet how respondents report ‘don’t know’ and ‘prefer not to say’ responses to these items in an online, self-administered mode is still unknown (per research question 1). Including demographic questions will also allow the researchers to understand who participated in the research.

Question type: Labor force questions
Examples: Whether participants are currently working, looking for work, and what they have done to find work.
Justification: Developing ways to ask respondents about labor force participation in a self-administered mode is critical to evaluate as the CPS looks toward moving to a mixed-mode survey. Of key importance is getting respondents to correctly categorize themselves and/or other household members as in or out of the labor force, as well as actively or passively seeking work (per research question 2).

Question type: Items likely to have higher levels of item nonresponse, indicative of potential question sensitivity
Examples: Income, disability status (CPS); charitable contributions, alcohol expenditures (CE)
Justification: Previous research has indicated that sensitive items respondents encounter in BLS surveys (e.g., income) may have higher levels of item nonresponse due to sensitivity. Thus, it is important to include these items to assess differences in item nonresponse between these items and the more basic demographic items in an online format and to fully represent the range of questions respondents are likely to encounter in BLS surveys. These items also provide valuable opportunities to understand why participants respond DK or Prefer not to say (per research question 1).

Question type: Items likely to have higher levels of ‘don’t know’ responses, indicative of knowledge issues or difficulty forming a response
Examples: Household-level questions, proxy items
Justification: The CPS and CE both have proxy responses where one household member answers on behalf of others in the household. Previous research indicates that questions about other household members may pose unique difficulty. Respondents may not remember or have the knowledge to respond on another household member’s behalf, in particular if they are not closely related. This can lead to increased levels of valid ‘don’t know’ responses. By including proxy questions and collecting the relationship between the respondent and the household member, we can assess the level of don’t know responses across these questions and also by relationship type (per research question 1).


Research objective 1: How to reduce item non-response when offering ‘don’t know’ or ‘prefer not to say’ response options in self-administered online surveys


In some online surveys, item non-response rates can be lower than in interviewer-administered modes. This may occur for a number of reasons, including online surveys being self-administered and anonymous, conditions that minimize socially desirable responding (Kreuter et al., 2008). Further, some online respondents may be more cooperative since they are volunteers receiving small incentives (e.g., Behr et al., 2017; Stewart et al., 2017). Thus, in order to have sufficient power to detect potential differences in item non-response rates, we will include items expected to have high item non-response rates: items that respondents are asked to answer on behalf of other household members and items that represent a mixture of response difficulty and sensitivity. The questions come from various Federal household surveys (including the Current Population Survey, Consumer Expenditure Interview Survey, and American Community Survey; for the full instrument, see Attachment B, along with an Appendix outlining the source of each question included on the instrument). See the links below for work previously conducted at BLS using similar methodologies and samples:


In addition, we will add language to the beginning of the survey for all participants (see Attachment B for the exact language) to encourage accurate responses and to explain to participants that they will have DK and/or ‘Prefer not to say’ options available to them. Research has shown that asking participants to check a box to acknowledge that they understand every question is voluntary and to commit to providing accurate responses can increase data quality in online data collection (de Leeuw et al., 2015; Joinson et al., 2008; Kaplan & Edgar, 2018; Betts, 2016). This type of commitment statement is typically used specifically for online testing and exploratory research using web-based panels and would not be recommended for implementation in any production survey. An almost identical intervention was approved and used in the CIPSEA research study (linked above). Checking the box is not required to continue taking the survey; participants can proceed without doing so, and this is not expected to lead to an increase in break-offs, as MTurk surveys generally have extremely low levels of break-offs. This commitment device is common practice in online testing instruments (it was recently used in pretesting for the National Health Interview Survey and the Census Household Pulse Survey) and has been shown to increase data quality. This instruction is also important to let participants know that they can opt out of responding to items they do not want to answer. As noted, this mirrors what actual interviewers say to survey respondents. In previous OSMR research, we found that interviewers say something very similar at the beginning of their interviews (e.g., “If you don’t want to answer a question, just say no thank you and we will skip it.”). In previous studies using online panels with similar methodology, we have found item non-response rates comparable to those obtained in Federal surveys using this method (e.g., around 30% item non-response for individual and household income; see Kaplan & Edgar, 2018).

In a between-subjects design, we will randomly assign participants to be offered different non-response options (DK only, or DK and ‘prefer not to say’) for selected items as appropriate, and to receive or not receive follow-up probes after skipping a question or selecting a non-response option. Given the survey programs’ needs and the research goals for this study, offering a DK option will be required in any future online self-administered instrument,1 and so the study design does not include any version without the option. A ‘prefer not to say’, or explicit refusal, option will be provided in half of the study conditions and will be used to determine whether responses in the DK-only condition represent knowledge/difficulty or sensitivity issues. The table below illustrates the conditions and number of participants in each group:



                              Follow-up Probes for Non-Response Options and Skips
Non-Response Options          With Probes                No Probes
DK only                       Condition 1 (n = 1240)     Condition 2 (n = 1240)
DK and Prefer not to say      Condition 3 (n = 1240)     Condition 4 (n = 1240)
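As an illustration of the assignment in the table above, a minimal sketch of the 2 x 2 randomization is shown below. It assumes simple random assignment with equal probabilities; the function and data structure names are hypothetical and do not reflect the actual survey programming.

```python
import random

# 2 x 2 between-subjects design from the table above.
# Factor 1: non-response options offered (DK only vs. DK + "Prefer not to say").
# Factor 2: follow-up probe shown after a non-response or skip (yes vs. no).
CONDITIONS = {
    1: {"options": ["Don't know"], "probe": True},
    2: {"options": ["Don't know"], "probe": False},
    3: {"options": ["Don't know", "Prefer not to say"], "probe": True},
    4: {"options": ["Don't know", "Prefer not to say"], "probe": False},
}

def assign_condition() -> int:
    """Assign an incoming participant to one of the four cells with equal probability."""
    return random.choice(list(CONDITIONS))

if __name__ == "__main__":
    condition = assign_condition()
    print(condition, CONDITIONS[condition])
```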



Any time participants select Don’t know or Prefer not to say, or skip a question, they will be prompted with the following probe, based on Baghal and Lynn (2015; see Attachment A):

“If possible, please provide an answer to this question, as this is one of the key questions in this survey.”

This prompt will be programmed into the instrument and displayed throughout the survey for those in Conditions 1 and 3. Due to the way the skip logic has to be programmed, we cannot use a ‘go back’ option; however, participants can proceed without selecting a response to the prompt. After viewing the prompt, the question will be displayed again, and participants will be given the same options to provide a substantive response. They will not be prompted a second time if they again opt not to provide a substantive response.2
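A minimal sketch of the prompt-once behavior just described, assuming the instrument can be modeled as a function that displays a screen and returns the participant’s input; all names are illustrative and do not represent the actual Qualtrics skip logic.

```python
NON_SUBSTANTIVE = {"Don't know", "Prefer not to say", None}  # None stands in for a skipped item
PROBE_TEXT = ("If possible, please provide an answer to this question, "
              "as this is one of the key questions in this survey.")

def administer_item(ask, question, probe_enabled):
    """Show a question; if the first answer is non-substantive and the participant is in a
    probe condition, display the probe once, re-display the question, and accept the result."""
    answer = ask(question)
    if probe_enabled and answer in NON_SUBSTANTIVE:
        ask(PROBE_TEXT)          # participant may proceed without responding to the prompt
        answer = ask(question)   # shown again once; no second probe regardless of the answer
    return answer

# Illustrative run with canned responses standing in for a participant:
responses = iter([None, "", "Prefer not to say"])
print(administer_item(lambda screen: next(responses), "What is your total household income?", True))
```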

Outcome Variables

  • Overall rate of DK, ’Prefer not to say’, and skips across all conditions

  • Overall rate of DK, ’Prefer not to say’, and skips across each condition

    • Main effect of adding ‘Prefer not to say’ as a response option (Comparison of Conditions 1 and 2 vs. Conditions 3 and 4)

    • Main effect of adding follow-up probes after non-response (Comparison of Conditions 1 and 3 vs. Conditions 2 and 4); a computational sketch follows this list
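A minimal sketch of how these rates and contrasts could be computed from the collected data, assuming a long-format response file; the pandas column names are hypothetical, and the actual analysis may use different software or models.

```python
import pandas as pd

# Hypothetical long-format data: one row per participant-item response.
df = pd.DataFrame({
    "condition": [1, 2, 3, 4, 1, 3],
    "response":  ["Don't know", "$52,000", "Prefer not to say", "Skip", "$18,000", "Skip"],
})

NON_RESPONSE = {"Don't know", "Prefer not to say", "Skip"}
df["nonresponse"] = df["response"].isin(NON_RESPONSE)

# Overall and per-condition item non-response rates
overall_rate = df["nonresponse"].mean()
rate_by_condition = df.groupby("condition")["nonresponse"].mean()

def pooled_rate(conditions):
    return df[df["condition"].isin(conditions)]["nonresponse"].mean()

# Main effect of offering 'Prefer not to say' (Conditions 1 & 2 vs. Conditions 3 & 4)
pns_effect = pooled_rate([3, 4]) - pooled_rate([1, 2])

# Main effect of the follow-up probe (Conditions 1 & 3 vs. Conditions 2 & 4)
probe_effect = pooled_rate([1, 3]) - pooled_rate([2, 4])

print(overall_rate, rate_by_condition.to_dict(), pns_effect, probe_effect)
```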


Research objective 2: How to administer open-ended and field-coded questions in self-administered online surveys


To assess the second research objective, this study will randomly assign3 participants to one of two groups to answer the question about looking for work in the CPS (i.e., “What are all the things you have done to find work during the last 4 weeks?”):


    1. Series of closed-ended questions: Respondents are asked a series of closed-ended questions that cover all possible response options

    2. Self-select code: Respondents are prompted to select a category code themselves


In addition to the two types of closed-ended questions, participants will be asked to provide an open-ended description of what they have been doing to look for work. The order of this open-ended probe will be counterbalanced such that half of participants will answer the open-ended question first and half will answer one of the closed-ended versions first. The table below illustrates the study design and number of participants in each group.




                        Order
Closed-Ended Version    Open-Ended First    Closed-Ended First
Series of closed-ends   n = 1240            n = 1240
Self-select code        n = 1240            n = 1240






Because participating on MTurk can itself be considered doing work for pay, it is highly likely that few participants in this study would otherwise report that they have not done any work for pay in the last week, and therefore only a few participants would be administered the target question about what they are doing to look for work. Thus, participants will be instructed to exclude their MTurk work for the purposes of this question.4


In addition, we will show all participants two vignettes about other people who are currently looking for work and ask them to answer the questions using the design above (except they will not answer the open-ended portion since we already know the correct coding categories per CPS). Vignettes are an effective tool to use in exploratory work to assess respondents’ interpretations and understanding of new survey questions and contexts (e.g., Beck, 2010). They also provide the benefit of reducing socially desirable responding (e.g., Lee, 1993). The vignettes will cover the following circumstances:

  • A person who is looking for work and has done passive job search only

  • A person who is looking for work and has done both passive and active job search

No vignettes for active-only search activities are included in the study because space is limited in the survey and we expect that active search activities are simpler to categorize. Based on the findings from this study, we may conduct research with active search activities in the future.


The vignettes and follow-up questions can be found in Attachment C.


Finally, we will collect ratings of subjective burden to supplement paradata collected in the survey instrument on objective time spent on each page. It will be important to understand the level of burden associated with each of the closed-ended approaches since these questions have not yet been tested via self-response and may have implications for response and data quality.


Outcome variables:

  • Compare within-subject open-ended responses to the same participant’s own closed-ended responses to determine how well they match one another

  • Compare responses across the two closed-ended groups to determine whether one group’s closed-ended responses align more closely with the open-ended responses

  • Compare responses across the two closed-ended groups to determine whether one group’s closed-ended responses align more closely with the “true value” (i.e., the correct codes) for the vignette descriptions; a sketch of one such comparison follows
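As one illustration of these comparisons, agreement between the categories coded from a participant’s open-ended description and the categories captured by the closed-ended version could be summarized as a simple set-overlap rate. This is a sketch only; the open-ended coding itself would be done by researchers, and the category numbers refer to the CPS list shown earlier.

```python
def match_rate(coded_open_ended, closed_ended_selected):
    """Jaccard-style agreement between the job-search categories coded from the open-ended
    description and the categories selected through the closed-ended questions."""
    open_set, closed_set = set(coded_open_ended), set(closed_ended_selected)
    if not open_set and not closed_set:
        return 1.0  # both empty: treat as perfect agreement
    return len(open_set & closed_set) / len(open_set | closed_set)

# Example: the open-ended text was coded to categories 6 and 10 (sent out resumes; looked at ads),
# while the participant selected categories 6 and 11 in the closed-ended series.
print(match_rate({6, 10}, {6, 11}))  # 0.33 -> partial agreement
```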


Debriefing section


To gain a better understanding of the reasons participants opted to select Don’t know or Prefer not to say, or to skip a question, we include a debriefing section at the end of the survey. This debriefing is important for the first research question, to better understand ways to reduce item nonresponse. The question categories were adapted from previous research using similar closed-ended categories (Steen et al., 2019). Some of the categories address respondent difficulty (e.g., being unsure of an answer, the answer could vary, having insufficient information to answer), while others address respondent sensitivity (e.g., concerns about privacy, being uncomfortable responding). This will allow us to assess whether certain questions or types of questions elicit different reasons for nonresponse, and will allow for tailored nonresponse follow-up prompts in future research. For example, if a certain question consistently leads to nonresponse due to privacy concerns, a prompt could be designed to remind respondents that their answers will be kept confidential.


In this section, participants will see a prompt for each item they did not provide a substantive response to, and they will be asked to identify the reason(s) for doing so, as follows:


[For each question that was originally answered “DK” or “Prefer not to say” or skipped:]


Earlier, you either answered Don’t Know, Prefer not to say, or skipped the following question [insert question stem].


For what reason(s) did you select this response? Select all that apply.

I was unsure of the answer

My answer could vary depending on the situation

The question was unclear

The response categories were unclear

I didn’t have the information needed to answer

I thought the question invaded privacy or confidentiality

I thought the question was too sensitive or personal

I thought the question was not something the government should be asking

I wasn’t comfortable providing that information in an online survey

Other reason, specify: ______
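To connect the reasons listed above to the difficulty-versus-sensitivity distinction described at the start of this section, a simple grouping could be used in analysis. This is a sketch only: reasons the text explicitly ties to difficulty or sensitivity are grouped accordingly, while the remaining assignments are marked as illustrative assumptions.

```python
# Grouping of debriefing reasons into broad nonresponse types.
REASON_GROUPS = {
    "I was unsure of the answer": "difficulty",
    "My answer could vary depending on the situation": "difficulty",
    "I didn't have the information needed to answer": "difficulty",
    "The question was unclear": "difficulty",              # illustrative assumption
    "The response categories were unclear": "difficulty",  # illustrative assumption
    "I thought the question invaded privacy or confidentiality": "sensitivity",
    "I wasn't comfortable providing that information in an online survey": "sensitivity",
    "I thought the question was too sensitive or personal": "sensitivity",
    "I thought the question was not something the government should be asking": "sensitivity",  # assumption
    "Other reason, specify": "other",
}

def group_reasons(selected_reasons):
    """Tally the broad nonresponse types behind a participant's selected reasons."""
    counts = {}
    for reason in selected_reasons:
        group = REASON_GROUPS.get(reason, "other")
        counts[group] = counts.get(group, 0) + 1
    return counts

print(group_reasons(["I was unsure of the answer",
                     "I thought the question invaded privacy or confidentiality"]))
```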


Burden section


The end of the survey will ask participants a set of questions about how burdensome the survey was to complete. While these items are borrowed from production survey debriefings, our goal is not to compare the data to those surveys. Rather than assessing the overall level of burden across all participants, the analysis of the burden questions will parallel the analysis conducted on the other outcome variables; that is, we will use these data to make between-group comparisons assessing whether the treatment conditions affected the experience of responding to the survey:



    • Main effect of adding ‘Prefer not to say’ as a response option (Comparison of Conditions 1 and 2 vs. Conditions 3 and 4) on burden metrics

    • Main effect of adding follow-up probes after non-response (Comparison of Conditions 1 and 3 vs. Conditions 2 and 4) on burden metrics



Four questions designed to assess subjective burden will be included, and the full wording is available in Attachment B.

  • A general burden question will be asked first to gauge how burdensome participants found the survey in general. Prior research has shown that a general burden question is valid and reliable in measuring respondents’ subjective perceptions of the burden of the survey, and can be predictive of future survey participation (Fricker et al., 2014).

  • The second question gauges participants’ level of enjoyment in completing the survey. Researchers and experts in respondent burden have discussed preliminary work at international conferences suggesting that a more positive metric (enjoyment) can serve as a complementary burden measure, so we included it as an exploratory question to see how it compares to more traditional burden metrics.

  • The third question addresses the sensitivity of the questions. As noted, questions such as those about income are often perceived as sensitive and elicit higher levels of nonresponse than other items. Crossing participants’ ratings of the survey’s sensitivity with levels of item nonresponse could provide evidence suggesting that sensitivity leads to higher levels of nonresponse.

  • The fourth question addresses difficulty. As with the sensitivity metric, some questions included on the survey may be perceived as more difficult to answer, especially those about other household members. Crossing participants’ ratings of the survey’s difficulty with the level of don’t know responses could suggest that difficulty is associated with higher levels of don’t know responses (a sketch of this crossing follows the list).
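A minimal sketch of the crossing described in the last two bullets, assuming per-participant summaries have already been computed; the column names are hypothetical, and a simple correlation stands in for whatever analysis is ultimately used.

```python
import pandas as pd

# Hypothetical per-participant summaries
df = pd.DataFrame({
    "sensitivity_rating": [1, 2, 4, 5, 3],  # how sensitive the survey was rated (1 = not at all)
    "difficulty_rating":  [1, 1, 3, 4, 2],  # how difficult the survey was rated
    "refusal_count":      [0, 0, 2, 3, 1],  # 'Prefer not to say' responses plus skips
    "dk_count":           [0, 1, 2, 4, 1],  # 'Don't know' responses
})

# Do participants who rate the survey as more sensitive refuse more often?
print(df["sensitivity_rating"].corr(df["refusal_count"]))

# Do participants who rate the survey as more difficult give more don't know responses?
print(df["difficulty_rating"].corr(df["dk_count"]))
```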

Taken together, these metrics would provide an indicator of how burdensome it is for respondents to answer these questions in an online, self-administered mode and help target the reasons for experienced burden. While the questions included in this research are drawn from different surveys, they include a representative mix of topics (demographics, sensitive questions that could yield refusals, questions that could pose knowledge issues and don’t know responses) that many respondents would encounter in actual BLS production surveys.


III. Participants

Up to 4960 Amazon Mechanical Turk participants will be recruited. This sample size was chosen to sufficiently explore the range of variables of interest and because we expect a very small effect size, as the design manipulations are subtle for online surveys of this nature (e.g., Hill et al., 2016). This sample size also takes into account break-offs, incomplete data, and participants who do not follow the task instructions, similar to other OMB-approved samples used for studies of this nature linked in the Research Objective 1 section.

IV. Burden Hours

The survey is expected to take an average of 10 minutes to complete for a total of up to 827 burden hours. This estimate was determined from data on a similar survey with the same questions where the median time to complete the survey was 9 minutes.

Table 1. Estimated Burden Hours

# of Participants Screened                       4960
Minutes per Participant for Screening            0
Total Screening Burden (hours)                   0
Maximum Number of Participants                   4960
Minutes per Participant for Data Collection      10
Total Collection Burden (hours)                  827
Total Burden, Screening + Collection (hours)     827
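As a worked check, the total in Table 1 follows directly from the number of participants and the per-participant time:

```python
participants = 4960
minutes_per_participant = 10
total_hours = participants * minutes_per_participant / 60  # 49,600 minutes = 826.7 hours
print(round(total_hours))  # 827, matching the "up to 827 burden hours" estimate
```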



V. Payment to Participants

Participants will receive $1.00 for participation in the survey, a typical rate for similar MTurk tasks (Paolacci & Chandler, 2014). Recruiting of participants will be handled by Amazon Mechanical Turk. Once participants are recruited into the study, they will be given a link to the survey, which is hosted by Qualtrics.com. The data collected as part of this study will be stored on Qualtrics servers.


Participants will be informed of the OMB number and the voluntary nature of the study.

This voluntary study is being collected by the Bureau of Labor Statistics under OMB No. 1220-0141 (Expiration Date: March 31, 2021). Without this currently approved number, we could not conduct this survey. This survey will take approximately 10 minutes to complete. The BLS cannot guarantee the protection of survey responses and advises against the inclusion of sensitive personal information in any response. This survey is being administered by Qualtrics and resides on a server outside of the BLS Domain. Your participation is voluntary, and you have the right to stop at any time.

VI. Attachments

Attachment A: Review of previous studies using web probes

Attachment B: Survey instrument (with Appendix showing the source of each question)

Attachment C: Vignettes



References

Beck, J. (2010). On the Usefulness of Pretesting Vignettes in Exploratory Research. Survey Methodology, 02.

Berinsky, A.J., Huber, G.A. and Lenz, G.S. (2012) ‘Evaluating Online Labor Markets for Experimental Research: Amazon.com’s Mechanical Turk’, Political Analysis, 20(3), pp. 351–368. doi: 10.1093/pan/mpr057.

Bickart, B.A., J. Blair, G. Menon, and S. Sudman. 1990. “Cognitive Aspects of Proxy Reporting of Behavior.” In Advances in Consumer Research 17, edited by M. Goldberg, G. Gorn, and R. Pollay, 198–206. Provo, UT: Association for Consumer Research.

Cobb, C. 2018. “Answering for Someone Else: Proxy Reports in Survey Research.” In The Palgrave Handbook of Survey Research, edited by D.L. Vannette and J.A. Krosnick, 87–93. Cham: Palgrave Macmillan. DOI: https://doi.org/10.1007/978-3-319-54395-6.

de Leeuw, E. D., Hox, J. J., & Boevé, A. (2015). Handling Do-Not-Know Answers Exploring New Approaches in Online and Mixed-Mode Surveys. Social Science Computer Review, 0894439315573744.

Hall, H., N. Lewis, J. Chandler, and P. Ellsworth. Conducting Longitudinal Studies on Amazon’s Mechanical Turk: A Meta-analysis with Recommendations. Working Paper, 2016.

Kaplan, R.K. and Edgar, J. (2018). Priming confidentiality concerns: How reminders of privacy affect response rates and data quality in online data collection. Paper presented at AAPOR May of 2018. Denver, CO.

Kreuter, F., Presser, S., & Tourangeau, R. (2008). Social desirability bias in CATI, IVR, and Web surveys: The effects of mode and question sensitivity. Public Opinion Quarterly, 72(5), 847–865.

Lee, R. (1993). Doing Research on Sensitive Topics, Thousand Oaks, CA: Sage.

Lee, S., N.A. Mathiowetz, and R. Tourangeau. 2004. “Perceptions of Disability: The Effect of Self- and Proxy Response.” Journal of Official Statistics 20: 671–686.

Lemcke, J., Albrecht, S., Schertell, S., and Wetzstein, M. (2019). The Effects of Forced Choice, Soft Prompt and No Prompt Option on Data Quality in Web Surveys - Results of a Methodological Experiment. Paper presented at the European Survey Research Conference in Zagreb, Croatia.

McGee, A M., Hanson, T., and Taylor, L. (2019). Do We Know What to Do With “Don’t Know”? Paper presented at the European Survey Research Conference in Zagreb, Croatia.

Paolacci, G., and J. Chandler. “Inside the Turk: Understanding Mechanical Turk as a Participant Pool.” Current Directions in Psychological Science, vol. 23, no. 3, 2014, pp. 184–188.

Steen et al. (2019). The Presentation of Don't Know Answer Options in Web Surveys: An Experiment with the NatCen Panel. Presented at the European Survey Research Association Conference: https://www.europeansurveyresearch.org/conferences/programme?sess=97

Tourangeau, R. 1984. “Cognitive Sciences and Survey Methods.” In Cognitive Aspects of Survey Methodology: Building a Bridge Between Disciplines, edited by T. Jabine, M.L. Straf, J.M. Tanur, and R. Tourangeau, 73–100. Washington, D.C.: National Academy Press.






1 A DK response is often a valid response for many survey questions and therefore will be required for some items in self-administered BLS surveys. In interviewer-administered surveys, respondents do have the option to say “don’t know” even if it’s not included in the offered response options, and this study aims to mimic that.

Some of the demographic questions (i.e., race/ethnicity and sex) are currently being explored by other interagency groups and have broader research implications. Thus, these items will not include explicit DK and ‘prefer not to say’ response options, but participants may skip those questions as needed. They are included in this research for demographic purposes and analysis of the sample.

2 We acknowledge that it is possible that after seeing the same prompt repeatedly, its effectiveness may wear off. However, two of the conditions will serve as controls without any prompt, allowing us to assess differences in drop-offs by prompt condition. In previous research, we have observed that participants do not respond with DK, Prefer not to say, or skips at high rates, so it is unlikely that the same participants will get the prompt repeatedly or even more than a handful of times.

3 We did not ask participants to answer both types of questions above (the series of closed-ended questions and the self-select code). This is because answers to one set of questions would potentially bias responses to the other set and would not be representative of the responses participants would have given had they not previously been exposed to one version of the questions. The included open-ended probe was designed to help validate the response to the question set participants were assigned to.

4 We anticipate a low incidence rate of participants who are looking for work, even after instructing participants to exclude any MTurk work from the question, and we are not screening out anyone based on job status. It is likely that most participants will be skipped out of the labor force section of the protocol; however, we can still collect responses from anyone who is looking for work outside of the work they perform on MTurk. This is also why the vignettes were included, so that participants could still respond to the labor force questions critical to Research Question 2.


