


NOTE TO THE REVIEWER OF:

OMB CLEARANCE 1220-0141

“Cognitive and Psychological Research”


FROM:

Robin Kaplan

Research Statistician

Office of Survey Methods Research


SUBJECT:

Submission of Materials for the Measuring Sensitivity study




Please accept the enclosed materials for approval under the OMB clearance package 1220-0141 “Cognitive and Psychological Research.” In accordance with our agreement with OMB, we are submitting a brief description of the study.


The total estimated respondent burden for this study is 303 hours.


If there are any questions regarding this project, please contact Robin Kaplan at 202-691-7383.




1. Introduction


Responses to questions across a broad range of surveys and topics have been shown to be influenced by sensitivity, or how personal, invasive, threatening, or uneasy a question makes respondents feel. It is well documented that topics such as income or drug use are widely considered sensitive, but even questions that seem factual and impersonal, such as voting in a recent election or owning a library card, can be perceived as sensitive and cause distortions in responses that threaten data quality (Tourangeau & Yan, 2007). Despite widespread acknowledgement in the survey research literature that question sensitivity can bias responses, no standard methodology has been developed to assess the perceived sensitivity of questions. Instead, researchers tend to rely on their intuitions or assumptions about which questions or topics are sensitive without consulting respondents or pre-testing questions for their level of sensitivity (e.g., Barnett, 1998; De Schrijver, 2012; Krumpal, 2013). Furthermore, even less is known about the impact of question sensitivity on interviewers and whether interviewer feelings of sensitivity might also bias responses. Survey methodologists often assume that interviewers remain neutral throughout administration of the survey and read questions as worded, but interviewers may be just as affected as respondents by sensitive interview contexts and questions.


The research that has been conducted on question sensitivity has largely focused on how sensitive respondents find survey contexts or questions (Tourangeau & Yan, 2007), with the assumption that sensitive contexts or questions have very little impact on the interviewer. However, researchers have shown that respondent sensitivity to the content of the question may be moderated by the way in which the interviewer phrases the question (Barnett, 1998; Bradburn, Sudman, & Wansink, 2004) and have explored ways in which question wording can help to reduce respondent sensitivity to questions. In addition, respondents are more likely to disclose sensitive information when surveys are self-administered and the presence of an interviewer is minimized (e.g., Kreuter et al., 2008; Lind et al., 2013). However, we know far less about how to reduce question sensitivity in interviewer-administered surveys. Further research is needed to understand how to develop effective interventions that reduce feelings of sensitivity for both respondents and interviewers, and the potential impact of those feelings on data quality.


Research Questions

The proposed research is exploratory in nature and serves as a precursor to future laboratory studies that will investigate sensitive contexts and questions using actual field interviewers as participants. In this preliminary work, our goals are to explore perceptions of sensitivity across different survey contexts, relative to a neutral context; to assess whether interviewers experience feelings of sensitivity in different survey contexts; to assess the effectiveness of an indirect measure of sensitivity evoked by questions likely to elicit socially desirable responses; and to assess covariates of the degree of sensitivity felt by respondents and interviewers. These goals are described in more detail below:


As mentioned, we have four main exploratory research questions: (a) does question sensitivity affect interviewers, which would suggest that their perceptions of sensitivity should be considered a component of question sensitivity; (b) can a question wording intervention affect feelings of sensitivity (respondent or interviewer) toward the survey question; (c) can an indirect measure of sensitivity, relying on retrieval of sensitive information from memory, produce distortions in memory that reveal social desirability biases; and (d) are particular covariates (e.g., attitudes towards survey topics, ability to take on other perspectives, individual differences in social desirability) related to individual differences in mean sensitivity ratings? If the indirect measure is effective, it could be applied quickly and easily by investigators to determine question sensitivity without relying solely on self-report measures, as respondents are less likely to disclose sensitive information or report that a question was sensitive or personal in the presence of an interviewer.


These exploratory questions will serve as the first in a set of studies addressing the broader objective of this research: to develop a standardized method for assessing question sensitivity that can be applied across surveys and content areas, that researchers and questionnaire designers can use to understand potential response biases, and that can support future studies using interviewers as research participants.


2. Research Design

The proposed experimental design manipulates two factors in a 2x3 between-groups design: vignette perspective (respondent vs. interviewer) and the type of sensitivity intervention used (forgiving wording, unforgiving wording, or neutral questioning). At the start of the task, participants will be asked to take on either the perspective of the vignette character, to capture the experience of having to answer sensitive questions, or the role of the interviewer, to capture the experience of having to ask sensitive questions. Within the vignettes, we will manipulate the type of sensitivity intervention used: positive loading at the start of the question to reduce sensitivity, in the form of 'forgiving' wording; negative loading to increase sensitivity, in the form of 'unforgiving' wording; or neutral wording as a control.
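For illustration only, the short Python sketch below shows how the two experimental factors cross with the two survey forms (described in the Participants section) to produce the 12 between-subjects groups of 75 planned participants each; the labels and the balanced-assignment scheme are hypothetical, as this package does not specify the assignment mechanism.

    # Hypothetical sketch of the 2 (perspective) x 3 (wording) between-groups design,
    # crossed with the two survey forms, giving 12 groups of 75 planned participants.
    import itertools
    import random

    perspectives = ["respondent", "interviewer"]        # vignette perspective factor
    wordings = ["forgiving", "unforgiving", "neutral"]  # sensitivity intervention factor
    surveys = ["survey_1", "survey_2"]                  # the two four-vignette forms

    cells = list(itertools.product(perspectives, wordings, surveys))  # 12 design cells
    schedule = cells * 75                               # 12 x 75 = 900 planned completes
    random.Random(0).shuffle(schedule)                  # randomize the assignment order

    print(len(cells), len(schedule))  # 12 900
    print(schedule[0])                # one (perspective, wording, survey) tuple per participant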


This research will use questions from a wide range of federal surveys that could be perceived as sensitive (e.g., McNeely, 2012; Bradburn et al., 2004). Topics include employment, sleep habits, purchasing alcoholic beverages, donations to charity, and income. We will also explore whether individual differences in attitudes towards the vignette topics (e.g., Tourangeau & Smith, 1996), perspective-taking ability, and socially desirable responding (e.g., Peter & Valkenburg, 2011) are related to sensitivity ratings.


Perspective Vignettes

Respondent sensitivity is a major concern for survey designers given the demonstrated potential for response bias. By asking participants to take on the role of the respondent when reading the vignettes, we hope to understand how respondents feel when answering potentially sensitive questions.

Interviewers also play an important role in the survey process, including verbally asking the survey questions, recording responses, and building rapport and trust with the respondent. We hypothesize that, when asking questions, interviewers are aware of potential sensitivity. This awareness may then affect interviewer behavior, manifesting in an apology, 'distancing' behavior (e.g., "I didn't write this question"), or skipping the question. If interviewers are affected by sensitivity in any way, we expect that it would have effects on the answers respondents provide and the data obtained. However, we have found no literature examining interviewers' feelings of sensitivity when asking survey questions. As a first step in understanding how question sensitivity affects interviewers, participants will be asked to take on the perspective of the interviewer in the vignettes.


Sensitivity Intervention

One technique for using question wording to reduce feelings of sensitivity is a "forgiving wording" intervention (e.g., Tourangeau & Smith, 1996; Näher & Krumpal, 2012; Peter & Valkenburg, 2011), which loads sensitive behavioral questions to encourage respondents to make potentially embarrassing admissions by "forgiving" the behavior. One classic example from Sudman and Bradburn (1982) is the "everyone does it" approach, wherein the question assumes a negative or embarrassing behavior in order to encourage honest reporting (italicized below):


Even the calmest parents get angry at their children sometimes. Did your children do anything in the past 7 days to make you yourself angry?


Similarly, the Current Population Survey (CPS) currently loads positive ‘forgiving wording’ to the front of the involuntary part-time work question (italicized below):


Some people work part time because they cannot find full time work or because business is poor. Others work part time because of family obligations or other personal reasons. What is your main reason for working part time?


In this example, the forgiving introduction provides external attributions for why a respondent may not be able to find full-time work. This introduction may reduce question sensitivity and encourage more honest responses, and also make the question more comfortable for interviewers to ask of respondents who have struggled to find full-time work.


Forgiving wording introductions have sometimes been shown to increase disclosure of socially undesirable behavior (Peter & Valkenburg, 2011; Sudman & Bradburn, 1982; Tourangeau & Smith, 1996), while at other times they have had little effect (Peter & Valkenburg, 2011). Thus, an important part of understanding how to develop effective forgiving wording interventions is to determine whether such wording can reduce the perceived sensitivity of questions.


In the present research, we will use forgiving and unforgiving interventions, along with a neutral question wording to serve as a control for comparison (see Appendix A). The forgiving wording intervention is designed to decrease question sensitivity by ‘forgiving’ socially undesirable behaviors. In contrast, the ‘unforgiving wording’ intervention is designed to heighten sensitivity to survey questions by placing them in a context that makes the most socially desirable behavior salient. A similar approach was implemented by Tourangeau and Smith (1996), who placed sensitive survey questions in a permissive context to decrease sensitivity, and a restrictive context to heighten sensitivity, to contrast the effects of each across respondents. Although survey designers may never attempt to implement ‘unforgiving’ wording in a survey, we include this manipulation in the study because it is expected to provide useful information about the full range of sensitivity. Critically, this will enable us to evaluate how well different measures of sensitivity capture a range of sensitive feelings, including how personal the questions are or how likely the questions are to elicit an honest response. A design that only examines neutral and sensitivity-reducing question wording would not be able to evaluate the measures’ capacity for capturing high sensitivity feelings; furthermore, the sensitivity-heightening questions are needed to understand how participants who are feeling sensitive react to the questions measuring sensitivity and how they use the response scales.


Indirect Measure of Sensitivity

As a secondary analysis of the impact of sensitivity on data quality, the research design will also investigate whether the retrieval from memory of socially sensitive information can be used to indirectly assess question sensitivity. Embedded in the vignettes, among many other numerical pieces of information, are socially sensitive numbers, such as the amount of money spent on alcohol or the number of jobs applied to, which the interviewers ask the respondents about. Participants will then be asked in a "memory quiz" to recall this socially sensitive information. In this way, the participant encodes the to-be-recalled information at the start of the task and must later recall that information, allowing biases such as social desirability to affect responses. This type of memory distortion has been demonstrated in a wide range of psychology studies showing that the direction of memory bias or numerical rounding can reveal biases in perceptions of events, often due to socially desirable reporting, that neutral questioning might not reveal (e.g., Bahrick et al., 1996; Loftus et al., 1978; Belli et al., 1999; Adams et al., 2004). Drawing on this logic, the degree to which the participant's numeric responses deviate from the vignette-provided values may reveal the degree of sensitivity felt by the participant; larger errors may reflect greater sensitivity.

It is important to note that in the psychology literature, sensitivity and social desirability are often conceived of as separate but highly related concepts. We take the approach of Tourangeau and Yan (2007) that social desirability is one component of sensitivity (in addition to intrusiveness and threat of disclosure) and that "a question is sensitive when it asks for a socially undesirable answer, when it asks, in effect, that the respondent admit he or she has violated a social norm" (p. 860). In this study, the unforgiving context serves to heighten the impact of a social norm and the forgiving context reduces it. Thus, we expect that sensitivity will relate closely to responding in a socially desirable manner.

An alternate hypothesis is that participants will have better memory for numbers placed in a socially sensitive context relative to a neutral context. For instance, if a vignette character spent a great deal of money on alcohol, this information might stand out and be remembered better later (a possible ceiling effect wherein everyone remembers the numbers very well). However, for this study, these numbers will be kept constant across all conditions to assess the impact of the type of sensitivity intervention on memory for the numbers. The numbers are not expected to be sensitive in and of themselves; rather, they may be perceived as more or less sensitive across the forgiving, unforgiving, and neutral contexts. Thus, we expect to see different patterns of memory bias across conditions, but we also note that this is an exploratory measure of question sensitivity. As such, its effectiveness at eliciting differences in memory across the conditions remains an open question.

Analysis Plan

Research Question A: Does question sensitivity affect interviewers?

To explore whether question sensitivity affects interviewers, we will compare mean sensitivity ratings across a range of sensitivity measures (see Appendices D and H for the sensitivity measures used) for each of the 3 contexts: forgiving, unforgiving, or neutral. If sensitive contexts do not affect participants taking on the interviewer perspective, we’d expect to see low levels of sensitivity across the 3 contexts. If participants who take on the interviewer perspective are affected by sensitive contexts and questions, then we’d expect that the forgiving wording context will reduce mean sensitivity ratings and the unforgiving context will heighten mean sensitivity ratings, relative to the neutral context.

This research question explores whether interviewers might be sensitive to the survey context, contrary to the typical assumption that interviewers remain neutral and objective throughout the survey process. As such, we are interested in looking at mean sensitivity ratings from participants who took on the interviewer perspective across the three contexts. We do not have specific hypotheses regarding differences by perspective type (interviewer vs. respondent); rather, we are interested in exploring whether people find different survey contexts sensitive from the interviewer's perspective. However, there is a possibility that participants will perceive the interviewer perspective as less sensitive than the respondent's, because participants will be told at the beginning of the study that interviewers must read all questions as worded. Participants might have this information in mind while completing our study and think the interviewer is just doing his or her job. In contrast, respondents do not know what questions are coming, and since the information pertains to their own lives, they might find the survey context more intrusive or sensitive than an interviewer would. The possible outcomes of this exploratory analysis are outlined below:

The first possible outcome is that participants taking on the interviewer and respondent perspectives display similar mean levels of sensitivity ratings. This would indicate that the interviewer perspective is experienced as being just as sensitive as the respondent perspective.

The second possible outcome is an interaction between perspective and wording manipulation context. In this outcome, participants taking on the interviewer perspective display lower mean sensitivity ratings than those taking on the respondent perspective.

As this is exploratory research, we are impartial as to which of the two outcomes arises.
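As a rough sketch of how these possible outcomes could be examined, the Python code below compares cell means and fits a two-way between-groups ANOVA; it assumes a hypothetical long-format data file with one row per participant and illustrative column names ('sensitivity' for the mean rating, 'perspective', and 'wording'), none of which are part of the study materials. The first outcome would correspond to no reliable perspective effect, while the second would appear as a perspective main effect and/or a perspective-by-wording interaction.

    # Illustrative analysis sketch for Research Questions A and B; file and column
    # names are hypothetical placeholders for the collected data.
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    df = pd.read_csv("sensitivity_ratings.csv")  # hypothetical file name

    # Cell means, SDs, and counts for the 2 (perspective) x 3 (wording) design.
    print(df.groupby(["perspective", "wording"])["sensitivity"].agg(["mean", "std", "count"]))

    # Two-way between-groups ANOVA: main effects of perspective and wording,
    # plus their interaction.
    model = ols("sensitivity ~ C(perspective) * C(wording)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))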

Research Question B: Can a question wording intervention affect feelings of sensitivity (respondent or interviewer) to the survey question?

To determine whether the type of wording intervention (forgiving, unforgiving, or neutral questioning) affects feelings of sensitivity, we will assess mean sensitivity ratings across the wording interventions. We hypothesize that forgiving wording introductions will decrease mean sensitivity ratings; that unforgiving wording will increase mean sensitivity ratings; and that neutral or direct questioning will lie somewhere in between the forgiving and unforgiving wording conditions. Because research on the effectiveness of forgiving survey introductions has been scant and results are mixed, we hope to identify whether such introductions are effective at manipulating the perceived sensitivity of questions in different contexts. In addition, no prior work has examined the effects of forgiving wording introductions on interviewers' perceptions of the sensitivity of the survey question. As such, we plan to explore these issues, and additional analyses will identify the proposed dimensions of sensitivity for which this expected pattern is not demonstrated across the sensitivity intervention groups.


Research Question C: Can an indirect measure of sensitivity using retrieval from memory of socially-sensitive information be used to assess question sensitivity?


We will assess whether the retrieval from memory of socially sensitive information can be used to indirectly assess question sensitivity. We expect that the degree to which participants' numeric responses deviate from the vignette-provided values will reveal the degree of sensitivity felt by the participants; larger errors may reflect greater sensitivity. We will compute sensitivity deviation scores for all participants (the vignette-provided values minus the participant-provided values from memory). We will assess mean deviation scores for each question across the three wording interventions (forgiving, unforgiving, and neutral). We hypothesize that forgiving wording introductions will decrease mean sensitivity deviation scores; that unforgiving wording will increase mean sensitivity deviation scores; and that neutral or direct questioning will lie somewhere in between the forgiving and unforgiving wording conditions. If the manipulation is effective, we would expect to see response patterns similar to the possible outcomes described under Research Question A.
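A minimal sketch of this deviation-score computation is shown below, assuming a hypothetical long-format memory-quiz file with one row per participant and recall item and illustrative column names ('true_value' for the vignette-provided number, 'recalled_value', 'item', and 'wording').

    # Sketch of the deviation-score computation for Research Question C;
    # file and column names are hypothetical.
    import pandas as pd

    recall = pd.read_csv("memory_quiz.csv")  # hypothetical file name

    # Signed deviation: vignette-provided value minus recalled value. The sign
    # shows the direction of any bias; the absolute value shows its magnitude.
    recall["deviation"] = recall["true_value"] - recall["recalled_value"]
    recall["abs_deviation"] = recall["deviation"].abs()

    # Mean deviation for each question across the three wording interventions.
    print(recall.groupby(["item", "wording"])[["deviation", "abs_deviation"]].mean())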



Procedure

The full survey instrument is included as appendices; each segment of the survey appears as a separate appendix. An overview of the protocol is given below, and a detailed description of each segment follows.


Participants will be introduced to the survey (Appendix B) and read vignettes describing survey respondents and excerpts of their survey responses (Appendix C). They will be asked to take on the perspective of either the respondent or the interviewer while they read the excerpts. After each vignette, participants will answer a number of questions about the vignette characters, how sensitive they think the respondent or interviewer felt answering or asking the questions, and their attitudes about the topic of the vignette (Appendices D and E).


Participants will then complete a distraction task (Appendix F) and a memory quiz about the information provided in the vignettes (Appendix G). Then, the participants will be probed for sensitivity ratings using several different measures of sensitivity discussed in the literature, such as how sensitive or personal the questions are and their likelihood of eliciting honest responses (e.g., Tourangeau & Yan, 2007; Bradburn & Sudman, 1979; Rasinski et al., 1999; Appendix H). These ratings will inform our understanding of what dimensions of sensitivity each rating measures and how best to measure the concept across surveys from the perspective of both the respondent and the interviewer. The participants will then complete a few short attitude and individual differences scales (Appendix I) and some demographic questions (Appendix J) before ending the study (Appendix K). More details about the procedures are outlined, in the order they will be presented, below.

  1. Vignettes – The interview questions used in these excerpts were taken from a range of federal surveys and selected for their diversity and potential for causing respondents and interviewers to view these questions as sensitive. This study includes eight vignettes across a range of potentially sensitive topics; however, due to considerations of respondent burden, each participant will see only four vignettes. One half of study participants will be given Survey 1, including four vignettes about employment (questions from the CPS). The other half of participants will be given Survey 2, including four vignettes about a range of other topics, including sleep habits (from the American Time Use Survey; ATUS), expenditures on alcohol and charitable donations (from the Consumer Expenditure Survey; CE), and income (from the CPS). Participants will be randomly assigned to Survey 1 or Survey 2 (Appendix C).

Survey 1:

  • Employment – an interview with a recent college graduate about what she has been doing to find work

  • Employment – an interview with a respondent who is struggling to find a job about what she has been doing to find work.

  • Employment – an interview with a respondent who is a discouraged worker being asked about why she has not been looking for work.

  • Employment – an interview with a respondent who is an involuntary part-time worker about why she is working only part-time.

Survey 2:

  • Alcohol expenditures – an interview with a respondent who has problems with alcohol about how much money he has spent on alcohol.

  • Charitable donations – an interview with a respondent with rising living costs about how much money he has given to charity.

  • Income – an interview with a respondent who has recently had a pay cut about how much money he earns.

  • Sleep habits – an interview with a respondent who has not found a job and has instead been socializing about how many hours of sleep he gets.

The wording of the questions used in the interview excerpts will be manipulated to decrease (forgiving) or increase (unforgiving) interviewer and respondent sensitivity to the question topic. When the participant reads the excerpts with these manipulated questions, we expect that the participant's feelings of sensitivity will likewise be increased or decreased. A third version of the question wording represents how the interview question is typically presented in a production survey, without additional text, and will serve as a control group (neutral). Examples of the wordings for each of the manipulations are shown below. A full description of all wording manipulations is in Appendix A.


Forgiving: Most people purchase beer and wine to consume at home for celebrations or to relax. Purchases of alcoholic beverages for social occasions have become more popular. What has been your usual monthly expense for alcohol, including beer and wine to be served at home?

Unforgiving: Purchases of alcohol have become less common as the economy tightens consumers’ budgets. Most people are choosing instead to spend their money on their family. What has been your usual monthly expense for alcohol, including beer and wine to be served at home?

Neutral: What has been your usual monthly expense for alcohol, including beer and wine to be served at home?


  2. Follow-up questions – After each vignette, participants will rate the sensitivity of the survey questions using several different sensitivity measures (e.g., “how personal was this question?”). Participants will also complete ratings likely to interact with sensitivity, including their attitudes towards the survey topics and how much they empathized with the vignette characters. These questions are presented in Appendices D and E, as they will appear to the participants.

  3. Distractor task – After all vignettes are read and vignette-specific follow-up questions are answered, the participant will be asked to complete a distractor task (Appendix F). The task is to count backwards from a large number by intervals of seven. The purpose of this task is to clear any vignette information from the participant’s working memory so that retrievals of information for subsequent questions should be from long-term memory, as would be done by a respondent in the field.

  4. Recall of sensitive information – The participants will be asked to answer questions based on their memory for the information provided in the vignettes (Appendix G). The purpose of this task is to investigate whether participants’ responses deviate from vignette-provided information and reveal participant feelings of sensitivity.

  5. Global Sensitivity questions – After completing the memory questions, participants will be asked a series of questions regarding their perceptions of the vignettes as a whole (Appendix H). The purpose of this section is to assess perceptions of sensitivity-related metrics, such as respondent honesty, which mode of data collection would promote the most honest responses, and likelihood of item non-response.

  6. Attitude and individual differences questions – Participants will be asked a series of questions regarding their beliefs and attitudes about the survey topics in the vignettes and surveys used to generate federal statistics. Participants’ attitudes towards these topics are expected to be an important covariate with sensitivity ratings (Tourangeau & Smith, 1996). Participants will also complete two individual differences scales that are expected to relate to sensitivity ratings, described below (Appendix I).

6a. Interpersonal Reactivity Index (IRI; Davis, 1980). Because the tasks in this research study require participants to take on the perspective of respondents or interviewers, one potential covariate for sensitivity ratings in this study is perspective-taking ability and empathic concern. Participants will complete the perspective taking and empathic concern subscales of the Interpersonal Reactivity Index (IRI; Davis, 1980), a common scale used to assess these traits. The perspective taking subscale contains 7 items and assesses the tendency to spontaneously adopt the psychological point of view of others (e.g., I sometimes find it difficult to see things from the "other guy's" point of view.) The empathic concern subscale contains 7 items and assesses "other-oriented" feelings of sympathy and concern for unfortunate others (e.g., I often have tender, concerned feelings for people less fortunate than me.)

6b. Balanced Inventory of Socially Desirable Responding (BIDR; Paulhus, 1984). The tasks in this research study refer to sensitive topics and situations where survey respondents might exhibit a social desirability bias when answering questions. Participants who exhibit high versus low levels of social desirability may be differentially affected by forgiving wording introductions (see Peter & Valkenburg, 2011). Thus, another potential covariate for sensitivity ratings in this study is participants’ own tendency to respond in socially desirable ways. Participants will complete a short version of Paulhus’s (1984) Balanced Inventory of Socially Desirable Responding (BIDR), a commonly used scale that assesses two dimensions of social desirability. The first dimension, Self-Deceptive Enhancement, is the tendency to give self-reports that are believed but have a positivity bias; it contains 10 items (e.g., My first impressions of people usually turn out to be right.) The second dimension, Impression Management, assesses deliberate self-presentation to an audience; it also contains 10 items (e.g., I sometimes tell lies if I have to.)

Scores on the IRI and BIDR will be used to determine whether perspective taking and socially desirable responding co-vary with level of sensitivity and ratings of the vignette characters and interviewers. See the Appendix for the complete version of each scale.

  7. Demographic questions – At the end of the task, participants will be asked to provide basic demographic information: their gender, age, race, and education (Appendix K).

  8. Thank you – The participants will be thanked for their participation and given an opportunity to leave any comments in an open-ended text entry box (Appendix L).


3. Participants

Participants will be recruited as a convenience sample of adult U.S. citizens (18 years and older) from Amazon Mechanical Turk; this study is focused on internal validity rather than representativeness of any population. This research design requires a large sample of 909 participants in order to sufficiently explore the range of variables of interest and because we expect a very small effect size, as the study manipulations are subtle for online surveys of this nature. These participants will be randomly assigned to the 12 groups described (a 3x2 design repeated over two surveys, with 75 participants per group). An additional 9 participants may be recruited to account for break-offs and incomplete data.


3a. Power Analysis

The primary goal of the proposed research is to explore main effects of taking on the Interviewer and Respondent perspective on mean sensitivity ratings. As mentioned, we expect a very small effect size, as this is an online study that asks participants to imagine different scenarios and lacks the realism of true survey contexts. Online studies such as these require a large sample size to detect even very small effects, as reflected in the power analysis.


Sample size estimation


A statistical power analysis was performed for sample size estimation, based on data from a similar study by Bradburn et al. (1978) (N = 1,172) that assessed people’s ratings of the sensitivity of several survey items similar to ours. Statistical analyses were not conducted in that study, as it was exploratory; however, mean sensitivity ratings were approximately 2.89 on a 4-point scale from “not at all uneasy” to “very uneasy.” Transformed to match our 5-point scale, this mean would be approximately 3.47, with an approximate SD of .59.


With an alpha of .05 and power of 0.80, the projected sample size needed with this effect size is approximately N = 427 for detecting the main effects for the Interviewer and Respondent perspective. Thus, our proposed sample size of N = 450 for participants taking on each perspective is very close to the number needed for the main objective of this study, and it should also allow for expected attrition and for our additional objectives of exploring possible covariates related to sensitivity ratings, such as attitudes towards the survey topics, attitudes towards the vignette characters, and individual differences in perspective taking and social desirability.
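For reference, this figure can be approximated with standard power-analysis tools. The sketch below treats the comparison as a two-group test of mean sensitivity ratings with an assumed small standardized effect size (Cohen's d of roughly 0.19); the exact effect size underlying the N = 427 estimate is not reported here, so the value is illustrative.

    # Approximate sample-size check, assuming a two-group comparison of mean
    # sensitivity ratings and a small, illustrative standardized effect size.
    from statsmodels.stats.power import TTestIndPower

    n_per_group = TTestIndPower().solve_power(
        effect_size=0.19,        # assumed small Cohen's d; illustrative only
        alpha=0.05,
        power=0.80,
        alternative="two-sided",
    )
    print(round(n_per_group))    # roughly 430-440 per group, near the N = 427 cited above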


4. Burden Hours

Our goal is to obtain responses from 909 participants recruited from Amazon Mechanical Turk. Each session is expected to take no more than 20 minutes to complete, for a total of 303 burden hours. The survey will be administered completely online at the time and location of the participant’s choosing.
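For reference, the burden total follows directly from these figures: 909 participants × 20 minutes per session = 18,180 minutes, or 303 hours.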


5. Payment

We will recruit 909 participants from the Amazon Mechanical Turk database, half of whom will complete the first survey and half the second survey. Participants will be compensated $1.00 for participating in the study, a typical rate provided by Mechanical Turk for similar tasks. The total of $999.90 allocated for this survey will be paid directly to Amazon Mechanical Turk to administer the surveys and recruit participants.
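For reference, the $999.90 total is consistent with the $1.00 incentive for each of the 909 participants plus an assumed 10 percent Mechanical Turk platform fee (909 × $1.10 = $999.90); the exact fee arrangement is not spelled out in this package.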


6. Data Confidentiality

Recruiting of participants will be handled by Amazon Mechanical Turk. Participants will be informed that the study is about their perceptions of different types of questions. Once participants are recruited into the study, they will be sent a link to the survey, which is hosted by Qualtrics. The data collected as part of this study will be stored on Qualtrics servers. Using the language shown below, participants will be informed of the voluntary nature of the study and they will not be given a pledge of confidentiality.


This voluntary study is being collected by the Bureau of Labor Statistics under OMB No. 1220-0141. We will use the information you provide for statistical purposes only. Your participation is voluntary, and you have the right to stop at any time. This survey is being administered by Qualtrics and resides on a server outside of the BLS Domain. The BLS cannot guarantee the protection of survey responses and advises against the inclusion of sensitive personal information in any response. By proceeding with this study, you give your consent to participate in this study.




Appendix A: Question wording versions


The three manipulated question wordings for each of the vignettes are shown here:


Employment – Looking for work

Forgiving: Despite wanting fulfilling jobs that contribute to society, many people take a long time to find work because business is poor, or due to a lack of job openings in their industry or geographic location. Have you been doing anything to find work during the last 4 weeks?

Unforgiving: Most people agree that they would like to find fulfilling jobs that contribute to society. Business conditions are improving, and job openings in different industries and geographic locations are on the rise. Have you been doing anything to find work during the last 4 weeks?

Neutral: Have you been doing anything to find work during the last 4 weeks?


Employment – Involuntary part-time

Forgiving: Some people work part time because they cannot find full time work or because business is poor. Others work part time because of family obligations or other personal reasons. What is your main reason for working part time?

Unforgiving: Many people choose to work part time because they want to spend more time on themselves or for other personal reasons, and not because they can’t find full time work. What is your main reason for working part time?

Neutral: What is your main reason for working part time?


Employment – Discouraged worker

Forgiving: Many people have given up on finding work because business is poor, because of a lack of job openings in their geographic location, or because they lack necessary job skills. What is the main reason you were not looking for work during the last 4 weeks?

Unforgiving: Employment opportunities are increasing. Business conditions are improving and job openings are on the rise in many different geographic locations. What is the main reason you were not looking for work during the last 4 weeks?

Neutral: What is the main reason you were not looking for work during the last 4 weeks?


Sleep habits*

Forgiving: Most people agree that a good night’s sleep and making time to get enough rest is an important part of being healthy. How many hours of sleep do you get on a typical weekday?

Unforgiving: Most people have very busy schedules. Between work and other obligations, many people are in a constant “time crunch,” not able to get the recommended amount of rest. How many hours of sleep do you get on a typical weekday?

Neutral: How many hours of sleep do you get on a typical weekday?


Alcohol expenditures

Forgiving: Most people purchase beer and wine to consume at home for celebrations or to relax. Purchases of alcoholic beverages for social occasions have become more popular. What has been your usual monthly expense for alcohol, including beer and wine to be served at home?

Unforgiving: Purchases of alcohol have become less common as the economy tightens consumers’ budgets. Most people are choosing instead to spend their money on their family. What has been your usual monthly expense for alcohol, including beer and wine to be served at home?

Neutral: What has been your usual monthly expense for alcohol, including beer and wine to be served at home?


Income*

Forgiving: People are happier in the long-term when they find work that brings them personal fulfillment and contributes to society compared to jobs that have the highest incomes. What is your annual rate of pay, in dollars, for your main job?

Unforgiving: Many people agree that income is an essential part of today’s lifestyles and that higher levels of income are an important part of supporting a family and building a fulfilling life. What is your annual rate of pay, in dollars, for your main job?

Neutral: What is your annual rate of pay, in dollars, for your main job?


Charitable donations

Forgiving: Most people would like to give money to charity. However, with the recent economic uncertainty, many people have been unable to donate as much as they would like to and try to save as much of their income as possible. In the past 3 months, have you given any money to benefit charities or other organizations?

Unforgiving: Most people like to give money to charity. In fact, many people are able to regularly donate at least a small proportion of their income to directly support charities and other organizations. In the past 3 months, have you given any money to benefit charities or other organizations?

Neutral: In the past 3 months, have you given any money to benefit charities or other organizations?


*It should be noted that the questions about sleep are based on previous cognitive interviewing work in OSMR showing that social desirability concerns are related to estimates of sleep duration. People believed that oversleeping was socially undesirable and reported getting an amount of sleep at the lower bound of what they believed was a socially acceptable number of hours of sleep per night. Thus, the questions were worded with the assumption that it is more ‘socially desirable’ to appear busy and underreport the number of hours slept per night. We acknowledge this is one of the weaker items in the set and it may have a smaller effect than the other items in the survey.


*Questions about income are considered universally sensitive, regardless of survey context. People on the lower and upper end of the income spectrum are particularly likely to find this question sensitive. The wording interventions aim to lessen some of the stigma about reporting income, downplaying the importance of money to people’s lives. This should reduce sensitivity for people at both extremes of the income spectrum. Conversely, the unforgiving context should heighten sensitivity since it primes the importance of earnings.

Appendix B: Instrument Introduction


The following task will be administered to participants on-line. The horizontal lines indicate page breaks, for which the participant must click a button to continue to the next screen.


Introduction

Welcome!

Thanks for your interest in our research.

We'll be asking you to read a few stories that describe the way that people interact with each other. Later, we’ll ask you to answer questions about the stories. And at the end, you’ll answer a few questions about yourself.

Unlike some surveys or online tasks, we ask that you complete this task all at one time. Please begin only when you are in a quiet place where you won't be disturbed for about 20 minutes.

Please do not use your browser's back button.

This voluntary study is being collected by the Bureau of Labor Statistics under OMB No. 1220-0141. We will use the information you provide for statistical purposes only. Your participation is voluntary, and you have the right to stop at any time. This survey is being administered by Qualtrics and resides on a server outside of the BLS Domain. The BLS cannot guarantee the protection of survey responses and advises against the inclusion of sensitive personal information in any response. By proceeding with this study, you give your consent to participate in this study.


What you’ll be asked to do

On the next few pages, you will read stories in which one person interviews another for a survey. For each story, you will read a short profile about the interviewee, followed by excerpts from the interview. We are interested in how you think [the person answering the interview questions / the interviewer] would feel while [answering / asking] the different kinds of questions.


  • Put yourself in the shoes of [the person answering the interview questions / the interviewer] in each of the stories

  • Try to take on their perspective. Focus on how [the person answering the interview questions / the interviewer] would think, feel, and react while [answering / asking] each question.

  • Please note that all interviewers were required to read the questions exactly as written.


Please take your time when reading the stories.







Appendix C: Stories


Next, the participant will be shown a series of four scenarios with follow-up questions after each scenario. Participants will be randomly assigned to read one group of four scenarios (Survey 1 or Survey 2), as follows below. The bracketed text represents text that will be replaced by experimental group-specific text. For example, participants randomly assigned to read forgiving wording will see that lead-in text for each scenario, as shown in Appendix A.


Scenarios – Survey 1


Story 1: Wendy

Interviewer: [Forgiving wording, Unforgiving wording, or Neutral wording] Have you been doing anything to find work during the last 4 weeks?

Wendy: Well, I worked on my resume and went on 4 job interviews. I heard back from 2 of them, but I haven’t gotten an offer yet.

Interviewer: Did you do anything else?

Wendy: No.

Finished reading the profile and the interview?













Story 2: Beth

Interviewer: [Forgiving wording, Unforgiving wording, or Neutral wording] Have you been doing anything to find work during the last 4 weeks?

Beth: Like I mentioned earlier, the assembly plant was shut down. I looked for work for over 6 weeks – it was 46 days, I counted - and have come up with nothing so far.

 

Finished reading the profile and the interview?


Story 3: Charlene


Interviewer: [Forgiving wording, Unforgiving wording, or Neutral wording] What is the main reason you were not looking for work during the last 4 weeks?


Charlene: I'm so tired of hearing: ‘Your resume is excellent but the position requires someone more up-to-date on the newest teaching methods.’ I can't help how old I am.


Interviewer: So there’s just nothing available in your line of work for someone your age, or you just couldn’t find any work, or…?


Charlene: I looked for work for 14 straight weeks and came up with nothing. That’s more than anyone else I know. My friend John looked for only 12 weeks. I won't go through that again. There's no job out there for me.


Finished reading the profile and the interview?



Story 4: Pat


Interviewer: [Forgiving wording, Unforgiving wording, or Neutral wording] What is your main reason for working part time?


Pat: To pay for renovations, my employer cut my hours in half.


Interviewer: So is it that you can only find part-time work?


Pat: [Short pause] I’ve been trying for 8 weeks, but I can’t find another full-time manager position at another hotel. I applied to 6 jobs. I am even looking at hotels as far as 55 miles away from home. I’m having a hard time making ends meet with so few hours.


Finished reading the profile and the interview?



Scenarios – Survey 2


Story 1: Daniel

Interviewer: [Forgiving wording, Unforgiving wording, or Neutral wording] How many hours do you sleep at night on a typical weekday?


Daniel: Um…It is hard for me to estimate that, since I don’t have a regular work schedule.


Interviewer: That’s okay, just try to think of your best estimate.


Daniel: Uh, since I don’t have to wake up early to go to work… I get up at around 9:30 in the morning usually. I would say some days I sleep like 8 hours and others it is like 10 hours. On average, it’s, um… 9 hours and 15 minutes or so.


Finished reading the profile and the interview?
















Story 2: Sam

Interviewer: [Forgiving wording, Unforgiving wording, or Neutral wording] What has been your usual monthly expense for alcohol, including beer and wine to be served at home?


Sam: Hm, um, that is hard to remember an exact dollar amount.


Interviewer: Okay, just try to think back to your expenses and estimate.


Sam: I guess I usually get a 6-pack a week normally, so most months that would be… about $40.50… but during summer there have been more parties. So in August, it was… and then... $55.50 total.


Finished reading the profile and the interview?


















Story 3: Alex

Interviewer: [Forgiving wording, Unforgiving wording, or Neutral wording] What is your annual rate of pay, in dollars, for your main job?


Alex: Like I said, it was recently cut, so I’d have to recalculate the amount it would be…


Interviewer: That’s okay, just try to give me your best estimate of what it would be. You can give a range if that is easier.


Alex: [Pause] I am supposed to get $46,930 but then after the cuts, I think that my salary is um, down to, $39,460. So, my annual rate for the last year would be… let’s see… $43,195.


Finished reading the profile and the interview?

















Story 4: Chris

Interviewer: [Forgiving wording, Unforgiving wording, or Neutral wording] In the past 3 months, have you given any money to benefit charities or other organizations?


Chris: Um, I used to. Hard to say if I did in the past 3 months or not, after my hours got cut [Pause] Um, yes I think I did. I gave $27.45 to the Humane Society and $2 to someone on the street – does that count?


Interviewer: Um, it’s just organizations that we’re looking for. Was it for an organization?


Chris: I think so.


Interviewer: Ok, we’ll count it.


Chris: Ok.


Finished reading the profile and the interview?







Appendix D: Story follow-up questions

After each scenario, participants will be asked a series of follow-up questions about the scenario character and survey topic. The questions follow the same format for each scenario and will be presented in the same order for each scenario. Participants will only be asked about the group of four scenarios they read earlier. The questions customized for the Wendy scenario are below as an example.


Follow-up Questions for Story with Wendy


Please answer a few questions about the story you just read. 

 

How sensitive do you think [Wendy/ the interviewer] felt while [answering / asking] this question?

 

Not at all sensitive

Slightly sensitive

Moderately sensitive

Very sensitive

Extremely sensitive



How personal do you think [Wendy/ the interviewer] felt while [answering / asking] this question?

 

Not at all personal

Slightly personal

Moderately personal

Very personal

Extremely personal



How similar are you to Wendy?


Not at all similar

Slightly similar

Moderately similar

Very similar

Extremely similar



How negatively or positively do you think most people would evaluate Wendy?


Very negatively

Somewhat negatively

Neither negatively nor positively

Somewhat positively

Very positively



How easy or difficult was it to put yourself in Wendy’s shoes when reading the scenario?


Very easy

Somewhat easy

Neither easy nor difficult

Somewhat difficult

Very difficult



How likely do you think it is that Wendy will participate in similar interviews in the future?


Not at all likely

Slightly likely

Moderately likely

Very likely

Completely likely



How happy do you think Wendy was with her decision to participate in this interview?


Not at all happy

Slightly happy

Moderately happy

Very happy

Completely happy


Appendix E: Topic-specific follow-up questions

Immediately after these follow-up questions, the participants will be asked a final topic-related question about the scenario. The questions for each scenario are as follows. Participants will only see the questions relevant to the scenarios that they viewed.


Wendy

Thinking about the real world now - in your opinion, how easy or difficult is the job market facing recent college graduates these days?


Very easy

Somewhat easy

Neither easy nor difficult

Somewhat difficult

Very difficult



Beth

Thinking about the real world now – in your opinion, how important would you say that manufacturing is to the health of the US economy?


Very important

Somewhat important

Neither important nor not important

Somewhat not important

Not at all important



Charlene

Thinking about the real world now – in your opinion, how easy or difficult is it for older people to find jobs these days?


Very easy

Somewhat easy

Neither easy nor difficult

Somewhat difficult

Very difficult



Pat

Thinking about the real world now – in your opinion, how important would you say it is for companies to ensure all of their employees can make ends meet?


Very important

Somewhat important

Neither important nor not important

Somewhat not important

Not at all important





Daniel

Thinking about the real world now – in your opinion, how important is it to spend some time on leisure activities while unemployed?


Very important

Somewhat important

Neither important nor not important

Somewhat not important

Not at all important



Sam

Thinking about the real world now – in your opinion, how important would you say it is for someone with a mild alcohol addiction to stop drinking altogether?


Very important

Somewhat important

Neither important nor not important

Somewhat not important

Not at all important



Alex

Thinking about the real world now – in your opinion, how important would you say it is for people to disclose their annual income for an economic survey?


Very important

Somewhat important

Neither important nor not important

Somewhat not important

Not at all important



Chris

Thinking about the real world now – in your opinion, how important would you say it is for people to give money to charity?


Very important

Somewhat important

Neither important nor not important

Somewhat not important

Not at all important

Appendix F: Distractor task

After reading the four scenarios and answering the follow-up questions, the participants will be asked to complete a distractor task. The page with the distractor task will automatically advance after 30 seconds.


Distractor task


Thank you for completing the first part of our study. You’re about halfway through.

Next we’d like to ask you to complete a short number game. You have 30 seconds to try to get as many right as you can.

Two numbers will appear on the screen, like in this example below.


 
Count backwards from the big number, by subtracting the value of the small number. 


Type in the numbers as you go. In this example,

 

651 - 7 = 644, so you would first enter 644,

644 - 7 = 637, so you would then enter 637,

637 - 7 = 630, so you would then enter 630, and so on.

 

You have 30 seconds before the game ends.



Your 30 seconds starts now!

 Count down from
 [random number between 141 and 999]

by
7

[10 text entry boxes]



Time’s up!


Appendix G: Recall questions for sensitive information


After completing the distractor task, participants will be asked questions about the four scenarios. The questions will be specific to the scenario; each scenario’s questions are as follows below. The questions about each scenario will be grouped together but the order of those questions and the order of the scenarios will be randomized for each participant. Participants will only see the questions related to the scenarios they saw earlier.



Earlier you read stories where characters answered questions about [employment / sleep habits, money spent on alcohol, charity donations, and income]. Next are questions about those stories. We are interested in seeing how much you remember about them. Even if you don’t remember the information, give it your best guess.

 


Wendy


Thinking about Wendy...


How many job interviews did Wendy go on?

[text entry]


How many job offers did Wendy get?

[text entry]


How many second interviews did Wendy go on?

 [text entry]



Beth

 

Thinking about Beth...


How many days has Beth been looking for work?

[text entry box] (number validation)


How many job offers did Beth get?

[text entry box] (number validation)


How many resumes did Beth send to other manufacturing companies in the area?

[text entry box] (number validation)





















Charlene

 

Thinking about Charlene...


How many weeks has Charlene been looking for work?

[text entry box] (number validation)


How many job offers did Charlene get?

[text entry box] (number validation)


How old is Charlene?

[text entry box] (number validation)




















Pat

 

Thinking about Pat...


How many jobs did Pat apply to?

[text entry box] (number validation)


How many miles away is the furthest hotel that Pat applied to?

[text entry box] (number validation)


How many job fairs did Pat attend to find work?

[text entry box] (number validation)




















Daniel

 

Thinking about Daniel...


How many hours of sleep does Daniel get on a typical weekday?

[text entry box] (number validation; hours and minutes)


What time does Daniel wake up on a typical weekday?

[text entry box] (time validation; hours and minutes)


What time does Daniel go to sleep on a typical weekend?

[text entry box] (time validation; hours and minutes)




















Sam

 

Thinking about Sam...


How much money in total did Sam say he usually spends on alcohol every month?

[text entry box] (number validation of dollars and cents)


How much money did Sam spend on beer in August?

[text entry box] (number validation of dollars and cents)


How much money did Sam spend on beer in July?

[text entry] (numeric validation of dollars and cents)



















Alex

 

Thinking about Alex...


How much money does Alex make?

[text entry box] (number validation)


How much money did Alex earn at his current job before his pay cut?

[text entry box] (number validation)


How much money did Alex earn at his previous construction job?

[text entry box] (number validation)




















Chris

 

Thinking about Chris...


How much money in total did Chris donate to the Humane Society?
[text entry box] (number validation)


To how many organization(s) did Chris give money?

[text entry box] (number validation)


How many hours did Chris volunteer at the Humane Society?

[text entry] (numeric validation)

Appendix H: Global sensitivity questions


The participants will then be asked to answer questions about the scenarios in general.


Once again, think back to the stories you read earlier about [employment / sleep habits, money spent on alcohol, charity donations, and income]. Please think about those stories as a whole and how the characters and interviewers felt while answering the following questions.


As a whole, how sensitive do you think the characters felt while answering the questions?

  • Not at all sensitive

  • Slightly sensitive

  • Moderately sensitive

  • Very sensitive

  • Extremely sensitive

As a whole, how sensitive do you think the interviewers felt while asking the questions?

  • Not at all sensitive

  • Slightly sensitive

  • Moderately sensitive

  • Very sensitive

  • Extremely sensitive

As a whole, how honest do you think the characters were in answering the questions?

  • Not at all honest

  • Slightly honest

  • Moderately honest

  • Very honest

  • Extremely honest

Under which circumstance do you think the characters would be most likely to provide honest responses?

  • Talking to an interviewer in-person

  • Talking to an interviewer on the telephone

  • A paper-and-pencil survey sent by mail

  • An online survey sent over the Internet

  • A mobile survey sent to a smartphone

  • A mobile survey sent by text message

As a whole, how likely is it that the characters wanted to skip over or not answer the questions?

  • Not at all likely

  • Slightly likely

  • Moderately likely

  • Very likely

  • Completely likely

As a whole, how likely is it that the interviewers wanted to skip over or not have to ask the questions?

  • Not at all likely

  • Slightly likely

  • Moderately likely

  • Very likely

  • Completely likely

As a whole, how much regret do you think the characters felt about participating in the interviews?


  • No regret

  • Slight regret

  • Moderate regret

  • A lot of regret

  • Extreme regret


Appendix I: Attitude and Individual Differences

Next, the participants will be asked about their attitudes toward the survey topics.


Thank you. Now we’d like to know more about your thoughts on the story topics you read about today.


In general, how easy or difficult is it for the average American to find a job these days?

  • Extremely easy

  • Very easy

  • Somewhat easy

  • Neither easy nor difficult

  • Somewhat difficult

  • Very difficult

  • Extremely difficult


How long do you think it takes for the average American to find a job?

Answer in whichever unit is easiest for you:

  • ___ days

  • ___ weeks

  • ___ months

[text entry number validation]



How many hours per night do you think is appropriate for most people to sleep?

[text entry number validation]



How many alcoholic beverages do you think the average American consumes in one week?

[text entry number validation]



How willing do you think the average American would be to disclose his or her income to an interviewer?

  • Not at all willing

  • Slightly willing

  • Moderately willing

  • Very willing

  • Completely willing



How much money do you think the average American donates to charity each year?

[text entry number validation]



Grid with all questions for the scale appearing on one page

1 2 3 4 5

(favor) (oppose)


Please tell us how strongly you favor or oppose the government asking people questions to use for federal statistics about each of the following topics…

[Row headers: employment status, sleep habits, money spent on alcohol, charitable donations, income]

  • Favor

  • Slightly favor

  • Neither favor nor oppose

  • Slightly oppose

  • Oppose


The participants will then be asked to answer questions about themselves by completing a series of short scales. The scales will be presented in a grid format.


Now please tell us a bit about yourself. Read the following questions and choose the answer option that best describes you.


Perspective taking scale – Grid with all questions for the scale appearing on one page

1 2 3 4 5

(does not describe me well) (describes me very well)


  1. I sometimes find it difficult to see things from the "other guy's" point of view.

  2. I try to look at everybody's side of a disagreement before I make a decision.

  3. I sometimes try to understand my friends better by imagining how things look from their perspective.

  4. If I'm sure I'm right about something, I don't waste much time listening to other people's arguments.

  5. I believe that there are two sides to every question and try to look at them both.

  6. When I'm upset at someone, I usually try to "put myself in his shoes" for a while.

  7. Before criticizing somebody, I try to imagine how I would feel if I were in their place.


Empathic concern scale – Grid

1 2 3 4 5

(does not describe me well) (describes me very well)


  1. I often have tender, concerned feelings for people less fortunate than me.

  2. Sometimes I don't feel very sorry for other people when they are having problems.

  3. When I see someone being taken advantage of, I feel kind of protective towards them.

  4. Other people's misfortunes do not usually disturb me a great deal.

  5. When I see someone being treated unfairly, I sometimes don't feel very much pity for them.

  6. I am often quite touched by things that I see happen.

  7. I would describe myself as a pretty soft-hearted person.



Balanced Inventory of Socially Desirable Responding – 2 Grids

1 2 3 4 5

(strongly disagree) (strongly agree)


  1. My first impressions of people usually turn out to be right.

  2. It would be hard for me to break any of my bad habits.

  3. I have not always been honest with myself.

  4. I always know why I like things.

  5. Once I’ve made up my mind, other people can seldom change my opinion.

  6. It’s hard for me to shut off a disturbing thought.

  7. I never regret my decisions.

  8. I rarely appreciate criticism.

  9. I am very confident of my judgments.

  10. I don’t always know the reasons why I do things.


1 2 3 4 5

(strongly disagree) (strongly agree)


  1. I sometimes tell lies if I have to.

  2. I never cover up my mistakes.

  3. I always obey laws, even if I am unlikely to get caught.

  4. I have said something bad about a friend behind his or her back.

  5. When I hear people talking privately, I avoid listening.

  6. I have received too much change from a salesperson without telling him or her.

  7. When I was young I sometimes stole things.

  8. I have done things that I don’t tell other people about.

  9. I never take things that don’t belong to me.

  10. I don’t gossip about other people’s business.



Appendix J: Demographic Questions


  1. How old are you? ___ [validate two digits]


  2. What is your gender?

    • Male

    • Female


  3. Which of the following best describes you?

    • Employed full time

    • Employed part time

    • Unemployed

    • Student

    • Retired


  4. Are you Hispanic or Latino?

    • Yes

    • No

  5. What is your race? Please select one or more.

    • American Indian or Alaska Native

    • Asian

    • Black or African American

    • Native Hawaiian or Other Pacific Islander

    • White



  6. Which of the following best describes your highest level of education?

    • Less than high school

    • High school diploma or equivalent

    • Some college

    • Associate’s degree or Bachelor’s degree

    • Master’s degree or Doctoral degree




Appendix K: Thank You


The participants will be thanked for their participation.



Thank you for participating in our study.


If you have any comments you would like to share, please use the space below.

[text entry box]


References

Adams, S. A., Matthews, C. E., Ebbeling, C. B., Moore, C. G., Cunningham, J. E., Fulton, J., & Hebert, J. R. (2005). The effect of social desirability and social approval on self-reports of physical activity. American Journal of Epidemiology, 161(4), 389-398.

Bahrick, H. P., Hall, L. K., & Berger, S. A. (1996). Accuracy and distortion in memory for high school grades. Psychological Science, 7(5), 265-271.

Barnett, J. (1998). Sensitive questions and response effects: an evaluation. Journal of Managerial Psychology, 13(1/2), 63-76.

Belli, R. F., Traugott, M. W., Young, M., & McGonagle, K. A. (1999). Reducing vote overreporting in surveys: Social desirability, memory failure, and source monitoring. Public Opinion Quarterly, 63(1), 90-108.

Bradburn, N. M., Sudman, S., Blair, E., & Stocking, C. (1978). Question threat and response bias. Public Opinion Quarterly, 42(2), 221-234.

Bradburn, N. M., Sudman, S., & Wansink, B. (2004). Asking questions: the definitive guide to questionnaire design--for market research, political polls, and social and health questionnaires. John Wiley & Sons.

Davis, M. H. (1983). Measuring individual differences in empathy: Evidence for a multidimensional approach. Journal of Personality and Social Psychology, 44(1), 113-126.

De Schrijver, A. (2012). Sample survey on sensitive topics: Investigating respondents' understanding and trust in alternative versions of the randomized response technique. Journal of Research Practice, 8(1).

Kreuter, F., Presser, S., & Tourangeau, R. (2008). Social desirability bias in CATI, IVR, and Web surveys: The effects of mode and question sensitivity. Public Opinion Quarterly, 72(5), 847-865.

Krumpal, I. (2013). Determinants of social desirability bias in sensitive surveys: a literature review. Quality & Quantity, 47(4), 2025-2047.

Loftus, E. F., Miller, D. G., & Burns, H. J. (1978). Semantic integration of verbal information into a visual memory. Journal of Experimental Psychology: Human Learning and Memory, 4(1), 19-31.

McNeeley, S. (2012). Sensitive issues in surveys: Reducing refusals while increasing reliability and quality of responses to sensitive survey items. In Handbook of survey methodology for the social sciences (pp. 377-396). Springer New York.

Näher, A. F., & Krumpal, I. (2012). Asking sensitive questions: the impact of forgiving wording and question context on social desirability bias. Quality & Quantity, 46(5), 1601-1616.

Paulhus, D. L. (1984). Two-component models of socially desirable responding. Journal of Personality and Social Psychology, 46(3), 598-609.

Peter, J., & Valkenburg, P. M. (2011). The impact of “forgiving” introductions on the reporting of sensitive behavior in surveys: The role of social desirability response style and developmental status. Public Opinion Quarterly, 75(4), 779-787.

Rasinski, K. A., Willis, G. B., Baldwin, A. K., Yeh, W., & Lee, L. (1999). Methods of data collection, perceptions of risks and losses, and motivation to give truthful answers to sensitive survey questions. Applied Cognitive Psychology, 13(5), 465-484.

Sudman, S., & Bradburn, N. M. (1982). Asking questions: A practical guide to questionnaire design. San Francisco: Jossey-Bass.

Tourangeau, R., & Smith, T. W. (1996). Asking sensitive questions: The impact of data collection mode, question format, and question context. Public Opinion Quarterly, 60(2), 275-304.

Tourangeau, R., & Yan, T. (2007). Sensitive questions in surveys. Psychological Bulletin, 133(5), 859-883.



