OMB Control Number: 1220-0141
Expiration Date: 07/31/2027
DATE: September 9, 2024
NOTE TO THE REVIEWER OF: OMB CLEARANCE #1220-0141 “Cognitive and Psychological Research”
FROM: Robin Kaplan, Research Psychologist, Office of Survey Methods Research (OSMR)
SUBJECT: Submission of Materials for Select All versus Forced Choice Research Study
Please accept the enclosed materials for approval under the OMB clearance package #1220-0141 “Cognitive and Psychological Research.” In accordance with our agreement with OMB, I am submitting a brief description of the study.
The total estimated respondent burden for this study is 500 hours.
If there are any questions regarding this project, please direct them to Robin Kaplan (202-691-7383; [email protected]).
Introduction and Purpose
A common practice among survey researchers when asking about a battery of items is to use a select all that apply format. This format asks respondents to endorse individual items within a single question rather than asking about each item as a separate question (i.e., a forced choice format). The select all format is appealing because it reduces the amount of space or the number of questions within a survey (Lau and Kennedy 2019) and reduces respondent burden (Rasinski, Mingay, and Bradburn 1994).
While the select all that apply format is popular for its efficiency, an unintended consequence is the potential for reduced data quality. The alternative forced choice format, in which respondents are asked to consider and respond yes or no to each item individually, has been shown to result in greater item-level endorsement (Callegaro et al. 2015; Lau and Kennedy 2019; Rasinski, Mingay, and Bradburn 1994; Smyth et al. 2006). Researchers have concluded that the forced choice format results in greater item-level endorsement because respondents engage in deeper cognitive processing (e.g., Smyth et al. 2006). However, others have suggested that the greater item-level endorsement may be due to acquiescence bias (Callegaro et al. 2015). While the reasons for the increased endorsement remain debated, researchers agree that the lack of validation studies makes determinants of accuracy speculative at best.
Previous research has focused on quantifying the effect, or on approaches to make selection instructions more salient (Kunz and Fuchs 2018). However, more research is needed to understand the effect of the question referent, that is, whether the question asks about recalled behaviors, recalled facts or knowledge, or attitudes that may be recalled or created on the spot (Converse 1964; Tourangeau, Rips, and Rasinski 2000). Evidence of this can be seen in the work of Rasinski, Mingay, and Bradburn (1994). The authors found increased endorsement for forced choice formats in comparison to select all; however, this effect differed in strength across the three questions tested. One question was knowledge-based, about what a school offered. Another question was nearly hypothetical, asking why students were not going to engage in a future activity. The third question was based on recalled experiences and had the most response options to review. The question asking students to consider future actions showed the largest difference between formats, while the question with the most response options showed greater endorsement of options listed first. Clearly, there is potential for the question referent to interact with question format.
Given the strong evidence that question format impacts response distributions, this research will extend previous research by investigating whether this effect is moderated by the question referent (i.e., question type – behavioral, factual, or attitudinal). To examine this, we will randomly assign participants to select all versus forced choice (yes/no) question formats. Question topics include the following: behavioral (e.g., apps people used to purchase goods or services, job search methods), factual (e.g., appliances that came with their home or rental), and attitudinal (e.g., attitudes towards privacy and surveys). In addition, questions about subjective burden and measures of objective burden (i.e., duration of time spent on each survey page and on the overall survey) will be assessed to investigate the idea that forced choice formats are more cognitively taxing because respondents must consider each item individually. This research will further advance OSMR’s efforts to conduct research on adding web modes to existing BLS surveys and will inform best practices for self-administration of questions that have multiple response categories.
Research Design and Procedures
This research will be conducted online via web instruments programmed in Qualtrics. A total of 3,000 participants will complete a web survey consisting of basic demographic questions and a series of questions in either a select all that apply format or a forced choice format. Participants will be randomly assigned to the select all versus forced choice formats. Each question grid will include 7 to 10 individual items. Questions and response categories will be displayed in randomized order to avoid primacy or other order effects. Question topics include the following: behavioral (apps they used to purchase goods or services), factual (appliances that came with their home or rental), attitudinal (attitudes towards privacy and surveys), and hypothetical (things they may or may not do in the next 4 weeks). One of the attitude questions will include a manipulation in which half the participants receive a question stem worded in the affirmative (“Which of the following statements reflect how you feel about surveys?”) and half receive a stem worded in the negative (“Which of the following statements do NOT reflect how you feel about surveys?”), to assess differences in response distributions for positively worded versus negatively worded items. One of the hypothetical questions will include a manipulation in which half the participants receive a set of response options with low expected incidence and half receive a version in which some of those options are replaced with options that have higher expected incidence. This will assess whether acquiescence bias influences the number of categories selected. Participants will also complete brief questions about their perceptions of how burdensome the survey was, how easy or difficult it was to answer the questions, and the accuracy of their responses. See Attachment A for the full web survey instrument.
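To make the experimental structure concrete, the minimal sketch below shows one hypothetical way the condition assignments described above could be represented; the actual randomization will be handled within Qualtrics, and the condition labels used here are illustrative placeholders only.

```python
# Illustrative sketch only (not the Qualtrics implementation) of the random
# assignment to the experimental conditions described above. Condition names
# are hypothetical placeholders.
import random

QUESTION_FORMATS = ["select_all", "forced_choice"]        # primary manipulation
ATTITUDE_STEMS = ["affirmative", "negative"]              # attitude question stem wording
HYPOTHETICAL_OPTION_SETS = ["low_incidence", "mixed_incidence"]  # option set manipulation

def assign_conditions(participant_id: int) -> dict:
    """Randomly assign a single participant to each experimental condition."""
    rng = random.Random(participant_id)  # seeded per participant for reproducibility
    return {
        "participant_id": participant_id,
        "question_format": rng.choice(QUESTION_FORMATS),
        "attitude_stem": rng.choice(ATTITUDE_STEMS),
        "hypothetical_options": rng.choice(HYPOTHETICAL_OPTION_SETS),
    }

# Example: condition assignments for the first three participants
for pid in range(3):
    print(assign_conditions(pid))
```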
Participants will be recruited via two online non-probability platforms: Amazon Mechanical Turk (n=1,500) and the CloudResearch Connect web panel (n=1,500). Participants will sign up for the survey, which will take approximately 10 minutes to complete. Participants from both panels will be required to be 18 years or older and live in the United States.
Because we do not know the “true value” of participants’ responses to these questions, we will not be able to assess the accuracy of each question format. However, if one format consistently produces higher levels of endorsement, this would suggest that the question format impacted the endorsement of response options. Additional analyses will include the number of categories endorsed as a function of question format and topic, crossed with covariates of interest (e.g., age, device type, education, measures of subjective burden, time spent per question and on the total survey, break-offs, and differences by recruitment panel).
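As an illustration of this analysis plan, the sketch below shows one way the format-by-topic comparison with covariates could be carried out; the file name and variable names are hypothetical placeholders, and the actual variables will come from the survey export.

```python
# Illustrative analysis sketch; the file and column names below are hypothetical
# placeholders for the variables described in the analysis plan.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed long-format file: one row per respondent x question topic, with the
# number of categories endorsed, experimental condition, and covariates.
df = pd.read_csv("responses_long.csv")

# Mean number of categories endorsed by question format and topic
print(df.groupby(["question_format", "topic"])["n_endorsed"].mean())

# Format x topic interaction, adjusting for covariates of interest
model = smf.ols(
    "n_endorsed ~ question_format * topic + age + device_type + education"
    " + subjective_burden + duration + panel",
    data=df,
).fit()
print(model.summary())
```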
Additional research: We will also examine what effect, if any, autocomplete functionality in a self-response survey has on the response experience and data quality. To assess how well participants can classify the type of industry or occupation in which they work, participants who are currently employed will be asked an open-ended question about the type of industry or occupation in which they work. They will then receive an autocompleted list of industries (by NAICS code) and occupations to select from, based on the Occupational Employment and Wage Statistics program at BLS. The open-ended descriptions will be compared with the dropdown selections to assess participants’ ability to self-code their industry and occupation. Although the algorithm generating the autocompleted list of industries and occupations is proprietary to the survey platform Qualtrics, our research questions are not specific to the list but rather concern participant behavior when using the list. The results will help inform the design of self-administered versions of these questions for web surveys.
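A minimal sketch of this comparison appears below, assuming the open-ended responses have already been coded to the same industry frame as the dropdown list; the file and column names are hypothetical.

```python
# Illustrative sketch of comparing coded open-ended responses with participants'
# own dropdown selections; file and column names are hypothetical placeholders.
import pandas as pd

# Assumed columns: open_ended_industry_code (coded from the open-ended response)
# and dropdown_industry_code (the participant's autocomplete selection)
df = pd.read_csv("industry_occupation.csv")

# Share of employed participants whose dropdown selection matches the code
# assigned to their open-ended description
agreement = (df["open_ended_industry_code"] == df["dropdown_industry_code"]).mean()
print(f"Industry self-coding agreement rate: {agreement:.1%}")
```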
Participants and Burden Hours
Online Surveys: A total of 3,000 participants will participate in the study. The survey will take approximately 10 minutes per participant, for a total of 500 burden hours (3,000 participants × 10 minutes ÷ 60 minutes per hour = 500 hours). Recruitment will be done via MTurk (n=1,500) and the CloudResearch Connect platform (n=1,500).
| Mode | Participants Contacted | Recruitment Hours | Recruitment Total Hours | Participants Completed | Session Hours | Session Total Hours | Total Collection Burden |
| Online Survey | 3,000 | 0.00 | 0 | 3,000 | 0.17 | 500 | 500 |
| Total | | | | | | | 500 hours |
Payment
For this study, participants who complete the online survey will receive $2.50, a typical rate for similar tasks.
Data Confidentiality
Online survey participants will be informed of the OMB number and the voluntary nature of the study, but will not be given an assurance of confidentiality.
This voluntary study is being collected by the Bureau of Labor Statistics under OMB No. 1220-0141 (Expiration Date: July 31, 2027). Without this currently approved number, we could not conduct this survey. This survey will take approximately 10 minutes to complete. If you have any comments regarding this estimate or any other aspect of this study, send them to [email protected]. The BLS cannot guarantee the protection of survey responses and advises against the inclusion of sensitive personal information in any response. This survey is being administered by Qualtrics and resides on a server outside of the BLS Domain. Your participation is voluntary, and you have the right to stop at any time.
Attachments
Attachment A: Web survey instrument