
U.S. Department of Justice


Office of Justice Programs


Bureau of Justice Statistics

Washington, D.C. 20531


MEMORANDUM



To: Robert Sivinski

Office of Statistical and Science Policy

Office of Management and Budget


Through: Melody Braswell

Clearance Officer

Justice Management Division


Jeffrey H. Anderson

Director

Bureau of Justice Statistics


Allen J. Beck

Senior Statistical Advisor


Guy Burnett

Senior Advisor


From: Heather Brotsos

Chief, Victimization Statistics Unit

Date: July 29, 2020


Re: BJS Request for OMB Clearance for Testing of Proposed Revisions to the National Crime Victimization Survey hate crime questions under the BJS Generic Clearance Agreement (OMB Number 1121-0339)


The Bureau of Justice Statistics (BJS) requests clearance for tasks related to cognitive interviewing and testing of proposed revisions to the National Crime Victimization Survey (NCVS) hate crime questions under the BJS OMB generic clearance agreement (OMB Number 1121-0339). These cognitive interviewing and testing tasks will focus on the series of NCVS questions used to determine whether a victim perceives, and has evidence, that the offender was motivated by bias against persons who share the victim's characteristics or religious beliefs.

The NCVS hate crime questions were developed in 1999 to complement the data collected by the FBI under the Hate Crime Statistics Act and have been included in NCVS data since 2003. As part of the larger, planned NCVS instrument redesign, BJS has worked to further examine and improve these items, focusing on two related issues: (a) improving comprehension of the items and (b) improving measurement validity, in particular reducing the likelihood of false positives. NCVS hate crime counts, which are based largely on the victim's perception that a crime was motivated by bias, are consistently higher than counts generated from the FBI's two-tiered determination process that follows a criminal investigation. Some of this difference is expected, because the NCVS also captures crimes not reported to police and will therefore always produce higher counts than the FBI data. However, there are concerns that the original NCVS evidence items used to classify incidents as hate crimes might be too broad, resulting in false positive responses. For example, the NCVS incident summaries suggest that victims may answer the hate crime questions affirmatively if they believe they were targeted because of a perceived physical vulnerability, rather than prejudice or bigotry on the part of the offender. The changes proposed for testing are intended to reduce the complexity of the items and the terminology used, and to clarify the underlying concepts, in order to improve overall measurement validity. These changes are described further in the sections below.


Under this clearance, two versions of the hate crime questions will be administered to a maximum of 5,000 total respondents through a short (3-5 minute) web-based survey. Respondents to the survey will also be asked about their interest in participating in an additional cognitive interview. The structure of the cognitive interviews will vary depending on whether the respondent reported experiencing a hate crime in the web-based survey. Approximately 60 cognitive interviews will be conducted (20 victims; 40 nonvictims). The web survey and cognitive interviewing approaches are described in more detail below.


This memo first provides background on the NCVS hate crime questions and proposed revisions. Next is a description of the proposed testing procedures, followed by descriptions of language, timeline, burden hours, cost, reporting, protection of human subjects, informed consent, and data confidentiality and security.


This clearance request covers only these two testing exercises. Any changes to the actual administration of the NCVS in the field will be addressed through a separate clearance.


  1. Background on the NCVS Hate Crime Questions and Proposed Revisions


BJS first added hate crime questions to the NCVS in 1999; these questions were available on the public-use data files beginning in 2003. The questions were designed to be used in conjunction with the data from the FBI’s Uniform Crime Reports Hate Crime Statistics Program. Both collections, which are the two major sources of hate crime data in the United States, use the definition of hate crime from the Hate Crime Statistics Act (28 U.S.C. § 534). The act defines hate crimes as “crimes that manifest evidence of prejudice based on race, gender or gender identity, religion, disability, sexual orientation or ethnicity.” The NCVS measures crimes motivated by an offender’s perception that a victim belongs to one of these protected groups or is associated with the protected group. It captures incidents described by victims as hate crimes but cannot directly measure the offenders’ intent.


There are three major elements that go into the NCVS classification of a hate crime: the type of offense committed, the type of bias motivating the act, and the evidence demonstrating the offender's bias. A survey respondent must first answer affirmatively to one of the NCVS screener questions and identify that he or she experienced one or more criminal victimizations within the scope of the survey – rape or sexual assault, robbery, assault, burglary, motor vehicle theft, or other theft – in the prior six months. Next, the victim must perceive that the offender was motivated by bias because of the victim's status in a protected group. Finally, the victim must have evidence that the offender was motivated by hate. The current NCVS hate crime series asks about seven different types of potential evidence, and BJS treats three of them (the offender used hate language, the offender left hate-related signs or symbols at the scene, or the incident was confirmed to be a hate crime by police investigators) as sufficient evidence to classify the offense as a hate crime.
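
To make this three-part decision rule concrete, the minimal sketch below restates the logic in Python. The names, category labels, and function are illustrative assumptions only, not actual NCVS variables or BJS processing code.

    # Illustrative sketch of the three-part classification rule described
    # above; names and labels are hypothetical, not NCVS variables.

    IN_SCOPE_OFFENSES = {
        "rape or sexual assault", "robbery", "assault",
        "burglary", "motor vehicle theft", "other theft",
    }

    # Of the seven evidence types asked about, these three are treated by
    # BJS as sufficient to classify an offense as a hate crime.
    QUALIFYING_EVIDENCE = {
        "offender used hate language",
        "hate-related signs or symbols left at the scene",
        "police confirmed the incident was a hate crime",
    }

    def is_classified_as_hate_crime(offense, victim_perceived_bias,
                                    evidence_reported):
        """All three elements must be present: an in-scope offense, a
        perceived bias motivation, and at least one qualifying type of
        evidence."""
        return (offense in IN_SCOPE_OFFENSES
                and victim_perceived_bias
                and bool(set(evidence_reported) & QUALIFYING_EVIDENCE))

    # Example: an in-scope assault with perceived bias and hate language
    # reported would be classified as a hate crime.
    assert is_classified_as_hate_crime(
        "assault", True, ["offender used hate language"])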


NCVS hate crime counts are considerably higher than the FBI's counts. At least some of this difference in magnitude is to be expected and is attributed to the NCVS capturing crimes that are not reported to police. However, it is also possible that NCVS victims answer affirmatively to questions about whether they were targeted because of their characteristics or religious beliefs for reasons other than the offender's prejudice. For example, respondents may answer the hate crime questions affirmatively if they believe that the offender targeted them because of a perceived vulnerability. It may also be difficult for victims to distinguish between general aggression on the part of an offender and an offender's bias against their particular demographic characteristics or religious affiliation. Although the NCVS includes items designed to filter out crimes with no evidence of bias, nearly all (99%) of the crimes classified as hate crimes qualify because the offender used hurtful or abusive language. The current version of this evidence item does not specify that the language had to be related to the protected characteristics, so the NCVS may be classifying as hate crimes incidents that would not legally be considered hate crimes. On the other hand, because BJS requires the presence of one of three types of evidence to classify a victimization as a hate crime, some hate crime victims may be misclassified as non-hate-crime victims because they did not report any of the three types of evidence.


As part of planning toward the NCVS instrument redesign efforts, BJS and its contractors tested an initial set of modified hate crime items (under a separate OMB clearance) and conducted a more in-depth secondary analysis of the current NCVS items to understand their potential for false negative and false positive classifications of hate crime in the NCVS. In July 2019, BJS received results from a small (15-person) cognitive test of the redesigned NCVS instrument that included modifications to the hate crime items (see instrument version 1 in appendix 3). In particular, the initial screener item was modified to mention the legally protected characteristics, and clarifying language was added to indicate that the evidence had to be related to those characteristics. Testing results highlighted a few areas of concern about whether respondents fully understood the hate crime language. Of the 15 victims surveyed:

  • Three victims thought that a hate crime based on sex was about an act of sex.

  • Seven respondents were confused about the difference between sex and gender.

  • Most respondents could not articulate a definition for prejudice or discrimination, although all indicated understanding the intent of the words.

  • Three victims indicated the crime was a hate crime, and when asked the evidence items, one of those victims affirmed that “the incident happened around a holiday, event, or place commonly associated with a specific group” because the rape occurred on Valentine’s Day.


Despite the above areas of concern, overall, most of the victims’ responses were aligned with their crime narratives, suggesting general understanding of the hate crime items. Given the small sample, which included only 3 victims who believed a hate crime was perpetrated against them, additional testing with a larger sample and more hate crime victims is warranted.


From September 2019 to May 2020, BJS also worked with researchers to examine the logic, consistency, flow, and general validity of the NCVS hate crime items currently in the field, to develop findings that would inform the ongoing redesign of the hate crime section of the new instrument. The analysis included questions such as the following: How often did respondents state that the crime was motivated by racial bias when the offender and victim were the same race? How often did respondents indicate that the crime was motivated by three or more biases? How often did respondents say the police told them it was a hate crime but also report that the crime was not reported to police? The analysis also incorporated findings from a review of the NCVS incident summaries that are collected by Census field interviewers at the end of NCVS interviews. The analysis resulted in several recommended changes to the hate crime questions that could help reduce some of the potential challenges with the questions, including making the reading level of the terminology more accessible and eliminating some of the broader questions that were used to skip victims into or out of all or subsets of the hate crime questions.


As a result of the research and development work to date, there are two versions of the hate crime questions that BJS proposes testing before implementing a final version as part of the planned redesigned NCVS instrument (see appendix 3). Both versions include modifications to improve measurement validity, including specific mention of the protected characteristics throughout; however, there are two major areas where the two versions of the instrument differ.

  • Flow –

    • Version one maintains the general flow and skip patterns traditionally used in the NCVS to date. It uses broad questions to skip respondents into or out of more detailed items. For example, “Did the offender(s) say something, write something, or leave something behind at the crime scene that made you think it was a hate crime?” If the respondent says ‘no,’ the questions about specific types of evidence are not administered.

    • Version two eliminates the broad questions. Rather than using a broad screener item, it leads by asking all victims to indicate whether they think they were targeted for the crime because of each protected characteristic or religious belief, with a yes/no response for each. Instead of asking the evidence screener, it moves directly into asking whether the incident involved specific types of evidence (a sketch of this flow difference follows the list below).

  • Terminology –

    • Version one leads with a definition of hate crime that relies on terms like “prejudice” and “bigotry,” and the term ‘hate crime’ is used throughout the instrument.

    • Version two does not use the term ‘hate crime’ until the end of the series of questions.
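
To illustrate the flow difference noted above, the following minimal sketch contrasts the two evidence sequences. The question wording is paraphrased, only three of the seven evidence items are shown for brevity, and the ask() helper (which administers one item and returns “yes” or “no”) is hypothetical.

    # Hedged sketch of the two flows; wording is paraphrased and ask() is
    # a hypothetical helper that administers one item and returns the answer.

    EVIDENCE_ITEMS = [  # three of the seven items, for brevity
        "Did the offender(s) use hate language?",
        "Did the offender(s) leave hate-related signs or symbols at the scene?",
        "Did the police confirm the incident was a hate crime?",
    ]

    def version_one_evidence_flow(ask):
        # Version 1: a broad screener gates the detailed evidence items.
        screener = ("Did the offender(s) say something, write something, or "
                    "leave something behind that made you think it was a "
                    "hate crime?")
        if ask(screener) != "yes":
            return []  # a 'no' on the screener skips all detailed items
        return [q for q in EVIDENCE_ITEMS if ask(q) == "yes"]

    def version_two_evidence_flow(ask):
        # Version 2: no screener; every detailed evidence item is asked.
        return [q for q in EVIDENCE_ITEMS if ask(q) == "yes"]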


Cognitive testing of these items will help determine whether respondents understand one set of questions better than the other, including the terminology used in each, and whether any misunderstanding has the potential to affect measurement quality. Testing these questions provides an empirical basis for determining which version performs better. The proposed testing efforts include a particular focus on the following issues.


  1. False positive responses – are respondents able to accurately identify the types of incidents that are covered by the questions; are respondents answering hate crime questions affirmatively based on experiences that are within the scope of the survey; are respondents able to distinguish aggression from hate; are respondents able to distinguish hate speech from threats of violence that are hate-motivated; are respondents thinking about incidents in which the offender was partially or wholly motivated by bias?

  2. Understanding of terminology – do respondents understand the terms being used (prejudice, bigotry, being targeted, perceived characteristics); are there other terms and phrasing that better or more simply convey the intended meaning for the questions?

  3. Bias motivation – do respondents accurately think about being targeted because of the offender’s perceptions; when respondents report multiple types of bias, how are they thinking about the offender’s motivation and can they identify a primary motivation; how do respondents distinguish between being targeted because of a perceived vulnerability versus prejudice against them?

  4. Evidence – how well do the types of evidence questions perform, in terms of distinguishing more clear-cut offenses from those that may not actually be hate-motivated; would other terms and phrasing better convey the intended meaning of the evidence questions; are there other concepts that should be captured as part of these questions; did the offender(s) specifically use hurtful or abusive language referring to the protected characteristics?


  2. Testing Procedures


In this memo, BJS is seeking generic clearance specifically to cover online testing and cognitive interviewing activities focused on the two versions of the hate crime questions. Because it is necessary to first identify whether a respondent has experienced a crime, respondents will also be asked a series of questions about their experiences with crime. Rather than using the full NCVS instrument to identify victims, which would be unnecessarily burdensome for the purposes of this testing effort, questions about victimization experiences will come from the BJS Local-Area Crime Survey.1 Because hate crime is a relatively rare event, the testing will use a 3-year reference period, rather than the typical 6-month NCVS reference period. The online testing and cognitive interviewing are expected to take place from August to October of 2020. The online testing provides the opportunity to collect responses to the hate crime questions from large numbers of respondents, as well as written descriptions of the incidents to enable assessment of the likelihood of false positive responses. Cognitive interviewing provides the opportunity to further probe victims on their perceptions and understanding of the questions. For nonvictims, hypothetical scenarios will be used in the cognitive interviewing exercise to understand how and why respondents answer the hate crime questions based on varied elements of potential hate crime scenarios.


    2.1 Online Testing


RTI International will conduct the online testing and cognitive interviewing. RTI has significant expertise using web-based platforms for data collection, crowdsourcing, and pilot testing and has investigated and utilized multiple online platforms, such as Cint, MTurk, Facebook, Twitter, and others (Keating & Furberg, 2013; Keating, Rhodes, & Richards, 2013; Richards, Dean, & Cook, 2013). These platforms have utility for quickly and efficiently collecting data from large numbers of adult respondents,2 reflecting the characteristics of the population of interest (in this case, all adult US residents). For the current effort, respondents will be recruited from the most popular crowdsourcing platform in the US, Amazon’s Mechanical Turk (MTurk), as well as through the Facebook and Instagram social media platforms (see Appendix 1 for the human intelligence task (‘HIT’) posting that will appear on MTurk and the advertisement that will appear on the two social media sites). Interested persons will be screened to ensure they are age 18 or older, English-speaking, and currently living in the United States.


The online testing effort will involve randomized self-administration of the two revised versions of the hate crime questions to a target sample of up to 5,000 respondents (each version will be administered to approximately 2,500 respondents). The target number of 5,000 assumes that about 2% of respondents (n=100) will report experiencing a hate crime in the past three years and complete the questions and narrative. Split across the two instruments, this would provide approximately 50 responses and narratives to review. If the survey captures the target number of victims prior to obtaining 5,000 responses, particularly victims willing to participate in the cognitive interviews, the data collection could be ended at that point.
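
The arithmetic behind these targets can be restated directly; the short sketch below simply recomputes the memo's own figures and introduces no new assumptions beyond the illustrative variable names.

    # The memo's yield arithmetic, restated.
    target_sample = 5_000       # total web survey respondents
    hate_crime_rate = 0.02      # ~2% expected to report a hate crime

    expected_victims = target_sample * hate_crime_rate   # 100 victims
    narratives_per_version = expected_victims / 2        # ~50 per instrument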


The anticipated number of responses will not be sufficient for detecting statistically significant differences between the two instruments in the number of false positive responses. Rather, the analysis will focus on reviewing the survey responses and narrative descriptions to identify whether one version of the questions better aligns with the crime narratives in terms of whether the crime is a hate crime or nonhate crime. RTI will examine how often each survey version appears to generate more false positives, and whether there are any apparent patterns in the types of bias and types of evidence reported in apparent false positive responses. Additionally, RTI will examine general comprehension of the items, and quality measures such as breakoffs, inconsistencies, and missing and ‘don’t know’ responses.


The survey is expected to take approximately 3 minutes for nonvictims and 5 minutes for victims (see Appendix 3 for the survey instruments). Those who complete the survey via MTurk will be paid $5 through the online platform. Similarly, those who complete the survey through Facebook or Instagram recruitment will be given the opportunity to receive a $5 Amazon.com gift card.


After the respondents complete the last survey question, they will be asked if they are interested in participating in a follow-up interview, conducted via a videoconferencing platform. If they are interested, respondents will be asked to provide an email address and phone number at which they can be reached. As a last step, all respondents, regardless of their interest in participating in the follow-up interview, will be taken to a webpage that includes a list of resources related to hate crime victimization that they can access if interested.



    2.2 Cognitive Interviewing


Cognitive interviews are an important tool for evaluating respondent understanding and ability to accurately answer survey questions. Cognitive interviews involve an interviewer administering the survey questions to a potential respondent and probing that respondent on how they interpreted the question, how difficult it was to answer, and their process for formulating their response. Cognitive interviews are generally conducted prior to fielding BJS survey instruments that are new or have been substantively altered.


Recruitment and Screening. Planned recruitment and cognitive interviewing activities reflect current COVID-19 pandemic conditions and the related federal, state, and local policies, including restrictions on geographic mobility, the closure of non-essential businesses (including RTI offices), and social distancing recommendations. For this effort, RTI will aim to conduct 60 cognitive interviews with persons age 18 or older via a videoconferencing platform, such as Zoom. Participants will be recruited from the pool of online survey respondents and through organizations that work with victims of hate crimes.


The goal will be to recruit about 20-30 respondents who identified as hate crime victims and about 30-40 respondents who either did not experience any crime or did not experience a hate crime specifically. If the recruitment effort results in a larger number of victims who express interest in participating, they will be prioritized over the nonvictims. We will attempt to recruit victims and nonvictims with as much variation in geographic location, demographic characteristics, and hate crime incident characteristics as possible.


Online survey recruitment

Participants who complete the online self-administered survey will be asked if they are interested in participating in an additional follow-up interview (see Appendix 4). Those who are interested will also be asked to confirm that they have access to:

  1. A private and safe area of their home (or another setting) where they can complete the interview out of earshot of other people and without interruption.

  2. A device with both audio and video capabilities for completing the interview, including a laptop, desktop, tablet, or smartphone.

  3. Wi-Fi, internet, or cellular service with enough available data to participate in a 60-minute video interview.


Once we have identified interested persons who meet the eligibility criteria, the recruiter will reach out to eligible respondents via email to:

  • provide an overview of the study,

  • explain that respondents who complete the cognitive interview will be offered a $40 electronic Amazon.com gift card to compensate for the costs associated with data and internet usage,

  • share an electronic copy of the informed consent form, and

  • provide a calendar for the potential respondent to enter availability for the cognitive interview, if he or she agrees to participate.


Once the respondent has provided dates of availability, the recruiter will send a calendar invitation, including a link to access the videoconferencing platform, to both the potential respondent and the interviewer. The recruiter will again attach a copy of the informed consent form to the calendar invitation. A day before the scheduled interview, the recruiter will send the potential respondent a reminder email with the date and time of the scheduled interview. A final follow-up reminder will be sent on the morning of the interview.


Known victim recruitment

BJS will work with victim service or other organizations to identify individuals who are age 18 or older, English speaking, experienced a hate crime in the past 3 years, and would be willing and able to participate in a cognitive interview without likely experiencing undue trauma. Once the victim participants are identified, BJS will provide the contact information to RTI and an RTI recruiter will follow the process detailed in the prior section.


Consent/Assent Procedures. At the start of the interview, the interviewer will introduce herself to the participant, confirm the participant’s name, and confirm that the participant is on video and can hear the interviewer well. The interviewer will then ask the participant to confirm that he or she is in a private area of their home or another private setting (out of earshot of other people). The interviewer will ask the participant to let her know if, at any point during the interview, they are interrupted or no longer feel they are in a private setting.


The interviewer will then read through the entire informed consent form, providing an opportunity for the respondent to ask any questions. The interviewer will document the respondent’s decision to participate, including the respondent’s willingness to have the interview recorded, and the interviewer will sign and date the consent form as a witness. Copies of the informed consent forms that will be shared with the respondent and used by the interviewer (including a signature block) are included in Appendices 5 and 6. The form for the respondent includes a list of national numbers to contact for resources on and assistance with hate crime.


Cognitive Interviews: The cognitive interviews will differ depending on whether the participant reported experiencing a hate crime.


Victims

Victims who completed the online survey will be administered the same hate crime questions that they answered through the survey. Those recruited from organizations that work with victims will be randomly administered one of the two versions of the hate crime questions. The goal will be to have approximately 10 victim respondents per instrument. All interviews will be conducted by experienced RTI staff who have completed training on the cognitive interview protocol (see Appendices 7a and 7c).


Interviewers will read each question aloud to respondents, record the response, and then, following the interview protocol, probe respondents to gauge their understanding of the question and how they formulated their response. In addition to the structured probes built into the protocol, the interviewer will also use spontaneous probes during the interview to get further clarification on respondent reactions to particular questions.


After the respondent has answered the questions based on his or her own experience, the interviewer will ask the respondent to answer the questions again based on a series of seven hypothetical vignettes that present situations that could be perceived as hate crimes (see Nonvictims). The respondent will be asked to answer the hate crime survey questions as a hypothetical victim. The interviewer will then probe the respondent to understand why they answered the questions the way they did.


Nonvictims

During the interview, the interviewer will show a vignette on the screen and ask the respondent to read the scenario aloud and answer the hate crime survey questions as though he or she is the victim. The interviewer will then probe the respondent to understand why they answered the questions the way they did (see Appendices 7b and 7d). Each respondent will be asked to read each vignette and answer the questions and probes for each vignette, but the interviewer will vary the order in which the vignettes are read.


Each cognitive interview is expected to take no more than 60 minutes to complete. The specific areas of focus for cognitive interviewing will be:

  1. Understanding of terminology – do respondents understand the terms being used (prejudice, bigotry, being targeted, perceived characteristics); are there other terms and phrasing that better or more simply convey the intended meaning for the questions?

  2. Bias motivation – do respondents accurately think about being targeted because of the offender’s perceptions; when respondents report multiple types of bias, how are they thinking about the offender’s motivation and can they identify a primary motivation; how do respondents distinguish between being targeted because of a perceived vulnerability versus prejudice against them?

  3. Evidence – how well do the types of evidence questions perform, in terms of distinguishing more clear-cut offenses from those that may not actually be hate-motivated; would other terms and phrasing better convey the intended meaning of the evidence questions; are there other concepts that should be captured as part of these questions; did the offender(s) specifically use hurtful or abusive language referring to the protected characteristics?


Upon completion of the protocol, the interviewer will give the respondent the option of receiving a $40 electronic Amazon.com gift card via email or text to compensate for the costs associated with data and internet usage.



  3. Language


The online testing and cognitive interviews will be conducted in English.


  4. Timeline

Milestone                                             Start Date    End Date
Obtain OMB generic clearance                          7/30/20       8/13/20
Recruitment and testing period                        8/13/20       10/9/20
Analyze data and develop instrument recommendations   10/12/20      10/26/20
Draft final report                                    10/26/20      10/30/20



  5. Burden Hours for Testing


The burden associated with the proposed cognitive and online testing is presented in the following table.



Burden Associated with Planned Hate Crime Testing Activities

Activity               # of Respondents    Average Administration Time (minutes)    Burden (hours)
Web survey             5,000               5                                         417
Cognitive interviews   60                  60                                        60
Total                  5,060               ~                                         477

  6. Cost

The cost of using MTurk and social media platforms for recruitment will be approximately $200. The cost of stipends for the online self-administered survey will be $25,000 ($5 × 5,000 respondents), and the cost of stipends for the cognitive interviews will be $2,400 ($40 × 60 respondents).


Thus, the total cost of testing is expected to be approximately $27,600.
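
As a check on the figures in the burden and cost sections, the arithmetic can be restated as follows; the figures are those given above, and the variable names are illustrative only.

    # Burden hours, as in the table above.
    web_survey_hours = 5_000 * 5 / 60      # ~416.7, reported as 417
    interview_hours = 60 * 60 / 60         # 60 interviews x 60 minutes = 60
    total_burden = round(web_survey_hours) + interview_hours      # 477 hours

    # Costs, as in the cost section.
    recruitment_cost = 200                 # MTurk and social media platforms
    web_stipends = 5 * 5_000               # $5 x 5,000 respondents = $25,000
    interview_stipends = 40 * 60           # $40 x 60 respondents = $2,400
    total_cost = recruitment_cost + web_stipends + interview_stipends  # $27,600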


  7. Reporting


Upon completion of the online testing and cognitive interviewing, RTI will provide BJS with a report describing the findings, including final recommendations regarding which version of the questions appeared to function best in terms of clarity and accuracy of the information collected, and any suggested changes to either instrument based on the cognitive interviewing. The report will provide detailed information on the testing methodology, respondent characteristics, data quality measures (such as response rates, breakoffs, and skipped questions), and findings related to the key questions of interest (described in the background section of this memo).


  8. Protection of Human Subjects


Given the sensitive and personal nature of the topic, there is a slight risk of emotional distress for respondents; however, appropriate safeguards are in place. RTI’s Institutional Review Board (IRB), which has Federal-wide assurance, has reviewed the planned testing activities and designated these activities as ‘not human research.’


  9. Informed Consent, Data Confidentiality, and Data Security


    9.1 Informed Consent


The first page of the online self-administered survey will be an informed consent form (see Appendix 2). Respondents will be brought to the form immediately after clicking on the link displayed in the recruitment ad on Facebook or the HIT on MTurk. If the respondent wants to proceed, they will indicate that they consent and will then proceed into the survey.


Prior to administering the cognitive interview, interviewers will provide respondents with an informed consent form (see Appendix 5). Interviewers will give respondents time to read through the form and, when they have finished, will ask them to verbally provide consent or refusal.


    9.2 Data Confidentiality and Security


BJS is authorized to conduct this data collection under 34 U.S.C. § 10132. BJS will protect and maintain the confidentiality of personally identifiable information (PII) to the fullest extent under federal law. BJS, its employees, and its contractors (RTI staff) will only use the information provided for statistical or research purposes pursuant to 34 U.S.C. § 10134, and will not disclose respondent information in identifiable form to anyone outside of the BJS project team. All PII collected under BJS’s authority is protected under the confidentiality provisions of 34 U.S.C. § 10231. Any person who violates these provisions may be punished by a fine up to $10,000, in addition to any other penalties imposed by law. Further, per the Cybersecurity Enhancement Act of 2015 (6 U.S.C. § 151), federal information systems are protected from malicious activities through cybersecurity screening of transmitted data.


For recruitment purposes, RTI will collect email addresses from respondents who complete the online survey from Facebook or Instagram in order to provide the $5 stipend. RTI will also collect a first name, email address, and phone number from all respondents who are interested in participating in a follow-up cognitive interview. This information will be stored on a secure drive at RTI, with access restricted to those directly involved in the effort. The files will be destroyed upon completion of the project.



  References


Berzofsky, M. E., McKay, T. E., Hsieh, Y. P., & Smith, A. (2018). Probability-Based Samples on Twitter: Methodology and Application. Survey Practice, 11(2), 1. doi:10.29115/SP-2018-0033

Guillory, J., Wiant, K. F., Farrelly, M., Fiacco, L., Alam, I., Hoffman, L., . . . Alexander, T. N. (2018). Recruiting Hard-to-Reach Populations for Survey Research: Using Facebook and Instagram Advertisements and In-Person Intercept in LGBT Bars and Nightclubs to Recruit LGBT Young Adults. Journal of Medical Internet Research, 20(6), e197. doi:10.2196/jmir.9461

Hsieh, Y. P., Sanders, H., Eckman, S., & Smith, A. (2018). Motivated misreporting in crowdsourcing tasks of content coding, image classification, and survey. Paper presented at the 73rd Annual Conference of the American Association for Public Opinion Research, Denver, CO, May 16-19, 2018.

Keating, M. D., & Furberg, R. D. (2013, November). A methodological framework for crowdsourcing in research. Presented at the 2013 Federal Committee on Statistical Methodology Research Conference, Washington, DC.

Keating, M. D., Rhodes, B. B., & Richards, A. K. (2013). Crowdsourcing: A flexible method for innovation, data collection, and analysis in social science research. In Social Media, Sociality, and Survey Research (pp. 179–201). Hoboken, NJ: John Wiley & Sons, Inc.

Murphy, J. J. (2013, March). Ten things every survey researcher should know about Twitter. Presented at the 2013 Federal Computer Assisted Survey Information Collection (CASIC) Workshops, Washington, DC.

Phillip, S. K. (2019). A Partially Successful Attempt to Integrate a Web-Recruited Cohort into an Address-Based Sample. Survey Research Methods, 13(1). doi:10.18148/srm/2019.v1i1.7222

Richards, A. K., Dean, E. F., & Cook, S. L. (2013). Collecting diary data on Twitter. In Social Media, Sociality, and Survey Research (pp. 203–230). Hoboken, NJ: John Wiley & Sons, Inc.

Sage, A. J. (2013). The Facebook Platform and the Future of Social Research. In Social Media, Sociality, and Survey Research (pp. 87–106). Hoboken, NJ: John Wiley & Sons, Inc.

2 Most of the online survey platforms exclude juveniles, so this effort will focus on persons age 18 or older.

