U.S. Department of Justice


Office of Justice Programs


Bureau of Justice Statistics

Washington, D.C. 20531


MEMORANDUM



To: Jennifer Park

Office of Statistical and Science Policy

Office of Management and Budget


Through: Lynn Murray

Clearance Officer

Justice Management Division


Jeri M. Mulrow

Acting Director

Bureau of Justice Statistics


From: Lynn Langton

Bureau of Justice Statistics


Date: August 31, 2016


Re: BJS Request for OMB Clearance for Cognitive Testing of the Supplemental Fraud Survey (SFS) under the NCVS Generic Clearance Agreement (OMB Number 1121-0325)


The Bureau of Justice Statistics (BJS) requests clearance for cognitive interviewing tasks under the OMB generic clearance agreement (OMB Number 1121-0325) for activities related to the National Crime Victimization Survey Redesign Research (NCVS-RR) program. This initial set of cognitive interviewing efforts will focus on finalizing the screening section of an instrument to capture data about fraud victimization. The screener will ultimately be administered as part of an NCVS Supplemental Fraud Survey (SFS) to all survey respondents 18 years of age and older. The primary purpose of the screener section of the SFS is to measure the prevalence of a range of different types of fraud, whereas the full instrument will capture additional details about the consequences of and victims’ reactions to specific fraud victimization experiences.


Under this clearance, the introductory screener section of the instrument will be tested using crowdsourcing techniques (described in more detail below) with 300 online respondents. The purpose of this testing is to ensure that the questions are capable of identifying victims of different types of financial fraud. A separate OMB generic clearance request will be submitted for in-person cognitive testing of the full SFS instrument.


Once the instrument has been finalized through these cognitive testing approaches, it will be administered as a supplement to the National Crime Victimization Survey. OMB approval for the full administration of the SFS will be sought after completion of the cognitive testing in early 2017.


Background on the Project and Instrument Development


Financial fraud is a major problem for individuals and for society, but our understanding of the scope of the problem has been hampered by a lack of valid, national statistics. Key sources of crime statistics in the United States, including the NCVS and the Federal Bureau of Investigation's Uniform Crime Reports, have historically focused on traditional property crimes like burglary and larceny and have not attempted to measure the prevalence of fraud.


One of the impediments to the inclusion of fraud in national data collections on crime has been the lack of a clear definition for the term “fraud.” Because no systematic categorization existed, researchers and practitioners often classified fraud types based on different characteristics, including communication method (e.g., cyber fraud, mail fraud), product marketed (e.g., lottery fraud, securities fraud), strategy employed (e.g., advance fee fraud, overpayment fraud), group targeted (e.g., elder fraud), and/or fraudster characteristics (e.g., employee fraud, occupational fraud). This led to a proliferation of overlapping and often confusing definitions and categorizations that hampered the generation of valid fraud prevalence estimates as well as the understanding of the mechanisms and consequences of fraud.


To address the need for a fraud classification system, the Financial Fraud Research Center (FFRC), a joint project of the Stanford Center on Longevity and the FINRA Investor Education Foundation (FINRA Foundation), collaborated with BJS to develop a standardized fraud classification scheme. The purpose was to group and organize fraud types meaningfully and systematically into a definitional framework that could be translated into survey questions that could be administered as a supplement to the NCVS.


The resulting taxonomy was reviewed by an extended review panel consisting of a broad range of fraud and measurement researchers and practitioners. Input from the panel helped refine the taxonomy by addressing potential areas of overlap or confusion. As a final validation step, to assess comprehensiveness and applicability, the taxonomy was tested against consumer complaint data from the Federal Trade Commission’s (FTC) Consumer Sentinel Network database: three hundred consumer fraud complaint cases were classified using the taxonomy coding scheme. This validation step identified gaps in the taxonomy and areas where clearer definitions were needed. The objective was to ensure that the taxonomy captured the full range of common scams perpetrated against consumers and that the definitions reflected consumers’ actual experiences. Based on the consumer complaint data, parts of the taxonomy were reorganized and amended with additional fraud types. The final report and taxonomy are available at: http://fraudresearchcenter.org/2015/07/framework-for-a-taxonomy-of-fraud/.


Using the taxonomy as its foundation, BJS, in collaboration with the FFRC, developed an instrument to measure the key categories and attributes of financial fraud. The instrument was designed to measure the annual prevalence of seven types of financial fraud (consumer investment fraud, consumer products and services fraud, employment fraud, prize and grant fraud, phantom debt fraud, charity fraud, and relationship and trust fraud) and to capture more detailed information about the most recently experienced fraud incident, including:


  • Information needed for coding detailed fraud types based on the taxonomy

  • Mode of initial contact

  • Method used for transferring funds

  • Monetary losses

  • Victim reporting behaviors


The FFRC used the instrument to conduct its own cognitive testing in 2015 and to administer the survey to approximately 2,000 web-based respondents in early 2016. The FFRC study found a much higher prevalence of fraud than prior research had suggested. The study, however, also included a narrative option in the web-based survey that allowed respondents to describe what had happened to them. The narratives revealed that a large proportion of respondents who answered the fraud screening questions affirmatively had not, in fact, experienced something that would rise to the level of criminal fraud.


Based on these findings, the screener section was revised to sharpen its scope and reduce the possibility of over-reporting. Additionally, several new questions were added to the screener to allow respondents to first report on negative financial experiences that can be upsetting but do not constitute fraud. This will enable BJS to record those responses separately from the ones used to estimate the prevalence of fraud. Testing is needed to ensure that the changes to the items address the Type I errors (false positives) identified through the FFRC’s web survey.


Cognitive Interview Procedures


In this memo, BJS is seeking generic clearance specifically to cover cognitive testing activities focused on the screening section of the SFS instrument. Crowdsourcing will be used to quickly and efficiently administer the screener to a large number of respondents and to obtain a brief description, in each respondent’s own words, of any experience associated with an affirmative response. All crowdsourcing will take place in September and October of 2016 and will be completed in sufficient time to inform the second OMB generic clearance request for in-person cognitive testing of the full SFS instrument.


Crowdsourcing


RTI, BJS’s data collection agent for this project, has significant expertise in crowdsourced data collection, developed through externally and internally funded projects over the past several years. Crowdsourcing techniques have been used successfully in the development of prior BJS collections, including the recent Campus Climate Survey Validation Study (http://www.bjs.gov/content/pub/pdf/ccsvsftr.pdf), where the findings resulted in useful improvements to and clarifications of survey questions. This approach, particularly when coupled with the use of narratives, represents an important methodological advancement that will yield information of utility for future studies. RTI has standardized its crowdsourcing strategies in an effort to employ best practices for capturing large amounts of data in short periods of time and has developed an evidence-based methodological framework for crowdsourcing.

The crowdsourcing effort will identify approximately 300 respondents who meet targeted characteristics: U.S. residency, English-speaking ability, and age 18 years or older. RTI’s past experience has shown that crowdsourcing captures some demographic diversity, although it is not likely to yield a representative sample of respondents. Representativeness is not, however, an essential element of the cognitive testing phase, since all respondents are eligible and all perspectives on the matters being studied are relevant. The target of 300 respondents was chosen to allow for variation among respondents by sex, race/ethnicity, and age, as shown in the table below.

Anticipated respondent distribution

Characteristic          Number of respondents
Sex
  Male                  150
  Female                150
Race/Ethnicity
  White                 200
  Black                  40
  Hispanic               50
  Other                  10
Age
  18-24                  50
  25-50                 100
  51-64                  75
  65+                    75

Additionally, assuming a 10% prevalence rate, the group of 300 respondents should yield approximately 30 fraud victims, plus an additional 100 or so respondents who experienced negative financial events that do not rise to the level of fraud but will also be captured in the screener. With approximately 130 respondents screening in on at least one of the 15 screener items, the sample is also expected to allow for variation in the types of fraud and negative financial experiences reported.
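For reference, the expected-yield arithmetic behind these figures can be expressed as a short calculation (a minimal sketch in Python; the 10% prevalence rate and the roughly 100 non-fraud negative experiences are the planning assumptions stated above, not observed values):

    # Expected screener yield under the planning assumptions stated above.
    respondents = 300
    fraud_prevalence = 0.10  # assumed annual fraud prevalence rate
    expected_fraud_victims = round(respondents * fraud_prevalence)  # 30

    # Planning assumption from the text: ~100 additional respondents report a
    # negative financial experience that does not rise to the level of fraud.
    expected_non_fraud = 100

    expected_screen_ins = expected_fraud_victims + expected_non_fraud
    print(expected_fraud_victims, expected_screen_ins)  # 30 130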


RTI has investigated and pilot tested crowdsourcing platforms such as Cint, Amazon Mechanical Turk, GigWalk, TryMyUI, Facebook, Twitter, and others (Murphy, 2013; Keating & Furberg, 2013; Keating, Rhodes, & Richards, 2013; Richards, Dean, & Cook, 2013; Sage, 2013). RTI staff will recruit approximately 300 volunteers from an opt-in web panel, such as Cint, an opinion hub with access to 10 million panelists in 60 countries. The panel allows researchers to target specific panelist demographics (e.g., race, age, gender) and characteristics (e.g., country of residence, occupation). Cint panelists are pre-registered panel members who are looking to complete small web-based surveys for minimal compensation. Cint will e-mail the standard-text invitation to participate in the survey only to panelists who reside in the United States, speak English, and are 18 years of age or older. RTI will not have access to any information about the respondents who are invited (i.e., RTI will not receive any information about the sampling frame from Cint, either during sample selection or after panelists complete the survey). More information about Cint can be found at http://www.cint.com. Quite simply, Cint provides for efficient data collection with a motivated crowd who can supply the input and information needed for initial cognitive testing of the SFS instrument.


The crowdsourced respondents will receive a 15-item screener questionnaire that is expected to take approximately 2 minutes to complete. Next to each question will be a box where the respondent can add open-ended comments about the question, including any difficulty understanding specific terms or recommendations for improvement. In addition, each time the respondent answers a question affirmatively, a new question will appear asking for a brief description (1 to 2 sentences at most) of what happened during the most recent experience. A final question at the end of the screener will allow respondents to report any other experiences with fraud not already covered by the screening questions. If a respondent answers two or three items affirmatively, the questionnaire may take up to 10 minutes to complete. It is anticipated that only a small proportion of respondents will be asked to complete more than three narratives, and about 60% of respondents are expected to have no affirmative responses.
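To illustrate the branching just described, the sketch below models the screener flow in Python (item wording, function names, and the console-style prompts are hypothetical; the actual instrument is administered through a web survey platform, not this code):

    # Minimal sketch of the screener skip logic described above; illustrative only.
    SCREENER_ITEMS = [f"Screener item {i}" for i in range(1, 16)]  # 15 yes/no items

    def administer_screener(ask=input):
        """Run the 15-item screener; ask(prompt) returns the respondent's answer."""
        responses = []
        for item in SCREENER_ITEMS:
            answer = ask(f"{item} (yes/no): ").strip().lower()
            comment = ask("Optional comment on this question: ")  # open-ended box
            narrative = None
            if answer == "yes":
                # An affirmative answer triggers a brief narrative follow-up.
                narrative = ask("In 1-2 sentences, describe the most recent experience: ")
            responses.append({"item": item, "answer": answer,
                              "comment": comment, "narrative": narrative})
        # Final catch-all for fraud experiences not covered by the 15 items.
        other = ask("Any other experiences with fraud not already reported? ")
        return responses, other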


After completing the last survey question, respondents will be taken to a webpage listing resources related to fraud victimization that they can access if interested. After clicking through this page, the respondent will be redirected to Cint’s platform, where he or she will receive a nominal incentive (e.g., $1) paid through Cint’s payment system. Importantly, Cint staff cannot view panelists’ responses to the survey questions because the survey website sits entirely outside Cint’s platform.


The crowdsourcing will be conducted in two waves. After approximately 150 responses are received, BJS and RTI will review the responses to assess whether changes need to be made to certain screener items to improve their performance, prior to administering the questionnaire to the next 150 respondents.


Language


The crowdsourced cognitive interviews will be conducted in English.


Burden Hours for Cognitive Testing


The burden for this task consists of participants completing the SFS screener online. The burden associated with these activities is presented in the following table.







Minimum and Maximum Burden Associated with Planned SFS Cognitive Testing Activities

Activity                                Min. # of     Avg. Admin.    Min. Burden    Max. # of     Avg. Admin.    Max. Burden
                                        Respondents   Time (min.)    (hours)        Respondents   Time (min.)    (hours)
Crowdsourcing Cognitive Interviewing    300           5              25.0           300           5              25.0
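As a worked check, the burden figures follow directly from the respondent count and the average administration time (a minimal sketch; the 5-minute average sits between the 2-minute base screener and the up-to-10-minute maximum described earlier):

    # Worked check of the burden figures in the table above.
    respondents = 300   # minimum and maximum respondent counts are equal
    avg_minutes = 5     # average administration time per respondent
    burden_hours = respondents * avg_minutes / 60
    print(burden_hours)  # 25.0 hours (minimum and maximum coincide)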

Cost


Due to the nominal incentives paid to Cint panel members as well as Cint service fees, the costs for using the web-based platform will be approximately $500. The costs for RTI to assist in developing the screener and crowdsourcing protocol, to oversee the Cint administration of the protocol, and to analyze and report on findings from the first round of cognitive testing will be approximately $15,000. Thus, the total cost for this cognitive testing will be approximately $15,500.


Reporting


Upon completion of all crowdsourcing activities, a draft cognitive interviewing report will be delivered to BJS that includes recommendations for additional changes to the SFS screener items. The report will provide detailed information on the cognitive testing methodology, basic characteristics of the respondents, the average time needed to complete the screener and narratives, and any issues with question comprehension noted by respondents. RTI will also include a draft of the screener recommended for use in the second round of cognitive interviewing, which will be administered via in-person interviews.


Protection of Human Subjects


Given the sensitive and somewhat personal nature of the questions, there is a slight risk of emotional distress for respondents; however, appropriate safeguards are in place, and the planned cognitive testing has been reviewed and approved by RTI’s Institutional Review Board (IRB), which holds a Federalwide Assurance.


Informed Consent, Data Confidentiality and Data Security


Informed Consent


After clicking on the link in the recruitment e-mail sent by Cint, the panelist will be brought to the first page of the survey, an online informed consent form. Respondents who wish to continue will indicate their consent and then proceed into the survey.





Data Confidentiality and Security


BJS’s pledge of confidentiality is based on its governing statutes, Title 42, U.S.C., Sections 3735 and 3789g, which establish the allowable uses of data collected by BJS. Under these sections, data collected by BJS shall be used only for statistical or research purposes and shall be gathered in a manner that precludes their use for law enforcement or for any purpose relating to a particular individual other than statistical or research purposes (Section 3735). BJS staff, other federal employees, and RTI staff (the data collection agent) shall not use or reveal any research or statistical information identifiable to any specific private person for any purpose other than the research and statistical purposes for which it was obtained. Pursuant to 42 U.S.C. Sec. 3789g, BJS will not publish any data identifiable to a specific private person (including respondents and decedents). The crowdsourcing methodology will not collect any personally identifying information from respondents.



References:


Keating, M. D., & Furberg, R. D. (2013, November). A methodological framework for crowdsourcing in research. Presented at the 2013 Federal Committee on Statistical Methodology Research Conference, Washington, DC.

Keating, M. D., Rhodes, B. B., & Richards, A. K. (2013). Crowdsourcing: A flexible method for innovation, data collection, and analysis in social science research. In Social media, sociality, and survey research (pp. 179–201). Hoboken, NJ: John Wiley & Sons, Inc.

Murphy, J. J. (2013, March). Ten things every survey researcher should know about Twitter. Presented at the 2013 Federal Computer Assisted Survey Information Collection (CASIC) Workshops, Washington, DC.

Richards, A. K., Dean, E. F., & Cook, S. L. (2013). Collecting diary data on Twitter. In Social media, sociality, and survey research (pp. 203–230). Hoboken, NJ: John Wiley & Sons, Inc.

Sage, A. J. (2013). The Facebook platform and the future of social research. In Social media, sociality, and survey research (pp. 87–106). Hoboken, NJ: John Wiley & Sons, Inc.

