Memo to OMB on Identity Theft Supplement Test


Generic Clearance for Cognitive, Pilot and Field Studies for Bureau of Justice Statistics Data Collection Activities


OMB: 1121-0339





U.S. Department of Justice


Office of Justice Programs


Bureau of Justice Statistics

Washington, D.C. 20531


MEMORANDUM



To: Robert Sivinski

Office of Statistical and Science Policy

Office of Management and Budget


Through: Melody Braswell

Clearance Officer

Justice Management Division


Jeffrey H. Anderson

Director

Bureau of Justice Statistics


Allen J. Beck

Senior Statistical Advisor


Devon B. Adams

Acting Deputy Director


Guy Burnett

Senior Advisor


From: Heather Brotsos

Chief, Victimization Statistics Unit

Date: April 10, 2020


Re: BJS Request for OMB Clearance for Testing of Proposed Revisions to the Identity Theft Supplement under the BJS Generic Clearance Agreement (OMB Number 1121-0339)


The Bureau of Justice Statistics (BJS) requests clearance for cognitive interviewing and testing of proposed revisions to the National Crime Victimization Survey (NCVS) Identity Theft Supplement (ITS) under the BJS OMB generic clearance agreement (OMB Number 1121-0339). This set of cognitive interviewing and testing tasks will be focused on the screening section of the ITS, which is used to determine the prevalence of identity theft and route respondents to appropriate follow-up questions. The ITS has been administered, in its current form, every two years since 2012 to all NCVS survey respondents age 16 or older, following the completion of the core survey. The proposed changes are intended to serve three main purposes: 1. Reduce the likelihood of forward telescoping; 2. Improve the dating of incidents; and 3. Refine the measurement of identity theft by excluding attempted incidents. These changes will be described further in the sections below.


Under this clearance, the ITS will be cognitively tested with 30 respondents to examine the comprehensibility of proposed changes to the screener section. Additionally, a randomized experiment will be conducted with approximately 31,500 online1 survey respondents to assess whether the changes serve their intended purpose and improve the overall measurement of identity theft. The design of the cognitive interviewing and online testing is described in more detail below.


Once the instrument has been finalized through these testing approaches, it will be administered as a supplement to the NCVS from July-December of 2021. OMB approval for the full administration of the ITS will be sought under a separate clearance request.


This memo first provides background on the ITS and the proposed revisions. Next is a description of the proposed testing procedures, followed by descriptions of language, burden hours, reporting, protection of human subjects, informed consent, and data confidentiality and security.


  1. Background on the ITS and Proposed Revisions


BJS developed the Identity Theft Supplement (ITS) to the National Crime Victimization Survey (NCVS) in 2006 and 2007, in conjunction with the Federal Trade Commission (FTC), National Institute of Justice (NIJ), Bureau of Justice Assistance (BJA), and Office for Victims of Crime (OVC). The survey was designed to fill key data needs for each of the agencies and to respond to a recommendation from the 2007 President’s Task Force on Identity Theft2 that BJS should periodically administer identity theft survey supplements to collect detailed individual-level data on the prevalence and consequences of identity theft. The first iteration of the ITS was administered in 2008 to all NCVS respondents age 16 or older during a six-month period. After a redesign to address identified problems with the initial survey instrument and measurement approach,3 the ITS was then administered in 2012, 2014, 2016, and 2018 using an instrument that remained largely unchanged from one administration to the next to enable analysis of trends over time.


For the ITS, identity theft is defined as “the unauthorized use or attempted use of existing accounts, or the unauthorized use or attempted use of personal information, to open a new account or for other fraudulent purposes.” The survey captures a broad range of incidents, from the misuse of an existing credit card, which typically results in no or low out-of-pocket losses, takes little time to resolve, and tends to cause low levels of distress; to the misuse of someone’s Social Security number, which can result in much greater losses, distress, and time spent resolving related issues. It also captures known incidents in which an offender attempts to use a person’s identifying information but is unsuccessful at obtaining goods or services; however, BJS has not historically distinguished between attempted and successful incidents in reports.


Given changes in technology and the scope of these crimes in the more than a decade since the ITS was first introduced, BJS was interested in reexamining persistent measurement challenges for the supplement and reevaluating the nature of crimes included in its definition of identity theft. A secondary data analysis was conducted to examine several key issues in the ITS that affect how identity theft is measured and the resulting prevalence estimates, including:

  1. the unbounded nature of the estimates4;

  2. the ongoing, episodic nature of many incidents of identity theft and determinations about when an incident should be included within the survey reference period; and

  3. the inclusion of attempted incidents.

Findings suggested that BJS should consider using a dual-reference period in the screener to reduce the likelihood of respondents telescoping incidents into the 12-month reference period. With this approach, respondents are first asked about lifetime experiences with identity theft, followed by a question about experiencing identity theft in the prior 12 months. Additionally, findings indicated that BJS should ask respondents to provide the date of the most recent known occurrence of identity theft, to ensure that incidents reported in the screener occurred within the 12-month survey reference period for the ITS. Finally, findings suggested that respondents should be asked to focus only on successfully completed incidents of identity theft, both because of the challenges of correctly collecting and identifying attempted incidents and because grouping attempted and completed incidents together obscures the severity of completed incidents.


The ITS screener section was revised to address these issues. Testing is needed to ensure that the changes do not have a negative impact on the clarity of the screener, respondent burden, and other data quality measures, and that the changes have the anticipated impact on prevalence rates. The proposed changes will result in a break-in-series in prevalence estimates for the ITS, so it is important to ensure that they have the intended positive outcome of improving the measurement of identity theft.


  2. Testing Procedures


In this memo, BJS is seeking generic clearance specifically to cover cognitive interviewing and online pilot testing activities focused on the screening section of the ITS instrument. The cognitive interviewing is expected to take place in May of 2020 and the online pilot testing to occur from the beginning of June through the end of July, starting once recommendations from the cognitive interviewing have been incorporated into the screener. Cognitive interviewing provides the opportunity to probe respondents on their perceptions and understanding of the questions, whereas the online testing will be used to quickly and efficiently administer the screener to a large number of respondents and compare prevalence rates across different versions of the screener.


    2.1. Cognitive Interviewing


Cognitive interviews are an important tool for evaluating respondent understanding and ability to accurately answer survey questions. Cognitive interviews involve an interviewer administering the survey questions to a potential respondent and probing that respondent on how they interpreted the question, how difficult it was to answer, and their process for formulating an answer. Cognitive interviews are generally conducted prior to fielding BJS survey instruments that are new or have been substantively altered.


Recruitment and Screening. Planned recruitment and cognitive interviewing activities reflect current COVID-19 pandemic conditions and related federal, state, and local policies, including restrictions on geographic mobility, the closure of non-essential businesses (including RTI offices), and social distancing recommendations. For this effort, RTI will conduct 30 cognitive interviews with persons age 18 or older. Respondents will be recruited from the most popular crowdsourcing platform in the US, Amazon’s Mechanical Turk (MTurk) (see Appendix 1 for the human intelligence task (‘HIT’) posting that will appear on MTurk). Interested persons will be screened for whether they currently live in the US and have experienced identity theft during the prior year, and if so, whether it involved the misuse of personal information or of an existing account (see Appendix 2 for the recruitment screener). Participants will also be asked to confirm that they have access to:

  1. A private and safe area of their home (or another setting) where they can complete the interview out of earshot of other people and without interruption.

  2. A device with both audio and video capabilities for completing the interview, including a laptop, desktop, tablet, or smartphone.

  3. Wi-Fi or internet service with enough available data to participate in a 30-minute video interview.


The goal will be to recruit about 15 respondents who meet the above criteria and experienced the misuse of personal information in the past year; about 10 who meet the above criteria and experienced the misuse of an existing account in the past year; and about 5 nonvictims who also meet the above criteria.


Once we have identified interested persons who meet the eligibility criteria, the recruiter will reach out to eligible respondents via email to:

  • provide an overview of the study;

  • explain that respondents who complete the cognitive interview will be offered a $20 electronic Amazon.com gift card to compensate for the costs associated with data and internet usage;

  • share an electronic copy of the informed consent form; and

  • provide a calendar for the potential respondent to enter availability for the cognitive interview, if he or she agrees to participate.


Once the respondent has provided dates of availability, the recruiter will send a calendar invitation, including a link to access the videoconferencing platform (subject to the approval of BJS), to both the potential respondent and the interviewer. The recruiter will again attach a copy of the informed consent form to the calendar invitation. A day before the scheduled interview, the recruiter will send the potential respondent a reminder email with the date and time of the scheduled interview.


Consent/Assent Procedures. At the start of the interview, the interviewer will introduce herself to the participant, confirm the participant’s name, and confirm that the participant is on video and can hear the interviewer well. The interviewer will then ask the participant to confirm that he or she is in a private area of their home or another private setting (out of earshot of other people). The interviewer will ask the participant to let her know if, at any point during the interview, the respondent is interrupted or no longer feels they are in a private setting.



The interviewer will then read through the entire informed consent form, providing an opportunity for the respondent to ask any questions. The interviewer will document the respondent’s decision to participate, including the respondent’s willingness to have the interview recorded, and will sign and date the consent form as a witness. Copies of the informed consent forms that will be shared with the respondent and used by the interviewer (including a signature block) are included in Appendices 3a and 3b. The form for the respondent includes a list of national numbers to contact for identity theft assistance.


Cognitive Interviews. The cognitive interviews will involve administering the new version of the screener, which includes:

  1. questions about experiences with identity theft in one’s lifetime and during the prior 12 months;

  2. questions about the month and year of most recent occurrence, following each screener question that the respondent answers affirmatively; and

  3. the exclusion of attempted incidents through language clarifying what types of incidents the respondent should report.

All interviews will be conducted by experienced RTI staff who have completed training on the cognitive interview protocol (see Appendix 4).


Interviewers will read each question aloud to respondents, record the responses, and then, following the interview protocol, probe respondents to gauge their understanding of the questions and how they formulated their responses. In addition to the structured probes built into the protocol, the interviewer will also use spontaneous probes during the interview to get further clarification on respondent reactions to particular questions.


Each cognitive interview is expected to take no more than 30 minutes to complete. The specific areas of focus for cognitive interviewing will be:

  • Dual-reference period – Whether respondents find the dual-reference period to be confusing; whether they find it helpful in remembering instances of identity theft; how they react to being asked about two different reference periods; and whether it is difficult for them to think about both lifetime experiences and experiences in the prior 12 months;

  • Dating of most recent incident – Whether respondents are able to accurately date the most recent occurrence; how respondents think about the concept of ‘most recent occurrence’; and whether respondents confuse the most recent occurrence with when the incident was discovered; and

  • Attempts – How respondents think about an attempted incident versus a completed incident; whether the instructions about what should be included under each screener question are clear; and the nature of the incident that prompted the respondent to answer the screener question affirmatively.


Upon completion of the protocol, the interviewer will give the respondent the option of receiving a $20 electronic Amazon.com gift card via email or text to compensate for the costs associated with data and internet usage.


    2.2. Online Pilot Testing


RTI has significant expertise using web-based platforms for data collection, crowdsourcing, and pilot testing. RTI has investigated and pilot tested the use of online platforms, such as Cint, MTurk, Facebook, Twitter, and others (Keating & Furberg, 2013; Keating, Rhodes, & Richards, 2013; Richards, Dean, & Cook, 2013). These platforms have utility for quickly and efficiently collecting data from large numbers of adult respondents,5 reflecting the characteristics of the population of interest (in this case, all adult US residents).


The current effort will involve randomized administration of one of three versions of the ITS screener to a target sample of 31,500 respondents (each version will be administered to approximately 10,500 respondents), with the combined sample calibrated to approximate the national population in terms of age, sex, and race/Hispanic origin. The sample size estimate is based on a power calculation of the minimum sample needed to detect a 1 percentage point change in the prevalence of identity theft, assuming a 9% base prevalence and 70% power.
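For reference, a sample size of this magnitude can be reproduced with standard power analysis tools. The sketch below is a minimal Python illustration, assuming a two-sided two-proportion test at alpha = 0.05 and a change from 9% to 10% prevalence; neither the significance level nor the direction of the change is stated in this memo. It yields roughly 10,600 respondents per screener version, in line with the approximately 10,500 planned.

```python
# Minimal sketch of the sample-size calculation: detect a 1 percentage
# point change in prevalence (9% vs. 10%) with 70% power. The two-sided
# test and alpha = 0.05 are assumptions, not stated in the memo.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect_size = proportion_effectsize(0.10, 0.09)  # Cohen's h for 10% vs. 9%
n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,
    power=0.70,
    ratio=1.0,
    alternative="two-sided",
)
print(f"~{n_per_group:,.0f} respondents per screener version")  # ~10,600
```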


Online testing platform: RTI will primarily use NORC’s AmeriSpeak platform to conduct the online testing. Through AmeriSpeak, NORC will use a combination of probability and non-probability samples to reach the target of 31,500 respondents. First, AmeriSpeak will provide 10,000 completions from its probability panel (https://amerispeak.norc.org/Pages/default.aspx). The panelists are pre-registered panel members who complete short surveys for minimal compensation. AmeriSpeak regularly uses a mixed-mode approach to collecting data from panelists, surveying participants both online and via telephone. The combined telephone and online approach improves coverage of groups that might otherwise be missed, such as persons without internet access, including the elderly, low-income persons, and those in rural areas, thereby improving the representativeness of the sample. AmeriSpeak estimates that 10-15% of completed interviews from the probability sample will be conducted via telephone.


The balance of the completions will come from non-probability samples – predominantly AmeriSpeak’s TrueNorth Calibration approach (http://amerispeak.norc.org/our-capabilities/Pages/TrueNorth.aspx), supplemented with approximately 5,000 respondents from Amazon’s MTurk. Based on this approach, AmeriSpeak estimates that the final distribution of completes will be: 10,000 from the AmeriSpeak probability sample; 16,500 from the AmeriSpeak nonprobability sample; and 5,000 from the MTurk nonprobability sample. Calibration weights will be applied to the full sample.
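The specifics of TrueNorth calibration are proprietary, but the general idea of calibration weighting can be illustrated with a toy raking (iterative proportional fitting) sketch. Everything below (the sample, the variables, and the population margins) is hypothetical and is only meant to show how base weights are adjusted so the weighted sample matches known population totals.

```python
# Toy illustration of calibration weighting via raking; this is NOT
# NORC's TrueNorth method, only the generic idea behind calibration.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000
# Hypothetical sample that over-represents some groups.
sex = rng.choice([0, 1], size=n, p=[0.55, 0.45])
age = rng.choice([0, 1, 2], size=n, p=[0.40, 0.40, 0.20])

# Hypothetical population margins each variable should match.
margins = {"sex": np.array([0.49, 0.51]), "age": np.array([0.30, 0.35, 0.35])}
variables = {"sex": sex, "age": age}

weights = np.ones(n)  # equal base weights
for _ in range(50):  # rake until the weighted margins converge
    for name, values in variables.items():
        total = weights.sum()
        for k, target in enumerate(margins[name]):
            in_cell = values == k
            weights[in_cell] *= target * total / weights[in_cell].sum()

for name, values in variables.items():
    achieved = [weights[values == k].sum() / weights.sum()
                for k in range(len(margins[name]))]
    print(name, np.round(achieved, 3))  # matches the population margins
```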


The TrueNorth Calibration approach has been used successfully with major national surveys on a variety of topics and substantially reduces the cost and collection time associated with large samples of respondents. The benefit of including the MTurk sample is that MTurk workers tend to produce higher-quality data than other nonprobability panelists when participating in scientific research (Hsieh et al., 2018). Additionally, the cost will be reduced because RTI will recruit the MTurk sample and provide those respondents to AmeriSpeak. RTI will coordinate with AmeriSpeak on the non-probability sample recruitment to supplement the AmeriSpeak effort with the MTurk sample. With this approach, we will be able to compare the data generated from the MTurk sample against the full AmeriSpeak sample.


NORC has successfully contracted with other organizations to apply this methodology to collect data on topics ranging from romance fraud to public health to food allergies. For example, with the Stanford University Food Allergy Prevalence Survey, the AmeriSpeak panel was used to obtain more than 40,000 respondents (~7,000 probability/33,000 nonprobability).6



Conducting the testing: Before respondents are invited to participate, the potential samples will be deduplicated to the greatest degree possible, and respondents will be screened to confirm that they are residents of the United States, speak English, and are age 18 or older. RTI will not have access to any personally identifying information about the respondents who participate in the survey (i.e., RTI will not receive any personally identifying information about the sampling frame from the platform, either during sample selection or after panelists complete the survey).


Those who agree to participate will be randomly assigned to complete one of three versions of the ITS screener questionnaire. The three versions of the screener will be randomized within each of the samples (i.e., of the 10,000 probability sample respondents, the goal would be to have approximately 3,333 respondents complete each version of the screener); a sketch of this assignment logic appears after the list below. The screeners are expected to take no more than 5 minutes to complete. The three versions of the instrument used in the experiment are (see Appendix 5):


  • Version 1: Current ITS screener (control group)

  • Version 2: Includes all key changes being examined – dual-reference period, exclusion of attempted incidents, and dating of most recent occurrence in screener (treatment 1)

  • Version 3: Current ITS screener with attempted incidents excluded, plus dating of most recent occurrence added after screener (treatment 2)
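The randomization itself is straightforward. The sketch below is a minimal illustration, not RTI’s actual implementation, of balanced assignment of the three versions within a single sample; the respondent IDs are hypothetical.

```python
# Minimal sketch of balanced random assignment of screener versions
# within one sample; respondent IDs and the seed are hypothetical.
import random

def assign_versions(respondent_ids, versions=(1, 2, 3), seed=20200410):
    """Assign screener versions in near-equal shares within one sample."""
    ids = list(respondent_ids)
    random.Random(seed).shuffle(ids)  # randomize respondent order
    # Round-robin over the shuffled list keeps counts within one of each other.
    return {rid: versions[i % len(versions)] for i, rid in enumerate(ids)}

# Example: the 10,000-person probability sample splits ~3,333 per version.
assignments = assign_versions(range(10_000))
counts = {v: sum(1 for a in assignments.values() if a == v) for v in (1, 2, 3)}
print(counts)  # {1: 3334, 2: 3333, 3: 3333}
```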


After respondents complete the last survey question, they will be taken to a webpage with a list of resources related to identity theft victimization that they can access if interested. After clicking through this page, respondents will be redirected to the platform, where they will receive their $1 payment through the platform’s payment system. The data are completely confidential; even platform staff cannot view the responses to the survey questions, because the survey website resides entirely outside of the platform.


Analysis: The analysis will focus on testing for statistically significant differences in the overall prevalence of identity theft across the screener versions. The key comparisons and metrics to be examined are listed below.


  • Control v. Treatment 1 – Research question: Does the new screener improve the measurement of identity theft relative to the current screener? Key metric: overall prevalence rate.

  • Control v. Treatment 2 – Research question: What is the impact of removing attempts on rates of identity theft? Key metrics: overall prevalence rate; distribution of types of identity theft experienced by victims, if possible.
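As an illustration, a comparison of overall prevalence between two arms can be run as a two-proportion z-test; the memo does not name the specific test, and the counts below are placeholders rather than projected results.

```python
# Sketch of the Control v. Treatment 1 prevalence comparison, assuming a
# two-sided two-proportion z-test; counts are hypothetical placeholders.
from statsmodels.stats.proportion import proportions_ztest

control_victims, control_n = 945, 10_500      # hypothetical 9.0% prevalence
treatment_victims, treatment_n = 840, 10_500  # hypothetical 8.0% prevalence

stat, pvalue = proportions_ztest(
    count=[control_victims, treatment_victims],
    nobs=[control_n, treatment_n],
)
print(f"z = {stat:.2f}, p = {pvalue:.4f}")  # z ~ 2.6, p ~ 0.009 here
```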




Additionally, for each of the groups, RTI will examine data quality measures such as breakoffs, inconsistencies, missing and ‘don’t know’ responses, and occurrence or discovery dates that fall outside the reference period, which could suggest forward telescoping.
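For the last of these checks, flagging an out-of-period date is a simple comparison against the 12-month window ending at the interview. The sketch below is a hypothetical illustration; the field names and dates are not from the instrument.

```python
# Minimal sketch of flagging reported dates outside the 12-month
# reference period (possible forward telescoping); dates are hypothetical.
from datetime import date

def out_of_reference_period(reported: date, interview: date) -> bool:
    """True if a reported occurrence falls outside the prior 12 months."""
    window_start = date(interview.year - 1, interview.month, 1)
    return reported < window_start or reported > interview

print(out_of_reference_period(date(2019, 3, 1), date(2020, 6, 15)))  # True
print(out_of_reference_period(date(2019, 8, 1), date(2020, 6, 15)))  # False
```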

Based on prior experience with online platforms and comparisons to NCVS data, we expect that prevalence rates of identity theft may be higher in the web-based environment than they have been in the Census Bureau’s administration of the ITS. This may mean that we have more power to detect differences than initially expected. It also means that findings about the magnitude of differences between two instruments will not directly translate to the magnitude of differences that might be expected in the Census Bureau’s administration. However, we assume that if one instrument version performs better in the web-based environment (i.e., results in statistically significant differences in prevalence), this finding will translate to performance in the field.


  3. Language


Both the cognitive interviews and the online testing will be conducted in English.


  4. Burden Hours for Testing


The burden associated with the proposed cognitive and online testing is presented in the following table.



Burden Associated with Planned ITS Testing Activities

Activity                     # of Respondents    Average Administration Time (minutes)    Burden (hours)
Cognitive Interviewing       30                  30                                        15
Web-based pilot testing      31,500              5                                         2,625
Total                        31,530              ~                                         2,640
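Each burden figure is simply the number of respondents multiplied by the administration time, converted to hours; the snippet below reproduces the table’s totals.

```python
# Quick check of the burden-hour arithmetic in the table above.
activities = {
    "Cognitive Interviewing": (30, 30),       # (respondents, minutes each)
    "Web-based pilot testing": (31_500, 5),
}
total_hours = 0
for name, (n, minutes) in activities.items():
    hours = n * minutes / 60
    total_hours += hours
    print(f"{name}: {hours:,.0f} hours")
print(f"Total: {total_hours:,.0f} hours")  # 15 + 2,625 = 2,640
```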

Cost


Cognitive testing

The cost of using MTurk for recruitment will be approximately $200, and the cost of stipends will be $600 ($20 × 30 respondents).


Online pilot testing

Between the nominal incentive ($1) provided to participating panel members, AmeriSpeak service fees, and the cost of deduplicating and collecting data from the probability and non-probability samples, this task will cost approximately $216,500.


Thus, the total cost of testing is expected to be approximately $217,300 ($800 for cognitive testing plus $216,500 for the online pilot testing).


  5. Reporting


Upon completion of cognitive interviewing, a cognitive interview report will be delivered to BJS that will include recommendations for any necessary adjustments to the proposed ITS screener revisions. The report will provide detailed information on the cognitive testing methodology, basic characteristics of the respondents, average time needed to complete the screener instrument, and any issues with question comprehension noted by respondents, specifically around the use of the dual-reference period, the dating of the most recent incident, and the definition of an attempted incident. RTI will also provide a draft of the three versions of the screener that will be recommended for use in the online testing.


Upon completion of the online testing, RTI will provide BJS with a report describing the findings from the testing and including final recommendations regarding which version of the screener should be administered in July 2021 as part of the ITS questionnaire. The report will provide detailed information on the testing methodology; characteristics of the weighted and unweighted samples; testing procedures used; data quality measures, such as response rates, breakoffs, and skipped questions; and findings on the overall prevalence of identity theft across the three screener versions and by identity theft subtypes, if possible.


  6. Protection of Human Subjects


There is a slight risk of emotional distress for respondents given the sensitive and personal nature of the questions; however, appropriate safeguards are in place. RTI’s Institutional Review Board (IRB), which holds a Federalwide Assurance, has reviewed the planned testing activities and designated these activities as ‘not human research.’


  7. Informed Consent, Data Confidentiality, and Data Security


    7.1. Informed Consent


Prior to administering the cognitive interview, interviewers will provide respondents with an informed consent form (see Appendices 3a and 3b). Interviewers will give respondents time to read through the form, and when they have finished, interviewers will ask them to verbally provide consent or refusal.


For the online testing, the first page of the survey will be an informed consent form (see Appendix 6). Panelists will be brought to the form immediately after clicking on the link displayed in the recruitment email sent from AmeriSpeak. If respondents want to proceed, they will indicate that they consent and will then proceed into the survey.


    7.2. Data Confidentiality and Security


BJS is authorized to conduct this data collection under 34 U.S.C. § 10132. BJS will protect and maintain the confidentiality of personally identifiable information (PII) to the fullest extent under federal law. BJS, its employees, and its contractors (RTI staff) will only use the information provided for statistical or research purposes pursuant to 34 U.S.C. § 10134, and will not disclose respondent information in identifiable form to anyone outside of the BJS project team. All PII collected under BJS’s authority is protected under the confidentiality provisions of 34 U.S.C. § 10231. Any person who violates these provisions may be punished by a fine up to $10,000, in addition to any other penalties imposed by law. Further, per the Cybersecurity Enhancement Act of 2015 (6 U.S.C. § 151), federal information systems are protected from malicious activities through cybersecurity screening of transmitted data.


The online testing platform will not be collecting any personally identifying information from respondents.



  8. References


Berzofsky, M. E., McKay, T. E., Hsieh, Y. P., & Smith, A. (2018). Probability-Based Samples on Twitter: Methodology and Application. Survey Practice, 11(2), 1. doi:10.29115/SP-2018-0033

Guillory, J., Wiant, K. F., Farrelly, M., Fiacco, L., Alam, I., Hoffman, L., . . . Alexander, T. N. (2018). Recruiting Hard-to-Reach Populations for Survey Research: Using Facebook and Instagram Advertisements and In-Person Intercept in LGBT Bars and Nightclubs to Recruit LGBT Young Adults. Journal of Medical Internet Research, 20(6), e197. doi:10.2196/jmir.9461

Hsieh, Y. P., Sanders, H., Eckman, S., & Smith, A. (2018). Motivated misreporting in crowdsourcing tasks of content coding, image classification, and survey. Paper presented at the 73rd Annual Conference of the American Association for Public Opinion Research, Denver, CO, May 16-19, 2018.

Keating, M. D., & Furberg, R. D. (2013, November). A methodological framework for crowdsourcing in research. Presented at the 2013 Federal Committee on Statistical Methodology Research Conference, Washington, DC.

Keating, M. D., Rhodes, B. B., & Richards, A. K. (2013). Crowdsourcing: A flexible method for innovation, data collection, and analysis in social science research. In Social Media, Sociality, and Survey Research (pp. 179–201). Hoboken, NJ: John Wiley & Sons, Inc.

Murphy, J. J. (2013, March). Ten things every survey researcher should know about Twitter. Presented at the 2013 Federal Computer Assisted Survey Information Collection (CASIC) Workshops, Washington, DC.

Phillip, S. K. (2019). A Partially Successful Attempt to Integrate a Web-Recruited Cohort into an Address-Based Sample. Survey Research Methods, 13(1). doi:10.18148/srm/2019.v1i1.7222

Richards, A. K., Dean, E. F., & Cook, S. L. (2013). Collecting diary data on Twitter. In Social Media, Sociality, and Survey Research (pp. 203–230). Hoboken, NJ: John Wiley & Sons, Inc.

Sage, A. J. (2013). The Facebook Platform and the Future of Social Research. In Social Media, Sociality, and Survey Research (pp. 87–106). Hoboken, NJ: John Wiley & Sons, Inc.

1 The majority of the pilot testing will occur online. About 10-15% of respondents will be interviewed by phone.

3 The changes and rationale for the changes were documented with the materials submitted in the 2012 Office of Management and Budget (OMB) Paperwork Reduction Act (PRA) Information Collection Review package, available at https://www.reginfo.gov/public/do/PRAViewDocument?ref_nbr=201112-1121-004.

4 Unlike in the core NCVS where interviews 2-7 are bounded by the prior interview, the ITS and other NCVS supplements are completely unbounded.

5 Most of the online survey platforms exclude juveniles, so this effort will focus on persons age 18 or older.

