OMB Control Number: 1121-0359

Supporting Statement – 2017 National Crime Victimization Survey (NCVS) Supplemental Fraud Survey (SFS)


B. Collection of Information Employing Statistical Methods


1. Universe and Respondent Selection


The sample for the SFS is all persons age 18 or older in NCVS interviewed households. The NCVS sample of households is drawn from the more than 120 million households nationwide and excludes military barracks and institutionalized populations. In 2017, the annual national sample is planned to be approximately 224,000 designated addresses located in 542 stratified Primary Sampling Units (PSUs) throughout the United States. The sample consists of seven parts, each of which is designated for interview in a given month and then reinterviewed at 6-month intervals, for a total of seven interviews per household.


Every ten years, the Census Bureau redesigns the samples for all of its continuing demographic surveys, including the NCVS. In 2015, the 2000 sample design began to phase out and the 2010 sample design began to phase in. Although the PSUs did not change in 2015, some of the cases assigned to 2015 interviews were selected under the 2010 design from the Master Address File (MAF). The MAF contains all addresses from the most recent decennial census plus updates from the United States Postal Service, state and local address lists, and other address listing operations. The MAF is the frame used to reach the target NCVS population. Beginning in 2016, some PSUs were removed from the sample, some new PSUs were added, and some continuing PSUs that were selected for both the 2000 and 2010 designs remained in the sample. The phase-in and phase-out of the designs started in January 2015 and will continue through December 2017. As part of the 2010 design, new addresses are selected each year from a master list of addresses based upon the 2010 Decennial Census of Population and Housing and addresses from the United States Postal Service. The new sample sizes are larger than in previous years to support state-level estimates in 22 states. In 2017, approximately 91% of the sample will be drawn from the 2010 design, with the remaining 9% from the 2000 design.


The NCVS uses a rotating sample, which usually consists of seven rotation groups for each month of enumeration. When the SFS is in the field (October–December 2017), one rotation group will have been selected as part of the 2000 sample design. This rotation group will be located only in continuing PSUs and will contain about 6% of all SFS sample units. The remaining sample will be divided into seven rotation groups that were selected as part of the 2010 sample design.


Each interview period, the interviewer completes or updates the household composition component of the NCVS interview and asks the crime screener questions (NCVS-1) of each household member age 12 or older. The interviewer then completes a crime incident report (NCVS-2) for each crime incident identified in the screener. Following either the screener or the administration of the crime incident report, depending on whether a crime was reported, each household member age 18 or older will be administered the SFS. Each household member provides the information by self-response; proxy respondents will not be allowed for the SFS. For the NCVS, proxy respondents are allowed under very limited circumstances and represent less than 5% of all interviews. All forms and materials used for the NCVS screener and crime incident report have been previously approved by OMB (OMB No. 1121-0111). The SFS instrument is included in Attachment 2.


SAMPLING


Sample selection for the NCVS, and by extension the SFS, has three stages: the selection of primary sampling units (PSUs), the selection of address units within sample PSUs, and the determination of the persons and households to be included in the sample.


Survey estimates are derived from a stratified, multistage cluster sample. The PSUs composing the first stage of the sample are formed from counties or groups of adjacent counties based upon data from the decennial census and the American Community Survey (ACS). The larger PSUs are included in the sample with certainty and are considered self-representing (SR). The remaining PSUs, called non-self-representing (NSR) because only a subset of them is selected, are combined into strata by grouping PSUs with similar geographic and demographic characteristics. For the NCVS, decennial census counts, ACS estimates, and administrative crime data drawn from the FBI’s Uniform Crime Reporting (UCR) Program are also used to stratify the PSUs.

Stage 1. Defining and Selecting PSUs


Defining PSUs – Formation of PSUs begins with listing counties and independent cities in the target area; for the NCVS, the target area is the entire country. Counties are either grouped with one or more contiguous counties to form PSUs or stand alone as PSUs on their own. The groupings are based on characteristics such as total land area, current and projected population counts, large metropolitan areas, and potential natural barriers such as rivers and mountains.


After the PSUs are formed, the large PSUs and those in large urban areas are designated SR. The smaller PSUs are designated NSR. Determining which PSUs are considered small and which are large depends on the survey’s SR population cutoff, whether estimates are desired for the state, and the size of the MSA that contains the PSU.

Stratifying PSUs – The NSR PSUs are grouped with similar NSR PSUs within states to form strata; each SR PSU forms its own stratum. The data used for grouping the PSUs consist of decennial census demographic data, ACS data, and administrative crime data. NSR PSUs are grouped to be as homogeneous as possible. Just as each SR PSU must be large enough to support a full workload, so must each NSR stratum. The most efficient stratification scheme is determined by minimizing the between-PSU and within-PSU variances.


Selecting PSUs – The SR PSUs are automatically selected for sample or “selected with certainty.” One NSR PSU is selected from each stratum with probability proportional to the population size using a linear programming algorithm. Historically, PSUs have been defined, stratified, and selected once every ten years.
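For illustration, the sketch below shows how one PSU per stratum could be selected with probability proportional to population size. The strata, PSU names, and counts are hypothetical, and the Census Bureau's production selection uses a linear programming algorithm rather than this simple cumulative-size draw.

    import random

    # Hypothetical strata: each stratum lists its NSR PSUs with population sizes.
    strata = {
        "NSR-01": [("PSU-A", 120_000), ("PSU-B", 80_000), ("PSU-C", 50_000)],
        "NSR-02": [("PSU-D", 200_000), ("PSU-E", 150_000)],
    }

    def select_pps(psus, rng=random):
        """Select one PSU with probability proportional to its population size."""
        total = sum(size for _, size in psus)
        draw = rng.uniform(0, total)
        cumulative = 0
        for name, size in psus:
            cumulative += size
            if draw <= cumulative:
                return name
        return psus[-1][0]  # guard against floating-point edge cases

    # One NSR PSU is drawn per stratum; SR PSUs would be included with certainty.
    sample = {stratum: select_pps(psus) for stratum, psus in strata.items()}
    print(sample)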

Stage 2. Preparing Frames and Sampling within PSUs


Frame Determination – To ensure adequate coverage for the target population, the Census Bureau defines and selects sample from address lists called frames. The 2000 and 2010 sample designs use different frame systems. The 2000 sample design was selected from four frames: a unit frame, an area frame, a group quarters (GQ) frame, and a new construction or permit frame. The 2010 sample design was selected from a unit frame and a GQ frame.


In the 2000 design, each address in the country was assigned to one and only one of the four frames. Frame assignment depended on four factors:

  1. what type of living quarters are at the address,

  2. when the living quarters were built,

  3. where the living quarters were built, and

  4. how completely the street address was listed.


The main distinction between the 2000 and 2010 frames is the procedure used to obtain the sample addresses. In the 2000 design, the unit and GQ frames were static address lists from the 2000 Census, the permit frame came from building permit office updates, and the area frame required field staff to canvass and list all addresses within specific census blocks. Research has shown that the MAF, which is the source for both 2010 design frames, provides similar coverage to the 2000 design frames but with reduced costs.


In the 2010 design, each address in the country was assigned to the unit or GQ frame based on the type of living quarter. Two types of living quarters are defined in the decennial census. The first type is a housing unit (HU). An HU is a group of rooms or a single room occupied as separate living quarters or intended for occupancy as separate living quarters. An HU may be occupied by a family or one person, as well as by two or more unrelated persons who share the living quarters. The second type of living quarters is GQ. GQs are living quarters where residents share common facilities or receive formally authorized care. About 3% of the population counted in the 2010 Census resided in GQs. Of those, less than half resided in non-institutionalized GQs. About 97% of the population counted in the 2010 Census lived in HUs.


Within-PSU Sampling – All of the Census Bureau’s continuing demographic surveys, including the NCVS, are sampled together. This takes advantage of updates from the January MAF delivery and ACS data. In the 2010 sample design, about 28.6% of the HU sample is selected every year, although 57% of the cases selected for 2016 interviews were selected in 2015 to start the 2010 sample design. The GQ sample is selected every three years.


Selection of samples is done one survey at a time (sequentially). Each survey determines how the unit addresses within the frame should be sorted prior to sampling. For the NCVS, each frame is sorted by geographic variables. A systematic sampling procedure is used to select addresses from each frame. A skeleton sample is also selected in every PSU. Every six months new addresses on the MAF are matched to the skeleton frame. The skeleton frame allows the sample to be refreshed with new addresses and thereby reduces the risk of under-coverage errors due to an outdated frame.
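A minimal sketch of the systematic selection step appears below, assuming a hypothetical frame already sorted by geographic variables; the production procedure operates on the MAF with survey-specific sort orders and sampling intervals.

    import random

    def systematic_sample(frame, n):
        """Draw n units from an ordered frame: pick a random start within the
        first sampling interval, then take every k-th unit (k = N / n)."""
        N = len(frame)
        k = N / n                      # sampling interval (may be fractional)
        start = random.uniform(0, k)   # random start within the first interval
        return [frame[int(start + i * k)] for i in range(n)]

    # Hypothetical address frame sorted by geographic variables.
    frame = [f"address-{i:05d}" for i in range(10_000)]
    sample = systematic_sample(frame, 250)

Because the frame is sorted geographically before the interval is applied, the systematic draw spreads the sample across the PSU, providing an implicit geographic stratification.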


Addresses selected for a survey are removed from the frames, leaving an unbiased or clean universe behind for the next survey to be sampled. By leaving a clean universe for the next survey, duplication of addresses between surveys is avoided. This helps preserve response rates by ensuring that no unit falls into more than one survey sample.


Stage 3. Sampling within Sample Addresses


The last stage of sampling occurs at initial contact with the sample address during the data collection phase. For the SFS, if the address is a residence and the occupants agree to participate, an attempt is made to interview every person age 18 or older who lives at the sample address and completes the NCVS-1. The NCVS has procedures to determine who lives in the sample unit, and a household roster is completed with names and other demographic information. If someone moves out of (or into) the household during the interviewing cycle, he or she is removed from (or added to) the roster.


State Samples


Beginning in January 2016, BJS and the Census Bureau increased and reallocated the existing national sample in the 22 largest states. The states receiving a sample boost are Arizona, California, Colorado, Florida, Georgia, Illinois, Indiana, Maryland, Massachusetts, Michigan, Minnesota, Missouri, New Jersey, New York, North Carolina, Ohio, Pennsylvania, Tennessee, Texas, Virginia, Washington, and Wisconsin. In 2015, each of these 22 states had a population greater than 5 million persons, and together they comprised 79% of the U.S. population.a In each of the 22 states, enough sample was selected to achieve a 10% relative standard error (RSE) for a three-year average violent victimization rate of 0.02. Sample sizes in the remaining 28 states and the District of Columbia were determined based on previous sample sizes. Unlike the 2000 sample design, no strata cross state boundaries, and all 50 states and the District of Columbia will have at least one sampled PSU.
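As a rough check on these targets, under a simple random sampling approximation the person-level sample implied by a 10% RSE on a rate of 0.02 can be computed as follows. This sketch ignores the design effect of the stratified, clustered NCVS sample, which inflates the required sample size in practice.

    def required_n(p, target_rse):
        """Sample size for estimating a proportion p at a target relative
        standard error under SRS: RSE = sqrt(p * (1 - p) / n) / p."""
        return (1 - p) / (p * target_rse ** 2)

    p = 0.02    # three-year average violent victimization rate
    rse = 0.10  # 10% RSE target
    print(round(required_n(p, rse)))  # ~4,900 person-interviews per state,
                                      # before inflating for the design effect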


The underlying assumption of the subnational sample design is that three years of data will be needed to produce precise estimates of violent crime, which is experienced by about 1% of the population. BJS expects about 12% of respondents to be victims of fraud and thus anticipates being able to produce state-level estimates of financial fraud victimization in the 22 largest states with three months of data collection. As previously noted, this prevalence estimate is based on results from the 2011/2012 FTC survey, which reported that 11% of U.S. adults experienced one or more types of fraud.b The FTC survey was conducted more than five years ago and is limited in the types of fraud it covered, so we anticipate a higher prevalence rate from the 2017 SFS data.


2. Procedures for Collecting Information


The SFS is designed to produce national estimates and state-level estimates for 22 states of financial fraud victimization for the target population – all persons age 18 or older living in NCVS households. The SFS is administered to all age-eligible NCVS respondents during the 3-month period from October through December 2017.

DATA COLLECTION


Data collection includes a screener and incident survey. As previously mentioned, the Framework for a Taxonomy of Fraud publication informed this data collection. In the taxonomy, fraud is defined as –


Intentionally and knowingly deceiving the victim by misrepresenting, concealing, or omitting facts about promised goods, services, or other benefits and consequences that are nonexistent, unnecessary, never intended to be provided, or deliberately distorted for the purpose of monetary gain.c


The taxonomy classifies incidents of completed fraud based on three characteristics: the target of the fraud, the product or service offered, and the specific type of scheme used on the victim. The victim must lose money in the transaction in order for the incident to be considered fraud.


The screener items in the 2017 SFS focus on the level 2 subcategories of fraud described in the taxonomy. The seven level 2 fraud categories are mutually exclusive and can be summed to calculate a comprehensive estimate of fraud. The screener survey is divided into seven sections, each asking questions about one of these types of fraud: (1) prize or grant fraud, (2) phantom debt collection fraud, (3) charity fraud, (4) employment fraud, (5) consumer investment fraud, (6) consumer products and services fraud, and (7) relationship or trust fraud. Each eligible person age 18 or older will be asked the screener questions for each of these seven types. When a respondent reports an eligible fraud victimization, the SFS incident instrument is then administered to collect detailed information about that type of fraud victimization.


The SFS is designed so that a respondent who indicates experiencing a specific level 2 type of fraud receives an incident instrument focused on that specific type. A respondent who indicates experiencing two types of fraud in the screener instrument receives two incident forms focused on those two types, and so on. Each incident form has questions specific to that type of fraud, but some questions are included on all of the incident forms, including (1) how much money was lost in the transaction, (2) whether the incident was reported to law enforcement, (3) whether the incident was reported to a consumer protection agency, (4) negative social or emotional consequences associated with the incident, and (5) negative financial consequences of the incident.
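The sketch below illustrates this screener-to-incident routing; the field names are hypothetical and do not reflect the actual CAPI specification.

    # Hypothetical field names for the seven level 2 screener items.
    LEVEL2_TYPES = [
        "prize_or_grant", "phantom_debt_collection", "charity", "employment",
        "consumer_investment", "consumer_products_services", "relationship_trust",
    ]

    # Items asked on every incident form, regardless of fraud type.
    COMMON_ITEMS = [
        "money_lost", "reported_to_law_enforcement",
        "reported_to_consumer_protection_agency",
        "social_emotional_consequences", "negative_financial_consequences",
    ]

    def incident_forms(screener_responses):
        """Return one incident form per endorsed level 2 fraud type; each form
        pairs type-specific items with the items common to all incident forms."""
        return [
            {"fraud_type": t, "items": [f"{t}_details"] + COMMON_ITEMS}
            for t in LEVEL2_TYPES
            if screener_responses.get(t)
        ]

    # A respondent endorsing two screener items receives two incident forms.
    forms = incident_forms({"charity": True, "employment": True})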


3. Methods to Maximize Response Rates


Census Bureau staff mails an introductory letter (NCVS-572(L) or NCVS-573(L)) (see Attachment 3 and Attachment 4) explaining the NCVS to the household before the interviewer's visit or call. When they go to a house, the interviewers carry cards identifying them as Census Bureau employees. The Census Bureau trains interviewers to obtain respondent cooperation and instructs them to make repeated attempts to contact respondents and complete all interviews. SFS response rate reports will be generated on a monthly basis and compared to the previous month’s average to ensure their reasonableness.

As part of their job, interviewers are instructed to keep noninterviews, or nonresponse from a household or from persons within a household, to a minimum. Household nonresponse occurs when an interviewer finds an eligible household but obtains no interviews. Person nonresponse occurs when an interview is obtained from at least one household member but not from one or more other eligible persons in that household. Maintaining a high response rate depends on the interviewer’s ability to enlist cooperation from all kinds of people and to contact households when people are most likely to be home. As part of their initial training, interviewers learn ways to persuade respondents to participate and strategies for avoiding refusals. Furthermore, the office staff makes every effort to help interviewers maintain high participation by suggesting ways to obtain an interview and by making sure that sample units reported as noninterviews are in fact noninterviews. Survey procedures also permit sending a letter to a reluctant respondent as soon as a new refusal is reported by the interviewer, to encourage participation and to reiterate the importance of the survey and of the respondent’s cooperation.


In addition to the above procedures used to ensure high participation rates, the Census Bureau implemented additional performance measures for interviewers based on data quality standards. Interviewers are trained and assessed on administering the NCVS-1 and NCVS-2 exactly as worded to ensure the uniformity of data collection, completing interviews in an appropriate amount of time (not rushing through them), and keeping item nonresponse and “don’t know” responses to a minimum. The Census Bureau also uses quality control methods to ensure that accurate data are collected. Interviewers are continually monitored by each Regional Office to assess whether performance and response rate standards are being met, and corrective action is taken to assist or discipline interviewers who are not meeting the standards.

For the core NCVS, interviewers are able to obtain interviews with about 84% of household members in 78% of the occupied units in sample in a given month. Only household members age 18 or older who have completed the NCVS-1 will be eligible for the SFS.


We expect the total NCVS sample from October to December 2017 to be approximately 57,599 households yielding approximately 103,678 persons age 18 or older in NCVS interviewed households. Of these, we anticipate that 77%, or about 79,832 persons, will complete both the NCVS-1 and the SFS.
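A quick check of this arithmetic:

    households = 57_599          # expected NCVS households, October-December 2017
    persons_18_plus = 103_678    # expected persons age 18 or older in those households
    completion_rate = 0.77       # expected to complete both the NCVS-1 and the SFS

    print(round(persons_18_plus * completion_rate))  # ~79,832 SFS interviews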


Upon completion of the 2017 SFS, the Census Bureau will conduct complete analyses of nonresponse, including response rates, respondent and nonrespondent distributions, and nonresponse bias estimates for various subgroups. Should the analyses reveal evidence of nonresponse bias, BJS will work with the Census Bureau to assess the impact on estimates and ways to adjust the weights accordingly.
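One standard form of such an adjustment, shown here only as an illustrative sketch with hypothetical weighting classes and field names, is a weighting-class adjustment that inflates respondents' base weights by the inverse of their class's weighted response rate; the actual adjustment method will be determined by BJS and the Census Bureau.

    from collections import defaultdict

    def weighting_class_adjust(cases):
        """Inflate respondents' base weights within each weighting class by the
        inverse of the class's weighted response rate (illustrative field names)."""
        totals = defaultdict(lambda: [0.0, 0.0])  # class -> [total wt, respondent wt]
        for c in cases:
            totals[c["class"]][0] += c["base_weight"]
            if c["responded"]:
                totals[c["class"]][1] += c["base_weight"]
        return [
            {**c, "final_weight": c["base_weight"] * totals[c["class"]][0]
                                  / totals[c["class"]][1]}
            for c in cases if c["responded"]
        ]

    cases = [
        {"class": "age_18_34", "base_weight": 100.0, "responded": True},
        {"class": "age_18_34", "base_weight": 100.0, "responded": False},
        {"class": "age_35_plus", "base_weight": 120.0, "responded": True},
    ]
    adjusted = weighting_class_adjust(cases)  # nonrespondents' weight is redistributed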

4. Final Testing of Procedures


Using the taxonomy as the basis for instrument development, BJS, working in collaboration with the Financial Fraud Research Center (FFRC), a joint project of the Stanford Center on Longevity and the FINRA Investor Education Foundation, developed an instrument to measure the key categories and attributes of financial fraud. The resulting instrument was designed to measure the annual prevalence of seven level 2 types of financial fraud – consumer investment fraud, consumer products and services fraud, employment fraud, prize or grant fraud, phantom debt collection fraud, charity fraud, and relationship or trust fraud. The instrument was also designed to capture more detailed information about the fraud incident experienced most recently, including –

  • Information needed for coding detailed fraud types based on the taxonomy

  • Mode of initial contact

  • Method used for transferring funds

  • Monetary losses

  • Victim reporting behaviors


The FFRC used the instrument to conduct initial cognitive testing in 2015, and to administer the survey to approximately 2,000 web-based respondents in early 2016. The FFRC study found a much higher prevalence of fraud than anticipated based on prior research. However, the study also included a narrative option in the web-based survey that allowed respondents to provide a description of what happened to them. The narratives revealed that a large proportion of respondents who responded affirmatively to the screening questions about fraud did not in fact experience something that would rise to the level of criminal fraud.


In September 2016, BJS revised the SFS screener instrument to address the type I errors identified through the FFRC’s web survey. The revised 2017 SFS instrument underwent cognitive testing conducted by RTI International, under BJS’s National Crime Victimization Survey-Redesign Research (NCVS-RR) generic clearance program (OMB number 1121-0325), from September 2016 through April 2017. The cognitive testing focused on the screener and incident instruments. Its purpose was to (1) establish validity and finalize the screener questions, (2) fully test the instrument, and (3) examine whether the questions were well understood by the expanded target population of persons age 18 or older.


First, BJS obtained OMB approval to cognitively test the new version using crowdsourcing techniques. From October 2016 through February 2017, three iterative rounds of crowdsourcing (via Mechanical Turk or MTurk) were conducted with a total of 300 respondents. Round 1 was tested with 150 respondents, round 2 with 75 respondents, and round 3 with 75 respondents. The results of this crowdsourcing informed the screener versions included in this clearance.


The first version of the screener included questions on negative financial experiences not rising to the level of fraud, to allow respondents to report these experiences separately from the items used for fraud estimation (see Attachment 5). This version (the version 1 screener) was tested in round 1 with 150 respondents. The screener used a top-down approach to estimation: respondents were asked specific behavioral questions measuring the seven general types of financial fraud (level 2 in the taxonomy).d Respondents were also asked about negative financial experiences and identity theft victimization; these questions were included to determine whether respondents were excluding these experiences when asked the fraud questions. Respondents who experienced any of these behaviors were asked to provide a few sentences describing their situation. These narratives were helpful in determining whether a particular experience constituted fraud.


Findings from this round indicated that respondents often had negative financial experiences and that the distinctions in wording between the negative financial experience questions and the fraud questions were not clear enough (see Attachment 6). The narratives suggested high levels of false positive responses to the fraud items. In an attempt to reduce the false positives, follow-up questions were added after each screener item to further refine the measures based on whether the respondent was reimbursed for the losses by the person or company involved in the potential fraud.


Round 2 of crowdsourcing tested this revised screener (the version 2 screener) with 75 respondents (see Attachment 7). Respondents were administered the behavioral fraud questions included in round 1, along with follow-up questions asking whether they received any of their money back following the experience or were still in contact with the person or entity that took their money. These follow-up items are intended to capture the legal definition of fraud, which requires demonstrating an explicit intent to deceive for monetary gain. Follow-up questions were included only with the behavioral questions measuring fraud, not with the questions on negative financial experiences or identity theft victimization. As with round 1, the round 2 screener used a top-down approach to estimation: it would allow BJS to produce prevalence estimates for the fraud types at level 2 in the taxonomy, and summing all level 2 fraud types would yield a comprehensive estimate of financial fraud.


Overall, the round 2 screener performed well and it appeared that the follow-up questions narrowed the scope of the types of experiences that were considered fraud (see Attachment 8). However, in some instances the follow-up questions also appeared to screen out cases of fraud that should have been included.


Round 3 of crowdsourcing tested a different screener than round 2 (see Attachment 9). The round 3 screener (the version 3 screener) was tested with 75 respondents. Questions in this screener asked about more general experiences with fraud rather than specific types of fraud. Ultimately, the round 3 instrument would allow for estimates of certain level 3 types of fraud in the taxonomy, but an overall measure of personal financial fraud would be limited to the sum of the specific types being measured. This screener would not produce a comprehensive estimate like the round 2 screener.


To ensure that respondents report incidents that rise to the level of fraud, the follow-up questions measuring the legal criteria for fraud were again included in the round 3 screener. From a legal perspective, if the company returns the individual’s money, or if the individual never tried to get it back in the first place, it is not possible to demonstrate that the offender intended to defraud the victim. Correspondingly, the follow-up items ask whether the victim was reimbursed by the person or company and, if not, whether he or she tried to get the money back. These follow-up items eliminated the need for the questions about negative financial experiences.


Overall, the round 3 screener worked as well as the round 2 screener but showed some evidence that the follow-up items, while reducing type I error, could be introducing type II error (see Attachment 10). Additional in-person cognitive testing of this screener version will clarify the extent to which this is occurring.


After these three rounds of crowdsourcing were completed, a new version of the screener was developed as a hybrid of rounds 2 and 3 to maintain the focus on specific categories of fraud. This approach addressed the challenge of moving respondents from the screener to the crime incident report when the screener included negative financial experiences in addition to fraud victimization. This new version of the instrument is known as the version 4 screener. Because the version 4 screener asks about experiences with particular categories of fraud, it was possible to eliminate the follow-up items for certain types of fraud in which endorsing the screener item alone is sufficient for classifying an individual as a fraud victim. For instance, if a victim donated money to a charity and later found out that the charity never actually existed, the victim experienced charity fraud, and it is not necessary to ask whether he or she got or tried to get the money back.


The version 4 screener, along with the version 3 screener, and their corresponding incident report questionnaires were tested during a round of in-person cognitive interviews conducted by staff at RTI International. The purpose of crowdsourcing had been to test the screeners to see whether respondents were correctly reporting fraud; for the in-person interviewing, we tested both the screener and the body of the survey, called the incident form. The incident form was developed by BJS, with input from RTI, for the version 4 screener. RTI then adapted the skip logic and added the incident form to the version 3 screener. The incident forms for versions 3 and 4 are the same. See Attachment 11 for the version 3 protocol and Attachment 12 for the version 4 protocol.


Participants for these in-person cognitive interviews were recruited via ads posted on craigslist.com. The posted ad described the survey content and included a link to a screener survey – the version 4 screener with some additional questions for demographic and contact information. Interested volunteers completed the screener survey, and those who were eligible were called by an RTI recruiter to set up a date and time for the interview. Ads were posted on the corresponding Craigslist sites for three cities where RTI interviewers were located: Raleigh/Durham/Chapel Hill, NC; Charlotte, NC; and Portland, OR. In Raleigh/Durham/Chapel Hill and Portland, participants came into an RTI office whenever possible to complete the interview; when that was not possible, or for volunteers in Charlotte, interviews took place in a private room at a public library. The majority of interviews took place in the Raleigh/Durham/Chapel Hill area, where the most interviewers were located. RTI conducted 18 in-person cognitive interviews in March 2017.


In the cognitive testing results, version 4 performed better than version 3. Version 3 was designed to whittle down to level 3 types of fraud in the taxonomy through a few follow-up questions. Though the version 3 questions worked conceptually, in practice they created more confusion than clarity: one fraudulent incident can have aspects of multiple types of fraud, making it extremely challenging to categorize a specific type of fraud in only a few questions. Accordingly, after the cognitive testing was completed, BJS decided to move forward with version 4 of the instrument. Some respondents found this version repetitive because many screener questions are worded in a similar manner; however, they did not seem to have difficulty answering the questions. Version 4 covers level 2 types of fraud, which are not as specific as level 3 types, and summing all level 2 fraud types allows for a comprehensive estimate of financial fraud.


The majority of the recommendations from cognitive testing consisted of changes to the wording or structure of a question to remove any confusion. For instance, a clarification was included on the prize or grant fraud screener instructing respondents to think about non-monetary prizes such as iPads and trips. These changes were recommended to improve the quality of the data collected. The final report outlining these and other recommendations of the cognitive testing is included with this package as Attachment 13.

By September 2017, the Census Bureau will program the survey instrument into an automated computer-assisted personal interviewing (CAPI) instrument. Census Bureau staff, including instrument developers and project management staff, will conduct internal testing of the CAPI instrument.


Interviewers will be provided with an SFS self-study, which must be completed before initiating any interviews. Interviewer training is usually conducted a month prior to the first month of interviewing. This gives interviewers time to familiarize themselves with the survey content and any instrument functionality specific to conducting SFS interviews.


5. Contacts for Statistical Aspects and Data Collection


The Victimization Statistics Unit at BJS is responsible for the overall design and management of the activities described in this submission, including developing study protocols, sampling procedures, and questionnaires, and overseeing the conduct of the studies and the analysis of the data by contractors.


The Census Bureau is responsible for the collection of all data. Ms. Meagan Meuchel is the NCVS Survey Director and manages and coordinates the NCVS and SFS. BJS and Census Bureau staff responsible for the SFS include:

BJS Staff (all located at 810 7th Street NW, Washington, DC 20531):

  • Jeri Mulrow, Acting Director, Bureau of Justice Statistics, 202-514-9283

  • Lynn Langton, Ph.D., Chief, Victimization Statistics Unit, 202-353-3328

  • Rachel Morgan, Ph.D., Statistician, Victimization Statistics Unit, 202-616-1707

Census Bureau Staff (all located at 4600 Silver Hill Road, Suitland, MD 20746):

  • Meagan Meuchel, NCVS Survey Director, Associate Directorate for Demographic Programs – Survey Operations, 301-763-6593

  • Jill Harbison, NCVS Assistant Survey Director, Associate Directorate for Demographic Programs – Survey Operations, 301-763-4285

  • David Hornick, Lead Scientist, Demographic Statistical Methods Division, 301-763-4183


a Annual Estimates of the Resident Population: April 1, 2010 to July 1, 2015. Source: U.S. Census Bureau, Population Division. Release Dates: For the United States, regions, divisions, states, and Puerto Rico Commonwealth, December 2015. For counties, municipios, metropolitan statistical areas, micropolitan statistical areas, metropolitan divisions, and combined statistical areas, March 2016. For Cities and Towns (Incorporated Places and Minor Civil Divisions), May 2016.

b Anderson, K.B. (2013). Consumer fraud in the United States, 2011: The third FTC survey. Federal Trade Commission staff report. Retrieved from https://www.ftc.gov/sites/default/files/documents/reports/consumer-fraud-united-states-2011-third-ftc-survey/130419fraudsurvey_0.pdf.

c Beals, M., DeLiema, M., & Deevy, M. (2015). Framework for a Taxonomy of Fraud. Financial Fraud Research Center. Retrieved from http://fraudresearchcenter.org/2015/07/framework-for-a-taxonomy-of-fraud/.

d By using a top-down approach to estimating financial fraud, BJS can calculate a comprehensive, or summative, estimate of financial fraud and can also break down this comprehensive estimate into estimates for each of the seven general types of fraud included in the survey.


