Paperwork Reduction Act Supporting Statement for an
FTC Survey on Consumer Fraud
(Requested Reinstatement of OMB Control No. 3084-0125)
The Federal Trade Commission (“FTC” or “Commission”) proposes to conduct a third
survey about consumer experiences with consumer fraud. The Commission’s first consumer
fraud survey was conducted in 2003 and a report, “Consumer Fraud in the United States: An
FTC Survey,” was issued in August 2004.1 The second survey was conducted in November and
December of 2005. A report, “Consumer Fraud in the United States: The Second FTC Survey,”
detailing the results of the second survey was issued in October 2007.2 As with the two previous
surveys, the results of the proposed survey will be used by the FTC to estimate the incidence of
various types of fraud in the general population and among various demographic groups. The
survey will also help the FTC target its enforcement and consumer education efforts concerning
consumer fraud.
After the 2005 survey was conducted, at the FTC’s request, the Office of Management
and Budget (“OMB”) granted discontinuation of the prior clearance extended to this survey. The
FTC now requests, however, that OMB reinstate that clearance with changes, to conduct the
follow-up survey.
A. JUSTIFICATION

1. & 2. Necessity for Information Collection/How the Data Will Be Used

The FTC proposes to survey 3,500 consumers in order to gather specific information on
the incidence of consumer fraud in the general population. All information will be collected on a
voluntary basis and no identifying information about the participants will be collected. Subject
to OMB approval for the survey, the FTC has contracted with a consumer research firm to
identify consumers and conduct the survey. The results will assist the FTC in determining the
incidence of consumer fraud in the general population and whether the type and frequency of
consumer frauds are changing, and will inform the FTC about how best to combat consumer
fraud.
As was done in 2005, the proposed survey will oversample demographic groups that the
2005 and 2003 surveys found to be at an elevated risk of becoming victims of consumer fraud,
including Hispanics, African Americans, and Native Americans. The purpose of the
oversampling is to acquire information on what additional factors affect victimization within
those demographic groups, and which frauds they are most likely to experience.
The questions will be similar to those in the 2005 survey. In preparing the current survey
instrument, some questions from the 2005 survey were dropped and some were changed to add

1 The Report is available at http://www.ftc.gov/reports/consumerfraud/040805confraudrpt.pdf.
2 This report is available at http://www.ftc.gov/opa/2007/10/fraud.pdf.

clarity to the questions and, thereby, to the results. Moreover, some new questions have been
added to this survey to explore the prevalence of some frauds that were not included in the 2005
survey and to explore possible differences between consumers who are victimized and those who
are not.
When repeating a survey at different points in time, there is a trade-off between leaving
the questions unchanged, which facilitates comparisons between the results obtained from each
survey, and altering the topics covered to better reflect the current landscape, in this case, of
consumer fraud. Favoring the latter aim, and to better reflect fraudulent offers currently being
promoted to consumers, the Commission proposes to: (i) eliminate a fraud category and
associated questions that were included in the 2005 survey; and (ii) add questions about three
other types of fraud that were not included then. In addition, changes have been made to the
questions asked about several other types of fraud to improve how well the questions capture the
types of fraud in which the Commission is interested.3
3. Information Technology

Though use of electronic media to conduct the survey is theoretically possible, it would
be impractical here. One frequently used method of conducting a survey via electronic media
relies on the Internet and participants in a contractor's Internet panel. That approach, however, is
unsatisfactory for a survey such as the proposed fraud survey, where the goal of the project is to
estimate the prevalence of fraud victimization in the population as a whole.4
The Commission explored the feasibility of using an automated interactive technology –
either interactive voice recognition (“IVR”) or telephone dial entry (“TDR”) – to conduct the
survey. One reason to use IVR or TDR technology would be to reduce the cost of administering
the survey, rather than to reduce respondent burden. Also, there have been some suggestions
in the literature that consumers may provide more honest answers when the questions are asked
by an impersonal computer than when the questions are asked by a real person as in a traditional

3 The changes being made are discussed in more detail in response to A. 15 (“Program Changes or Adjustments”) below. As noted therein, the areas of overlap between the 2005 and currently proposed survey account for 80 percent of all incidents of the 14 specific types of frauds covered by the 2005 survey.
4 See, e.g., Dillman, Don A., Glenn Phelps, Robert Tortora, Karen Swift, Julie Kohrell, Jodi Berck, and Benjamin L. Messer, “Response Rate and Measurement Differences in Mixed-Mode Surveys Using Mail, Telephone, Interactive Voice Response (IVR) and the Internet,” Social Science Research, 38 (March 2009), pp. 1-18, at p. 2 (“Internet access in the US has been increasing with about 67% of American adults (18 and older) having access to the Internet from home in March 2007, but this coverage is not sufficient for general public surveys.”)

telephone interview.5 Studies, however, have also found that response rates are lower when
automated systems, rather than people, are used to administer surveys.6 Ultimately, the
Commission concluded that the benefits of using IVR or TDR technology to administer the fraud
survey were more than offset by the reduced response rates that would likely result from using
these technologies.
While actual persons will be used to conduct the interviews, the contractor will use
CATI (Computer Assisted Telephone Interviewing) technology to assist the process.
4. Efforts to Identify Duplication/Availability of Similar Information

There is no current information available elsewhere that can be used to explore and
compare consumers’ fraud experiences, other than the FTC’s 2003 and 2005 surveys. The 2011
survey is designed to improve and expand upon the information gained through the earlier
surveys. Efforts to identify duplicate sources of information included a review of studies, data,
hearing transcripts, news articles, and information found through contacts with consumer groups,
governmental agencies, and academic researchers.7
5. Efforts to Minimize Small Organization Burden

Not applicable. Only individual consumers will be interviewed.

5 See, e.g., Kreuter, Frauke, Stanley Presser, and Roger Tourangeau, “Social Desirability Bias in CATI, IVR, and Web Surveys,” Public Opinion Quarterly, 72 (Special Issue 2008), pp. 847-865.
6 See, e.g., Kreuter, Presser, and Tourangeau; Dillman, Don A., Glenn Phelps, Robert Tortora, Karen Swift, Julie Kohrell, Jodi Berck, and Benjamin L. Messer, “Response Rate and Measurement Differences in Mixed-Mode Surveys Using Mail, Telephone, Interactive Voice Response (IVR) and the Internet,” Social Science Research, 38 (March 2009), pp. 1-18; and Tourangeau, Roger, Darby Miller Steiger, and David Wilson, “Self-Administered Questions by Telephone,” Public Opinion Quarterly, 66 (Summer 2002), pp. 265-278.
7 The Commission is familiar with recent work on consumer fraud published by the AARP. See Karla Pak and Doug Shadel, “AARP Foundation National Fraud Victim Study,” AARP Foundation, March 2011. While this research studies consumer fraud, it does not seek to measure the prevalence of fraud, as the FTC work does. Rather, the AARP work seeks to explore possible differences between those who become victims of different types of frauds and the general population.

6. Consequences to Federal Program and Policy Activities/Obstacles to Reducing Burden

If this information is not collected, the FTC will lack information to address important
issues and to more effectively target future law enforcement and consumer education actions.
The survey scope and burden have been reduced as much as possible short of sacrificing the
statistical value of the information to be collected.
7. Circumstances Requiring Collection Inconsistent with Guidelines

The collection of information in the proposed survey is consistent with all applicable
guidelines contained in 5 C.F.R. § 1320.5(d)(2).
8. Public Comments/Consultation Outside the Agency

a. Public Comments

As required by 5 C.F.R. § 1320.8(d), the FTC published a notice seeking public comment
on the proposed collections of information. See 75 Fed. Reg. 53,697 (September 1, 2010). No
comments were received. Pursuant to the OMB regulations that implement the PRA (5 C.F.R. Part
1320), the FTC is providing a second opportunity for public comment while seeking OMB
approval for the study.
b. Consultation Outside the Agency

The design of the 2011 survey, as well as the instruments used in the earlier surveys,
was reviewed by the FTC’s survey consultant, Manoj Hastak, Ph.D., Professor of Marketing at
American University’s Kogod College of Business Administration in Washington, DC.
Moreover, the methodology of the 2005 survey, which forms the basis for the 2011
survey, has been well received by academia and government agencies. For example, the results
of the 2005 survey were presented at a meeting on “Elder Mistreatment and Abuse and Financial
Fraud,” held at the National Academy of Sciences in June of 2010, where it was favorably
reviewed by Professor Roger Tourangeau, Director of the Joint Program in Survey Methodology,
at the University of Maryland. The 2003 survey was cited in a commentary in the Fall 2004
issue of the University of Maryland Law Journal of Race, Religion, Gender & Class. Also, the
study was a reference for research by the Canadian Competition Bureau and it was mentioned in
a British report for the Organisation for Economic Co-operation and Development. The 2003
survey was also reviewed by officials with the U.S. Department of Justice and the AARP.
9. Payments or Gifts to Respondents

Not applicable.


10. & 11. Assurances of Confidentiality/Matters of a Sensitive Nature

Survey participants will not be asked to provide any personally identifying information.
The Commission will not receive any information about the identity of individual respondents.
12. Estimated Annual Hours Burden

The FTC will pretest the survey on approximately 100 respondents to ensure that all
questions are easily understood. This pretest will take approximately 17 minutes per person on
average8 and approximately 28 hours as a whole (100 respondents x 17 minutes each). Answering the final
survey will require approximately 15 minutes per respondent on average and 875 hours as a
whole (3,500 respondents x 15 minutes each). Thus, cumulative total burden hours for the first
year of the clearance will approximate 903 hours.
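For illustration only, the following Python sketch reproduces the burden-hour arithmetic above; it is not part of the survey materials, and the figures are simply those stated in this item.

    # Hypothetical check of the burden-hour arithmetic in Item 12.
    pretest_respondents, pretest_minutes = 100, 17
    survey_respondents, survey_minutes = 3_500, 15

    pretest_hours = pretest_respondents * pretest_minutes / 60   # about 28.3 hours
    survey_hours = survey_respondents * survey_minutes / 60      # 875 hours
    total_hours = pretest_hours + survey_hours                   # about 903 hours

    print(round(pretest_hours), round(survey_hours), round(total_hours))  # 28 875 903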
13. Estimated Annual Cost Burden

The cost per respondent should be negligible. Participation is voluntary, and will not
require any labor expenditures by respondents. There are no capital, start-up, operation,
maintenance, or other similar costs to the respondents.
14. Estimated Cost to the Federal Government

The total cost to the Federal government for the information collection will be
approximately $217,500. Obtaining a contractor to review the survey questionnaire, identify the
consumers, conduct the surveys, and provide the resulting data to Commission staff will cost
$199,000. In addition, the Commission staff time necessary to identify a contractor and to assist
the contractor in completing its duties is estimated to require approximately 300 staff hours and
cost approximately $18,500. The cost of Commission staff time is necessarily an estimate
because several factors in this calculation may vary, including the number of staff involved and
the actual amount of time required. Clerical and other support services and costs of conducting
the study are included in this estimate.
15. Program Changes or Adjustments

As noted in the conclusion to A. 1 & 2 above, the Commission faces a choice between
leaving the survey questions unchanged, so that the results of the proposed survey can be
compared with corresponding results from the earlier surveys, and altering the questions and/or
topics to better reflect the current landscape of frauds. The Commission proposes to eliminate a
fraud category and associated questions that were included in the 2005 survey in order to

8 Staff originally estimated 15 minutes to complete the pretest, the same time as that needed for the actual survey. The revised estimate takes into account the additional time presumably required to respond to questions unique to the pretest itself.

accommodate other questions about several types of fraud that were not included in that prior
survey.9
Questions have been added to learn about the extent to which consumers are being victimized by
several types of fraud not included in the 2005 survey. These include: (i) mortgage foreclosure
rescue frauds; (ii) false representations that a person can help a consumer obtain a free
government grant; and (iii) situations where a consumer is sent a check that turns out to be
counterfeit, with instructions to cash the check and forward some of the proceeds to another
person. In recent years, the Commission has brought numerous law enforcement actions against
persons who are operating the first two types of fraud. Further, a large number of complaints
about each of these types of frauds have been received in the Commission’s Consumer Sentinel
database of consumer complaints.10
The 2005 survey included questions about instances in which a consumer had been billed
for adult information services – such as adult entertainment, gambling or psychic services –
provided either over the Internet or by pay-per-call telephone service that the consumer had not
agreed to purchase. None of the participants in the 2005 survey, however, reported encountering
this problem. Additionally, the Commission’s enforcement experience suggests that this
problem has been largely, if not totally, eliminated. Thus, it does not appear necessary to
maintain the questions about this problem in the current proposed survey.
Beyond adding new fraud categories and deleting one of the prior ones from the 2005
survey, questions covering two other fraud categories from the 2005 survey have been revised:
(i) debt relief services; and (ii) prize promotions and lotteries.
The Commission has devoted significant resources to problems in the debt relief area in
the past few years, including amending the Commission’s Telemarketing Sales Rule to prohibit
debt relief firms from collecting fees before they provide the relief they promise. While some
questions about debt relief scams were included in the 2005 survey,11 the increase in knowledge
about this sector made it clear that the questions needed to be improved.

9 The “accommodation” is a recognition of both practical time limits (obtaining full respondent cooperation) and contractual ones (regarding provisional terms with the contractor pending OMB’s grant of clearance).
10 A total of 16,584 complaints received in the Consumer Sentinel database in 2010 were classified in the category “Mortgage Modification/Foreclosure Relief.” This is 1.24 percent of all complaints received into Consumer Sentinel during 2010. Complaints in the category “Grants: Non-Educational” totaled 3,719 during 2010. A total of 28,649 complaints received during 2010, 2.14 percent of all complaints, were classified as “Counterfeit Check Scams.” Federal Trade Commission, “Consumer Sentinel Network Data Book for January - December 2010,” March 2011, Appendix B-3.
11 Questions about debt relief frauds were not included in the 2003 survey.

The 2005 questions about lotteries and prize promotions asked about the possible use of
counterfeit checks only in connection with foreign lotteries. It is now clear, however, that such
fraudulent checks may also play a role in other prize promotions. The questions about prize
promotions have therefore been altered to capture this possibility.
Dropping one type of fraud and changing the questions on others will, of course, affect
the degree to which the Commission can compare the results of the current survey to those from
the 2005 survey. It still will be possible to draw comparisons, however, for 10 of the 14 specific
frauds included in the 2005 survey, including the three that were responsible for the greatest
number of incidents of fraud among all fourteen types.12 These ten specific frauds accounted for
80 percent of all incidents of the 14 specific frauds in the 2005 survey. In the Commission’s
view, the loss of comparability that results from the changes proposed is an acceptable trade-off
to enable us to obtain a more accurate picture of the types of fraud being promoted to consumers.
Besides changes in the types of frauds covered by the survey and the questions used to
identify them, we propose adding to the survey certain questions to help identify characteristics
that might be associated with an increased or decreased likelihood of consumers falling victim to
fraud. One such set of questions seeks to measure participants’ numerical ability. There are five
questions in this set and they have been used in several other studies. A paper by Banks and
Oldfield looking at assets held by older adults in England found that those who performed better
on these numerical ability questions tended to have greater financial wealth and were more likely
to hold some of these assets in more complex financial instruments, such as stocks.13 Another
study found that people with greater numerical ability were less likely to have run into
difficulties with a mortgage.14 A second series of questions asks participants to rate
themselves on such characteristics as being a planner, being impulsive, being self-controlled, and

12 The 2005 survey included questions about 14 specific types of frauds. It also contained questions about two more general types of problems: (i) whether a person had paid for a product or service not specifically covered in the survey but not received it; and (ii) whether a person had been billed for an item not specifically covered in the survey that s/he had not agreed to purchase.
Beyond the large changes discussed here, minor changes have been made in some cases to the wording of questions in order to increase the clarity of the questions. While the Commission does not believe that these changes make comparisons with the results of the 2005 survey inappropriate, the fact that changes have been made will be noted when the results are reported.
13 Banks, James, and Zoe Oldfield, “Understanding Pensions: Cognitive Function, Numerical Ability and Retirement Savings,” Fiscal Studies, 28 (June 2007), pp. 143-170.
14 Gerardi, Kristopher, Lorenz Goette, and Stephan Meier, “Financial Literacy and Subprime Mortgage Delinquency: Evidence from a Survey Matched to Administrative Data,” Federal Reserve Bank of Atlanta, Working Paper 2010-10, April 2010.

enjoying spending money. Questions similar to these have been used in several studies
including papers by Puri, by Bertrand and Morse, and by Stilley, Inman, and Wakefield.15
Additional proposed questions ask participants to rate themselves in terms of how willing they
are to take risks and how patient they are. Finally, following some of the research done by
AARP, we also propose to add a question asking participants whether they have experienced a
serious negative life event in the last two years.16
16. Plans for Tabulation and Publication

The results of the surveys will be used to inform the FTC about the incidence of
consumer fraud in the general population. The summary of findings will be released to the
public to inform the efforts of other consumer protection agencies, including other federal
agencies. Collection of the information would begin after OMB completes its review of this
clearance request and grants approval.
17. Display of Expiration Date for OMB Approval

Not applicable.
18. Exceptions to Certification

Not applicable.

15 Puri, Radhika, “Measuring and Modifying Consumer Impulsiveness: A Cost-Benefit Accessibility Framework,” Journal of Consumer Psychology, 5 (1996), pp. 87-113; Bertrand, Marianne, and Adair Morse, “Information Disclosure, Cognitive Biases, and Payday Borrowing,” The University of Chicago Booth School of Business, Working Paper 10-01, October 2009; and Stilley, Karen M., J. Jeffrey Inman, and Kirk L. Wakefield, “Planning to Make Unplanned Purchases? The Role of In-Store Slack in Budget Deviation,” Journal of Consumer Research, 37 (August 2010), pp. 264-278.
16 See Karla Pak and Doug Shadel, “AARP Foundation National Fraud Victim Study,” AARP Foundation, March 2011. While Pak and Shadel ask a series of questions about different negative life events, limitations on the length of the survey necessitated limiting this to a single question in the FTC survey.

B. COLLECTION OF INFORMATION EMPLOYING STATISTICAL METHODOLOGY

1. Description of Sampling Methodology

The potential respondent universe for this survey is all U.S. adults age 18 and over. As
was done in the 2005 survey, the survey will oversample demographic groups that the earlier
surveys found to be at an elevated risk of becoming victims of consumer fraud, including
Hispanics, African Americans, and American Indians.
The sample frame will consist of all residential telephone numbers – both cell and
landline – in the United States. Numbers will be generated by first selecting a block of
telephone numbers – a set of 100 numbers with the same first eight digits – that has at least one
listed residential number. Within a block, the specific number to be called will be generated by
randomly selecting the last two digits of the number.
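For illustration only, the following Python sketch shows one way a number could be drawn from a 100-number block by randomly selecting its last two digits; the function name and the example prefix are hypothetical, and the contractor's actual generation procedure may differ.

    import random

    def draw_number_from_block(block_prefix: str) -> str:
        """Append two random digits to an eight-digit block prefix
        (area code + exchange + first two digits of the line number)
        to form a ten-digit number to dial."""
        assert len(block_prefix) == 8 and block_prefix.isdigit()
        last_two = f"{random.randrange(100):02d}"  # 00 through 99
        return block_prefix + last_two

    # Example: a block covering numbers 202-555-12XX (hypothetical prefix).
    print(draw_number_from_block("20255512"))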
Blocks of numbers that include landline phones will first be divided into blocks that
cover areas where a high percentage of members of one of four racial or ethnic minority groups
– American Indians or Alaska Natives, Blacks or African Americans, Hispanics or Latinos, and
Asians – reside and those where the percentage of minorities who reside there is lower. Blocks
that include a high percentage of minorities will be assigned to separate strata for each of the
four identified minority groups. Within each of these groups, blocks will be further stratified
between those that cover urban locations and those that are non-urban. Blocks that do not have a
high percentage of minorities will be assigned to one of eight strata depending on their
geographic location and whether they are located in an urban or non-urban area.
The expected number of interviews to be completed in each stratum is shown in Table 1.
When calls are made to landline phones, the specific individual to be interviewed will be
randomly selected from among the individuals who reside in the household. This will be done
by asking to speak with the household member who most recently had a birthday.
Because it is presumed that cell phones are used by only one individual, no such
most-recent-birthday adjustment is needed on calls made to cell phones. However, individuals who
answer calls made to cell-only numbers will be screened to determine whether they live in a
household that has landline telephone service. Only those who do not have landline service will
be included in the cell-only strata.
Sampling will be done on a stratified basis with the strata being defined by the
characteristics of the phone line being called. Specifically, each block of telephone numbers will
be assigned to one of the 20 strata identified in Table 1. First, blocks that are assigned solely for
cell phone service will be separated from those used for landline service. Blocks used for cell
service will be assigned to one of four strata depending on the geographic location of the area
code of the number.17
Table 1. Planned Stratification of Sample

Description of Stratum                                                           Number of Interviews

Calls to Landline Phones (Those with Only Landline Phones or with Both Cell and Landline)

  Area Codes and Exchanges with a High Proportion of American Indians or Alaska Natives
    Urban Locations                                                                          38
    Non-urban Locations                                                                     252

  Area Codes and Exchanges with a High Proportion of Asians
    Urban Locations                                                                         140
    Non-urban Locations                                                                      35

  Area Codes and Exchanges with a High Proportion of Blacks or African Americans
    Urban Locations                                                                         567
    Non-urban Locations                                                                     123

  Area Codes and Exchanges with a High Proportion of People of Hispanic or Latino Origin
    Urban Locations                                                                         561
    Non-urban Locations                                                                      89

  Area Codes and Exchanges Without a High Proportion of Any Minority Group
    Northeast
      Urban Locations                                                                       172
      Non-urban Locations                                                                    70
    Midwest
      Urban Locations                                                                       172
      Non-urban Locations                                                                   118
    South
      Urban Locations                                                                       183
      Non-urban Locations                                                                   174
    West
      Urban Locations                                                                       154
      Non-urban Locations                                                                    65

  Total, Landline Only or Both Landline and Cell                                          2,913

Calls to Those with Cell Phones Only
  Northeast                                                                                  67
  Midwest                                                                                   140
  South                                                                                     257
  West                                                                                      122

  Total, Cell Phone Only                                                                    586

17 Additional stratification of cell-only numbers is not feasible because of limitations on the data on the number of cell-only households.

2. Description of the Information Collection Procedures

The FTC has, subject to OMB approval, contracted with a survey firm to survey 3,500
consumers. The contractor will conduct a telephone survey of a random sample of adult
respondents drawn from all 50 states and the District of Columbia. Random-digit dialing,
including an initial call plus up to seven callbacks, will be employed. Respondents who state a
preference for Spanish will be interviewed in Spanish.


The 2011 survey will involve slightly fewer interviews than were conducted during the
2005 survey – 3,500 interviews rather than 3,888. As a result, the estimates from the new survey
should be only slightly less precise than those from the 2005 survey.18
3. Methods to Maximize Response Rates/Reliability of Sample Data

Several steps have been taken to maximize the survey response rate.19 First, in drafting
the survey questionnaire, staff has taken length into account. This should limit the
number of respondents who drop out before completing the questionnaire.
Second, in conducting the survey, anyone who initially refuses to participate will be recontacted after approximately one week to see if their cooperation can be obtained by a second
effort. These second calls will be made by experienced interviewers who have shown strong
performance and low refusal rates.20
Third, for each number that is called, up to seven attempts will be made to reach
someone there. These call-back attempts will be staggered over different times of the day and
over different days of the week, sometimes calling on a weekday evening, sometimes during the
day on a weekday, and sometimes during a weekend. In addition to increasing the response rate,
making multiple attempts to reach someone at each number should increase the
representativeness of the resulting sample if members of certain demographic groups are more
difficult to reach.
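A minimal sketch of one way the staggering described above could be scheduled; the day-part labels and the round-robin rule are hypothetical and are not the contractor's actual calling rules.

    from itertools import cycle

    # Rotate each successive attempt for a number through different day-parts,
    # up to the seven attempts described above.
    DAY_PARTS = ["weekday evening", "weekday daytime", "weekend"]

    def schedule_attempts(max_attempts: int = 7) -> list[str]:
        parts = cycle(DAY_PARTS)
        return [next(parts) for _ in range(max_attempts)]

    print(schedule_attempts())
    # ['weekday evening', 'weekday daytime', 'weekend', 'weekday evening', ...]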
Besides taking steps to maximize the response rate, data will be collected to permit an
exploration of whether those who are more difficult to reach and those who refuse to participate
in the study differ significantly from those who do participate. A sample of 100 of those
who refuse to participate even after being contacted a second time will be asked to answer a
small number of questions – primarily demographics. The responses of these people will then be
compared to the characteristics of those who complete the survey to see if there are any
differences between the two groups.
Further, for each telephone number where an interview is completed, data on how many
calls were needed before the interview was completed will be collected. Using these data, it will
be possible to examine whether those who are more difficult to reach have different
characteristics – and different experiences – than those who were reached more easily.
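A minimal sketch of the kind of comparison described above, assuming the refuser follow-up sample and the completed interviews are tabulated in the same demographic categories; the categories, counts, and use of the SciPy library are illustrative assumptions, not part of the study plan.

    from scipy.stats import chi2_contingency

    # Hypothetical counts by age group (18-34, 35-49, 50-64, 65+) for the
    # 100 sampled refusers and for survey completers; a chi-square test
    # indicates whether the two distributions differ by more than chance.
    refusers   = [22, 31, 28, 19]
    completers = [780, 910, 1020, 790]

    chi2, p_value, dof, expected = chi2_contingency([refusers, completers])
    print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")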

18 Reducing the sample size from 3,888 to 3,500 should result in an increase in the standard error of approximately 5.4 percent – the square root of the ratio 3,888/3,500, minus 1.
19 The response rate in the 2005 survey was 0.23 using Response Rate 3, as defined by the American Association for Public Opinion Research.
20 Those who provide a “hard” refusal when first contacted – e.g., those who ask the caller to take them off of their list or not to call again – will not be contacted again.

As noted above, the 2011 survey will involve interviews with 3,500 individuals, a slight
decrease from the 3,888 interviews conducted in the 2005 survey. Because of the decrease in the
number of interviews, there will be a slight decline in the precision of the resulting estimates.
However, this decline will be very small. The 2005 survey found that 9.4 percent of all U.S.
adults who were at least 18 years of age had been a victim of one or more of the specific frauds
covered by the survey in the previous year. The standard error of this estimate was 0.54
percentage points.
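To illustrate how the smaller sample affects precision, the sketch below simply rescales the 2005 standard error by the square root of the ratio of sample sizes; holding the design effect constant is an assumption made only for this illustration.

    import math

    se_2005 = 0.54            # standard error, in percentage points, from the 2005 survey
    n_2005, n_2011 = 3_888, 3_500

    scale = math.sqrt(n_2005 / n_2011)   # about 1.054, i.e., a 5.4 percent increase
    se_2011_approx = se_2005 * scale     # roughly 0.57 percentage points

    print(f"{(scale - 1) * 100:.1f}% increase; approximate SE = {se_2011_approx:.2f} pp")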
4. Testing of Procedures or Methods Undertaken

Staff will pretest the survey on a sample of approximately 100 respondents to ensure that all questions are
easily understood. This pretest is also discussed in Part A above, and is part of the collection of
information for which the FTC seeks OMB approval.
5. Individuals Consulted on Statistical Aspects of the Surveys

The study design has been prepared by Keith Anderson, Senior Economist, Bureau of
Economics (202-326-3428). It has been reviewed internally by Erez Yoelli, Economist, Bureau
of Economics (202-326-2418) and Pauline Ippolito, Deputy Director, Bureau of Economics
(202-326-3477). The contractor (Synovate, Contact: Tim Amsbury, 703-663-7290) is
experienced in conducting statistically rigorous telephone surveys.
As noted in A. 8 b. above (“Consultation Outside the Agency”), the design of the 2011
survey was reviewed by the FTC’s survey consultant, Manoj Hastak, Ph.D., Professor of
Marketing at American University’s Kogod College of Business Administration in Washington,
DC. Moreover, the methodology of the 2005 survey, which forms the basis for the 2011 survey,
has been well received by academia and government agencies. For example, the results of the
2005 survey were presented at a meeting on Elder Mistreatment and Abuse and Financial Fraud,
held at the National Academy of Sciences in June of 2010, where it was favorably reviewed by
Professor Roger Tourangeau, Director of the Joint Program in Survey Methodology, at the
University of Maryland.
