Review of Federal Survey Program Experiences with Incentives
Nhien To
Bureau of Labor Statistics
July 23, 2015
In 2009, the Bureau of Labor Statistics (BLS) Consumer Expenditure Survey (CE) initiated the multi-year
Gemini Project for the purpose of researching, developing, and implementing an improved survey
design. One area of research included in the project is the use of incentives. This review of incentive
structures is intended to provide administrative and experiential input and background information for
consideration in the design of the FY 2016 Incentives Field Test and the FY 2019 Large-Scale Feasibility
Test and redesign. The purpose of this review was to summarize the information gathered, not to
recommend or propose a test design.
The report addresses the following research questions:
1. What has been the experience of other federal surveys using incentives, including major
findings from prior CE incentive studies?
2. What OMB requirements or paperwork were requested of other surveys leading up to
their introduction of incentives?
The Federal Surveys that have had experience with incentives and are included in this review are listed
below:
1. Medical Expenditure Panel Survey (MEPS) *
2. National Adult Training and Education Survey (ATES) ^
3. National Health and Nutrition Examination Survey (NHANES) **
4. National Household Food Acquisition and Purchase Survey (FoodAPS) *
5. National Survey on Drug Use and Health (NSDUH) **
6. National Survey of Family Growth (NSFG) **
7. Survey of Consumer Finances (SCF) *
8. Survey of Income and Program Participation (SIPP) *
9. Consumer Expenditure Survey (CE) ^
For each survey, the following information is provided:
A. Sponsor
B. Data Collection Vendor
C. Resource(s)
D. Summary of Incentive Experience
E. Summary of OMB Experience in Incorporating Incentives
The Summary of Incentive Experience (D)[1] addresses the first research question. It summarizes the
experiences of other federal surveys using incentives, both during testing and after testing in the
field, e.g., did the cost and response rate impacts observed in testing hold up in the field? There is
concern that benefits seen in tests (whether in terms of response rates, data quality, or cost
savings) sometimes do not carry forward into production, either because of "interviewer effects" during
the testing (e.g., interviewers are temporarily motivated or energized by the intervention) or for some
other reason, such as operational complications. In addition to other federal surveys, prior incentives
research conducted within CE was reviewed. See Appendix A for a summary table of incentive amounts
and sample sizes.
The Summary of OMB Experience in Incorporating Incentives (E)[2] addresses the second research
question. It summarizes the OMB experience of other federal surveys using incentives. The information
in this summary was obtained through correspondence and telephone conversations with
representatives of those surveys. Among the federal surveys reviewed for this project, all
currently use incentives in production except ATES and CE (denoted with a caret (^)). The surveys
marked with an asterisk (*) are those for which OMB information was obtainable by the time this
summary was written; a double asterisk (**) denotes surveys for which OMB information was not
obtainable. In several cases, contact was made with representatives from the survey, but detailed
information on the OMB experience was not available.
Based on the information obtained from the representatives of the three surveys that currently use
incentives in production with OMB approval, the process varies from survey to survey. However, three
major themes held true for all of them:

1. Incentive use was approved with burden serving as the primary justification.
2. OMB was involved early on in the planning and implementation of incentive use.
3. No additional materials were needed outside of the regular OMB submission.

[1] This summary was intended for research purposes and internal use; therefore, much of the substantive
portion of the summaries (text and tables) includes verbatim narratives from the survey documentation and
reports cited in each section under "C. Resources."
[2] This summary was intended for research purposes and internal use; therefore, much of the substantive
portion of the summaries includes verbatim responses from the survey representatives. Contact names can
be provided upon request.
1. Medical Expenditure Panel Survey (MEPS)
A. Sponsor: Agency for Healthcare Research and Quality (AHRQ), an agency of the U.S. Public
Health Service in the U.S. Department of Health and Human Services (DHHS)
B. Data Collection Vendor: Westat, Inc.
C. Resources:
Respondent Payment Experiment with MEPS Panel 13 (October 13, 2010)
http://meps.ahrq.gov/data_files/publications/rpe_report/rpe_report_2010.shtml
The Utility of the Integrated Design of the Medical Expenditure Panel Survey (MEPS) to
Inform Trends in Nonresponse. Frances M. Chevarley, Ph.D. and Karen E. Davis, M.A.
Proceedings of the 2013 FCSM Research Conference
D. Summary of Incentive Experience:
From 2007 to 2010, the Medical Expenditure Panel Survey (MEPS) included a monetary gift of
$30 per household per interview in its data collection procedures for the household
component of the survey (prior to that, a respondent payment of $25 was offered).
At the request of the Office of Management and Budget, MEPS conducted an incentive
experiment with Panel 13, which began in 2008, covering all five rounds of MEPS data collection. The
experiment included three different respondent payment amounts: $30, $50 and $70.
Each sampled household was randomly assigned one of the three different levels of
payment. In all cases, the field interviewer provided the respondent payment at the
completion of the interview in each round. The full panel sample of 9,939 households was
included in the experiment. All households in the Panel 13 sample were randomly assigned
to one of three respondent payment groups. The $30 group served as the control group.
Assignments to groups were made at the NHIS segment level to help reduce, but not
eliminate, the risk that neighboring sample households in the same MEPS panel would
receive different amounts.
The results across the full sample showed that the composite response rate across all five
rounds of data collection was higher for both the $50 and $70 respondent payment groups
relative to the $30 group. In addition, the difference in response rates between the $70 and
$50 respondent payment groups was also statistically significant, with a composite response
rate of 71.13 percent and 66.74 percent for the $70 and $50 groups, respectively. When
looking at the cost per case, the $20 increase in the respondent payment, or the $50 gift to
respondents, ultimately (after Round 5) resulted in almost no estimated increase in cost per
case. However, the estimated cost per case increased more dramatically with the $70 gift to
respondents. Much of the increase in costs in the $70 group reflects the positive aspect of
the $70 payment group – there are simply more cooperating households in the $70 group.
Because of the favorable results of the $50 group, OMB approved an increase in the
incentive from the $30 that MEPS had been using to $50 starting with the Panel that began
in 2011.
The questions that researchers then asked were: Did nonresponse rates change from 2010
to 2011, overall and for subgroups? And did the significance of predictive variables for
nonresponse versus a reference group change from 2010 to 2011? They found that, with the
increased incentives in 2011, nonresponse rates showed a relative decrease of 17.4
percent.
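As a minimal arithmetic sketch of these two comparisons (the composite response rates and the 17.4 percent figure come from the report; the group sizes and the 2010 nonresponse rate are illustrative assumptions, since the excerpt does not give them):

```python
# Minimal sketch of the two calculations discussed above. Group sizes are
# assumptions: the report gives the full Panel 13 sample (9,939 households)
# but not the exact size of each payment group, so roughly equal thirds
# are assumed here.
from statistics import NormalDist

def relative_percent_decrease(old_rate: float, new_rate: float) -> float:
    """Relative percent decrease, e.g., in nonresponse from 2010 to 2011."""
    return (old_rate - new_rate) / old_rate * 100.0

def two_proportion_p(p1: float, n1: int, p2: float, n2: int) -> float:
    """Two-sided p-value for the difference between two proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = (pooled * (1 - pooled) * (1 / n1 + 1 / n2)) ** 0.5
    z = (p1 - p2) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Composite response rates for the $70 vs. $50 groups (from the report),
# with assumed group sizes of about 9,939 / 3 households each.
print(two_proportion_p(0.7113, 3313, 0.6674, 3313))  # ~0.0001, significant

# A 17.4 percent relative decrease in nonresponse: e.g., a hypothetical
# nonresponse rate of 0.29 in 2010 falling to about 0.24 in 2011.
print(relative_percent_decrease(0.29, 0.2396))  # ~17.4
```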
E. Summary of OMB Experience in Incorporating Incentives:
The very first version of MEPS was fielded in 1977, with another fielding in 1987, but as an
annual survey the series began in 1996, offering incentives from the start. The 1987
survey also used incentives; the representative from MEPS was not sure whether an incentive
was included in 1977.
When asked whether there was a need to make an argument for using incentives, the
representative said that at the time there was no discussion. It was just accepted that MEPS
was demanding – a 90-minute interview conducted five times, plus record keeping between interviews.
No additional paperwork or materials needed to be submitted beyond the
standard OMB submission procedure when incentives were first introduced with the survey.
The amount and justification were included in the regular OMB submission. The
representative, who also serves as AHRQ's clearance officer, said that NHIS is in a similar
position in considering incentives and that, right now, OMB is not particularly interested in
expanding incentive use.
2. National Adult Training and Education Survey (ATES)
A. Sponsor: National Center for Education Statistics (NCES), U.S. Department of Education
B. Data Collection Vendor: Westat, Inc.
C. Resources:
The Adult Training and Education Survey (ATES) Pilot Study Technical Report (April 2013)
http://nces.ed.gov/pubs2013/2013190.pdf
D. Summary of Incentive Experience: The ATES Pilot Study contained an experiment to
evaluate the relative effectiveness of three levels of promised incentives together with the
effects of having informed the respondent in the screener mailing of the potential incentive
for completion of the extended interview and having provided a first-stage prepaid
incentive. Households were randomly assigned to $0, $10, or $20 incentive treatment
groups (approximately 20 percent to the $0 group, 40 percent to the $10 group, and 40
percent to the $20 group); these incentives were to be issued if the sampled adult
completed the extended interview. All adults assigned to the $10 or $20 group who
completed the extended survey were sent a check for the incentive amount.
In order to provide information about the effect of the level of incentive and the effect of
notification, these two conditions were tested experimentally. In about 60 percent of the
screener mailings (75 percent of the 80 percent of households assigned to the $10 or $20
incentive groups), the letter enclosed in the mailing notified the household that if an adult in
the household was selected for the extended telephone survey, the adult would be offered
a specified amount to complete the survey. Half of these respondents were notified of a $10
incentive, and half were notified of a $20 incentive. Respondents were reminded of the
incentive at the start of the telephone interview. In the remaining 25 percent of the
households assigned to the $10 or $20 group, the screener letter contained no mention of
the incentive for completing the extended interview, and respondents were notified of the
incentive amount only when they were contacted for the extended interview.
Although promised incentives have been shown in random-digit-dial (RDD) surveys to be
less effective than prepaid incentives (Berk et al. 1987; Church 1993), their relative
effectiveness in two-phase surveys such as the ATES Pilot Study is unknown. Because the
household will have already received an incentive in the initial screener mailing, and a
relationship with the household will have already been established, it is possible that
the relative effectiveness of the promised incentive compared to a prepaid incentive may be different
in this context.
The effect of the incentive and also the effect of pre-notification (given a particular incentive
level) were each tested using chi-square tests. This approach was used because at these
levels, there was no a priori belief that there should be an interaction effect and there is
more power to test for the separate main effects. That is, the issues addressed through this
set of tests are, first, whether the incentive has an effect and, second, if an incentive is used,
should it be communicated to the respondent with a pre-notification? For the national
sample, the level of incentive was found to have a statistically significant effect on the
screener response rate and on the percentage of respondents providing a phone number in
the screener, but it did not affect the extended interview response rate. For those
designated to receive an incentive for completing the extended interview, notification of the
incentive was found to have a significant effect on both the screener and extended
interview response rates and on the percentage of respondents providing a phone number
in the screener. Post-hoc analyses to examine individual differences were not conducted.
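For illustration, the main-effect chi-square test on screener response can be sketched as follows (the contingency counts below are invented for illustration; the pilot study report does not publish this table in the excerpt above):

```python
# Hypothetical sketch of the chi-square main-effect test described above.
# Counts are invented for illustration; requires scipy.
from scipy.stats import chi2_contingency

# Rows: incentive group ($0, $10, $20); columns: screener returned vs. not.
observed = [
    [2100, 1900],  # $0 group (assumed counts)
    [4600, 3400],  # $10 group (assumed counts)
    [4800, 3200],  # $20 group (assumed counts)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.2f}, dof={dof}, p={p_value:.4g}")
```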
However, it should be noted that factors such as sponsorship, topic salience, the length of
the data collection period, the number and sequenced nature of the screener follow-up
mailings, the use of incentives, specifics of the survey materials, and the use of FedEx
delivery service for the final screener follow-up mailing may each have had important
effects on the ability to attain high unit response rates. Additionally, strategies not used in
the Pilot Study, such as offering a bilingual or dual (English and Spanish) screener, and
offering a larger screener incentive in the initial screener mailing, might result in higher unit
response rates.
3. National Health and Nutrition Examination Survey (NHANES)
A. Sponsor: Centers for Disease Control and Prevention (CDC)
B. Data Collection Vendor: Westat, Inc.
C. Resources:
Interviewer Procedures Manual (March 2013)
http://www.cdc.gov/nchs/data/nhanes/nhanes_13_14/Intrvwr_Proc_Manual.pdf
D. Summary of Incentive Experience: Sample Persons (SPs) who agree to the exam component
of the survey, which is conducted in mobile examination centers (MECs) that travel to
fifteen survey locations per year, may qualify for several monetary incentives. The number
of incentives that apply to each SP is determined by when the SP is scheduled for an exam,
where the SP lives, whether the SP has special transportation needs, and the number of
special study components for which the SP qualifies.
The chart below details the incentive amount that an SP may receive based on the specified
criteria:
SP Exam Incentives
SPs 16+ who agree to be examined at preselected time: $125
SPs 16+ who refuse to be examined at preselected time: $90
SPs 12-15 who agree to be examined at preselected time: $75
SPs 12-15 who refuse to be examined at preselected time: $60
SPs under age 12: $40

Parental Incentive
Non-SP parents of SPs under 16 years: $20

Other Exam Incentives
Child/Adult Care: $5.25/hr
Dietary Phone Follow-Up: $30
Physical Activity Monitor: $40
Second Urine Collection: $50

SP Transportation Allowance (mileage to MEC)
Mileage        Cities   Rural Areas
<15 miles      $30      $25
16-30 miles    $45      $40
31-59 miles    $55      $50
>60 miles      $70      $65
A family is eligible for the parental incentive ($20) if neither parent is an SP. This
payment is intended to encourage parents who have not been chosen as SPs to complete the
questionnaire and escort their children to the examination.
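The transportation allowance lookup can be sketched as below; the handling of distances of exactly 15 or 60 miles is an assumption, since the manual's bands are listed as <15, 16-30, 31-59, and >60 miles:

```python
# Sketch of the NHANES SP transportation allowance lookup from the table
# above. Band boundaries at exactly 15 and 60 miles are assumptions.

# (upper bound of band in miles, city amount, rural amount)
ALLOWANCE_BANDS = [
    (15, 30, 25),
    (30, 45, 40),
    (59, 55, 50),
    (float("inf"), 70, 65),
]

def transportation_allowance(miles: float, rural: bool) -> int:
    """Return the allowance in dollars for a given distance to the MEC."""
    for upper, city_amount, rural_amount in ALLOWANCE_BANDS:
        if miles <= upper:
            return rural_amount if rural else city_amount
    raise ValueError("unreachable: the last band is open-ended")

print(transportation_allowance(12, rural=False))  # 30
print(transportation_allowance(45, rural=True))   # 50
```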
In addition to the exams, SPs may participate in follow-up studies with additional
compensation. A dietary phone follow up (about 30-40 minutes) will be conducted for all
English and Spanish speaking examinees three to ten days after their MEC dietary interview.
They will be asked the same questions that they were asked during their primary exam. An
incentive of $30 will be paid for each completed interview.
4. National Household Food Acquisition and Purchase Survey (FoodAPS)
A. Sponsor: United States Department of Agriculture (USDA)
B. Data Collection Vendor: Mathematica Policy Research
C. Resources:
Design of the National Household Food Acquisition and Purchase Survey (June 2, 2011)
Presentation to Committee on National Statistics, Household Survey Producers
Workshop
http://www.bls.gov/cex/hhsrvywrkshp_cole.pdf
Documentation on the USDA website regarding the Use of Incentives
http://www.ers.usda.gov/data-products/foodaps-national-household-food-acquisition-and-purchase-survey/documentation.aspx#incentives
D. Summary of Incentive Experience: The National Household Food Acquisition and Purchase
Survey (FoodAPS) offered households monetary incentives to complete the various components of
the data collection activities. The incentives were designed both to maximize responses to
the household screening interview and to encourage participation throughout the week-long survey.
FoodAPS offered a $5 token of appreciation to all households that were contacted for
screening. Initially, this token of appreciation was provided unconditionally, aiming to
prevent refusals at the first point of contact instead of attempting to convert refusals
afterwards. Although the timing of the incentive offer changed midway through field
operations, all contacted households continued to receive the $5 even if they did not
complete the screening interview.
Households that were eligible to participate in the study were offered a multi-part incentive
designed to encourage initial agreement to participate in the week-long survey and to
motivate households to stay engaged throughout the data collection week. This multi-part
incentive included a base incentive (a $100 check) for primary respondents; up to three $10
gift cards to encourage primary respondents to initiate the telephone call-ins for food
reporting on Days 2, 5, and 7; one $10 gift card for each additional household member age
11-14 who tracked their food acquisitions; and one $20 gift card for each additional
household member age 15 and older who tracked their food acquisitions. Households
received all incentives at the end of the data collection week during the final visit from the
field interviewer.
Prior to implementation of the incentives, a field test was conducted to determine the
optimal incentives for the full-scale survey. (The source report presents two tables showing
the incentive design and the incentive levels during the field test; the field test amounts
are summarized in Appendix A.)
E. Summary of OMB Experience in Incorporating Incentives:
At the beginning of the design process for FoodAPS, it was recognized that it was going to be
a burdensome data collection task for respondents in terms of time commitment and that
obtaining cooperation might be difficult. Not only would they need to convince the
household’s main food shopper to agree to collecting data over a seven-day period, but they
also needed the daily cooperation of all other members of the household in order to capture
all acquisitions of “food-away-from-home” (e.g., restaurants, coffee shops, vending
machines …). Therefore, it was expected that an incentive would be needed to encourage
cooperation and achieve reasonable response rates. This was communicated to OMB prior
to submission of the OMB package. The representative from FoodAPS encourages CE to let
OMB know of any such plans as early as possible.
FoodAPS tested incentives in both a field test and a full-scale test. The results showed
that providing a larger incentive improved response rates. These findings were
incorporated into the OMB package for the full survey. The representative does not recall
any additional paperwork that needed to be filed other than the OMB package itself.
One other important consideration is that FoodAPS had a Technical Work Group (TWG) of
survey and subject-area experts from academia and government who reviewed plans and
offered suggestions. The TWG supported conducting the incentive experiment as part of
the field test and, based on results from the field test, they also supported offering the $100
base incentive as part of the overall incentive package during the full survey.
5. National Survey on Drug Use and Health (NSDUH)
A. Sponsor: Substance Abuse and Mental Health Services Administration (SAMHSA), an agency
of the U.S. Public Health Service in the U.S. Department of Health and Human Services
(DHHS)
B. Data Collection Vendor: Research Triangle Institute (RTI)
C. Resources:
Effects of Incentives on Data Collection: A Record of Calls Analysis of the National Survey
on Drug Use and Health, Presented at AAPOR 2003
https://www.amstat.org/sections/srms/Proceedings/y2003/Files/JSM2003-000792.pdf
Appendix C of the 2002 National Findings report:
http://www.samhsa.gov/data/nhsda/2k2nsduh/results/appC.htm
D. Summary of Incentive Experience: The National Survey on Drug Use and Health (NSDUH)
conducted an incentive experiment in 2001. Based on the outcome of the experiment,
NSDUH decided to implement an incentive in production beginning in 2002.
In 2001, a randomized, split-sample experiment was conducted during the first six months
of data collection. The sample was overlaid on the NHSDA main study data collection
sample (at the time, the survey’s name was National Household Survey on Drug Abuse). The
experiment was designed to compare the effectiveness of $20 and $40 incentive treatments
with a $0 control group on measures of respondent cooperation and survey costs.
The results of the experiment showed that both the $20 and $40 incentives increased
overall response rates while producing significant cost savings when compared to the $0
control group (Eyerman et al. 2002a). Both treatments had significantly lower refusal rates
than the $0 group, and the $40 treatment had significantly lower noncontact rates than the
$0 group. Field Interviewers also reported that the incentives reduced the amount of effort
required to complete a case and that the payments influenced the respondent's decision to
cooperate.
Cost savings were realized: the data collection cost per completed case, including the
incentive payment, was lower in both the $20 and $40 treatments than in the control group,
meaning that the incentives paid for themselves.
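A minimal sketch of the cost-per-completed-case logic behind that conclusion (all figures below are invented for illustration; the report excerpt gives no cost numbers):

```python
# Hypothetical illustration of how incentives can "pay for themselves":
# fewer field attempts per completed case can more than offset the
# incentive outlay. All numbers are invented.

def cost_per_complete(cost_per_attempt: float, attempts: int,
                      completes: int, incentive: float) -> float:
    """Total field cost plus incentive payments, per completed case."""
    total = cost_per_attempt * attempts + incentive * completes
    return total / completes

# $0 control: more attempts needed per complete, no incentive outlay.
print(cost_per_complete(50.0, attempts=4000, completes=700, incentive=0.0))
# $20 treatment: fewer attempts per complete, plus $20 per complete.
print(cost_per_complete(50.0, attempts=3000, completes=800, incentive=20.0))
```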
Based on the outcome of the 2001 experiment, NSDUH implemented a $30 incentive
payment in 2002. Their analysis showed that a $30 incentive would strike a balance
between gains in response rates and cost savings.
The lead letter, study description, informed consent item of the screening script, interview
introduction and informed consent documents, and question-and-answer brochure were
altered to include the information that, at the conclusion of the CAI interview, the
respondent is given the $30 incentive payment and one copy of an interview payment
receipt. Information about the incentive was also added to the videos sent to managers of
properties to which the interviewers could not gain access.
In addition to implementing the incentives in 2002, the survey made the following changes:

- The name of the survey was changed in 2002 from the National Household Survey on
Drug Abuse (NHSDA).
- Improved data collection quality control procedures were introduced in the survey
during 2001 and 2002.
- Population data used in NSDUH sample weighting procedures were based on the 2000
decennial census for the first time in the 2002 NSDUH.
The results of the 2002 survey, as well as more recent analyses of data from the 2001
experiment, suggest that the incentive, and possibly the other survey changes, did have an
impact on the estimates produced from the 2002 survey. However, due to the multiple
changes made to the survey simultaneously, it would not be possible to measure the effects
of each change or to develop a method of "adjusting" pre-2002 data to make them
comparable for trend assessment.
6. National Survey of Family Growth (NSFG)
A. Sponsor: Centers for Disease Control and Prevention (CDC)
B. Data Collection Vendor: University of Michigan Institute for Social Research
C. Resources:
Attachment C: Incentive Experiments in the National Survey of Family Growth (NSFG)
D. Summary of Incentive Experience: The National Survey of Family Growth (NSFG) has a
history of offering incentive payments to respondents and conducting incentive experiments
to test their effectiveness and alternative levels of payment. This attachment confirms that
the experiments that have been conducted have established that incentives reduce costs in
the NSFG, increase response rates, and increase the representativeness of the NSFG sample.
The most recent experiments in 2002-2003 and 2006-2007 suggested that without an
increased incentive for a small proportion of respondents (6-8 percent), groups such as
childless and college-educated women, and Hispanic men are not as well represented in the
standard Phase I sample as they are when $80 is offered to a sub-sample of nonrespondents. Bringing these groups into the sample improves the representativeness of the
sample, and raises the response rate, while avoiding the high costs of repeated visits to nonresponding households. This appears to justify the use of the $80 amount for a small sub-set
of the sample. Field conditions have not changed materially since that time, so the survey
plans to continue the incentive structure used in 2007-2010 for this 3-year period, or until
field conditions necessitate a change. If that occurs, they will propose a new experiment to
OMB.
Following are summaries of four major incentive experiments conducted by NSFG.
Incentives in the NSFG are in the form of cash payments at the time the interview begins.
These experiments showed cost-effective increases in response rates and representativeness
when incentives were offered to potential respondents.
1) 1993 (Cycle 5) Pretest: In a field experiment in the 1993 pretest for NSFG Cycle 5, a $20
cash incentive was found to produce a significantly higher response rate (67.4 percent)
than when no payment was offered (58.9 percent). For women who were offered $20,
response rates were higher, and field costs per case were lower than for women who
received no incentive.
2) 2001 (Cycle 6) Pretest: In a field experiment in the 2001 Pretest for Cycle 6, a $20
payment was contrasted with a $40 payment. The response rate for those offered $20
was 62 percent, and for those offered $40, it was 72 percent. There was variation in the
differences across demographic groups as well. Women offered the $20 incentive had
a response rate of 62 percent, while women offered the $40 incentive had a response rate
of 81 percent. Those receiving the higher amount were also less likely to express
objections or reluctance to the interview than those receiving $20.
3) Cycle 6 Main Study: In the 2002-2003 Cycle 6 Main Study, a $40 incentive was used, but
response rates were still lagging in key groups after seven months of interviewing. NSFG
staff requested and received from OMB permission to use an $80 incentive in a half-sample
of the cases remaining in the final four weeks of data collection during February 2003.
The $80 incentive raised the weighted response rate from 64 percent to 79
percent. The sample in the last 4 weeks had a higher proportion of married women,
Hispanic men and women, and full-time workers of both sexes.
4) 2006-2007: The basic experimental design operated within the Continuous NSFG 12-week
quarter. During Phase 1 (weeks 1-10 of each quarter), potential respondents were
offered a $40 incentive to complete an interview. During Phase 2 (weeks 11 and 12 of
each quarter), a "double sample" of approximately one-third of the remaining
(nonresponding) cases was selected. Some sample cases selected into Phase 2 were still at
the screener stage, and others were at the main interview stage. The Phase 2 sample
in quarters 2, 3, and 4 was then randomly divided into two groups:

o Group 1 received $10 prepaid in addition to the standard $40 (a total of $50 for the
main interview);
o Group 2 received $40 prepaid in addition to the standard $40 at completion of the
interview (a total of $80 for the main interview).
If a household in either group had not completed a screener at the end of Phase 1, they
were offered a $5 prepaid token to complete the screener in Phase 2. These two groups
are designated as the $5/$10/$40 and the $5/$40/$40 experimental conditions,
respectively. For brevity, we will refer to these as the $50 and $80 groups, respectively
(only a small subset got the additional $5 incentive for completing the screener).
Cases selected for Phase 2 were sent a final letter via express mail with the prepaid
token enclosed. The letter stated that the enclosed token was for the respondent to
keep, in appreciation for their help.
Despite relatively small samples in the two experimental groups, consistent results were
obtained across three consecutive quarters: the $80 incentive raised response rates and
recruited different people into the sample than the Phase 1 effort alone ($40 incentive)
or the $50 incentive. Further, the results are broadly consistent with findings from Cycle
6 (2002 and 2003). The results suggest that busy, college-educated, childless women,
and high-income men and Hispanic men, are not as well represented in the standard
Phase I sample as they are in the $80 follow-up sample. It takes the $80 amount to
bring more of these people into the sample. Bringing them in improves the
representativeness of the sample, and raises the response rate. This appears to justify
the use of the $80 amount for a small sub-set of the sample.
7. Survey of Consumer Finances (SCF)
A. Sponsor: Federal Reserve Board (FRB)
B. Data Collection Vendor: National Opinion Research Center (NORC)
C. Resources:
Survey Incentives, Survey Effort, and Survey Costs by Jesse Bricker, Federal Reserve
Board
D. Summary of Incentive Experience: The Survey of Consumer Finances (SCF) conducted a quasi-experiment
varying which families received an incentive offer letter. Field effort outcomes
were compared between 2007 and 2010, after the base incentive increased from $20 in
2007 to $50 in 2010.
The SCF includes an area-probability (AP) sample and a list (LS) oversample of expectedly
wealthy families. All families in the AP sample are offered an incentive to participate, while
only a small fraction of LS families are offered incentives.
This quasi-experiment compares some LS families that received the incentive offer to other
observably identical families that did not receive the offer. The families that received the
initial incentive agreed to participate more quickly than families that did not receive the initial
offer, both in terms of the number of contact attempts and in time since first contact. Data
quality measures and respondent effort are little affected by the offered incentive.
Since there is no sampling frame for wealth, wealth cannot be used in the sampling process.
Therefore, the sample frame for income is used and the oversampling mechanism depends
on modeling wealth as a function of income. Families are then arranged into one of seven
strata of increasing predicted wealth and are oversampled according to this wealth
prediction. While families in strata one and two were offered the incentive, the families in
strata three through seven were believed to be too wealthy for an incentive to have an
impact on response. These families received the same advance mailing as the strata one
and two families, except the incentive offer was not included. Field staff was also not
authorized to verbally offer an incentive to these families. Once wealth is measured in the
SCF, though, some of the families in stratum three were actually as wealthy as (and
observably equivalent to) families in stratum two that received the incentive offer.
Comparing the stratum two families (who received the incentive offer) to the observably
equivalent stratum three families (who did not) serves as the basis for the quasi-experiment.
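The comparison underlying the quasi-experiment can be sketched as below, with an invented toy dataset (the column names and values are illustrative assumptions, not SCF data):

```python
# Sketch of the quasi-experiment comparison: stratum-two families (offered
# the incentive) vs. observably equivalent stratum-three families (not
# offered), on contact attempts before agreeing to participate.
# The data are invented for illustration; requires pandas.
import pandas as pd

cases = pd.DataFrame({
    "stratum": [2, 2, 2, 3, 3, 3],
    "offered_incentive": [True, True, True, False, False, False],
    "contact_attempts": [3, 4, 2, 7, 6, 8],
})

# Mean contact attempts by offer status among the comparable strata.
print(cases.groupby("offered_incentive")["contact_attempts"].mean())
```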
Further evidence comes from a change in the base incentive rate from $20 in 2007 to $50 in
2010. On average, the 2010 families treated with a $50 offer needed four fewer attempted
contacts before agreeing to participate, relative to the untreated families. In 2007, families
treated with a $20 offer also agreed to respond more readily than the untreated families,
but the difference was smaller: only two contacts were saved. Increasing from a $20 offer to
a $50 offer saved two attempted contacts. Typically, very few wealthy people respond to
surveys. But because the quasi-experiment is among an oversample of expectedly wealthy
families, it is possible to comment on the impact of incentives at varying degrees of wealth.
The experimental results imply that a $50 incentive offer is most salient to families above
the median but below the top decile of wealth.
The experiment also compared the 2010 AP sample to the 2007 AP sample. In 2010, AP
families were offered $50, while in 2007 the offer was $20. The 2010 AP families agreed to
participate much more quickly than the 2007 AP families, supporting the idea that a larger
incentive is more effective than a smaller one.
In summary, the results imply that a larger monetary incentive offer helps reduce contact
attempts and time in the field while maintaining data quality and effort during the survey by
the respondent.
E. Summary of OMB Experience in Incorporating Incentives:
The Federal Reserve Board has delegated authority from OMB for the Paperwork Reduction
Act, which is what usually governs the conduct of federal government surveys. The Board
has not obtained OMB approval to pay incentives for responding to the SCF because it does
not need to. Instead, it simply determines the payment and amount of incentives internally
with oversight/approval from senior management, procurement, and its legal department.
This is unique and differs from typical government agency survey approval procedures.
8. Survey of Income and Program Participation (SIPP)
A. Sponsor: U.S. Census Bureau
B. Data Collection Vendor: U.S. Census Bureau
C. Resources:
Monetary Incentives for Survey Respondents presented at International Field Directors
& Technologies Conference.
The Use of Monetary Incentives in Census Bureau Longitudinal Surveys presented at
2000 FCSM
D. Summary of Incentive Experience: SIPP conducted incentive experiments in 1996 and 2001.
In 2004, incentives became standard in production. In 2008, in an effort to again increase
response rates, SIPP conducted another incentive experiment.
The SIPP questionnaire was redesigned, and a new sample design was introduced starting
with the 1996 panel. Households selected for the SIPP 1996 panel were in sample for a total
of four years with lengthy interviews at 4-month intervals. The 1996 panel consisted of
36,700 households, which were interviewed 12 times from April 1996 through March 2000.
With each wave of the 1996 SIPP Panel, cumulative household nonresponse increased and
reached the highest level ever - nearly 34 percent at the end of 12 waves. SIPP believes it
would have been even higher if they had not used incentives in several waves of the panel.
In the 1996 SIPP panel, the effects of three types of incentives on response rates were
evaluated experimentally: unconditional incentives given at the initial contact, “booster”
incentives given at a later wave, and incentives targeted to households that failed to
respond in a prior wave. The first, Wave 1 experiment was conducted to test whether use of
incentives could improve SIPP response rates. Subsequent experiments were motivated by
unusually high sample attrition in the 1996 panel. In the first interview of the 1996 panel,
wave 1, the Census Bureau obtained 36,700 interviews or 92 percent of eligible households.
Based on prior experience, a 30 percent non-interview rate had been projected by the end
of the 4-year panel. However, even with the use of incentives for half the sample in wave 1,
the household non-interview rate was over 26 percent by the end of wave 6, much higher
than in wave 6 of prior panels. If it continued, sample attrition at this level would
compromise the longitudinal uses of the data. Several incentive experiments were
embedded in waves 7 through 12 to stem further sample loss and to arrive at the most
effective method. One experiment was independent of the initial wave 1 experiment,
permitting estimation of their separate effects. By the end of the panel, the total cumulative
nonresponse rate had stabilized at 33.6 percent. The design and results of the incentive
experiments are as follows:
Field representatives distributed the incentive to sample households prior to the first
interviews. The experimental design stratified PSUs according to size into three strata;
within each stratum, treatments were randomly assigned to PSUs. Positive effects of
incentives were found across all waves on response rates and on reducing attrition. By wave
12, sample loss stood at 35 percent, the highest level recorded for a SIPP panel.
Based on the higher response rates in the incentive treatment groups versus the control,
SIPP planned to use incentives in the 2001 panel.
The 2001 panel began in February 2001 and consisted of 36,700 households to be
interviewed nine times. Due to budget constraints, the sample was cut by 15 percent in
Wave 2 (the second interview and beyond). In this panel, the survey also instituted two
incentive experiments intended to reduce sample attrition: one discretionary, controlled by
field representatives, and one triggered by refusal in the previous waves.
The treatments used in 2001 were:
Treatment 1: $40 debit card issued at RO/(S)FR discretion, conditioned on obtaining a
completed interview
Treatment 2: Non-discretionary, unconditional $40 debit card sent via mail to previous
wave non-respondents, Waves 4-9
Control: No incentive eligibility (W1-9).
The SIPP 2001 incentive program sample sizes are summarized in Appendix A.
SIPP found that, when comparing the Treatment 1 (discretionary) households with the control
group, response rates for the conditional incentive households were significantly higher
than the control's in waves 1-5. In addition, when looking at unconditional incentive
household conversion rates by wave, Treatment 2 conversion rates were significantly higher
than the control group's for wave 5.
In 2004, incentives became standard rather than an experiment, so there are no results as
to the effectiveness of incentives for this panel.
In 2008, SIPP implemented a randomized experiment in the first and second waves of the
SIPP Panel in an effort to determine an effective incentive for increasing response rates.
Fifty percent of the sample received no incentive, 25 percent received a $20 debit card with an
advance letter, and the remaining 25 percent were eligible for a $40 debit card conditional
on their participation.

Overall, the $20 unconditional incentive proved to be more effective than the
control group, whereas the $40 conditional incentive did not.
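A minimal sketch of the 50/25/25 random assignment described above (the household IDs and the fixed seed are illustrative assumptions):

```python
# Sketch of a SIPP 2008-style 50/25/25 assignment to incentive arms.
import random

ARMS = ["no incentive", "$20 unconditional", "$40 conditional"]
WEIGHTS = [0.50, 0.25, 0.25]

rng = random.Random(2008)  # fixed seed so the assignment is reproducible

households = [f"HH{i:05d}" for i in range(10)]  # assumed household IDs
assignment = {hh: rng.choices(ARMS, weights=WEIGHTS, k=1)[0]
              for hh in households}

for hh, arm in assignment.items():
    print(hh, "->", arm)
```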
E. Summary of OMB Experience in Incorporating Incentives:
SIPP designed its incentive experiment in collaboration with OMB and went back and forth
with OMB to hammer out the objectives for their general use of incentives. The experiment
is being incorporated into SIPP’s current production data collection. Because SIPP is a
longitudinal survey, the objectives differ between the waves. In Wave 1 of the 2014 panel,
they provided either $20 or $40 to eligible households, conditional on their completion of
the interview. Next, they will be offering the incentives for the duration of the Wave 2
interview period (February-May 2015). Not all of the sample is eligible to receive an
incentive; they are testing model-based incentives and adaptive design. Those households
eligible for an incentive will receive $40 upon completion of the interview.
They have not yet decided what they will do for Waves 3 and 4 -- whether to offer an
incentive at all and, if they do offer one, how to apportion them.
9. Consumer Expenditure Survey (CE)
A. Sponsor: Bureau of Labor Statistics
B. Data Collection Vendor: U.S. Census Bureau
C. Resources:
The Effects of Incentives on the Consumer Expenditure Diary Survey – Final Report (June
2007) \\filer1\dces\DCES-BRPD\Research Library\Documents\Final Report on Diary
Incentives.doc
The Effects of Incentives on the Consumer Expenditure Interview Survey – Final Report
(December 2009) \\filer1\dces\DCES-BRPD\Research Library\Documents\Goldenberg CEQ Incentives FINAL - 2009.pdf
BLS Incentive Procedures Wrap-up by the Bureau of Labor Statistics (2006)
\\filer1\dces\DCES-BRPD\Research Library\Documents\BLS_Incentives-ProceduresWrapup_2006.pdf
Incentive Wrap-Up Report by Josephine Ruffin, Field Division (Oct. 10, 2006)
\\filer1\dces\DCES-BRPD\Research Library\Documents\Ruffin_Census-Incentive-wrapup-report_Field_2006.pdf
Report on the CE Incentive Test by Demographic Surveys Division (Sep. 25, 2006)
\\filer1\dces\DCES-BRPD\Research Library\Documents\DSD_Incentives-test_2006.pdf
D. Summary of Incentive Experience:
From 2005 to 2007, CE conducted two incentive field experiments: one on the CE Diary Survey
and one on the CE Quarterly Survey. Below is a summary of each field experiment, followed
by summaries of the lessons learned from the CEQ experiment as reported by BLS, the Census
Bureau Field Division, and the Census Bureau Demographic Surveys Division.
a. Summary of the Consumer Expenditure Diary Survey (CED) Incentive Experiment
From March through November 2006, a CED incentives experiment was field tested in
the production sample. The experimental design contrasted a control group (approximately
half of the Diary sample, receiving no incentive) with two incentive groups of equal size
that received prepaid, unconditional debit cards of either $20 or $40. The debit cards
resembled credit cards and could be used in stores or to collect cash at an ATM. United
States Postal Service (USPS) Priority Mail was used to distribute the incentive along with
the survey's advance letter prior to an interviewer contacting the potential survey
respondent for the initial Diary interview.
The incentives experiment increased response rates by just over one percentage point, which
was considered disappointing. However, the $40 group had 2.5 percentage points more
completed interviews (the major component of good responses) than the control group.
The incentives had a very small impact on respondent composition. The incentive
seemed to increase the representation of black respondents, and this is considered
important as black CUs were under-represented in both CE surveys. Respondents who
received incentives more thoroughly reported their income data, shown by the
significantly higher proportion of CUs that were complete income reporters.
Respondents who received an incentive reported both a higher number of expenditures
and higher levels of expenditures.
For total spending across the 2-week Diary period, mean reported spending for the
incentive cases was about $60 higher, whereas median spending for incentive cases was
about 10 percent higher than the control group ($110). As expected, the $40 group
reported more spending than the $20 group. Consumer Units who received the
incentive also reported more purchased items when compared to the control group.
Those in the $20 incentive group reported an average of 3.5 more items than the
control group (an increase of 5.4 percent), while the $40 incentive group reported
approximately 5.9 more items (an increase of 9.1 percent).
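As a quick arithmetic check (a derivation of ours, not a figure stated in the report), the reported increases imply a consistent control-group average item count:

```python
# Back out the implied control-group average item count from the reported
# increases above; this derivation is not stated in the report itself.
increase_20, pct_20 = 3.5, 0.054  # $20 group: +3.5 items, +5.4 percent
increase_40, pct_40 = 5.9, 0.091  # $40 group: +5.9 items, +9.1 percent

print(increase_20 / pct_20)  # ~64.8 items
print(increase_40 / pct_40)  # ~64.8 items, consistent with the $20 group
```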
Additionally, households who cashed their debit cards (before, during, or after the
survey period) reported more expenditures than the control group and those incentive
recipients who never cashed their cards.
The incentives contributed to improved data quality as measured by available data
quality indicators. Incentive CUs were more likely to have entries in their diaries at
pickup, use their Information Book, complete the interview part of the survey in person,
and not be a double placement. Also, increased reported spending in categories of
expenditures that were probably not influenced by the incentive is another important
indicator of improved data quality. The best measure of Diary data quality is likely to be
the actual amount of reported spending collected in the diaries. As stated above, mean
Diary expenditures for the incentive groups were about $60 more than the control
group, although only about $30 of the increase was for expenditures that may be
potentially biased by the incentive.
b. Summary of the Consumer Expenditure Quarterly Survey (CEQ) Incentive Experiment
The CE conducted a CEQ field experiment with four treatment groups of approximately equal
size: a no-incentive control group, a no-incentive treatment group that received the advance
letter by Priority Mail, an incentive treatment group that received a $20 debit card, and an
incentive treatment group that received a $40 debit card; the incentives were prepaid and
unconditional. CE mailed debit card incentives to the incentive treatment groups along with
the Interview Survey's wave 1 advance letter by Priority Mail between November 2005 and
July 2006, and collected data from the last recipients' wave 5 interviews in July 2007.
All treatment groups were compared on response rates, data quality, and sample
composition. For the $40 incentive treatment group, response rates and many
indicators of data quality were higher than those of the no-incentive groups. Most of
the effects for the $40 incentive lasted through wave 5 of the Interview Survey. On the
other hand, the $20 incentive treatment group was not statistically different from the
no-incentive groups on response rates and most other measures. Although the $20
incentive did not have the desired effect, a model-based analysis suggests that the cost
of the $40 debit cards could potentially be covered by lower field costs for the
respondents who received them.
The incentive had a positive impact on response rates. Respondents received an
incentive only in the first wave, but the effect lasted through all five waves of the
survey. The effect of the $40 incentive was most pronounced on response and
noncontact rates in the first wave. The $40 incentive group's response rate remained
significantly higher than the control group's in waves 2, 4, and 5, ranging from three to
five percentage points higher and leveling off at about 79 percent. The $20 incentive group
had response rates 1 to 2 percentage points higher than the no-incentive groups (not
statistically significant).
After the first wave, refusal rates were 3 to 4 percentage points lower for the $40
incentive recipients than for any of the other treatment groups, and the effect remained
through wave 5. The persistence of the incentives effect on response rates across
interview waves was the most important finding from the experiment.
Overall, the CE experiment was successful. Response rates were higher and refusals
lower in the $40 incentive group than in the no-incentive and $20 incentive groups. The
positive effects that resulted from providing a $40 incentive in wave 1 remained through
all five interviewing waves. Because of the positive effect on response rates, a $40
incentive is recommended for all respondents.
c. Lessons Learned from the CEQ Incentive Experiment
After the conclusion of the experiment, three reports were written to document successes
and problems that occurred during the test and, in most cases, to propose solutions to
those problems. One was written by BLS, a second by the Census Bureau Field Division,
and a third by the Census Bureau Demographic Surveys Division.
i. BLS wrote an Incentive Procedures wrap-up, which included a table listing
topics/problems, the category each fell under (e.g., Materials, NPC, RO), and possible
solutions, as well as a table listing topics for discussion for production.

ii. A wrap-up report was written by a member of the Field Division. It stated that
three-fourths of the regional offices recommended that the incentives be
implemented but indicated that the methodology should be improved.
The wrap-up report included a list of problems and solutions offered by regional
office staff and field representatives. Some major topics included insufficient
information via ROSCO, debit cards perceived as junk mail or not real, debit cards
expiring too soon, some debit cards not working and some unfunded, the perception
that the debit cards were a waste of tax money, instructions for redemption of the
debit cards that were difficult to understand, and mailing problems. In addition, Field
Representatives were asked to respond to an incentive debriefing questionnaire at a
refresher training four months after the test.
iii. A Report on the CE Incentive Test was written by the Demographic Surveys Division.
It listed successes, problems, and suggestions for improvement by topic. The topics
included funding, activation, and cancellation of cards; creating a vendor file
for BLS; and NPC's role.
Appendix A. Summary Table of Incentive Amounts and Sample Sizes for Incentive Tests and
Implementation
* Incentives are conditional unless specified otherwise
Medical Expenditure Panel Survey (MEPS), Before 2007 Production
Amount*: $25
Sample: Production

Medical Expenditure Panel Survey (MEPS), 2007-2010 Production
Amount*: $30
Sample: Production

Medical Expenditure Panel Survey (MEPS), 2008 Experiment
Amount*: $30 (control), $50, $70
Sample: 9,939 (full panel sample of households)

Medical Expenditure Panel Survey (MEPS), 2011 Production
Amount*: $50
Sample: Production

National Adult Training and Education Survey (ATES), Sep 2010 – Jan 2011 Pilot Study
Amount*: $0 (20 percent), $10 (40 percent), $20 (40 percent)
Sample: 20,000 sampled; 9,113 completed

National Health and Nutrition Examination Survey (NHANES), Production
Amount*: Sample Persons may receive an incentive based on a specified set of criteria; the amount varies by type of participation (exam, phone interview, physical activity monitor, urine collection, etc.) and ranges from $5.25/hour to $125
Sample: Production

National Household Food Acquisition and Purchase Survey (FoodAPS), Production
Amount*: $5 unconditional at screening. Multi-part incentive: base $100 check for Primary Respondents (PRs); up to three $10 gift cards to encourage PRs to initiate phone call-ins on Days 2, 5, and 7; one $10 gift card for each add'l HH member age 11-14 who tracked their food acquisitions; one $20 gift card for each add'l HH member age 15+ who tracked their food acquisitions
Sample: Production

National Household Food Acquisition and Purchase Survey (FoodAPS), Field Test Design
Amount*: Base incentive: low $50, high $100. Add'l HH member: age 11-14 $10, age 15+ $20. Telephone bonus to encourage inbound calls: $10/call
Sample: Field Test: Feb–May 2011 (400); Full Scale: Mar–Sep 2012 (5,000)

National Survey on Drug Use and Health (NSDUH), 2001 Experiment
Amount*: $0, $20, $40
Sample: A sample of 251 of the 900 primary strata used in the 2001 survey

National Survey on Drug Use and Health (NSDUH), 2002 Production
Amount*: $30
Sample: Production

National Survey of Family Growth (NSFG), 1993 (Cycle 5) Pretest
Amount*: Cash incentive at the beginning of an interview: $0, $20

National Survey of Family Growth (NSFG), 2001 (Cycle 6) Pretest
Amount*: Cash incentive at the beginning of an interview: $20, $40

National Survey of Family Growth (NSFG), 2002-2003 Cycle 6 Main Study
Amount*: Cash incentive at the beginning of an interview: $40, $80

National Survey of Family Growth (NSFG), Sep 2006 – Jun 2007
Amount*: Group 1: $10 prepaid in addition to the standard $40 ($50 total); Group 2: $40 prepaid in addition to the standard $40 ($80 total). If a HH in either group had not completed a screener at the end of Phase 1, they were offered an additional $5 prepaid token to complete the screener in Phase 2
Sample: Screener interview cases in Phase 2: $50 – 208 sampled, 103 completed; $80 – 207 sampled, 152 completed. Main interview cases in Phase 2: $50 – 192 sampled, 100 completed; $80 – 215 sampled, 137 completed

National Survey of Family Growth (NSFG), Production
Amount*: $80 total ($40 prepaid in addition to the standard $40)
Sample: Production

Survey of Consumer Finances (SCF), 2007-2010 Experiment
Amount*: Incentive amount was increased from $25 to $50
Sample: The SCF includes an area-probability (AP) sample and a list (LS) oversample of expectedly wealthy families. All families in the AP sample are offered an incentive to participate, while only a small fraction of LS families are offered incentives. The full AP sample incentive amount was increased from $25 to $50. The LS oversample was arranged into seven strata of increasing predicted wealth; families in strata one and two were offered the incentive, but families in strata three through seven were not, because they were believed to be too wealthy for an incentive to have an impact on response. These families received the same advance mailing as the strata one and two families, except the incentive offer was not included.

Survey of Income and Program Participation (SIPP), 1996 Experiment
Amount*: Wave 1: $0, $10, $20 tested in rotations 3-4. Wave 7: $20 booster given to Wave 1 low-income recipients. Waves 8-9: $0, $20, $40 to convert noninterviews. Waves 10-12: $20 and $40 to convert noninterviews
Sample: 36,700 interviews were obtained (92 percent of eligible households)

Survey of Income and Program Participation (SIPP), 2001 Experiment
Amount*: Treatment 1: $40 debit card issued at RO/(S)FR discretion, conditioned on obtaining a completed interview. Treatment 2: non-discretionary, unconditional $40 debit card sent via mail to previous-wave nonrespondents, Waves 4-9. Control: no incentive eligibility (Waves 1-9)
Sample: 36,700 households to be interviewed nine times (due to budget constraints, the sample was cut by 15 percent in Wave 2 and beyond). Waves 1-3, Control: 25,020; Waves 4-9, Control: 12,510; Waves 1-3, Treatment 1: 25,020; Waves 4-9, Treatment 1: 25,020; Waves 4-9, Treatment 2: 12,510

Survey of Income and Program Participation (SIPP), 2008 Experiment
Amount*: $0 (50 percent), $20 (25 percent), $40 (25 percent)

Consumer Expenditure Diary Survey, March – November 2006 Experiment
Amount*: Three treatment groups; prepaid, unconditional debit card incentives: Treatment 1: $0 (control); Treatment 2: $20 w/ advance letter by Priority Mail; Treatment 3: $40 w/ advance letter by Priority Mail
Sample: Field tested in the production sample. Treatment 1: 50%; Treatment 2: 25%; Treatment 3: 25%

Consumer Expenditure Quarterly Survey, November 2005 – July 2007 Experiment
Amount*: Four treatment groups; prepaid, unconditional debit card incentives: Treatment 1: $0 w/ advance letter by First Class Mail (control); Treatment 2: $0 w/ advance letter by Priority Mail; Treatment 3: $20 w/ advance letter by Priority Mail; Treatment 4: $40 w/ advance letter by Priority Mail
Sample: Treatment 1: 2,376; Treatment 2: 2,261; Treatment 3: 2,284; Treatment 4: 2,282; Total: 9,203