Attachment Q

Designing a Multipurpose Longitudinal Incentives Experiment for
the Survey of Income and Program Participation
Ashley Westra, Mahdi Sundukchi, and Tracy Mattingly
U.S. Census Bureau [1]
4600 Silver Hill Rd
Washington, DC 20233
Proceedings of the 2015 Federal Committee on Statistical Methodology (FCSM) Research Conference
Abstract
The U.S. Census Bureau has experimented with the use of monetary incentives in the Survey of Income and
Program Participation (SIPP), a demographic longitudinal survey, since the 1996 Panel. As with most surveys, the
main goal of using incentives is to increase response rates, especially when facing a steady increase in nonresponse
over the course of a panel. For the most recent SIPP panel, the 2014 Panel, the survey has been extensively
redesigned, with households being interviewed only once a year instead of every four months. Since this redesign
could have an impact on the effect of incentives, a new incentives experiment is introduced for Waves 1 and 2 of the
SIPP 2014 Panel. In addition to investigating the effect of incentives on response rate, we design a way of assigning
incentives using a response propensity model with the purpose of reducing nonresponse bias. This new methodology
is made possible due to the longitudinal design of the SIPP. We will outline the design of the multipurpose
incentives experiment for Waves 1 and 2 of the SIPP 2014 Panel and provide preliminary results.
I. Introduction
The Survey of Income and Program Participation (SIPP) is a demographic longitudinal survey conducted by the
U.S. Census Bureau.[2] The main goal of the SIPP is to provide accurate and comprehensive information about the
income and program participation of individuals and households in the United States. SIPP data provide the most
extensive information available on how the nation’s economic well-being changes over time, a defining
characteristic of the survey since its inception in 1983. To achieve this goal, the SIPP provides both cross-sectional
and longitudinal estimates for households, families, and persons in the civilian noninstitutionalized population living
in the United States.[3]
The SIPP is administered in panels, with each panel typically running from 3 to 5 years. Prior to the 2014 SIPP
Panel, the sample was divided into four equally sized rotation groups, with one rotation being interviewed each
month. One round of interviewing the entire sample, a four-month interval, is called a wave. The purpose of the
rotation groups was to distribute the interviewing workload and reduce bias in transition estimates. However, in
order to reduce both the burden on respondents and program costs, the SIPP was re-engineered beginning with the
2014 Panel. The sample is no longer divided into rotation groups, and a household is only interviewed once a year
instead of three times. Rather than year round, interviewing runs for 4 months of each year, February through May,
and respondents are asked about each month of the previous calendar year.
Since 1996, the U.S. Census Bureau has conducted numerous experiments on using incentives in the SIPP. These
experiments were designed to test the effect of monetary incentives on overall response rates and conversion rates.
Both unconditional and conditional incentives were tested, where conditional incentives are only given if a response
is received. Also, we tried both random assignment and discretionary incentives, where Field staff were given the discretion to decide which households needed an incentive to obtain a response. We furthermore experimented with the monetary amount of the incentive, with $10, $20, and $40 being the typical choices.

[1] Any views expressed are those of the authors and not necessarily those of the U.S. Census Bureau.
[2] For more information about the SIPP, visit its webpage at <http://www.census.gov/sipp/>.
[3] Statistics from surveys are subject to sampling and nonsampling error. For further information on the source of the data and accuracy of the SIPP estimates, including standard errors and confidence intervals, see .


After all of these previous experiments, there was no conclusive result as to the best way to implement incentives as
a standard practice for the SIPP. In addition, the 2014 Panel marks extensive changes in the design of the survey.
Therefore, we felt it was worthwhile, and the Office of Management and Budget (OMB) required, that we conduct another incentive study for the SIPP 2014 Panel before any recommendations could be made on a standard practice for implementing incentives.
This paper discusses the design of a longitudinal incentives experiment for the 2014 SIPP Panel. First we look at the
design of the experiment and provide preliminary results for Wave 1. We then describe the Wave 2 experiment, which we have designed to give a different perspective on assigning incentives, one that few researchers have examined and that has not previously been applied to the SIPP. While
previous SIPP experiments focused mainly on the goal of improving response rate, this new 2014 Panel experiment
also looks into using incentives assigned by a propensity model to improve the characteristics, or distribution, of the
final sample and potentially decrease nonresponse bias.
II. Background
There are a number of reports and analyses done on the different uses of incentives in past SIPP panels, with varying
conclusions on their effectiveness.
In the 1996 SIPP Wave 1, we compared a control group to groups receiving $10 and $20 unconditional incentives,
both paid in advance to households at their door. The finding was that the $20 incentive increased response rates for
key SIPP respondents (i.e., those tending towards poverty) by 3.4% to 6.0% (James, 1997; Mack et al., 1998; Flanagan, 2007). On the other hand, the $10 incentive did not significantly reduce nonresponse (Mack et al., 1998).
In addition, the $20 incentive reduced overall household, person, and item nonresponse rates. Also, the $20
incentive was determined to have a strong effect in helping with attrition of households in the high poverty stratum
by reducing the nonresponse rate from 9.32% to 5.94% (James, 1997).
In the SIPP 2001 Panel, we evaluated the effectiveness of two types of $40 incentives, conditional discretionary
incentives in Waves 1-9 and unconditional incentives mailed to prior wave nonrespondents in Waves 4-9. Due to
inconsistent Field practices, in the early waves few discretionary incentives were given out (for example, 1.94% in
the first wave), which resulted in an increase of only 0.9% to 1.9% in response rates in six of the first eight waves
and no significant differences in the other two waves, Waves 1 and 4 (Killion, 2008; Lewis et al., 2005). The later-wave unconditional incentives had no significant impact on conversion rates.
In the SIPP 2004 Panel, $40 discretionary incentives were used in the production of the survey rather than as an
experiment. The Field staff had enough $40 debit cards to cover approximately 20% of their workload (Creighton,
2003). There are no results as to the effectiveness of the incentives themselves for this panel. We did find that households chosen by the Field staff to receive the $40 discretionary incentive in an earlier wave were more likely to be chosen again to receive an incentive in later waves. In Wave 6, an experiment to improve conversion
rates was conducted. Nonrespondents in both Waves 4 and 5 were mailed a letter that promised a $40 incentive upon
completion of the Wave 6 interview. The finding was that there was no evidence of an improvement of the Wave 6
conversion rates with the use of this conditional incentive (Flanagan, 2007).
Finally, in the SIPP 2008 Panel, an incentive experiment was conducted to test two types of incentives, a $20
unconditional incentive with advance letter in Wave 1 and $40 discretionary conditional incentives in every wave of
the panel. The $20 unconditional incentive was effective at improving response rates in all waves but one (Wave 5),
with small improvements of 1.0% to 1.8% compared to the control. The $40 discretionary incentive began to have a
significant effect in Waves 4 to 7, with an increase in response rates of 1.6% to 3.1% compared to the control
(Mattingly, 2011). There was some inconsistency among the regional offices in the distribution and effectiveness of
the $40 discretionary incentive.


III. Design of the SIPP 2014 Experiment – Wave 1
Since the 2014 SIPP Panel marks the start of a new survey design, the Office of Management and Budget (OMB)
requires an analysis of the effectiveness of incentives in the new panel design before a full implementation can
occur. Our experiment plans to distribute conditional incentives in at least the first two waves of the 2014 Panel. To
avoid the inconsistency in the distribution of the incentives by the regional offices that we saw in the last panel, debit card incentives and separate PINs are mailed out by the National Processing Center (NPC). To facilitate the goals of both the Wave 1 and Wave 2 incentive experiments, we randomly divided the 53,070 sampled housing units into four approximately equal-sized incentive groups, which we call Groups 1 through 4.
Group 1 is designed to be a control group for all waves, so the households in it will never receive an incentive in any
wave. Group 2 will become part of the experiment treatment in Wave 2, but for Wave 1 receives no incentive.
Meanwhile, in Wave 1, Group 3 households will receive $20 conditional incentives and Group 4 households will
receive $40 conditional incentives. Table 1 summarizes the Wave 1 treatment groups and the number of housing
units and eligible households in each of those groups.
Table 1. Number of Housing Units and Eligible Households for each Incentive Group in the SIPP 2014 Panel
Wave 1 of Interviewing
Group   Incentive Amount   Number of Housing Units   Number of Eligible Households
1       $0                 13,549                    10,798
2       $0                 13,471                    10,766
3       $20                13,470                    10,697
4       $40                12,580                    10,197
SOURCE: U.S. Census Bureau, Survey of Income and Program Participation (SIPP), 2014 Panel Wave 1

In Wave 1, the main goal was to determine whether the use of conditional incentives, and at what amount ($20 or
$40), significantly improves response rates relative to the control group (Groups 1 and 2 combine to form the control
group that receives no incentive in Wave 1). The results of the Wave 1 experiment are summarized in Section IV. In
the 2014 SIPP Panel experiment, we are also interested in exploring the effect of incentives on nonresponse bias and
whether we can use incentives to improve the distribution of the final sample in terms of overall coverage. These
goals will be explored further in the Wave 2 experiment, the design of which is explained in Section V.
IV. Wave 1 Results for the 2014 SIPP Incentive Study [4]
In this section, we present our findings from the incentive experiment for Wave 1 of the SIPP 2014 Panel. Note that
in Wave 1, we combine Groups 1 and 2 to form the $0 incentive group. We compare response rates across incentive
groups overall, by regional office (RO), and for subgroups of the population. Next, we compare the distributions of Type A noninterviews (eligible households that were not interviewed), specifically the household refusals, across incentive groups. Among the interviewed
households, we compare distributions of key variables across incentive groups.
In addition to looking at the effectiveness of incentives, we also want to evaluate a few operational issues associated
with incentives. First, we want to compare the average number of contacts per respondent household across
incentive groups. We want to know if using incentives actually reduced Field costs by reducing the number of times
a field representative (FR) had to travel to a household to complete the interview. Also, we want to look at how
many of the households that received an incentive actually cashed it.
Response Rates by Incentive Group
Table 2 presents the weighted response rates across incentive group for Wave 1 of the 2014 SIPP Panel. Table 2
shows that both the $20 and $40 conditional incentives significantly increase the Wave 1 response rate compared to
the control group by 1% and 3%, respectively. Also, the $40 incentive significantly improves the response rate by
2% compared to the $20 incentive.
[4] All comparative statements in this report have undergone statistical testing, and, unless otherwise noted, all comparisons are statistically significant at the 10 percent significance level.

Table 2. Weighted Wave 1 Response Rates by Incentive Amount
Incentive Amount   Response Rate     Difference from $0   Difference from $20
$0                 67.9% (0.38%)     --                   --
$20                69.0% (0.51%)     1.1%*                --
$40                70.9% (0.48%)     3.0%*                1.9%*
SOURCE: U.S. Census Bureau, Survey of Income and Program Participation (SIPP), 2014 Panel Wave 1
*Significant at the 10% level of significance
Response rates are weighted using the base weights. Standard errors are shown in parentheses.
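
To make the comparison concrete, the sketch below illustrates how base-weighted response rates by incentive group could be computed from a household-level file. This is not the Census Bureau's production code: the column names and the simulated data are assumptions, and production standard errors would come from the SIPP replicate weights rather than this simplified setup.

```python
# A minimal sketch (not Census Bureau production code) of computing base-weighted
# response rates by incentive group. Column names and the simulated data are
# assumptions made for illustration.
import numpy as np
import pandas as pd


def weighted_response_rate(df: pd.DataFrame) -> pd.Series:
    """Base-weighted response rate for each incentive group."""
    totals = (
        df.assign(w_resp=df["base_weight"] * df["responded"])
          .groupby("incentive_group")[["w_resp", "base_weight"]]
          .sum()
    )
    return totals["w_resp"] / totals["base_weight"]


# Toy stand-in for the household-level Wave 1 file.
rng = np.random.default_rng(0)
n = 10_000
households = pd.DataFrame({
    "incentive_group": rng.choice(["$0", "$20", "$40"], size=n, p=[0.5, 0.25, 0.25]),
    "base_weight": rng.uniform(500, 5000, size=n),
})
true_rates = {"$0": 0.679, "$20": 0.690, "$40": 0.709}  # point estimates from Table 2
households["responded"] = rng.binomial(1, households["incentive_group"].map(true_rates))

print(weighted_response_rate(households))
# In production, the standard errors of the differences between groups would be
# computed from the SIPP replicate weights, not from this simplified setup.
```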

Response Rates by Incentive Group and Regional Office
In the 2008 Panel incentive experiment, it was found that the implementation of incentives differed by regional
office and so did the incentives’ effectiveness (Mattingly, 2011). We already believe that there are fundamental
differences between the respondents in Census regions, as evidenced by the fact that we use region as a variable in
the weighting nonresponse adjustment. Therefore, in this experiment, although we controlled the distribution of the
incentives through the National Processing Center, we want to see if the same differences in incentive effectiveness
are still present across regional offices as they were in the 2008 Panel. Table 3 shows the weighted response rates for
each incentive group in each regional office for Wave 1 of the SIPP 2014 Panel.
Table 3. Weighted Wave 1 Response Rates by Regional Office and Incentive Amount
Regional Office       $0      $20     $40     Significant Differences
New York (22)         59.4%   59.8%   58.6%
Philadelphia (23)     71.0%   70.4%   74.2%   †, ‡
Chicago (25)          72.3%   74.1%   75.2%   †
Atlanta (29)          66.2%   68.6%   70.6%   *, †
Denver (31)           68.5%   69.6%   72.8%   †, ‡
Los Angeles (32)      68.0%   69.6%   71.6%   †
SOURCE: U.S. Census Bureau, Survey of Income and Program Participation (SIPP), 2014 Panel Wave 1
*Significant difference between $0 and $20 at the 10% level of significance
†Significant difference between $0 and $40 at the 10% level of significance
‡Significant difference between $20 and $40 at the 10% level of significance
Response rates are weighted using the base weights.

We can see from Table 3 that the effectiveness of incentives differs by regional office. In New York, we saw no
significant difference in response rate between the three incentive groups. While we saw that the overall effect of a
$20 conditional incentive on response rate was significant at the national level, when looking by regional office,
Atlanta was the only regional office where a $20 incentive showed a significant increase in response rate over the control group. This is partly due to the smaller sample sizes within each regional office, but it also lends some credence to the idea that different regions of the country respond differently to incentives.
Response Rates by Incentive Group for Subgroups
In the Wave 1 incentive assignment, incentives were randomly distributed to households regardless of their
characteristics. Assuming incentives do not equally affect all households, we want to examine subgroups of the
households to see if we can determine the effectiveness of incentives based on the characteristics of the household.
Table 4 shows the response rates by incentive group for various subgroups of the population. Examined
characteristics were poverty stratum, urban or rural status, Census region, Metropolitan Statistical Area (MSA)
status, household size, tenure, race and gender of the household reference person. These variables are chosen
because we have their values for both respondents and nonrespondents from either the frame or as reported by
interviewer observation.

Table 4. Weighted Wave 1 Response Rates by Incentive Amount for Subgroup Characteristics
Variable             Level                  $0      $20     $40     Significant Differences
Poverty Stratum      Low Income             70.7%   72.8%   75.6%   *, †, ‡
                     Non-Low Income         66.2%   66.8%   68.2%   †
Urban / Rural        Urban                  67.1%   68.5%   70.1%   *, †, ‡
                     Rural                  71.2%   72.0%   74.1%   †
Census Region        Northeast              62.5%   63.2%   63.3%
                     Midwest                70.7%   72.7%   73.7%   *, †
                     South                  68.5%   69.3%   72.2%   †, ‡
                     West                   68.2%   69.4%   72.0%   †, ‡
MSA Status / Place   Central City of MSA    66.0%   68.2%   70.0%   *, †
                     Balance of MSA         66.9%   68.1%   69.6%   †
                     Place                  75.1%   74.4%   76.4%
                     Other                  75.0%   73.3%   77.9%   ‡
Household Size       1                      74.3%   74.7%   73.7%
                     2                      65.7%   67.4%   71.2%   †, ‡
                     3                      72.4%   74.2%   74.8%   †
                     4+                     75.1%   76.8%   77.4%   †
Race                 White                  70.4%   71.1%   72.4%   †
                     Black                  72.2%   76.4%   78.8%   *, †
                     Other                  70.3%   72.1%   69.8%
Gender               Male                   68.4%   70.1%   71.2%   *, †
                     Female                 71.5%   72.1%   73.8%   †, ‡
Tenure               Owned                  70.3%   70.8%   72.3%   †, ‡
                     Rented                 71.9%   74.8%   75.8%   *, †
                     Occupied               88.4%   92.8%   90.7%   *
SOURCE: U.S. Census Bureau, Survey of Income and Program Participation (SIPP), 2014 Panel Wave 1
*Significant difference between $0 and $20 at the 10% level of significance
†Significant difference between $0 and $40 at the 10% level of significance
‡Significant difference between $20 and $40 at the 10% level of significance
Response rates are weighted using the base weights.

Table 4 shows that the effect of the incentive amount on response rate changes based on household characteristics. For example, a $20 incentive increases response rate compared to no incentive for low income and urban households. However, a $20 incentive does not significantly increase the response rate in non-low income and rural households. This implies that we may be able to use household characteristics to determine which households are more likely to respond when offered an incentive.
Distribution of Type A Cases by Incentive Group
We found that giving an incentive tends to decrease the overall nonresponse rate, so now we want to look at the distribution of Type A noninterviews by incentive group to see whether that decrease is concentrated in a specific nonresponse status, such as household refusal. Table 5 shows the Wave 1 percent of Type As by nonresponse status for each incentive group.
Table 5. Wave 1 Percent of Type As by Nonresponse Status and Incentive Group
Incentive Amount   Language Problem   Unable to Locate   No One Home   Temporarily Absent   Household Refused   Other
$0                 0.76%              0.40%              10.83%        1.44%                79.07%              7.49%
$20                0.97%              0.28%              11.27%        1.04%                79.25%              7.19%
$40                0.46%              0.43%              11.17%        1.11%                79.38%              7.46%

SOURCE: U.S. Census Bureau, Survey of Income and Program Participation (SIPP), 2014 Panel Wave 1


A chi-square test shows no evidence of a significant difference in the distributions of Type A noninterviews across incentive groups. In particular, the percentage of households that refused does not significantly decrease when an incentive is offered.
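
As an illustration of the kind of test described above, the sketch below runs a chi-square test of independence on a contingency table of Type A counts by incentive group. The counts are hypothetical, since the report publishes only percentages; only the structure of the test is meant to mirror the analysis.

```python
# Sketch of the chi-square test of independence described above. The counts of
# Type A noninterviews by status and incentive group are hypothetical; only the
# structure of the test is illustrated.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: $0, $20, $40. Columns: language problem, unable to locate, no one home,
# temporarily absent, household refused, other.
type_a_counts = np.array([
    [52, 28, 750, 100, 5475, 519],   # hypothetical $0 group counts
    [33, 10, 385,  36, 2710, 246],   # hypothetical $20 group counts
    [14, 13, 340,  34, 2420, 227],   # hypothetical $40 group counts
])

chi2, p_value, dof, expected = chi2_contingency(type_a_counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p-value = {p_value:.3f}")
# A p-value above 0.10 would be consistent with the report's finding of no
# significant difference in the Type A distributions across incentive groups.
```
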
Distribution of Interviewed Households by Incentive Group
In addition to increasing response rates, it is believed that incentives can improve the characteristics of the final
sample. If the coverage of the final sample is higher due to incentives, this can reduce the size of the nonresponse
adjustments needed in weighting and ultimately decrease nonresponse bias. For this reason, we want to compare the
distributions of the interviewed households by key characteristics that are used in SIPP nonresponse weighting
adjustments. For Wave 1, these key variables are within PSU stratum code, race of the reference person, tenure,
Census region, MSA Status / Place, and household size. In addition to these variables, we looked at other
demographic characteristics such as age, gender, educational attainment, and marital status of the household
reference person.
Table 6 shows the distributions of the interviewed households by incentive group for each of these key variables.
Chi-square tests were used to test for a significant difference in the distributions across incentive groups. When we
look at the distribution of interviewed cases, it is difficult to find any differences between the incentive groups. We did find a difference in the distribution of householder race across incentive groups. However, for a majority of the variables, incentives did not significantly change the distribution.
This leads us to conclude that while incentives do appear to increase response rates, they do not appear to reduce nonresponse bias, if such bias exists. This issue will be the focus of our Wave 2 incentive experiment, where we will analyze the use of a response propensity model to assign incentives.


Table 6. Wave 1 Distribution of Key Variables by Incentive Group for the Interviewed Sample
Variable                 Level                      $0     $20    $40
Poverty Stratum          Low Income                 38.3   39.1   38.9
                         Non-Low Income             61.7   60.9   61.1
Urban / Rural            Urban                      19.6   19.5   19.9
                         Rural                      80.4   80.5   80.1
Census Region            Northeast                  16.6   16.6   16.3
                         Midwest                    23.5   24.0   23.3
                         South                      37.3   37.1   37.9
                         West                       22.6   22.3   22.4
MSA Status / Place       Central City of MSA        32.5   33.4   33.2
                         Balance of MSA             50.8   50.6   50.5
                         Place                      8.9    8.7    8.3
                         Other                      7.8    7.3    8.0
Household Size           1                          30.1   29.3   28.0
                         2                          32.6   32.3   33.9
                         3                          15.0   15.2   15.2
                         4+                         22.3   23.1   23.0
Race*                    White                      80.5   80.4   80.4
                         Black                      11.9   12.8   12.8
                         Other                      7.6    6.7    6.8
Gender                   Male                       47.4   46.7   47.4
                         Female                     52.6   53.3   52.6
Tenure                   Owned                      63.5   62.5   63.7
                         Rented                     34.2   35.0   34.0
                         Occupied                   2.3    2.5    2.3
Age                      Under 25                   4.8    4.9    4.8
                         25-34                      14.4   14.7   14.0
                         35-54                      35.8   35.6   36.7
                         55+                        45.0   44.9   44.4
Educational Attainment   Less than High School      4.2    4.1    4.5
                         High School, no diploma    7.5    6.8    6.9
                         High School graduate       57.0   57.5   56.2
                         College graduate           31.3   31.6   32.3
Marital Status           Married, spouse present    46.7   47.2   47.7
                         Married, spouse absent     1.7    1.9    1.9
                         Widowed                    10.6   10.1   9.9
                         Divorced                   17.4   17.0   17.3
                         Separated                  2.7    2.7    3.2
                         Never Married              21.0   21.2   20.1
SOURCE: U.S. Census Bureau, Survey of Income and Program Participation (SIPP), 2014 Panel Wave 1
*Significant difference in distribution between incentive groups at the 10% level of significance

Number of Contacts per Household by Incentive Group
We now switch our attention away from how incentives can help improve data quality towards how they can
decrease total survey costs. One measure of survey cost is the number of contact attempts (in person or by
telephone) per household that are needed to obtain an interview. The hope is that offering an incentive decreases the
number of times a field representative needs to contact the household to complete the interview. Using collected
Contact History Instrument (CHI) data, we can retrieve the number of attempts made for each household. The
average number of contact attempts per interviewed household by incentive group is presented in Table 7 for Wave
1.


Table 7. Wave 1 Average Number of Contact Attempts per Interviewed Household
Incentive Amount   Avg. Number of Contact Attempts   Difference from $0   Difference from $20
$0                 4.87                              --                   --
$20                4.72                              -0.1467*             --
$40                4.67                              -0.1924*             -0.0457
SOURCE: U.S. Census Bureau, Survey of Income and Program Participation (SIPP), 2014 Panel Wave 1
*Significant at the 10% level of significance

The results in Table 7 show that both the $20 and $40 incentives significantly decreased the average number of contact attempts needed to complete an interview compared to the control group. Across modes, it costs approximately $44.31 per contact attempt. Multiplying this cost by the reduction in attempts, we estimate that a $20 incentive saves about $6.50 (0.1467 × $44.31) and a $40 incentive saves about $8.53 (0.1924 × $44.31) in contact costs per interviewed household.
Analysis of the Distribution of the Debit Cards
For the first time, we have information about who received the debit cards and cashed them, allowing us to analyze
how many of the incentives were cashed and the characteristics of those households that did not cash them.
First, we can look at how many households actually cashed the incentives they received. Of the households that were sent $20 incentives, 69.4% cashed them, and of the households that were sent $40 incentive cards, 80.5% cashed them. The difference in cashing rates between the $20 and $40 incentives is significant.
Table 8 compares the cashed rates for each incentive amount by RO. The same trend that held for the entire nation is
also true for each RO. Interestingly, the New York RO has the lowest percentage of cashed incentives. New York
was also the RO with the lowest response rate, and the only RO that had no significant differences between the
response rates of the control group and the incentive groups.
Table 8. Percentage of Households that Cashed Received Incentives by Incentive Amount and RO
Regional Office       $20     $40     Difference
New York (22)         66.2%   75.0%   8.9%*
Philadelphia (23)     69.0%   83.9%   15.0%*
Chicago (25)          71.1%   81.4%   10.3%*
Atlanta (29)          72.4%   81.8%   9.4%*
Denver (31)           69.5%   78.9%   9.4%*
Los Angeles (32)      67.1%   80.8%   13.7%*
SOURCE: U.S. Census Bureau, Survey of Income and Program Participation (SIPP), 2014 Panel Wave 1
*Significant difference between $20 and $40 at the 10% level of significance
Finally, it is interesting to look at the characteristics of the households that do not cash the incentives. A logistic
regression model predicting the probability of cashing a received incentive can provide insight into the types of
households that are more likely to cash an incentive. Table 9 shows the estimated coefficients for such a logistic
regression model. A significantly positive coefficient implies that the group is more likely to cash an incentive compared to the reference group, whereas a significantly negative coefficient implies that the group is less likely to cash an incentive compared to the reference group.


Table 9. Estimated Logistic Regression Model for Predicting a Household’s Probability of Cashing an
Incentive
                                                   Estimate   90% Confidence Limits
Intercept                                          0.43*      (0.31, 0.56)
Incentive Amount (ref=$20)
  $40                                              0.62*      (0.54, 0.69)
Race (ref=White)
  Black                                            0.21*      (0.07, 0.35)
  Other                                            -0.05      (-0.23, 0.12)
Gender of Reference Person (ref=Male)
  Female                                           0.21*      (0.12, 0.29)
Age of Reference Person (ref=55+)
  <25                                              -0.23*     (-0.41, -0.04)
  25-34                                            0.02       (-0.09, 0.14)
  35-54                                            0.03       (-0.06, 0.12)
Education of Reference Person (ref=Bachelor's degree)
  No high school                                   -0.33*     (-0.51, -0.15)
  High school, no diploma                          0.28*      (0.14, 0.43)
  High school diploma                              0.13*      (0.05, 0.21)
Tenure (ref=Owned)
  Rented                                           0.22*      (0.14, 0.31)
  Occupied w/o Payment                             0.05       (-0.21, 0.31)
Region (ref=Northeast)
  Midwest                                          0.29*      (0.18, 0.40)
  South                                            0.22*      (0.11, 0.33)
  West                                             0.13       (-0.01, 0.28)
Marital Status (ref=Married w/ spouse present)
  Married w/o spouse present                       -0.35*     (-0.62, -0.09)
  Widowed                                          -0.47*     (-0.59, -0.34)
  Divorced                                         0.05       (-0.07, 0.17)
  Separated                                        0.38*      (0.13, 0.63)
  Never Married                                    -0.06      (-0.18, 0.05)
Urban/Rural Status (ref=Urban)
  Rural                                            -0.16*     (-0.25, -0.07)
SOURCE: U.S. Census Bureau, Survey of Income and Program Participation (SIPP), 2014 Panel Wave 1
*Significant at the 10% level of significance

Based on this logistic regression model, we can see that the likelihood of cashing an incentive differs by household
characteristic. Households with a black reference person are more likely to cash the incentive than those with a
white reference person. Renters are more likely to cash than house owners. Urban households are more likely to
cash than rural households. Female household reference persons are more likely to cash than male reference persons.
When the reference person is under age 25, the household is less likely to cash the incentive. Also, a $40 incentive is more likely to be cashed than a $20 incentive.
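
For readers who want to reproduce this type of analysis on their own data, the sketch below fits a reduced version of such a logistic regression with statsmodels on simulated data, since the household-level file is not public. The variable names are assumptions; the actual model in Table 9 includes all of the predictors listed above.

```python
# A minimal sketch of fitting this kind of cashing model with statsmodels, using
# simulated data. Variable names are assumptions; the model in Table 9 includes
# all of the predictors listed above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5_000
cards = pd.DataFrame({
    "incentive_amount": rng.choice(["$20", "$40"], size=n),
    "tenure": rng.choice(["Owned", "Rented", "Occupied w/o payment"],
                         size=n, p=[0.63, 0.34, 0.03]),
    "urban_rural": rng.choice(["Urban", "Rural"], size=n, p=[0.80, 0.20]),
})
# Simulate cashing behavior loosely consistent with the observed pattern that $40
# cards, renters' cards, and urban households' cards are cashed more often.
xb = (0.43 + 0.62 * (cards["incentive_amount"] == "$40")
           + 0.22 * (cards["tenure"] == "Rented")
           - 0.16 * (cards["urban_rural"] == "Rural"))
cards["cashed"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-xb)))

model = smf.logit(
    "cashed ~ C(incentive_amount, Treatment('$20'))"
    " + C(tenure, Treatment('Owned'))"
    " + C(urban_rural, Treatment('Urban'))",
    data=cards,
)
result = model.fit(disp=False)
print(result.params)
print(result.conf_int(alpha=0.10))  # 90% confidence limits, as reported in Table 9
```
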
V. Design of the SIPP 2014 Experiment – Wave 2
Our Wave 2 experiment is aimed towards finding a way to use incentives to improve the overall representativeness
of our interviewed sample. In Table 6, we saw that for many of the key variables typically used in the weighting
nonresponse adjustment, a randomly assigned incentive does not change the distribution of the interviewed cases.
Therefore, instead of randomly assigned incentives, we can assign incentives based on a household’s characteristics.

We plan on developing a response propensity model to predict each household’s probability of responding in Wave
2 given their Wave 1 characteristics. We hope to use characteristics typically associated with potential nonresponse bias as predictor variables in our model. By assigning incentives to households based on these characteristics, we hope to improve the coverage of our final sample.
In addition to improving coverage, another goal of the propensity model is to eliminate the offering of unnecessary
incentives. By including the receipt of an incentive as a main effect and as an interaction with other variables in the
propensity model, we can determine how an incentive will affect a household’s propensity to respond. Then we can
target incentives towards the households in which receiving an incentive increases the probability of responding by
the most, i.e., those with the largest differences between the predicted probability of responding with the incentive
and the predicted probability of responding without the incentive according to the model. In this way we hope that
we can avoid giving incentives to the households that would have responded without them.
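
The sketch below illustrates this targeting idea under assumed variable names: a logistic response propensity model with the incentive entered as a main effect and in interactions, scored twice per household (with and without the incentive) to estimate each household's predicted gain. The data and covariates are simulated stand-ins, not the actual SIPP model.

```python
# Sketch of the targeting idea above, under assumed variable names: fit a response
# propensity model with the incentive as a main effect and in interactions, then
# score each household's predicted gain in response propensity from an incentive.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf


def incentive_uplift(fitted_model, households: pd.DataFrame) -> pd.Series:
    """Predicted response propensity with an incentive minus without one."""
    return (fitted_model.predict(households.assign(incentive=1))
            - fitted_model.predict(households.assign(incentive=0)))


# Simulated stand-in for a file of Wave 1 characteristics, the Wave 1 incentive
# flag, and the Wave 2 response outcome.
rng = np.random.default_rng(2)
n = 8_000
wave1 = pd.DataFrame({
    "incentive": rng.integers(0, 2, size=n),
    "low_income": rng.integers(0, 2, size=n),
    "renter": rng.integers(0, 2, size=n),
})
xb = (0.6 + 0.05 * wave1["incentive"]
          + 0.25 * wave1["incentive"] * wave1["low_income"]
          - 0.30 * wave1["renter"])
wave1["responded_w2"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-xb)))

fit = smf.logit("responded_w2 ~ incentive * (low_income + renter)",
                data=wave1).fit(disp=False)
uplift = incentive_uplift(fit, wave1)

# Target the 50% of households with the largest predicted gain from an incentive.
targeted = uplift.rank(pct=True) >= 0.5
print(f"Mean predicted gain among targeted households: {uplift[targeted].mean():.3f}")
print(f"Mean predicted gain among the rest:            {uplift[~targeted].mean():.3f}")
```
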
We have designed the SIPP 2014 Wave 2 experiment with these goals in mind. In Wave 2 we are only going to test the results of assigning incentives based on a propensity model; we are not ready to actually use the model to assign incentives. Therefore, we are still going to do a random assignment of conditional incentives. Since Wave 1 showed that $40 incentives are more effective than $20 incentives, we are only going to study the effect of $40 incentives in Wave 2. We start with the four groups from the Wave 1 experiment, but divide Group 4 randomly in half into Groups 4a and 4b to create more options for our propensity model testing. Group 1 is again the control group
and is assigned no incentive. Groups 2 and 4a are assigned $40 conditional incentives. Groups 3 and 4b are also
assigned no incentive. Table 10 summarizes the Wave 2 treatment groups and the number of eligible households in
each.
Table 10. Number of Eligible Households for each Incentive Group in the SIPP 2014 Panel Wave 2 of Interviewing
Group   Wave 1 Incentive Amount   Wave 2 Incentive Amount   Number of Eligible Households
1       $0                        $0                        7,726
2       $0                        $40                       7,773
3       $20                       $0                        7,781
4a      $40                       $40                       3,932
4b      $40                       $0                        3,856
SOURCE: U.S. Census Bureau, Survey of Income and Program Participation (SIPP), 2014 Panel Wave 2

Since we are doing a random assignment of incentives, in addition to testing a propensity-model based incentive, we
can also look at the effect of Wave 2 randomly assigned incentives. With this in mind, we can answer the following
questions:
•	Does the Wave 1 incentive effect carry over to Wave 2? We can compare the Wave 2 response rates of Groups 4b and 1 and of Groups 3 and 1.
•	What is the effect of duplicate incentives? We can compare the Wave 2 response rates of Groups 4a and 1.
•	What is the effect of a later incentive in Wave 2? We can compare the Wave 2 response rates of Groups 2 and 1.

In order to test propensity-based incentives, the same model will be applied to both the control and treatment groups,
conditional on the Wave 1 incentive. Groups 1 and 4b are the control groups, with $0 and $40 Wave 1 incentives,
respectively. Groups 2 and 4a are the treatment groups that receive a $40 Wave 2 incentive, with $0 and $40 Wave 1
incentives, respectively. We can then look at what would have happened if we assigned only a portion of the
treatment group incentives based on the household response propensity instead of assigning incentives to the entire
group.
For example, suppose we want to assign incentives to 50% of the sampled households, so we pick out the 50% of households for which giving an incentive increases the predicted probability of responding by the greatest amount according to the model. We select this 50% within both the control group (Groups 1 and 4b) and the incentive group (Groups 2 and 4a). We can observe
if there is a difference in the response rate or distribution of the interviewed households between the control and
incentive groups (Note: we are only using the 50% of these groups chosen by the model for this comparison) to see
the effect of the model-based incentive. We can do this for any percentage, not only 50%, to see which works best in terms of resulting distribution, response rate, and cost.
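
A minimal sketch of this evaluation, assuming each household already carries a model-predicted gain ("uplift", as in the previous sketch) and a treatment indicator, is shown below; the names and toy data are illustrative only.

```python
# A minimal sketch of the comparison described above, assuming each household
# carries a model-predicted gain from an incentive ("uplift") and a treatment
# indicator (1 for Groups 2/4a, 0 for Groups 1/4b). Names and data are illustrative.
import pandas as pd


def modeled_assignment_comparison(df: pd.DataFrame, fraction: float) -> pd.Series:
    """Wave 2 response rates for control vs. incentive households, restricted to
    the `fraction` of households with the largest model-predicted gain."""
    cutoff = df["uplift"].quantile(1.0 - fraction)
    selected = df[df["uplift"] >= cutoff]
    return selected.groupby("treatment")["responded_w2"].mean()


# Toy example with eight households.
toy = pd.DataFrame({
    "uplift":       [0.01, 0.08, 0.03, 0.12, 0.05, 0.09, 0.02, 0.11],
    "treatment":    [0, 0, 0, 0, 1, 1, 1, 1],
    "responded_w2": [1, 0, 1, 1, 1, 1, 0, 1],
})
print(modeled_assignment_comparison(toy, fraction=0.5))
# The same function can be rerun with other fractions to compare response rates,
# sample composition, and cost at different levels of targeting.
```
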
VI. Conclusions
Overall, the Wave 1 incentive study for the SIPP 2014 Panel shows the following results:
•	Both the $20 and $40 conditional incentives significantly increase overall response rate compared to the control, with the $40 incentive significantly increasing response rate compared to the $20 incentive.
•	The effectiveness of the incentive differed by regional office. For New York, neither incentive amount was effective in increasing response rate. For Atlanta, both the $20 and $40 incentives were effective.
•	Among interviewed households, the distribution of many key variables did not significantly differ among incentive groups. The distribution of the household reference person's race was significantly different across incentive groups.

We conclude that the conditional incentive approach is effective in improving response rate but does not change the
characteristics of the final interviewed sample. Therefore, if there is nonresponse bias, a randomly assigned
incentive will not affect the bias of our estimates.
Our Wave 2 experiment is aimed towards finding a way to use incentives to improve the overall representativeness
of our interviewed sample. We plan on developing a response propensity model to predict each household’s
probability of responding in Wave 2 given their Wave 1 characteristics. By assigning incentives to households based
on their response propensity instead of randomly, we hope to improve the coverage of our final interviewed sample.
In addition, we hope that we can get similar improvements in the response rate with fewer incentives by targeting
households that are affected by incentives the most.

References
Bates, Nancy (2001). Evaluation of 1996 SIPP Incentive Experiment Waves 8-12. Memorandum from Bates to King.
Census Bureau, June 22, 2001.
Creighton, Kathleen P. (2003). Letter from Creighton to Schechter (regarding use of monetary incentives in 2004
SIPP Panel). Census Bureau, April 8, 2003.
Creighton, Kathleen P., King, Karen E., and Martin, Elizabeth A. (2001). The Use of Monetary Incentives in Census
Bureau Longitudinal Surveys. Statistical Policy Working Paper 32: 2000 Seminar on Integrating Federal
Statistical Information and Processes. Washington DC: Federal Committee on Statistical Methodology,
Office of Management and Budget, April 2001, pp. 289-310.
Flanagan, Patrick (2007). SIPP 2004: Incentive Analysis (ALYS-3). Internal Memorandum from Flanagan to
Creighton. Census Bureau, March 27, 2007.
Gelman, Andrew, Stevens, Matt, and Chan, Valerie (2003). Regression Modeling and Meta-Analysis for Decision
Making: A Cost-Benefit Analysis of Incentives in Telephone Surveys. Journal of Business & Economic
Statistics, Vol. 21.
James, Tracy L. (1997). Results of the Wave 1 Incentive Experiment in the 1996 Survey of Income and Program
Participation. ASA Proceedings of the Section on Survey Research Methods, 834-839.
Lewis, Denise (2004). SIPP 2001 Panel: Final Results of the Incentive Experiments. Memorandum from Lewis to
Creighton. Census Bureau, July 14, 2004.


Lewis, Denise, and Creighton, Kathleen (2005). The Use of Monetary Incentives in the Survey of Income and
Program Participation. Paper presented at the annual meeting of the American Association for Public
Opinion Research, Fontainebleau Resort, Miami Beach, FL.
Mack, Stephen, Huggins, Vicki, Keathley, Donald, and Sundukchi, Mahdi (1998). Do Monetary Incentives Improve
Response Rates in the Survey of Income and Program Participation? ASA Proceedings of the Section on
Survey Research Methods, 529-534.
Mattingly, Tracy L. (2011). 2008 Survey of Income and Program Participation Incentive Study Report (ALYS-7).
Internal Memorandum from Killion. U.S. Census Bureau, August 29, 2011.
Shettle, Carolyn (1996). Evaluation of Using Monetary Incentives in a Government Survey. American Statistical
Association, Aug. 1996.
Singer, Eleanor (2001). The Use of Incentives to Reduce Nonresponse in Household Surveys. Survey Nonresponse,
eds. R. M. Groves, D. A. Dillman, J. L. Eltinge, and R. J. A. Little, New York: Wiley, pp. 163-177.


