
Results from the 2012 FMLA Employee Survey Incentive Experiment









April 23, 2012











Prepared for:

Jonathan Simonetta

U.S. Department of Labor





Submitted by:

Abt Associates Inc.

55 Wheeler Street

Cambridge, MA 02138




Abt Associates Authors

Kelly Daley

Courtney Kennedy

Ali Ackermann

Jacob Klerman

Alyssa Pozniak



Overview

This report presents the findings from a randomized experiment designed to rigorously assess the viability of using a monetary incentive to improve the response rate on the 2012 Family and Medical Leave Act (FMLA) Employee Survey. The survey design relies (in part) on a national landline RDD sample, and a ten dollar ($10) incentive payment for respondents is under consideration. The analysis that follows is based on interviews conducted from February 1 – April 10, 2012.

The balance of this document proceeds in three sections. Section 1 introduces the Family and Medical Leave Survey. Section 2 describes the design of the experiment. In Section 3, we present preliminary results including the effects of the incentive on response rates.


1. Family and Medical Leave Act (FMLA) Survey

Abt Associates is conducting the 2012 Family and Medical Leave Act (FMLA) Employee Survey for the Department of Labor (DOL). In conjunction with DOL, Abt is designing the survey to update and expand knowledge about employees’ actual and prospective use of FMLA leave benefits. Two prior studies conducted in 1995 and 2000 established a baseline understanding of these decisions and behaviors. However, changes in family and labor force structure, coupled with constantly evolving decision-making dynamics, make this update essential.

The 2012 FMLA Employee Survey is being designed as an overlapping, dual frame landline and cell phone random digit dial (RDD) telephone survey. The target population is U.S. adults age 18 or older who were employed for pay in the past 12 months. The survey features both a screener and an extended interview. Adults who needed or took family/medical leave in the 18 months prior to the interview are oversampled and administered an extended interview roughly twice the length of the extended interview for respondents who did not need or take such leave. In order to identify the extended interview respondent, the screener includes a roster of all the adults in the household, including their relevant employment history and leave-taking behavior. Within-household selection is conducted for both landline and cell phone cases.
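The report does not spell out the within-household selection algorithm. As a purely illustrative sketch, one way such selection with an oversample of leave takers and needers could be implemented is shown below; the roster fields and the preference rule are assumptions of ours, not the survey's documented procedure.

    import random

    def select_respondent(roster, rng=random):
        """Illustrative within-household selection from a screener roster.

        `roster` is a list of dicts such as:
            {"name": "A", "employed_past_year": True, "leave_taker_or_needer": False}
        Adults who took or needed leave are preferred when present (which
        oversamples them); otherwise one employed adult is chosen at random.
        """
        eligible = [a for a in roster if a["employed_past_year"]]
        if not eligible:
            return None  # household screens out
        takers_needers = [a for a in eligible if a["leave_taker_or_needer"]]
        return rng.choice(takers_needers or eligible)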

At the beginning of the screener, respondents sampled through the cell RDD frame are notified that if they qualify and complete the extended interview, they will be sent $10 as a token of appreciation.

2. Experimental Design

The practice of offering incentives to cell phone survey respondents is well established as a means of defraying the telephone service costs some respondents incur by participating (AAPOR Task Force 2010). More broadly, incentive payments have been used extensively for many years to improve survey response rates, and there is considerable research-based evidence that compensation increases cooperation and improves the speed and quality of response in a broad range of data collection efforts; the literature on landline and cellular random digit dial (RDD) surveys in particular suggests that incentives improve the cooperation rate in household surveys (Singer et al. 1999; 2000; Brick et al. 2005). The offer of a monetary incentive can help persuade the respondent to participate and may reduce the “cold call” effect by piquing interest during the introduction to the survey, when vital information is conveyed about its purpose, content, and timing.

The incentive experiment was limited to the landline RDD sample because cell phone respondents were already scheduled to receive a monetary incentive to compensate them for the minutes used. Prior to the start of interviewing, each landline number in the first 148 replicates was randomly assigned to either the $10 incentive group or the no incentive control group. Telephone numbers were divided evenly with a 50% probability of being assigned to each group. The total count of sampled landline numbers assigned to each group is presented in Table 1.

Table 1. FMLA Employee Survey Landline Sample by Incentive Experiment Condition

Experimental Group       Total Count of Landline Numbers
$10 Incentive Group      40,219
$0 Control Group         40,197
TOTAL                    80,416
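To make the assignment procedure concrete, the sketch below shows one way a 50/50 randomization of sampled numbers could be implemented. This is a minimal illustration in Python; the function name, seed, and group labels are our own, and the report does not describe the actual sample-management software used.

    import random

    def assign_conditions(phone_numbers, seed=2012):
        """Assign each sampled landline number to the $10 incentive group
        or the $0 control group with equal (50%) probability."""
        rng = random.Random(seed)  # fixed seed makes the assignment reproducible
        return {number: rng.choice(["incentive_10", "control_0"])
                for number in phone_numbers}

    # Usage with a few fictitious sampled numbers:
    print(assign_conditions(["617-555-0101", "617-555-0102", "617-555-0103"]))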


Based on the research literature on survey incentives, we anticipated that the $10 incentive could increase the response rate and/or reduce costs relative to the no-incentive condition. In particular, we identified several metrics for evaluating the effect of the incentive:


  1. Cooperation rate (defined as the number of cases interviewed divided by the number of eligible units ever contacted)

  2. Response rate (defined as the number of complete and partial interviews divided by the number of eligible cases plus cases of unknown eligibility)

  3. Completion rate (the proportion of screener interviews and extended interviews by experiment condition)

  4. Level of effort (as measured by the mean number of attempts per completion)

  5. Incentive cost (time to administer the survey, actual cost of the incentive, and processing costs to administer the incentive payment)


Based on the literature, we hypothesized that when a monetary incentive is used, the cooperation rate would be higher and mean number of call attempts per completion would be lower. Decreasing the interviewer effort required to complete the landline interviews would lower the data collection costs, resulting in a net cost saving from the incentives. We also hypothesized that the promise of an incentive would motivate participants to complete the survey versus terminating early, resulting in a higher survey completion rate. A gain of at least five percentage points in the cooperation rate for the treatment group over the control group (based on contacted working, residential numbers) would suggest that we proceed with an incentive for all landline respondents. This is the minimum level at which we would expect to see cost savings result from reduced call attempts.
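The decision rule itself is simple enough to state in code. The sketch below (with names of our own choosing) applies the five-percentage-point criterion to a pair of cooperation rates.

    def proceed_with_incentive(coop_treatment, coop_control, threshold=0.05):
        """Design decision rule: recommend the incentive for all landline
        respondents only if the treatment group's cooperation rate exceeds
        the control group's by at least five percentage points."""
        return (coop_treatment - coop_control) >= threshold

    # Applied to the rates eventually observed in Table 2, row 20:
    print(proceed_with_incentive(0.722, 0.728))  # False: the criterion is not met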



3. Experiment Results

This analysis of the incentive experiment is based on interviews conducted February 1 – April 10, 2012. Table 2 presents the dispositions observed for the incentive group versus the control group. There were 344 completed interviews in the incentive group and 305 in the control group (row 15 of Table 2). Row 16 shows the number of completed interviews conducted with leave takers in the treatment (158) and control (126) groups, and row 17 shows the number of completed interviews conducted with leave needers in the treatment (48) and control (54) groups.

In short, there is no strong support for a beneficial effect from the incentive across a variety of tests. Therefore, we recommend discontinuing the incentive for the landline sample. We provide further detail below.

1. Cooperation Rate

The first test we considered was whether the incentive improved the cooperation rate, defined as the proportion of all eligible units ever contacted that were interviewed. As shown in Table 2, the cooperation rate (row 20) was not significantly different in the incentive (72.2%) and control (72.8%) groups (p=.69). Similarly, the rate of refusals (row 21) was nearly the same in the incentive (9.2%) and control (8.7%) groups (p=.74). All significance tests assume an alpha of 0.05.
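These comparisons are standard two-sample tests of proportions. The report does not publish the exact counts underlying each test, so the sketch below is a generic two-proportion z-test using only the Python standard library; the counts in the usage example are illustrative assumptions chosen to approximate the published rates, not the report's actual test bases.

    import math

    def two_proportion_ztest(x1, n1, x2, n2):
        """Two-sided z-test for the difference between two independent
        proportions, pooling under the null hypothesis of no difference."""
        p1, p2 = x1 / n1, x2 / n2
        pooled = (x1 + x2) / (n1 + n2)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        p_value = math.erfc(abs(z) / math.sqrt(2))  # = 2 * (1 - Phi(|z|))
        return z, p_value

    # Illustrative: 344 and 305 completes over assumed eligible-contact bases.
    z, p = two_proportion_ztest(344, 476, 305, 419)
    print(f"z = {z:.2f}, two-sided p = {p:.2f}")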

2. Response Rate

As defined by CASRO (Frankel, 1983) and other sources (Groves, 1989; Hidiroglou et al., 1993; Kviz, 1977; Lessler and Kalsbeek, 1992; Massey, 1995), the response rate is the number of complete interviews with reporting units divided by the number of eligible reporting units in the sample. The American Association of Public Opinion Research (AAPOR) has established six response rate calculations for use in survey research. For the purposes of this analysis, we use AAPOR Response Rate Calculation 2 (AAPOR2) which is defined as:


  • the number of completed interviews (I), plus

  • the number of partial interviews (P), divided by

  • the number of interviews (complete plus partial), plus

  • the number of non-interviews, comprised of refusals and break-offs (R), non-contacts (NC), and others (O), plus

  • all cases of unknown eligibility, comprised of unknown if housing unit (UH) and unknown, other (UO).


This is represented in Equation 1:

Equation 1.

    AAPOR2 = (I + P) / [(I + P) + (R + NC + O) + (UH + UO)]
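A direct translation of Equation 1 into code may help make the calculation concrete. The disposition counts in the example are hypothetical, chosen only so that the result lands near the observed rates; they are not drawn from Table 2, which does not break out every AAPOR disposition category.

    def aapor_rr2(I, P, R, NC, O, UH, UO):
        """AAPOR Response Rate 2: complete plus partial interviews over all
        interviews, non-interviews, and cases of unknown eligibility."""
        return (I + P) / ((I + P) + (R + NC + O) + (UH + UO))

    # Hypothetical disposition counts, for illustration only:
    rr2 = aapor_rr2(I=300, P=44, R=500, NC=250, O=50, UH=60, UO=40)
    print(f"RR2 = {rr2:.1%}")  # about 27.7%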



There are no statistically significant differences in response rates between the incentive and control groups. In fact, as shown in row 22 of Table 2, the response rates are virtually identical between the incentive and control groups (27.7% and 27.9%, respectively, p=.56).

3. Completion Rate by Interview Type

The third factor we considered was the proportion of screened households and full interviews completed with respondents who took FMLA leave (takers) or needed to take FMLA leave (needers), as they represent a much smaller fraction of the total population. Here we did find a significant difference in the type of completed interview across experimental conditions: 8.0% of screened households in the incentive group are takers or needers, versus 6.8% of screened households in the control group (p=.048; not shown in Table 2). The difference does not hold up, however, when we look at completed extended interviews: 59.9% of the full interview respondents in the incentive group are takers or needers, versus 59.0% in the control group (p=.41, row 18).

Table 2. Dispositions by Sample and Experiment Condition

                                                      Landline Sample        Landline Sample
                                                      Incentive ($10)        Control ($0)
Row                                                   N       % Contacts     N       % Contacts
 1   TOTAL NUMBERS DIALED                             40,219                 40,197
 2   BAD NUMBERS (e.g., business, disconnect)         27,593                 27,307
 3   TOTAL GOOD NUMBERS (sample frame)                12,626                 12,890
 4   NO CONTACT (e.g., busy, no answer)                3,260                  3,395
 5   TOTAL CONTACTS                                    9,366   100.0%         9,495   100.0%
 6   CONTACTS - NOT SCREENED                           6,758    72.2%         6,815    71.8%
 7     Dead Not Screened (e.g., away for duration)       464     5.0%           463     4.9%
 8     Live Not Screened (answering machine/vm)        2,473    26.4%         2,590    27.3%
 9     Callback - Not Screened                         3,008    32.1%         2,982    31.4%
10     Refusals - Not Screened                           813     8.7%           780     8.2%
11   CONTACTS - SCREENED                               2,608    27.8%         2,680    28.2%
12     Screen-Outs                                     2,125    22.7%         2,215    23.3%
13     Qualified Refusals                                 46     0.5%            45     0.5%
14     Qualified Callbacks                                93     1.0%           115     1.2%
15     Total Completes                                   344     3.7%           305     3.2%
     Interview type among completed interviews:
16     - leave taker                                     158    45.9%           126    41.3%
17     - leave needer                                     48    14.0%            54    17.7%
18     - leave taker or needer                           206    59.9%           180    59.0%
     Summary Measures
19   Survey Incidence (Screening Incidence)             18.5%                  17.4%
20   Cooperation Rate 2                                 72.2%                  72.8%
21   Total Refusals                                      9.2%                   8.7%
22   Response Rate 2                                    27.7%                  27.9%



4. Level of Effort

Finally, we considered the total level of effort to reach a completion, as one measure of interview cost. As with the other factors, we found no significant difference between the two experimental groups. The mean number of attempts per completed full interview was 3.6 in the incentive group versus 3.3 in the control group (not shown in Table 2). The mean numbers of attempts to screen a household were virtually identical in the two groups (2.9 in the incentive group and 3.0 in the control group).

5. Incentive Cost

The cost of survey administration is measured by the total time to administer each survey type, the cost of the incentive itself, and the material and handling costs associated with processing the incentive payment. The total interview administration time was just over two minutes longer for the incentive group (19 minutes, 15 seconds) than for the control group (17 minutes, 11 seconds). The difference may be at least partially accounted for by the need to collect and confirm name and contact information in order to mail the incentive check. Post-interview incentive payment and processing costs amount to $11.50 per completed interview.
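To make these cost components concrete, the short sketch below works through the incremental cost per completed interview they imply. Only the two administration times and the $11.50 processing cost come from the report; the fully loaded interviewer cost per minute is an assumption of ours for illustration.

    # From the report: mean administration time per completed interview.
    incentive_minutes = 19 + 15 / 60   # 19 min 15 sec
    control_minutes = 17 + 11 / 60     # 17 min 11 sec
    processing_cost = 11.50            # incentive payment + processing, per complete

    cost_per_minute = 0.75             # assumed interviewer cost rate (illustrative)

    extra_time_cost = (incentive_minutes - control_minutes) * cost_per_minute
    incremental_cost = extra_time_cost + processing_cost
    print(f"Extra interviewing cost per complete: ${extra_time_cost:.2f}")
    print(f"Total incremental cost per complete:  ${incremental_cost:.2f}")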

Based on these considerations, we recommend discontinuing the incentive payment for the landline sample.


References

American Association for Public Opinion Research (AAPOR). 2010. “New Considerations for Survey Researchers When Planning and Conducting RDD Telephone Surveys in the U.S. With Respondents Reached via Cell Phone Numbers.” Prepared for the AAPOR Council by the Cell Phone Task Force operating under the auspices of the AAPOR Standards Committee. http://aapor.org/AM/Template.cfm?Section=Cell_Phone_Task_Force&Template=/CM/ContentDisplay.cfm&ContentID=2818

Brick, J., Montaquila, J., Hagedorn, M., Roth, S., and Chapman, C. 2005. “Implications for RDD Design from an Incentive Experiment.” Journal of Official Statistics 21:571-589.

Frankel, Lester R. 1983. “The Report of the CASRO Task Force on Response Rates.” In Improving Data Quality in a Sample Survey, edited by Frederick Wiseman. Cambridge, MA: Marketing Science Institute.

Groves, Robert M. 1989. Survey Errors and Survey Costs. New York: John Wiley & Sons.

Hidiroglou, Michael A., Drew, J. Douglas, and Gray, Gerald B. 1993. “A Framework for Measuring and Reducing Nonresponse in Surveys.” Survey Methodology 19:81-94.

Kviz, Frederick J. 1977. “Toward a Standard Definition of Response Rate.” Public Opinion Quarterly 41:265-267.

Lessler, Judith and Kalsbeek, William D. 1992. Nonsampling Error in Surveys. New York: John Wiley & Sons.

Massey, James T. 1995. “Estimating the Response Rate in a Telephone Survey with Screening.” Proceedings of the Section on Survey Research Methods, Vol. 2. Alexandria, VA: American Statistical Association.

Singer, E., Van Hoewyk, J., Gebler, N., Raghunathan, T., and McGonagle, K. 1999. “The Effect of Incentives on Response Rates in Interviewer-Mediated Surveys.” Journal of Official Statistics 15:217-230.

Singer, E., Van Hoewyk, J., and Maher, M. 2000. “Experiments with Incentives in Telephone Surveys.” Public Opinion Quarterly 64:171-188.

