FMLA Wave 3 Survey Non-Response Bias Analysis

OMB: 1290-0015

Note: This document is a verbatim copy of Sections 1.5.3 and 1.5.4 of the Wave 3 Methodology Report (Daley et al. 2012). The only change is that the exhibits, and the references to them, have been renumbered.

1. Nonresponse Follow-up Survey (NRFU)

A nonresponse follow-up survey (NRFU) was conducted shortly after the Employee Survey was completed. The NRFU attempted to interview a subsample of nonrespondents to the Employee Survey in order to assess whether nonrespondents had different characteristics than respondents. Based on lessons learned from the 2000 Employee Survey NRFU, the 2012 NRFU focused exclusively on cases that had completed a screener but failed to respond to the extended interview. In 2000, attempts to reach nonrespondents to the screener yielded too few cases to support any analysis beyond the screener data: Only 2.2% of the households that had not responded to the screener in the main survey yielded a completed interview. This was “too few (interviews) to draw definitive conclusions” according to the methodology report.

In order to avoid this same result in 2012, the NRFU sample was limited to a subset (n=600) of the total n=1,077 households that completed the screener, contained at least one eligible adult, and had not responded to the extended interview. Both landline and cell phone cases were included in the NRFU sample, approximately in proportion to their relative shares of the entire pool of screened nonrespondents from the main survey. The landline cases sub-sampled for the NRFU were matched against a directory of postal addresses with known landlines, and a postal address was appended where available. Both address-matched and unmatched cases were included in the 2012 NRFU because adults living in households that are not in commercial address databases have different demographic and socioeconomic characteristics than individuals living in address-matched households. If the NRFU had been limited to address-matched cases, this could have confounded comparisons between the NRFU respondents and the main survey respondents. All landline sample cases that were matched to an address (through reverse lookup) received a letter encouraging them to cooperate with the NRFU interview.

NRFU sample cases were offered $20 post-paid remuneration for completing the interview. The NRFU was conducted via CATI and featured a shortened version of the Employee Survey instrument. The NRFU was conducted from July 9 to July 31, 2012, and yielded 137 completed extended interviews (98 with landline sample cases and 39 with cell phone sample cases), for an NRFU response rate of 22.8 percent.

About two-thirds of the completed NRFU interviews (64 percent) were with employed-only adults, 9 percent were with respondents with unmet need for leave, and 27 percent were with leave-takers. It is important to note that leave status of respondents (FMLA group) for these cases was known (from the screener data) prior to the fielding of the NRFU. Consequently, FMLA group is not considered an outcome variable in the NRFU analysis. The relationship between FMLA group and nonresponse to the extended interview is discussed in detail in the section on response propensity modeling below. The focus of the NRFU analysis is on employment and leave-related characteristics that were not captured in the screener.

Exhibit A.1 compares unweighted characteristics of all 2,852 Employee Survey respondents to the characteristics of the 137 eligible adults reached in the NRFU. The results suggest no major differences between the NRFU respondents and the Employee Survey respondents. The likelihood of being familiar with FMLA and of having an employer who is covered by FMLA are highly similar for these two groups. Some 77 percent of main survey respondents gave answers indicating that their employer was covered by FMLA, compared with 78 percent of the NRFU respondents.

Exhibit A.1 Characteristics of Main Survey Respondents versus NRFU Respondents


Characteristic | Main Survey Respondents | NRFU Respondents
Employer is covered by FMLA | 77% | 78%
Respondent is eligible for FMLA | 67% | 60%
Ever hear of the Family and Medical Leave Act?
  Yes | 74% | 73%
  No | 25% | 27%
  Don't know/Refused | 1% | 0%
  Total | 100% | 100%
Are you eligible to receive…
  Flexplace or telecommuting (% Yes) | 23% | 22%
  Paid family leave (% Yes) * | 49% | 42%
  Paid vacation (% Yes) | 78% | 63%
  Paid sick time (% Yes) * | 70% | 59%
Education
  High school or less | 27% | 27%
  Some college/Associate's | 30% | 34%
  College graduate | 42% | 39%
  Don't know/Refused | 1% | 0%
  Total | 100% | 100%
Ethnicity
  Hispanic (% Yes) | 9% | 12%
Race
  Black/African-American (% Yes) * | 12% | 7%
Number of children under age 18 in your care
  None | 59% | 62%
  One or more | 40% | 38%
  Don't know/Refused | 1% | 0%
  Total | 100% | 100%
Marital Status
  Married/Living with partner | 66% | 73%
  Divorced/Separated | 14% | 9%
  Never married | 15% | 16%
  Widowed | 3% | 1%
  Don't know/Refused | 1% | 0%
  Total | 100% | 100%
Sample size | 2,852 | 137

Indicates that this variable was adjusted for in the Employee Survey weighting.

* Difference of proportions test p < .05
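The asterisks in Exhibit A.1 flag differences that are significant at the .05 level by a difference of proportions test. As an illustration only, the sketch below shows one way such a two-sample test could be recomputed from the published figures; the use of Python and statsmodels is an assumption, and the counts are approximations back-calculated from the rounded percentages rather than taken from the original microdata.

    # Hypothetical re-computation of a difference of proportions test, using the
    # "paid sick time" row of Exhibit A.1 (70% of 2,852 versus 59% of 137).
    # Tooling (statsmodels) and the rounding of counts are assumptions for illustration.
    from statsmodels.stats.proportion import proportions_ztest

    n_main, n_nrfu = 2852, 137            # sample sizes from Exhibit A.1
    p_main, p_nrfu = 0.70, 0.59           # published (rounded) percentages
    counts = [round(p_main * n_main), round(p_nrfu * n_nrfu)]  # approximate "yes" counts
    nobs = [n_main, n_nrfu]

    z_stat, p_value = proportions_ztest(counts, nobs)
    print(f"z = {z_stat:.2f}, p = {p_value:.4f}")  # p < .05, consistent with the asterisk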

Some modest differences appear with respect to being eligible for FMLA and certain employer-provided benefits. NRFU respondents were somewhat less likely to give responses indicating that they are eligible for FMLA (60 percent versus 67 percent). The NRFU respondents were also somewhat less likely to self-report being eligible for paid family leave, paid vacation, and paid sick time. These results suggest that the main survey may have somewhat over-estimated employee eligibility for FMLA and these benefits. However, it is also important to note that NRFU respondents were less likely to have taken leave during the reference period relative to the main survey respondents. It seems plausible that people who take leave for FMLA-related reasons are more likely to work for an employer that offers these kinds of benefits.

Exhibit A.2 presents estimates that are based on the leave-takers in the main survey and in the NRFU.[1] There were only 37 such respondents in the NRFU and so the estimates can only be used to check for very large differences from the leave-takers interviewed in the main survey. No such large differences are evident. The leave-takers from the main survey appear to be quite similar to those from the NRFU with respect to the number of reasons they took leave, the nature of the condition for which the leave was taken, and the circumstances in which they returned to work.

Exhibit A.2 Characteristics of Leave Takers Interviewed in the Main Survey versus the NRFU


Characteristic | Main Survey Respondents | NRFU Respondents
Number of total REASONS leave-takers took leave from work in past 18 months
  One reason | 69% | 72%
  Two or more reasons | 29% | 25%
  Don't know/Refused | 1% | 3%
  Total | 100% | 100%
Number of total REASONS leave-takers took leave from work in past 12 months
  One reason | 77% | 72%
  Two or more reasons | 22% | 28%
  Don't know/Refused | 2% | 0%
  Total | 100% | 100%
Main reason for most recent leave
  Own illness/disability/other health condition | 57% | 62%
  Maternity-related | 1% | 5%
  Maternity-related and newborn care | 2% | 0%
  Miscarriage | 0% | 0%
  Newborn care | 12% | 11%
  To bond with newborn | 2% | 0%
  To bond with newly placed foster child | 0% | 3%
  Child's health condition | 5% | 0%
  Spouse's health condition | 7% | 8%
  Parent's health condition | 9% | 8%
  Other relative's health condition | 3% | 0%
  Deployment of military member | 1% | 0%
  Don't know/Refused | 1% | 3%
  Total | 99% | 100%
Nature of condition for the most recent leave
  One-time health matter | 44% | 34%
  Condition requiring routine scheduled care | 16% | 19%
  Condition affecting work from time to time | 25% | 19%
  Other | 14% | 28%
  Don't know/Refused | 1% | 0%
Time off was taken…
  In one continuous block of time | 75% | 78%
  On separate occasions | 24% | 22%
  Total | 100% | 100%
After your leave ended, did you…
  Went back to work for sample employer | 91% | 92%
  Went back to work for new employer | 1% | 0%
  Did not return to work | 8% | 8%
  Don't know/Refused | 1% | 0%
  Total | 100% | 100%
Sample size | 1,332 | 37

Note: None of the difference of proportions tests is statistically significant at the .05 level.


Roughly 70 percent of both groups took leave for exactly one reason during the past 18 months, and that tended to be for the respondent's own disability or health condition. The clear majority of leave-takers from both surveys took their leave in one continuous block and returned to work for the same employer when the leave ended. The proportion of main survey leave-takers reporting that their leave was a one-time health matter (44 percent) is somewhat higher than the proportion of NRFU leave-takers reporting this (34 percent), but that difference is not statistically significant and appears to be attributable to the high proportion of "Other" responses among the NRFU leave-takers. On balance, the leave-takers from the main survey and the NRFU appear to have quite similar leave experiences. Based on the NRFU, there is little evidence of potential nonresponse bias in leave-taker estimates.

2. Comparison of Easier to Reach versus Harder to Reach Respondents

The second technique used to assess the risk of nonresponse bias is an analysis of the level of recruitment effort. Here we compare the leave-related characteristics of respondents who were easy to reach with respondents who were harder to reach. The harder-to-reach cases serve as proxies for the nonrespondents who never completed the extended interview. If the harder-to-reach respondents do not differ from the easy-to-reach ones, then presumably the sample members never reached would also not differ from those interviewed. Support for this “continuum of resistance” model is inconsistent (Lin and Schaeffer 1995; Montaquila et al. 2008), but it can still be a useful framework for assessing the relationship between level of effort and nonresponse bias.

In this analysis the level of effort in reaching the respondent is considered with respect to three dimensions: (1) contactability, as defined by the number of calls required to complete the interview; (2) amenability, as defined by whether or not the case was a converted refusal; and (3) a hybrid metric combining both contactability and amenability, as defined by the number of call attempts and converted refusal status. Just over half (54.9 percent) of the 2,852 extended interview respondents completed the interview on the first, second, or third call. The remainder (45.1 percent) required at least four calls, with a maximum of 14 calls. About 1 in 25 respondents (4.5 percent) was a converted refusal.[2] Some 46.4 percent of the respondents either required four or more calls or were converted refusals. These cases are referred to as "hard to reach" in this analysis, and respondents who never refused and completed the interview in three or fewer calls are referred to as the "easy to reach."
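To make the hybrid metric concrete, the sketch below classifies a case on all three level-of-effort dimensions. The field names (call_attempts, ever_refused) are hypothetical; the cutoffs follow the definitions given above (three or fewer calls, and converted refusal status).

    # Minimal sketch of the level-of-effort classifications described in this section.
    # Field names and the input structure are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class RespondentEffort:
        call_attempts: int   # number of calls required to complete the interview
        ever_refused: bool   # True if the case was a converted refusal

    def effort_groups(r: RespondentEffort) -> dict:
        """Return the three level-of-effort classifications for one respondent."""
        return {
            "contactability": "4 or more attempts" if r.call_attempts >= 4 else "3 or fewer attempts",
            "refusal_behavior": "converted refusal" if r.ever_refused else "never refused",
            # Hybrid metric: hard to reach if the case took 4+ attempts OR was a converted refusal.
            "hybrid": "hard to reach" if (r.call_attempts >= 4 or r.ever_refused) else "easy to reach",
        }

    # Example: a case completed on the fifth call without ever refusing is "hard to reach".
    print(effort_groups(RespondentEffort(call_attempts=5, ever_refused=False)))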

Exhibit A.3 presents several leave-related characteristics for these various groups. In this table each respondent is represented three times: according to the number of attempts they required (contactability), whether or not they ever refused (refusal behavior), and whether they were easy or hard to reach (hybrid metric).

Exhibit A.3 Leave-related Characteristics by Level of Effort Groups


Columns 1-2 report Contactability (3 or fewer attempts; 4 or more attempts), columns 3-4 report Refusal Behavior (Never Refused; Converted Refusals), and columns 5-6 report the Hybrid metric (a) (Easy to Reach; Hard to Reach). All entries are column percentages, except the minimum sample sizes in the final row.

Characteristic | 3 or fewer attempts | 4 or more attempts | Never Refused | Converted Refusals | Easy to Reach | Hard to Reach
FMLA group (b, c)
  Leave-taker | 41.3 | 35.6 | 38.8 | 36.2 | 41.3 | 35.7
  Unmet need for leave | 16.7 | 14.5 | 15.9 | 11.8 | 16.6 | 14.7
  Employed-only | 42.0 | 49.9 | 45.3 | 52.0 | 42.1 | 49.6
  Total | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0
Employer is
  Not covered by FMLA | 23.9 | 21.5 | 22.6 | 28.3 | 23.6 | 21.9
  Covered by FMLA | 76.1 | 78.5 | 77.4 | 71.7 | 76.4 | 78.1
  Total | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0
Employee is
  Not eligible for FMLA | 33.6 | 33.1 | 33.3 | 36.3 | 33.5 | 33.3
  Eligible for FMLA | 66.4 | 66.9 | 66.7 | 63.7 | 66.5 | 66.8
  Total | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0
Heard of FMLA
  Yes | 73.0 | 74.2 | 73.4 | 76.4 | 72.8 | 74.5
  No | 25.8 | 25.1 | 25.6 | 22.1 | 26.0 | 24.8
  Don't know/Refused | 1.2 | 0.7 | 1.0 | 1.6 | 1.2 | 0.8
  Total | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0
Minimum sample size | 1,404 | 1,168 | 2,459 | 113 | 1,372 | 1,200

Source: Employee Survey; figures are unweighted.

a The easy to reach group consists of respondents who completed on three or fewer attempts and never refused. The hard to reach group consists of respondents who required four or more attempts or were a converted refusal.

b Indicates that the chi-square test of the difference between the Contactability classes is statistically significant at the .05 level.

c Indicates that the chi-square test of the difference between the Hybrid classes is statistically significant at the .05 level.



The only significant result in the table is that respondents who completed the interview in three or fewer attempts were more likely to have taken leave (41.3 percent) or to have had an unmet need for leave (16.7 percent) than respondents who required four or more attempts (35.6 percent and 14.5 percent, respectively; combined 58.0 percent versus 50.1 percent, chi-square p<.001). One clear post-hoc explanation is that leave-takers and employees with unmet need for leave may have felt that the survey was more relevant to them, and they may therefore have been more eager to participate than those with no relevant leave experiences. This result does not hold when comparing converted refusals with respondents who never refused. The pattern is significant for the hybrid measure, but this simply reflects the fact that the hybrid measure is largely a function of the number of call attempts.
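For readers who want to trace the contactability result, the sketch below runs a chi-square test of independence on a table rebuilt from the percentages and minimum sample sizes in Exhibit A.3. Converting rounded percentages back to counts is an approximation made for illustration; the original test was presumably run on the underlying microdata, so the statistic will only roughly match.

    # Approximate reconstruction of the contactability chi-square test on the
    # FMLA group rows of Exhibit A.3. Counts are back-calculated from rounded
    # percentages, so results are illustrative rather than exact.
    import numpy as np
    from scipy.stats import chi2_contingency

    n_easy, n_hard = 1404, 1168       # minimum sample sizes from Exhibit A.3
    pct_easy = [41.3, 16.7, 42.0]     # leave-taker, unmet need, employed-only (3 or fewer attempts)
    pct_hard = [35.6, 14.5, 49.9]     # same categories, 4 or more attempts

    table = np.array([
        [round(p / 100 * n_easy) for p in pct_easy],
        [round(p / 100 * n_hard) for p in pct_hard],
    ])

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi-square = {chi2:.1f}, df = {dof}, p = {p:.5f}")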

The significant relationship between contactability and leave taking/unmet need for leave may be related to the fact that the survey introduction announced that this was a survey about medical leave. Respondents were told, "We are conducting a national study to find out about employees' use of, and attitudes about, family and medical leave policies in their workplace." For many people who had not taken leave, this introduction may have led them to conclude that the survey was not important to them. This dynamic is predicted by Leverage-Salience Theory (Groves et al. 2000), which posits that sample members base their cooperation decisions on the aspects of the survey that are made salient to them during recruitment. In this case, the topic was made very salient, possibly to the detriment of the composition of the responding sample. If future Employee Surveys are conducted, consideration should be given to not including such an explicit statement about the survey content. The balance between informed consent about the survey and the threat posed by differential nonresponse is one that should be discussed with relevant institutional review boards and oversight agencies (e.g., OMB).

The exhibit also shows how easy and harder to reach respondent groups compare with respect to employer coverage, employee eligibility for FMLA, and whether or not the respondent had heard of FMLA. On all three of these measures, there were no differences between the easy to reach respondents and the harder to reach respondents. The negligible differences observed for these other measures suggest that other survey variables are likely to be unrelated to this easy/hard to reach dimension, especially when the analysis is conditioned upon individuals belonging to a given FMLA group (leave-taker, unmet need for leave, employed-only).

3. Response Propensity Modeling

The third technique used to assess nonresponse bias is response propensity modeling (Little 1986; Groves and Couper 1998; Olson 2006). Response propensity is the theoretical probability that a sampled unit will respond to the survey request. Many respondent characteristics can influence response propensity. The response propensity model allows the researcher to identify the most powerful predictors of response when all available predictors are tested simultaneously. In this analysis, the primary research question is whether or not employment-related or leave-related characteristics are associated with response propensity, especially when controlling for factors included in the weighting protocol. If employment-related or leave-related variables show a significant association with response to the extended interview (after controlling for other factors), this would be evidence of possible nonresponse bias. If, however, the employment and leave-related predictors do not have a significant effect, this suggests that the weighting adjustments are likely to have been effective in reducing nonresponse bias.

In order for a response propensity model to be informative, the researcher must know the values for respondents and nonrespondents on one or more predictors of survey response. In RDD surveys, propensity models are often quite limited because little information is generally known for the nonrespondents. This is the case for the screener component of the Employee Survey. The only types of variables known for the nonrespondents to the screener are sampling frame, region, and level of effort data. Nothing is known about the employment or leave-related characteristics of the screener nonrespondents.

A much richer model is possible, however, if we condition on cases completing the screener and model propensity to respond to the extended interview. Based on the screener, we know the age, gender, telephone service, FMLA group, employment sector, and other variables for both the respondents and nonrespondents to the extended interview. This response propensity analysis, thus, conditions on households completing the screener and examines which variables were associated with response to the extended interview.

A logistic regression was used to model response to the extended interview conditional upon completion of the screener. The results are presented in Exhibit A.4.[3] The strongest predictor of response to the extended interview is the hand-off flag. The hand-off flag distinguishes cases in which the screener respondent happened to be the person selected for the extended interview (hand-off = no) from cases in which the screener respondent needed to hand off the phone because someone else in the household was selected to complete the extended interview (hand-off = yes). This result reflects the fact that in each sample household, only one eligible adult was selected to complete the extended interview. The extended interview response rate was 33 percent among the cases requiring a hand-off versus 84 percent among the cases in which the screener and extended interview respondent were the same person.

Exhibit A.4 Logistic Regression Estimating the Probability of Response to the Extended Interview Conditional on Completion of the Screener

Parameter | Estimate | s.e. | Wald χ2 | p value
Intercept | 0.528* | 0.215 | 6.02 | 0.01
Sampling frame = Cell Phone RDD | -0.168** | 0.057 | 8.65 | <.01
Hand-off = Yes | -1.232*** | 0.043 | 835.07 | <.0001
Sampling frame x Hand-off (interaction) | -0.133** | 0.042 | 9.97 | <.01
Region = Northeast | -0.125 | 0.067 | 3.52 | 0.06
Region = Midwest | 0.105 | 0.066 | 2.52 | 0.11
Region = South | -0.027 | 0.058 | 0.21 | 0.64
Number of eligible adults in HH (log) | -0.262 | 0.144 | 3.33 | 0.07
FMLA group = leave-taker | 0.030 | 0.053 | 0.31 | 0.57
FMLA group = unmet need for leave | -0.135* | 0.066 | 4.18 | 0.04
R gender = male | -0.107** | 0.037 | 8.15 | <.01
Telephone service = landline and cell | -0.043 | 0.080 | 0.29 | 0.59
Telephone service = cell-only | 0.116 | 0.116 | 1.01 | 0.32
R employment sector = government | 0.038 | 0.070 | 0.29 | 0.59
R employment sector = non-profit | 0.046 | 0.090 | 0.27 | 0.61
R age | 0.003 | 0.003 | 1.37 | 0.24

Model Diagnostics
Area under ROC curve (c) | 0.792
-2 Log Likelihood | 4,590.6
Sample size | 4,498

***p<.001 **p<.01 *p<.05

Reference groups for categorical variables: hand-off (no, the screener respondent was selected as the extended interview respondent), region (West), FMLA group (employed-only), telephone service (landline-only), R employment sector (private sector), R gender (female)
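For illustration, the sketch below shows how a response propensity logistic regression of this general form could be fit and how a c statistic could be computed. The input file, variable names, reference-group coding, and the use of statsmodels and scikit-learn are assumptions made for the example; Exhibit A.4 does not document the software or coding used in the original analysis.

    # Minimal sketch of a response propensity model in the spirit of Exhibit A.4.
    # The input file, variable names, and category codes are hypothetical placeholders.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from sklearn.metrics import roc_auc_score

    # One row per screened eligible adult; responded = 1 if the extended
    # interview was completed, 0 otherwise.
    screener = pd.read_csv("screener_cases.csv")  # hypothetical input file

    formula = (
        "responded ~ C(frame, Treatment('landline')) * C(handoff, Treatment('no'))"
        " + C(region, Treatment('West')) + np.log(eligible_adults)"
        " + C(fmla_group, Treatment('employed_only'))"
        " + C(gender, Treatment('female'))"
        " + C(phone_service, Treatment('landline_only'))"
        " + C(sector, Treatment('private')) + age"
    )
    result = smf.logit(formula, data=screener).fit()
    print(result.summary())  # coefficients, standard errors, Wald tests

    # Area under the ROC curve (the c statistic reported as a model diagnostic).
    auc = roc_auc_score(screener["responded"], result.predict(screener))
    print(f"Area under ROC curve (c) = {auc:.3f}")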

The sampling frame was also associated with extended interview response. Landline sample cases were somewhat more likely to complete the extended interview than cell phone sample cases. Based on the literature, we expected that the hand-off issue would be more problematic in the cell phone sample than in the landline sample, so an interaction term was included in the model. The interaction of sampling frame and hand-off flag was statistically significant (p<.01). Indeed, while hand-offs decreased response propensity in both samples, the effect was stronger in the cell sample than in the landline sample.

In terms of potential nonresponse bias, these findings do not necessarily represent cause for concern because the weighting protocol addresses the integration of the sampling frames, and it also includes a post-stratification to telephone service groups (landline-only, cell-only, dual service). Furthermore, there is no evidence that the incidence of hand-offs varied across the FMLA groups (based on a separate analysis, not shown in the exhibit).
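As an aside on the weighting point above, the sketch below illustrates what a post-stratification adjustment to telephone service groups could look like. The function, field names, and control totals are placeholders invented for the example; they are not the actual benchmarks or software used in the Employee Survey weighting protocol.

    # Illustrative post-stratification to telephone service groups. Control totals
    # and field names are placeholders, not the actual Employee Survey benchmarks.
    import pandas as pd

    def poststratify(df: pd.DataFrame, weight_col: str, group_col: str,
                     control_totals: dict) -> pd.Series:
        """Rescale weights so each group's weighted total matches its control total."""
        group_sums = df.groupby(group_col)[weight_col].transform("sum")
        targets = df[group_col].map(control_totals)
        return df[weight_col] * targets / group_sums

    sample = pd.DataFrame({
        "base_weight": [1.2, 0.8, 1.0, 1.5, 0.9],
        "phone_service": ["landline_only", "cell_only", "dual", "dual", "cell_only"],
    })
    controls = {"landline_only": 30.0, "cell_only": 45.0, "dual": 25.0}  # placeholder totals
    sample["ps_weight"] = poststratify(sample, "base_weight", "phone_service", controls)
    print(sample)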

The only other highly significant effect observed in the model was for the gender of the adult selected for the extended interview. Over two-thirds (68 percent) of the women selected completed the extended interview, compared with 59 percent of the men selected (not shown). It is important to note that the weighting included an extended interview nonresponse adjustment for both gender and age. This weighting adjustment is expected to have minimized the risk of nonresponse bias associated with the gender effect shown in Exhibit A.4.

The other result of note in Exhibit A.4 is the marginally significant effect associated with having an unmet need for leave. In bivariate analysis, the extended interview response rates (conditional on screener completion) were 61 percent for employees with unmet need for leave, 64 percent for the leave-takers, and 64 percent for the employed-only. This variation is quite modest, and indeed the chi-square test in the bivariate analysis is not statistically significant (p=0.38). It is somewhat puzzling that this non-significant bivariate result becomes marginally significant in the multivariate model in Exhibit A.4. We were unable to identify any obvious post-hoc explanation for this result, especially in light of the lack of association between these FMLA groups and the main predictor of response, handing off the phone. The effects associated with region, the number of eligible adults in the household (log transformed), household telephone service, employment sector, and age were all non-significant at the alpha=0.05 level in the model.

In sum, the bivariate association between response and FMLA group is not statistically significant, and the Wald test for the unmet-need-for-leave coefficient in the model is only marginally significant; together, these findings amount to little evidence of potential nonresponse bias. The risk to estimates from nonresponse bias appears to have been more substantial at the screener stage than at the extended interview stage.

4. Comparisons to External Benchmarks

One limitation of the previous three techniques is that they analyze only a subset of all non-respondents to the survey. The NRFU analysis relies on the NRFU participants as proxies for nonrespondents; the level of effort analysis relies on the “harder-to-reach” respondents as proxies for nonrespondents; the response propensity model captures only variation between the screened extended interview respondents and the screened extended interview nonrespondents. In this section we make comparisons to external benchmarks in order to evaluate the total level of nonresponse bias in the 2012 Employee Survey.

Specifically, we compare the weighted final respondent estimates from the Employee Survey to the Current Population Survey (CPS). The CPS is considered to be a "gold standard" survey due to its more rigorous protocol (e.g., area-probability sampling with in-person interviewing) and a higher response rate than the 2012 Employee Survey. By virtue of its more rigorous design, the estimates from the March 2011 CPS are assumed to contain less nonresponse bias than those from the Employee Survey.

The strength of this approach is that the benchmark survey (CPS) is well known to be a high quality federal survey, and so obtaining similar estimates would give some confidence about the 2012 Employee Survey (Groves 2006). One weakness of this approach is that not all of the key survey variables in the Employee Survey are collected in the CPS, and so the analysis is somewhat limited in scope. Another weakness is that the measurements collected in the 2012 Employee Survey are not identical to the measurements collected in the CPS. The CPS features in-person interviewing in addition to the CATI data collection used exclusively in the Employee Survey. Furthermore, the question wording for the comparison questions varies somewhat between the two surveys. Either of these factors may lead to measurement error differences contaminating the comparison. A third weakness of this approach is that the coverage and nonresponse characteristics of the CPS are not completely known. While the CPS provides the best available estimates for the comparison measures, the CPS estimates may themselves contain some level of error (beyond sampling error).

CPS weighted estimates were computed based on the population of adults aged 18 and older with a telephone who were employed for pay within the past 12 months (excluding the self-employed) in order to match the target population of the Employee Survey as closely as possible. Three variables identified in the CPS were also administered in the Employee Survey but not used in the weighting protocol.[4] These variables are marital status, union membership, and employment status. The weighted estimates from both surveys are presented in Exhibit A.5.
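As an illustration of how a benchmark estimate of this kind can be computed, the sketch below filters a public-use extract to the comparison population and calculates a weighted proportion. The file name, variable names, and codes are hypothetical; the actual CPS extract uses its own variable names, coding, and supplement weights.

    # Illustrative weighted benchmark estimate from a public-use microdata extract.
    # File, variable names, and codes are hypothetical placeholders.
    import pandas as pd

    cps = pd.read_csv("cps_march_2011_extract.csv")  # hypothetical extract

    # Restrict to the comparison population: adults 18 and older, with a telephone,
    # employed for pay in the past 12 months, excluding the self-employed.
    pop = cps[(cps["age"] >= 18)
              & (cps["has_phone"] == 1)
              & (cps["worked_for_pay_12mo"] == 1)
              & (cps["self_employed"] == 0)]

    # Weighted proportion who report union membership (illustrative variable).
    share = (pop["weight"] * (pop["union_member"] == 1)).sum() / pop["weight"].sum()
    print(f"Weighted union membership estimate: {share:.1%}")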

Exhibit A.5 Weighted Estimates from the Current Population Survey and Employee Survey

Characteristic | Current Population Survey | Employee Survey | Difference
Marital Status
  Married | 55.1% | 54.2% | -0.9%
  Not married | 44.9% | 45.8% | 0.9%
Labor Union Membership
  Yes | 11.4% | 14.5% | 3.1%
  No | 88.6% | 85.5% | -3.1%
Current Employment Status
  Employed | 95.1% | 88.4% | -6.7%
  Unemployed/Not in Labor Force | 4.9% | 11.6% | 6.7%

Sources: 2012 Employee Survey and March 2011 CPS. Estimates from both surveys are weighted. Estimates exclude item nonresponse.

The weighted Employee Survey estimate for percent married is highly similar to the estimate from the CPS (54.2 percent versus 55.1 percent). The full array of response options (e.g., separated, widowed) is not compared here because the list of options differed between the two surveys. With respect to married/unmarried, however, this comparison suggests minimal potential for nonresponse bias.

There was a somewhat larger discrepancy, however, with respect to union membership. The estimate from the Employee Survey is that 14.5 percent of the target population are union members, compared with 11.4 percent from the CPS. One post hoc explanation for this difference is that union members may be more attuned to issues of benefits and employer leave policy, and they may have therefore been more interested in participating in the Employee Survey relative to non-union workers. Unfortunately, there are no data available to test that hypothesis.

There is also a noticeable difference for the estimated percent currently employed. The estimate from the CPS is nearly seven percentage points higher than the estimate from the Employee Survey. Based on the available information, this difference appears to be attributable to a definitional peculiarity in the CPS rather than to nonresponse. When the CPS public use micro dataset is filtered on the population of adults employed for pay within the past 12 months (excluding the self-employed), the estimate for percent currently employed does not include persons who are unemployed and not looking for work. Such individuals are considered out of the labor force, and so they do not appear in the denominator of this estimate. In the Employee Survey, however, such individuals were asked about their current employment status, and they are represented in the denominator of the estimate in Exhibit A.5. Given that this definitional difference may explain at least some of the discrepancy between these two estimates, we do not necessarily view the currently employed metric as an especially informative point of evaluation. It is included here for the sake of comprehensiveness, as well as to illustrate the kinds of issues that can limit generalizability from a benchmark comparison analysis.

5. Summary of Nonresponse Analysis for the Employee Survey

These analyses suggest that nonresponse bias may pose a small risk to some of the estimates from the Employee Survey. Perhaps the most informative result is the fact that harder-to-reach respondents (those requiring four or more call attempts) were less likely to have taken leave or to have an unmet need for leave, relative to those who were easy to reach. As discussed above, one potential post hoc explanation is that the leave-takers and employees with unmet need for leave may have felt that the survey was more relevant to them, and they may have therefore been more eager to participate than those with no relevant leave experiences.

Generally speaking, level of effort analysis is not particularly rigorous or definitive (e.g., Montaquila et al. 2008) because it only evaluates variation among the survey respondents and relies upon a fairly strong assumption that the harder-to-reach respondents are good proxies for the nonrespondents who are never reached. That said, one advantage of the level of effort analysis in this context is that it does not speak to just the screener survey or just the extended interview; the harder-to-reach respondents are potentially signaling a pattern that carries over to the entire set of nonrespondents to the Employee Survey. In this way, the level of effort analysis differs from the NRFU and the response propensity model, because in this study those two approaches only address nonresponse to the extended interview.

A nonresponse follow-up survey (NRFU) is quite similar in spirit to a level of effort analysis. In this study, however, the NRFU was purposefully limited to nonrespondents who had completed the screener. This decision was based upon our review of the 2000 Employee Survey NRFU, which essentially found it impractical to attempt to reach the screener nonrespondents due to issues of low cooperation and low target population incidence. Analysis of the 2012 Employee Survey NRFU generally found small differences between the main survey respondents and the main survey nonrespondents who completed the NRFU. This was especially true for estimates based on leave-takers. That said, NRFU respondents were somewhat less likely to give responses indicating that they are eligible for FMLA and somewhat less likely to self-report being eligible for paid family leave, paid vacation, and paid sick time.

The response propensity analysis and the benchmark comparison analysis provided little evidence that nonresponse bias was a major threat to the Employee Survey estimates. Both analyses were limited, however, in the variables that were available. The response propensity model showed that hand-offs between the screener and extended interview respondent greatly decreased the likelihood of response, but there is no indication that this compromised any estimates from the survey. Testing revealed no association between the hand-offs and FMLA group, as would be expected. In the benchmark comparison analysis, the limited set of comparison variables and differences in measurement and definitions limit generalization to other estimates in the Employee Survey.

References:

Daley, K., C. Kennedy, M. Schalk, J. Pacer, A. Ackerman, A. Pozniak, and J. Klerman. 2012. Family and Medical Leave in 2012: Methodology Report. Cambridge, MA: Abt Associates.

Groves, R. and Couper, M. 1998. Nonresponse in Household Interview Surveys. New York: Wiley.

Groves, R.M., Singer, E., and Corning, A. 2000. “Leverage-salience Theory of Survey Participation: Description and an Illustration.” Public Opinion Quarterly 64:299–308.

Groves, R.M. 2006. “Nonresponse Rates and Nonresponse Bias in Household Surveys.” Public Opinion Quarterly 70: 646-675.

Lin, I. and N. Schaeffer. 1995. "Using Survey Participants to Estimate the Impact of Nonparticipation." Public Opinion Quarterly 59: 236-258.

Little, R.J. 1986. “Survey Nonresponse Adjustments.” International Statistical Review 54: 139-157.

Montaquila, J., J. Brick, M. Hagedorn, C. Kennedy, and S. Keeter. 2008. “Aspects of Nonresponse Bias in RDD Telephone Surveys,” in Telephone Survey Methodology, edited by J. Lepkowski, C. Tucker, J. M. Brick, E. de Leeuw, L. Japec, P. Lavrakas, M. Link, R. Sangster. New York, NY: John Wiley & Sons, Inc.

Olson, K. 2006. "Survey Participation, Nonresponse Bias, Measurement Error Bias, and Total Bias." Public Opinion Quarterly 70: 737-758.





[1] The Abt team also sought to compare NRFU respondents with unmet need for leave to those interviewed in the main survey. Unfortunately, only 13 such respondents completed the NRFU extended interview. This case base was too small to support any meaningful analysis.

[2] In some of these cases the refusal may have come from the screener respondent rather than the extended interview respondent, if these happened to be different people.

[3] Interviewing effort variables, such as the number of call attempts and an indicator for converted refusal cases, were intentionally excluded from this model because they are endogenous and also because a significant association with the outcome being modeled would not communicate any information about the potential risk to survey estimates from nonresponse bias. Interviewing effort variables are considered separately in the analysis of easier to reach versus harder to reach cases.

[4] Several demographic variables such as age, gender, education, and race/ethnicity are measured in both the CPS and the Employee Survey. These variables were intentionally excluded from this analysis, however, because they were included in the raking ratio estimation for the Employee Survey weights. In other words, the Employee Survey was statistically adjusted to match external benchmarks on these measures, and so comparing those weighted characteristics to the CPS would not be informative about the risk of nonresponse bias.
