Field Test Response Propensity Modeling Experiment and Case Assignment


Education Longitudinal Study (ELS) 2002 Third Follow-up 2011 Field Test


OMB: 1850-0652






Appendix 9

Field Test Response Propensity Modeling Experiment and Case Assignment



Background of Approach

Under a contract with NCES, RTI is currently undertaking an initiative, modeled on the responsive design methodology developed by Groves and Heeringa (2006), to develop new approaches that incorporate responsive and adaptive features to improve survey outcomes.

RTI has implemented several of these procedures on recent studies and has published preliminary results (Rosen et al., 2011; Peytchev et al., 2010). RTI's experimental approach aims to reduce nonresponse bias by using multiple sources of data to produce models that estimate a sample member's response propensity prior to, and following, the early phase of data collection. After the sample members with the lowest response propensities are empirically identified, they are targeted with interventions intended to encourage participation. While ELS has historically made strategic decisions about targeting cases (e.g., dropouts), the approach developed here for ELS uses more data and aims to produce more precise estimates of which cases, based on their likelihood of response, should be considered for special treatment.

The response propensity approach developed for the ELS Third Follow-up Field Test (FT) calls for estimating sample members' response propensities at two points in the data collection: 1) prior to the commencement of data collection, and 2) after the completion of the early response period. The approach is designed to answer two questions. First, what benefit can be gained, in terms of response rate improvement and bias minimization, by implementing a protocol that targets low-propensity cases using only data from prior waves of a longitudinal study? Second, what additional benefit can be gained, again in terms of response rate improvement and bias minimization, by also using the most recent data on sample members (e.g., tracing information, panel maintenance information) in addition to data from prior waves?

The approach will be implemented experimentally at each of the two time-points, with a random half of the low propensity cases assigned to an experimental group and the other random half to a control group. In the early phase, the low propensity control group will be treated no differently from high propensity cases.

Step 1 - Prior to Data Collection, Estimate Sample Member Response Propensity

The first phase of the experiment is to estimate an initial response propensity for each sample member using data available for all sample members (both questionnaire respondents and nonrespondents from prior rounds). These data come from the base-year, first follow-up, and second follow-up waves of ELS, drawn primarily from the sampling frame and from "paradata," that is, data that describe the survey interviewing process.

To estimate a case's response propensity prior to the start of the ELS Third Follow-up Field Test, the sample member's eventual response status in the ELS Second Follow-up (F2) was predicted. A logistic regression model was fitted with the sample member's F2 response status as the dependent variable. As candidate independent variables, a range of information known for all respondents and nonrespondents from each prior wave, including information from batch tracing activities, was examined for significance. The following variables were considered as predictors of a sample member's F2 response outcome: base-year response status, first follow-up response status, whether the sample member had ever refused, whether the sample member had ever scheduled an appointment, whether the sample member was classified as hard to reach, the number of calls made to the sample member in F2, high school completion status, parental level of education, high school type, urbanicity, dropout status, and the sample member's postsecondary aspirations.

No information about the race, gender, or any other demographic characteristics of the sample members was used for prediction.
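As a concrete illustration, the model-fitting step could look like the minimal sketch below, written in Python with statsmodels. The file name and variable names are hypothetical placeholders for the actual ELS analysis file and the predictors listed above; this is a sketch of the technique, not the production code used for the field test.

    # Sketch: predict F2 response status from prior-wave data with logistic regression.
    # File and column names are hypothetical placeholders.
    import pandas as pd
    import statsmodels.api as sm

    frame = pd.read_csv("els_prior_wave_data.csv")  # hypothetical analysis file

    # Hypothetical names standing in for the candidate predictors listed above.
    predictors = [
        "by_respondent", "f1_respondent", "ever_refused", "ever_appointment",
        "hard_to_reach", "f2_call_count", "hs_completion", "parent_education",
        "hs_type", "urbanicity", "dropout_status", "postsec_aspirations",
    ]

    # One-hot encode categorical predictors and add an intercept term.
    X = sm.add_constant(pd.get_dummies(frame[predictors], drop_first=True).astype(float))
    y = frame["f2_respondent"]  # 1 = responded in the second follow-up, 0 = did not

    fit = sm.Logit(y, X).fit()
    print(fit.summary())  # coefficients and p-values used to screen predictors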

Results of Initial Response Propensity Estimates

Significant predictors of a sample member's Second Follow-up response status were: base-year response status, first follow-up response status, whether an appointment had ever been scheduled with the sample member, whether the sample member had ever refused to participate in a wave, the number of calls placed to the sample member in the second follow-up, whether the sample member's mother attended college, dropout status, and urbanicity.

Predicted probabilities derived from the logistic regression model were used as the estimate of each case's response propensity. Sample members above the median predicted probability are classified as high propensity, and those below the median as low propensity. In total, 528 cases are classified as high propensity and 527 as low propensity. For the implementation of the experiment, the 527 low-propensity cases will be randomly split into experimental and control groups. The experimental group will receive a prompting call from the ELS Call Center midway through the early (web-only) phase of data collection. No prompting calls will be attempted during the field period for the control group or for the high-propensity group. The evaluation will examine whether the prompting calls increase participation among low-propensity cases during the early phase of data collection. The goal of this phase is to examine how well low-propensity cases can be identified using only prior-wave data and how these cases can be treated so as to minimize bias.
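The median split and the random assignment of low-propensity cases could be implemented along the lines of the sketch below, which continues the hypothetical Python example above (the data frame, fitted model, and seed value are illustrative assumptions, not study specifications).

    # Sketch: median split on predicted propensity and random experimental/control
    # assignment of the low-propensity cases. Names and the seed are hypothetical.
    import numpy as np

    rng = np.random.default_rng(seed=12345)  # fixed seed so the assignment is reproducible

    frame["propensity"] = fit.predict(X)                # predicted response probabilities
    cutoff = frame["propensity"].median()
    frame["propensity_group"] = np.where(frame["propensity"] >= cutoff, "high", "low")

    low_idx = frame.index[frame["propensity_group"] == "low"].to_numpy()
    rng.shuffle(low_idx)
    half = len(low_idx) // 2

    frame.loc[low_idx[:half], "assignment"] = "experimental"  # receives prompting calls
    frame.loc[low_idx[half:], "assignment"] = "control"       # handled like high-propensity cases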

Since low-propensity cases assigned to the experimental group will receive treatment, it is of interest how those cases are distributed according to their prior response status. Exhibit 1 shows the distribution.

Exhibit 1. Distribution of Low Propensity Cases by Prior Response Status


                                            Base Year              First Follow-up         Second Follow-up
                                            Response Status        Response Status         Response Status
                                            % Resp.    % Nonresp.  % Resp.     % Nonresp.  % Resp.    % Nonresp.
Third Follow-up FT Low Propensity Cases     85% (447)  15% (80)     78% (412)  22% (115)   55% (292)  45% (235)
Third Follow-up FT High Propensity Cases    96% (507)   4% (21)    100% (528)   0% (0)     92% (488)   8% (40)

Note: Actual counts of cases in parentheses.

As shown in Exhibit 1, the low-propensity cases include both respondents and nonrespondents from all prior waves of ELS. Likewise, the high-propensity cases are not limited to second follow-up respondents; a number of nonrespondents are classified as high propensity. This suggests that for ELS, prior-round response status, while important, may not be sufficient as a predictor of the response outcome in the third follow-up and should not be the sole basis for partitioning cases into propensity categories.

Exhibit 2 shows the distribution of the case propensities across some demographic characteristics of interest. The demographic distribution of the high-propensity group approximates the distribution in the overall FT sample; there is no obvious skewing across these characteristics.


Exhibit 2. Distribution of Response Propensities by Sample Member and High School Characteristics


                                   Percent (number) of        Percent (number) of Cases in
                                   Cases in FT Sample         High Propensity Category
Sample Member Characteristics
  Male                             50.3 (531)                 47.2 (249)
  White                            55.0 (550)                 58.9 (293)
  Black                            18.8 (188)                 17.7 (88)
  Hispanic                         19.4 (194)                 15.7 (78)
  Asian                             6.2 (62)                   7.0 (35)
School Characteristics
  Urban                            40.1 (431)                 38.3 (202)
  Public                           84.3 (889)                 86.5 (457)



Step 2 - After the Early Response Period, Recalculate Response Propensities Using Current-Wave Paradata and Panel Maintenance Data

Recent studies undertaken at RTI have demonstrated that using data from the current study wave, rather than relying exclusively on data from prior waves, results in more precise estimates of a case's response propensity. This is intuitive: more recent data on sample members tend to be more predictive of their response patterns. A second estimate of each case's response propensity will therefore be produced that takes into account events occurring during the earlier stages of the current study wave. This second phase of the response propensity approach will begin immediately after the end of the web-only early response period. A new logistic regression model will be fitted using the sample member's response outcome during the early response period as the dependent variable. The candidate independent variables will include each independent variable used in the first modeling phase, the sample member's F2 response outcome, and the following:

  • 2010 Panel Maintenance (PM) response

  • 2010 PM- Student Data Confirmed/Updated

  • 2010 PM- Parent 1 Data Confirmed/Updated

  • 2010 PM- Parent 2 Data Confirmed/Updated

  • 2010 PM- Who provided contact information (i.e., student, parent, or both)

  • Completeness of contact information including address, phone, email

  • Recency of contact information including address, phone, email

  • F2 postsecondary enrollment status

  • F2 type of postsecondary institution attended (e.g., private, public, 4-year, 2-year)

  • F2 employment status

  • Whether the sample member logged into the F3 survey during early phase

  • Calls to the helpdesk during early phase


Predicted probabilities from the new model will again be used to categorize the remaining nonrespondents into newly established high- and low-propensity groups. The new model will be fitted to all sample cases, since data from early-period respondents are needed for comparison purposes; although early response period respondents will not be contacted again, their data are required to estimate the new predicted probabilities.
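As with the first phase, the refit could be sketched as follows. The sketch extends the hypothetical Python example from Step 1, and all file and variable names (including the predictor list) are illustrative placeholders rather than actual ELS field names.

    # Sketch: refit the propensity model after the early response period, adding
    # current-wave paradata and panel maintenance indicators. Names are hypothetical.
    import pandas as pd
    import statsmodels.api as sm

    current = pd.read_csv("els_f3_early_period_data.csv")  # hypothetical analysis file

    phase2_predictors = predictors + [            # prior-wave predictors from Step 1
        "f2_respondent", "pm2010_response", "pm2010_student_confirmed",
        "pm2010_parent1_confirmed", "pm2010_parent2_confirmed", "pm2010_contact_source",
        "contact_completeness", "contact_recency", "f2_enrollment_status",
        "f2_institution_type", "f2_employment_status", "f3_early_login", "helpdesk_calls",
    ]

    X2 = sm.add_constant(pd.get_dummies(current[phase2_predictors], drop_first=True).astype(float))
    y2 = current["f3_early_respondent"]  # 1 = responded during the web-only early period

    phase2_fit = sm.Logit(y2, X2).fit()

    # Remaining nonrespondents are re-split at the median of the new predicted probabilities.
    current["propensity2"] = phase2_fit.predict(X2)
    pending = current[current["f3_early_respondent"] == 0]
    cutoff2 = pending["propensity2"].median()
    pending_low = pending[pending["propensity2"] < cutoff2]  # candidates for treatment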

Treatments for Low Propensity Cases

The basic premise of the response propensity approach is to identify low-propensity cases as early as possible and assign "special treatments" to them. In theory, treating low-propensity cases in the same manner as high-propensity cases is inefficient and possibly harmful to overall data quality. The special treatments for ELS FT low-propensity cases are prompting calls during the early response period and higher incentives in the subsequent period. Starting midway through week 2, outbound prompting of the low-propensity experimental cases will begin. After the end of the early response period, the incentive level for low-propensity experimental cases will be raised to $45 and eventually to $55. This compares to the $25 incentive all cases receive during the early response period. High-propensity and control group cases will be offered $25 until the 10th week of data collection, when the incentive will increase to $35. Exhibit 3 outlines the timing and levels of the different treatments.

Exhibit 3. ELS FT Treatment Schedule


         High Response Propensity    Low Response Propensity
Week     All High Cases              Control Group    Experimental Group
1        $25                         $25              $25
2-3      $25                         $25              Treatment 1 – Prompting Calls:
                                                      $25 + telephone prompting
                                                      (begins midway through week 2)
4-9      $25                         $25              Treatment 2 – Differential Incentives:
                                                      $45
10+      $35                         $35              $55


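For a quick self-check of the schedule, the exhibit can also be expressed as a small lookup function; the group labels below are shorthand assumptions, not variable names from the study.

    # Sketch: Exhibit 3 expressed as a lookup of incentive amount by week and group.
    def incentive(week: int, group: str) -> int:
        """Return the incentive (in dollars) offered to a case in a given week."""
        if group in ("high", "low_control"):
            return 35 if week >= 10 else 25
        if group == "low_experimental":
            if week <= 3:
                return 25  # plus telephone prompting beginning midway through week 2
            if week <= 9:
                return 45
            return 55
        raise ValueError(f"unknown group: {group!r}")

    assert incentive(1, "high") == 25
    assert incentive(5, "low_experimental") == 45
    assert incentive(12, "low_control") == 35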

Analysis Strategy

The experimental results will be evaluated by examining how well the models predict response outcomes and whether the treatments minimized bias. First, response rates will be examined for groups defined by estimated response propensity, that is, how well the assigned response propensities actually predict the survey outcome. Then it will be examined whether the variance of the response propensities was lowered and whether the association between the response propensities and key survey variables was reduced, thus minimizing nonresponse bias in survey estimates of means and proportions.
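The motivation for these two checks can be summarized with the standard approximation from the nonresponse bias literature; the appendix itself does not state the formula, so the notation below is supplied for reference only.

    % Bethlehem-type approximation for the nonresponse bias of the respondent mean,
    % where p_i is case i's response propensity, y_i the survey variable, and
    % \bar{p} the mean propensity over the sample:
    \[
      \operatorname{Bias}(\bar{y}_r) \;\approx\; \frac{\operatorname{Cov}(p_i, y_i)}{\bar{p}}
    \]
    % Lowering the variance of the p_i and weakening their association with y_i
    % therefore reduces this bias, which is what the analysis above evaluates.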

References

Groves, R. M., & Heeringa, S. (2006). Responsive design for household surveys: tools for actively controlling survey errors and costs. Journal of the Royal Statistical Society Series A: Statistics in Society, 169(Part 3), 439-457.

Rosen, J. A., Murphy, J. J., Peytchev, A., Riley, S., & Lindblad, M. (2011). The effects of differential interviewer incentives on a field data collection effort. Field Methods, 23, 24–36. (doi:10.1177/1525822X10383390)

Peytchev, A., Riley, S., Rosen, J. A., Murphy, J. J., & Lindblad, M. (2010). Reduction of nonresponse bias in surveys through case prioritization. Survey Research Methods, 4(1), 21–29. http://w4.ub.uni-konstanz.de/srm/article/view/3037.




