
Marine Recreational Information Program

Fishing Effort Survey

Experimental Testing

OMB Control No. 0648-0652

September 26, 2013

The MRIP Fishing Effort Survey (MFES) was implemented in Massachusetts, New York, North Carolina, and Florida in October 2012 to test a revised data collection design for monitoring marine recreational fishing effort. The survey, which collects information for two-month reference waves, included two experiments during the first two study waves, wave 5 (September-October 2012) and wave 6 (November-December 2012), to test different survey design features aimed at maximizing efficiency and minimizing nonresponse error. Specifically, the experiments tested two versions of the survey instrument and four levels of cash incentives. Details of the experiments are provided below.


Instrument Testing


The MFES included an experiment to test two versions of the survey instrument. The objective of the experiment was to identify the instrument that maximized overall response rates while minimizing the potential for nonresponse bias resulting from differential nonresponse between anglers and non-anglers. One version of the instrument (the Saltwater Fishing Survey) used a "screen-out" approach that quickly identifies anglers and non-anglers and encourages participation by minimizing the number of survey questions, particularly for non-anglers. Person-level information, including details about recent fishing activity and limited demographic information, is collected for all household residents, but only if someone in the household reported fishing during the reference wave. The second version (the Weather and Outdoor Activity Survey) used an "engaging" approach that encourages response by broadening the scope of the questions to include both fishing and non-fishing topics. This version collects person-level information for all residents of sampled households, regardless of whether any household resident participated in saltwater fishing. Each wave, sampled addresses were randomly assigned to one of the two questionnaire types, which were evaluated in terms of response rates and reported fishing activity.


Table 1 provides the weighted response rates (AAPOR RR1, after excluding undeliverable addresses) and estimated fishing prevalence (the percentage of households with residents who reported fishing during the wave) for the two versions of the instrument. Overall, the Weather and Outdoor Activity Survey achieved a significantly higher response rate than the Saltwater Fishing Survey, and there was no significant difference between instruments in estimated prevalence. The absence of a prevalence difference suggests that the gain in response for the engaging instrument cannot be attributed to increased participation by anglers alone or by non-anglers alone; rather, both groups are more likely to respond to the Weather and Outdoor Activity Survey than to the Saltwater Fishing Survey.


We also compared response rates and prevalence between instruments, both among and within subpopulations defined by whether or not sampled addresses could be matched to state databases of licensed saltwater anglers, subpopulations expected to distinguish households with anglers from households with no anglers or less avid anglers. As expected, both response rates and estimated prevalence were higher in the matched subpopulation than in the unmatched subpopulation, confirming that a population expected to be interested in the survey topic (households with licensed anglers) is more likely to respond to a fishing survey and report fishing activity than a population that excludes licensed anglers [1]. Because we can identify household license status prior to data collection, we can account for differential nonresponse between matched and unmatched households in the estimation design by treating the matched and unmatched domains as strata (Lohr, 2009).
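
As an illustration of this stratified design, the following Python sketch combines domain-level prevalence estimates into a single population estimate. The stratum population totals are hypothetical placeholders; only the prevalence values are taken from Table 1 below (Saltwater Fishing Survey column).

# Illustrative sketch of the stratified estimator described above. The
# stratum population counts are hypothetical; prevalence values come from
# Table 1 (matched 49.9%, unmatched 11.2%).

strata = {
    # stratum: (population size N_h [hypothetical], estimated prevalence p_h)
    "matched":   (50_000,  0.499),
    "unmatched": (950_000, 0.112),
}

N = sum(N_h for N_h, _ in strata.values())

# Stratified estimate: weight each stratum's estimate by its population share,
# so differential nonresponse between strata cannot shift the combined total.
p_strat = sum((N_h / N) * p_h for N_h, p_h in strata.values())
print(f"Stratified prevalence estimate: {p_strat:.3f}")   # 0.131 with these N_h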


Table 1. Weighted response rates and estimated prevalence overall and by domain for two versions of the survey instrument.


                   Saltwater Fishing Survey      Weather and Outdoor Activity Survey
                   (%)            (n)            (%)             (n)
Response Rate
  Overall          31.1 (0.4)     17,511         34.7 (0.4)*     17,510
  Matched          45.4 (1.1)      3,160         45.0 (1.0)       3,247
  Unmatched        30.3 (0.4)     14,351         34.0 (0.5)*     14,263
Prevalence
  Overall          13.4 (0.5)      5,943         14.1 (0.5)       6,498
  Matched          49.9 (1.7)      1,491         48.5 (1.6)       1,552
  Unmatched        11.2 (0.6)      4,452         12.2 (0.6)       4,946

Notes – (1) Standard errors are in parentheses. (2) Domains are defined by matching ABS samples to state databases of licensed saltwater anglers.

*Significantly different from Saltwater Fishing Survey (p<0.05).
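
As a rough check, the overall response-rate comparison in Table 1 can be approximated from the published estimates alone. The sketch below applies a normal approximation to the two weighted rates using the reported standard errors, assuming the two samples are independent; the report's actual tests were run on the microdata, so this is only indicative.

from math import sqrt, erf

# Overall response rates and standard errors from Table 1.
p1, se1 = 31.1, 0.4   # Saltwater Fishing Survey (%)
p2, se2 = 34.7, 0.4   # Weather and Outdoor Activity Survey (%)

# Two-sample z statistic for a difference of independent estimates.
z = (p2 - p1) / sqrt(se1**2 + se2**2)
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal p-value
print(f"z = {z:.2f}, two-sided p = {p_value:.1e}")     # z ~ 6.4, far below 0.05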


There were no significant differences between instruments for either response rate or prevalence within the matched domain, suggesting that the inclusion of non-fishing questions in the Weather and Outdoor Activity Survey did not have an impact on response by either anglers or non-anglers. In the unmatched domain, the response rate was significantly higher for the Weather and Outdoor Activity Survey than the Saltwater Fishing Survey. However, the higher response rate did not translate to lower or higher estimates of prevalence; estimates of prevalence were not significantly different between instruments within the domain. This suggests that the engaging instrument uniformly increased the probability of response for anglers and non-anglers within the unmatched domain.


Differential nonresponse to a survey request between subpopulations will result in nonresponse bias if the subpopulations differ with respect to the survey topic. In the MRIP Fishing Effort Survey, we account for differential nonresponse between matched and unmatched households during sampling: matched and unmatched subpopulations are treated as independent strata. Consequently, the potential for nonresponse bias is limited to differential nonresponse between anglers and non-anglers within the matched and unmatched subpopulations. While the Weather and Outdoor Activity Survey achieved a higher response rate than the Saltwater Fishing Survey, both overall and within the unmatched subpopulation, the gains in response do not appear to result from a higher propensity to respond by either anglers or non-anglers. As a result, we cannot conclude that either instrument is more or less likely to minimize differential nonresponse between anglers and non-anglers. However, higher response rates decrease the risk of nonresponse bias and either lower data collection costs (for a fixed sample size) or increase the precision of estimates (for a fixed cost) [2]. Consequently, we conclude that the Weather and Outdoor Activity Survey is superior to the Saltwater Fishing Survey and recommend that it be utilized for subsequent survey waves. An additional benefit of this recommendation is that, because it collects person-level information for all residents of all sampled households, the Weather and Outdoor Activity Survey also supports post-stratification of survey weights to population controls.
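
A minimal sketch of such a post-stratification adjustment follows. The demographic cells and control totals are hypothetical placeholders, since the report does not specify which population controls would be used; the mechanics (rescaling weights so weighted counts match external totals) are the standard adjustment.

# Minimal post-stratification sketch: rescale respondent weights so that
# weighted person counts match external population controls. Cells and
# control totals below are hypothetical.

respondents = [
    {"cell": "male_18_44",     "weight": 120.0},
    {"cell": "male_18_44",     "weight":  95.0},
    {"cell": "female_45_plus", "weight": 110.0},
]
controls = {"male_18_44": 400.0, "female_45_plus": 150.0}  # hypothetical totals

# Current weighted total in each post-stratification cell.
cell_totals = {}
for r in respondents:
    cell_totals[r["cell"]] = cell_totals.get(r["cell"], 0.0) + r["weight"]

# Adjustment factor = control total / weighted respondent total for the cell.
for r in respondents:
    r["ps_weight"] = r["weight"] * controls[r["cell"]] / cell_totals[r["cell"]]
    print(r["cell"], round(r["ps_weight"], 1))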


Incentive Testing

The MRIP Fishing Effort Survey included an experiment to test the impact of modest, prepaid cash incentives on survey response and survey measures. Each wave, sampled addresses were randomly allocated to incentive treatment groups of $1, $2, and $5, as well as a no-incentive control group. Incentives were included only in the initial survey mailing. As in the instrument experiment, the objective of the incentive testing was to identify the incentive level that maximizes overall response, controls costs, and minimizes the potential for nonresponse bias resulting from differential nonresponse between anglers and non-anglers. Response rates, estimated fishing prevalence, and the relative cost of completing an interview were compared among incentive treatments to quantify the impacts of incentives.
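
The randomization can be sketched as follows; the addresses, seed, and round-robin mechanics are illustrative assumptions, as the report states only that allocation was random.

import random

# Sketch of the randomization described above: each wave, sampled addresses
# are shuffled and dealt evenly into the four incentive groups.

INCENTIVES = [0, 1, 2, 5]                      # dollars
addresses = [f"address_{i:05d}" for i in range(20_000)]   # placeholder frame

rng = random.Random(42)                        # fixed seed for reproducibility
rng.shuffle(addresses)

# Round-robin assignment yields (near-)equal group sizes, as in Table 2.
groups = {amt: addresses[i::len(INCENTIVES)] for i, amt in enumerate(INCENTIVES)}
for amt, group in groups.items():
    print(f"${amt}: {len(group):,} addresses")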


Table 2 shows weighted response rates and the results of a logistic regression model estimating the effects of incentives on the odds of obtaining a completed survey. Including an incentive in the initial survey mailing significantly increased the odds of receiving a completed survey, and the odds increased significantly as the incentive amount increased. Cash incentives of $1, $2, and $5 increased the odds of receiving a completed survey by 63%, 93%, and 137%, respectively.


Table 2. Weighted response rates and odds of receiving a completed survey by incentive amount.


Incentive    Response Rate (%)        n    Odds Ratio    95% CI
$0           22.6                 8,760    1.00          -
$1           32.2                 8,737    1.63*         (1.51, 1.77)
$2           36.0                 8,738    1.93*         (1.78, 2.09)
$5           40.8                 8,786    2.37*         (2.18, 2.56)

*Significantly different from the $0 control (p<0.05). Results of pairwise comparisons are as follows: $1>$0 (p<0.05), $2>$1 (p<0.05), $5>$2 (p<0.05).
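
Because each group's response rate determines its odds of a completed survey, the odds ratios in Table 2 can be approximated directly from the published rates, as the sketch below shows. The report's model was fit to case-level weighted data, so its estimates and confidence intervals will differ slightly from these back-of-the-envelope values.

# Back-of-the-envelope check of Table 2: odds ratios computed directly from
# the weighted response rates.

rates = {0: 0.226, 1: 0.322, 2: 0.360, 5: 0.408}   # from Table 2

def odds(p):
    return p / (1.0 - p)

for amount, rate in rates.items():
    print(f"${amount}: OR vs $0 = {odds(rate) / odds(rates[0]):.2f}")
# Prints approximately 1.00, 1.63, 1.93, and 2.36, in line with Table 2.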

Previous research (Groves et al., 2006) has demonstrated that prepaid cash incentives can motivate individuals with little or no interest in a survey topic to respond to a survey request. Accordingly, we hypothesized that incentives would have a larger impact on non-anglers than on anglers, minimizing differential nonresponse between the two populations. We initially explored this hypothesis by comparing estimated fishing prevalence among incentive conditions, expecting that gains in response in the incentive conditions would translate into lower estimates of fishing prevalence. The results do not support this hypothesis; there were no significant differences in prevalence among incentive conditions (Table 3).


Table 3. Overall estimated fishing prevalence by incentive amount.


Incentive    Prevalence (%)        n
$0           12.8              2,154
$1           14.1              3,065
$2           13.6              3,415
$5           14.1              3,807

Note – Differences in prevalence among treatments are not significant at the 0.05 level.
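
A rough, unweighted analogue of this comparison can be reconstructed from the published prevalence estimates and sample sizes, as sketched below. The report's test accounts for the survey weights, so this is illustrative only.

from scipy.stats import chi2_contingency

# Rebuild approximate counts of fishing and non-fishing households from the
# reported prevalence and n in Table 3, then test homogeneity across groups.

groups = {0: (0.128, 2154), 1: (0.141, 3065), 2: (0.136, 3415), 5: (0.141, 3807)}

table = []
for prevalence, n in groups.values():
    fished = round(prevalence * n)
    table.append([fished, n - fished])

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square({dof}) = {chi2:.2f}, p = {p:.2f}")   # p well above 0.05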


We further explored the interaction of topic salience and incentives by examining response rates and estimated fishing prevalence for the incentive conditions within domains defined by whether or not sampled addresses could be matched to databases of licensed saltwater anglers. We expected incentives to have a more pronounced effect in the unmatched domain, a population less likely to have an interest in the survey topic, than in the matched domain. Table 4 shows that incentives increased the odds of receiving a completed survey in both the matched and unmatched subpopulations. However, the value of the incentive appears to matter more in the unmatched domain, where the odds of receiving a completed survey increased uniformly and significantly as the incentive amount increased ($0<$1<$2<$5). In contrast, the incentive amount mattered less in the matched domain, where the odds of receiving a completed survey were relatively flat across incentive conditions. These results are consistent with our expectations and suggest that a population with a low propensity to respond to a fishing survey can be motivated to participate by cash incentives, and that the motivation may increase with the incentive amount.




Table 4. Odds of receiving a completed survey by level of incentive for sample that could and could not be matched to state databases of licensed anglers.

                            Subpopulation
Comparison Pair    Matched (OR)    Unmatched (OR)
$1 vs. $0          1.75**          1.63**
$2 vs. $0          2.01**          1.93**
$5 vs. $0          2.11**          2.39**
$2 vs. $1          1.15            1.18**
$5 vs. $1          1.21*           1.46**
$5 vs. $2          1.05            1.24**

Notes – The second value in the comparison pair is the reference value.

Significance: *p<0.05, **p<0.0001
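
A model of the kind summarized in Table 4 might be specified as a logistic regression with an incentive-by-domain interaction. The sketch below fits such a model to simulated data, since the survey microdata are not shown; all effect sizes in the simulation are assumptions chosen only to mimic the reported pattern.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate response as a function of incentive group, license-match status,
# and their interaction, then recover odds ratios from a logistic model.

rng = np.random.default_rng(0)
n = 40_000
incentive = rng.choice([0, 1, 2, 5], size=n)
matched = (rng.random(n) < 0.18).astype(int)        # ~18% matched, as in Table 1

base = np.where(matched == 1, -0.2, -1.0)           # matched households respond more
boost = np.vectorize({0: 0.0, 1: 0.5, 2: 0.65, 5: 0.85}.get)(incentive)  # assumed
damp = np.where(matched == 1, 0.6, 1.0)             # incentives matter less if matched
responded = (rng.random(n) < 1.0 / (1.0 + np.exp(-(base + boost * damp)))).astype(int)

df = pd.DataFrame({"responded": responded, "incentive": incentive, "matched": matched})
fit = smf.logit("responded ~ C(incentive) * matched", data=df).fit(disp=False)
print(np.exp(fit.params))     # exponentiated coefficients are odds ratios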


As noted previously, we expected that the gains in response in the incentive conditions would translate to lower estimates of fishing prevalence, particularly in the unmatched subpopulation. Once again, the results are not consistent with expectations; differences in fishing prevalence among treatments were not significant in either the matched or unmatched domain (Table 5). The lack of an effect of incentives on fishing prevalence suggests that the gains in response associated with increasing incentive amounts are uniform between anglers and non-anglers.


Table 5. Estimated fishing prevalence by incentive amount for a population of anglers (matched) and non-anglers (unmatched).


                          Subpopulation
                 Matched              Unmatched
Incentive      (%)       (n)        (%)       (n)
$0             49.2       533       10.7     1,621
$1             50.3       779       12.0     2,286
$2             48.6       837       11.6     2,578
$5             48.2       894       12.4     2,913

Note – Within subpopulations, differences in prevalence among treatments are not significant at the 0.05 level.



We also examined the effect of cash incentives on overall data collection costs, specifically the direct costs of printing, postage, and the cash incentives themselves. Table 6 shows that the $5 incentive provided the largest gain in response, but the gain came at a premium of approximately 15% in the cost per completed interview relative to the control. In contrast, the additional costs of the $1 and $2 incentives (20% and 38% higher than the $0 control, respectively) are more than offset by the associated gains in the number of completed surveys (42% and 58%, respectively). In other words, including a $1 or $2 cash incentive in the initial survey mailing actually decreased the cost of receiving a completed survey, by 22% and 20%, respectively. These cost savings, which are conservative [3], could be used to lower overall data collection costs (for a fixed sample size) or to increase the precision of survey estimates (for a fixed cost).


Table 6. Effect of incentives on data collection costs

Incentive    Relative    Relative Difference      Relative Cost per
Amount       Cost        in Completed Surveys     Completed Survey
$0           1.00        1.00                     $1.00
$1           1.20        1.42                     $0.78
$2           1.38        1.58                     $0.80
$5           1.90        1.75                     $1.15

Note – Relative differences reflect the ratio of quantities (costs, completed surveys) in the experimental treatments to the $0 control.
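
The cost-per-complete arithmetic can be illustrated with a single hypothetical unit cost. The sketch below assumes $5.00 per mailed address for printing, postage, and follow-up combined, a figure chosen purely for illustration, so it reproduces the direction but not the exact values of Table 6, whose relative costs reflect the survey's actual, more detailed cost structure.

# Illustrative cost arithmetic: cost per completed survey is total mailing
# cost (printing + postage + incentive) divided by the response rate.

MAIL_COST = 5.00                                            # hypothetical, per address
response_rates = {0: 0.226, 1: 0.322, 2: 0.360, 5: 0.408}   # from Table 2

base = (MAIL_COST + 0) / response_rates[0]
for incentive, rr in response_rates.items():
    per_complete = (MAIL_COST + incentive) / rr
    print(f"${incentive}: ${per_complete:.2f} per complete "
          f"({per_complete / base:.2f} relative to $0)")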


Including a modest prepaid cash incentive in survey mailings clearly has a positive effect on survey response rates; the odds of receiving a completed survey increased significantly as the incentive amount increased. We expected the incentives to have a greater effect on non-anglers than on anglers and to decrease the potential for nonresponse bias by minimizing differential nonresponse between these two populations. However, the results of the experiment suggest that incentives increase response propensities for non-anglers and anglers equally. While this result does not support our hypothesis, it does demonstrate that incentives can increase the quantity of data without having a negative impact on survey measures. The experiment also demonstrated that incentives can decrease overall data collection costs. Based upon these findings, we conclude that a $2 incentive provides the best balance between maximizing response rates and minimizing data collection costs.







References

Groves, R.M., M.P. Couper, S. Presser, E. Singer, R. Tourangeau, G.P. Acosta, and L. Nelson. 2006. Experiments in producing nonresponse bias. Public Opinion Quarterly 70: 720-736.


Lohr, S. 2009. Multiple frame surveys. Chapter 4 in Pfeffermann, D. and Rao, C.R. (Eds.), Handbook of Statistics: Sample Surveys: Design, Methods and Applications (Vol. 29A). Elsevier, Amsterdam.





[1] The classification of sample into domains is dependent upon matching ABS sample to license databases by address and telephone number. This process is unlikely to be 100% accurate, so the unmatched domain is likely to include some households with licensed anglers. The unmatched domain also includes households with residents who fish without a license.

[2] Assuming that fixed costs are the same for the two instruments, which was the case in the experiment.

[3] The cost comparison assumes that the non-incentive direct costs (postage and printing) are the same for all survey treatments and does not reflect the fact that incentive conditions may not require as many follow-up mailings.
