
SUPPORTING STATEMENT

Alaska Recreational Charter Vessel Guide and Owner Data Collection

OMB Control No. 0648-0647



B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS


1. Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g. establishments, State and local governmental units, households, or persons) in the universe and the corresponding sample are to be provided in tabular form. The tabulation must also include expected response rates for the collection as a whole. If the collection has been conducted before, provide the actual response rate achieved.


The potential respondent universe is all saltwater-based charter boat businesses in Alaska during the year(s) of interest. Each of these businesses must purchase a state license to provide fishing guide services; the sport fishing license program is administered by the Alaska Department of Fish and Game (ADF&G). In 2014 (the most recent year for which data are available),1 there were 571 licensed saltwater sport fishing charter businesses. Thus, the population consists of all charter boat businesses licensed to offer saltwater charter fishing trips off Alaska during the year.


Past iterations of this data collection were conducted as censuses of the population. For the current data collection, we will use stratified random sampling to reduce the burden on the population. The population of charter businesses will be divided into four strata based on the number of licensed guides, the number of vessels, and the International Pacific Halibut Commission (IPHC) regulatory area in which the business operates (Area 2C or 3A). Data on licenses and vessels are available from state and federal license databases, and the IPHC area of operation can also be determined from license data.


The first two characteristics define the size of the charter business and are strongly and positively correlated with effort level (the number of charter trips taken per year).2 Charter businesses in the two IPHC regulatory areas are subject to differing regulations. The strata (and percent of the overall population) are the following:


  1. Stratum 1: Area 2C charter businesses with one vessel and one guide (~24.9% of population)

  2. Stratum 2: Area 2C charter businesses with more than one vessel or guide (~27.9% of population)

  3. Stratum 3: Area 3A charter businesses with one vessel and one guide (~20.8% of population)

  4. Stratum 4: Area 3A charter businesses with more than one vessel or guide (~26.5% of population)


These population strata each comprise between about 21 and 28 percent of the overall population. A stratified random sample of 427 charter businesses will be contacted to participate, consisting of 106 from Stratum 1, 119 from Stratum 2, 89 from Stratum 3, and 113 from Stratum 4. This represents 75% of the 2014 population (the most recent year with data available) of 571 charter businesses, as well as 75% of each population stratum. For the collection as a whole, an overall response rate of 31% is anticipated. This estimate is based on the 25% response rate attained when this data collection was implemented in 2011, 2012, and 2013 (Lew et al. 2015b), adjusted upward by 6 percentage points as a conservative estimate of the effect of including a small prepaid incentive. We expect this response rate to be obtained in each sampled stratum, leading to 33, 37, 27, and 35 respondents, respectively, for the four strata.
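
To illustrate the stratification and the within-stratum draw, the Python sketch below assigns each business to a stratum from license-derived characteristics and draws a 75% simple random sample within each stratum. The data frame, column names, and synthetic values are hypothetical stand-ins for the actual state and federal license records, not the real database schema:

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(seed=0)

    # Synthetic stand-in for the merged license data (571 businesses);
    # column names are illustrative only.
    pop = pd.DataFrame({
        "business_id": np.arange(571),
        "iphc_area": rng.choice(["2C", "3A"], size=571),
        "n_vessels": rng.integers(1, 4, size=571),
        "n_guides": rng.integers(1, 6, size=571),
    })

    # Stratum definitions from Question 1: IPHC area crossed with business
    # size (one vessel and one guide vs. more than one vessel or guide).
    small = (pop["n_vessels"] == 1) & (pop["n_guides"] == 1)
    pop["stratum"] = np.select(
        [(pop["iphc_area"] == "2C") & small,
         (pop["iphc_area"] == "2C") & ~small,
         (pop["iphc_area"] == "3A") & small,
         (pop["iphc_area"] == "3A") & ~small],
        [1, 2, 3, 4])

    # Simple random sample of 75% within each stratum.
    sample = pop.groupby("stratum").sample(frac=0.75, random_state=1)
    print(sample["stratum"].value_counts().sort_index())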


2. Describe the procedures for the collection, including: the statistical methodology for stratification and sample selection; the estimation procedure; the degree of accuracy needed for the purpose described in the justification; any unusual problems requiring specialized sampling procedures; and any use of periodic (less frequent than annual) data collection cycles to reduce burden.


Each year, a simple random sample will be drawn from each of the four population strata, which are determined by license data characteristics. The size of the random sample will be 75% of each stratum’s population size; thus, each member of a stratum has a 75% chance of being selected to participate in the survey in a given year. The size of the overall sample, as well as of the individual strata, was determined to ensure sufficient data would be obtained to produce a precise population estimate of a proportional variable calculated with the overall stratified sample, assuming an alpha of 0.1 and a margin of error of 0.08 or less.3 Additionally, sampling 75% from each stratum with a response rate of 31% will yield sufficient responses for calculating precise population estimates (assuming an alpha of 0.1 and a margin of error of 0.1) for each IPHC area (Area 2C respondents only or 3A respondents only) and for each size of business (all one-vessel/one-guide businesses or all multiple-vessel and/or multiple-guide businesses). Note that the overall expected number of respondents, 132, is large enough to ensure a beta of no more than 0.2 (at least 80% power) assuming a margin of error of 11 percentage points and an alpha of 0.1 (power is 81.7%).
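
The precision target can be illustrated with the standard margin-of-error formula for a proportion under simple random sampling with a finite population correction (of the kind given in Lohr 2010). This is a simplified sketch, not the exact stratified formula used for the official calculations:

    import math
    from statistics import NormalDist

    def moe_proportion(n_resp, pop_size, alpha=0.10, p=0.5):
        # Margin of error for an estimated proportion from a simple random
        # sample of size n_resp, with finite population correction.
        z = NormalDist().inv_cdf(1 - alpha / 2)  # 1.645 for alpha = 0.10
        fpc = math.sqrt((pop_size - n_resp) / (pop_size - 1))
        return z * math.sqrt(p * (1 - p) / n_resp) * fpc

    # 132 expected respondents from a population of 571 charter businesses:
    print(round(moe_proportion(132, 571), 3))  # ~0.063, within the 0.08 target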


Sample weighting will be used to adjust the sample for the stratified random sampling approach (base weights), for non-response bias (non-response weights), and to align the sample with known population distributions of importance, such as effort level (post-stratification weights). See Lew et al. (2015a) for an example of how a previous year’s data were weighted to adjust for sample representativeness. Since sampling 75% of the four population strata in multiple years will lead to a significant proportion of the population being asked to participate in multiple years (assuming the composition of the population remains static), past participation will be adjusted for by including it as a factor in the response propensity estimation used to construct the non-response weights. Past participation will also be considered in models explaining charter business behavior, such as input demand functions and exit-stay discrete choice models (i.e., fishery participation models). The exact manner in which these considerations enter the modeling and calculations will depend on the type of analysis being done, but at a minimum, dummy variables for past participation will be included as explanatory variables in econometric models to identify potential biases.
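
A minimal sketch of how the three weight components could be combined is shown below. The response propensities, stratum population counts, and variable names are hypothetical (the counts are derived from the Question 1 stratum percentages); the actual weights will follow the procedures in Lew et al. (2015a):

    import pandas as pd

    # Hypothetical respondent records with estimated response propensities
    # (from a response propensity model like the one described under Question 3).
    resp = pd.DataFrame({
        "stratum": [1, 2, 3, 4, 1, 2],
        "propensity": [0.35, 0.40, 0.28, 0.25, 0.30, 0.33],
    })

    # Base weight: inverse of the 75% within-stratum selection probability.
    resp["w_base"] = 1 / 0.75

    # Non-response weight: inverse of the estimated response propensity.
    resp["w_nr"] = 1 / resp["propensity"]

    # Post-stratification: rescale so weighted totals match known stratum
    # population counts (illustrative counts, not the official figures).
    pop_counts = {1: 142, 2: 159, 3: 119, 4: 151}
    resp["w"] = resp["w_base"] * resp["w_nr"]
    totals = resp.groupby("stratum")["w"].transform("sum")
    resp["w_final"] = resp["w"] * resp["stratum"].map(pop_counts) / totals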


3. Describe the methods used to maximize response rates and to deal with nonresponse. The accuracy and reliability of the information collected must be shown to be adequate for the intended uses. For collections based on sampling, a special justification must be provided if they will not yield "reliable" data that can be generalized to the universe studied.


Numerous steps have been, and will be, taken to maximize response rates and deal with non-response behavior. These efforts are described below.


Maximizing Response Rates


The first step in achieving a high response rate is to develop an appealing questionnaire that is easy for respondents to complete. Significant effort has been spent on developing a good survey instrument, which benefited from input on earlier versions from focus groups and one-on-one interviews with members of the target population. Focus group participants helped identify questions and concepts that needed to be clarified or modified, and suggested ways of making the survey more useful and appealing to charter boat operators. The interviews were used to fine-tune specific wording, flow, and comprehension issues, and to ensure the survey was a comfortable length and easy to complete. The result is a high-quality and professional-looking survey instrument.


Also, charter boat operators have made it clear to us that the optimal time for conducting the survey, to minimize the burden on them and maximize the accuracy of the information they provide, is April and May of each year. During April and May, they will have the previous year’s tax information (profit and loss sheets) available, much of which we ask for in one form or another in the survey. Moreover, this is the time of year when they are gearing up for the upcoming season, which usually begins in late May and ends in early to mid-September. As a result, conducting the survey in April and May will ensure that most charter boat operators are able to provide accurate information and have the time to do so before the season begins.


The implementation techniques that will be employed are consistent with methods that maximize response rates. Implementation of the mail survey will follow the Tailored Design Method (Dillman, Smyth, and Christian 2014), which consists of multiple contacts. The specific set of contacts is the following:


      1. An advance letter notifying respondents a few days prior to the questionnaire arriving. This will be the first contact with the sample.

      2. An initial mailing sent a few days after the advance letter. Each mailing contains a personalized cover letter, instructions and credentials for accessing the online survey, a printed questionnaire, a small monetary incentive ($5), and a pre-addressed stamped return envelope.

      3. A postcard follow-up reminder mailed 5-7 days following the initial mailing.

      4. A follow-up phone call to encourage response.4

      5. A second full mailing sent after the follow-up phone calls.


In addition to standard approaches to increasing response rates in mixed-mode survey applications, implemented both in the construction of the survey and in the number and types of contacts with potential respondents (e.g., Dillman et al. 2014), we will use incentives to boost response.


Incentives are consistent with numerous theories about survey participation (Singer and Ye 2013), such as the theory of reasoned action (Ajzen and Fishbein 1980), social exchange theory (Dillman et al. 2014), and leverage-salience theory (Groves, Singer, and Corning 2000). Inclusion of an incentive acts as a sign of good will on the part of the study sponsors and encourages reciprocity of that goodwill by the respondent. Although these incentives do not necessarily have to be monetary in nature, a substantial literature has shown that monetary pre-incentives (as opposed to promises of money or gifts following participation) are effective at increasing overall response rates.


A comprehensive review of the use of incentives in surveys was conducted by Singer (2002). She notes that giving respondents a small financial incentive (even a token amount) in the first mailing increases response rates in mail-based surveys and is cost-effective. Such prepaid incentives are more effective than larger promised incentives that are contingent on completion of the questionnaire. In a review of more recent studies analyzing the effects of incentives on survey response, Singer and Ye (2013) confirm earlier findings that incentives increase response rates across survey modes (including web), that monetary incentives have a stronger effect than non-monetary incentives, and that prepaid (upfront) incentives have a larger effect than promised or lottery-based incentives. Another recent meta-analysis, by Mercer et al. (2015), confirms these findings, although it could not identify a statistically significant effect of promised incentives on response rates. Their results show that a prepaid incentive leads to at least a 6 percentage point increase in response rates for mail surveys.


The effects of incentives on specialized populations have been studied recently in the context of physicians by Dykema et al. (2011). They found that small pre-incentives were not effective at increasing response rates in a web survey, while the largest proffered pre-incentive ($100) led to the largest statistically significant increase in response rates. In another web-based study of a similar population of doctors, Halpern et al. (2011) found no evidence that promised incentives increased response rates, while monetary pre-incentives effectively increased them. A mail survey study by James et al. (2010) likewise found that response rates were highest for prepaid incentives; however, their study design did not include a no-incentive control, so the effect of the promised incentives relative to no incentive is unknown. For a non-medical specialized population, a mail survey of owners of small construction companies, James and Bolstein (1992) showed that prepaid incentives increased response rates at an increasing rate with the amount, but that a promised incentive did not affect response rates relative to the no-incentive control group.


Given these findings, we believe a small prepaid incentive will boost response rates relative to previous surveys and is the most cost-effective means of increasing them. A uniform $5 prepaid incentive was chosen based on considerations specific to the targeted population. Communication among charter business owners is prevalent, at the local level and through regional and statewide charter associations. An incentive that is too low would likely be viewed as insignificant and perhaps insulting. A $5 incentive is affordable within the funding available for the project and is an amount likely to be viewed as a sign of goodwill without being so low as to be disregarded.



Non-respondents


We anticipate the use of monetary prepaid incentives will lead to response rates that are at least 6 percentage points higher than the average response rate achieved in the previous three implementations of this survey. As noted above, we expect at least a 31% response rate. Still, we acknowledge that this response rate is low in absolute terms. Although the relatively low unit response rates for these surveys are not uncommon among voluntary cost and earnings surveys of commercial fisheries (Holland et al. 2012), they are below usual benchmark levels, such as those recommended in Dolsen and Machlis (1991). This suggests that adjustments must be made for missing data in order for the population-level estimates to be calculated with confidence.


We will address survey unit non-response through sample weighting methods described in more detail in Lew et al. (2015a).5 These methods apply weights to individuals in the sample to adjust for the missing data associated with unreturned questionnaires. The objective is to give more weight to underrepresented individuals and less weight to overrepresented individuals so that the sample better reflects the profile of the population. In this context, representativeness can be determined by sample selection, external data on the sample respondents and non-respondents, or some combination thereof. A handful of studies have applied weighting methods to adjust for unit non-response in economic surveys of participants in recreational (Fisher 1996; Hunt and Ditton 2002; Tseng et al. 2012) and commercial (Knapp 1996, 1997) fisheries.


The non-response adjustment weight is designed to account for any differences between charter businesses that responded and those in the population that did not. As was done with the previously collected survey data (Lew et al. 2015b), in this study we will use an auxiliary dataset obtained from the ADF&G Saltwater Charter Logbook Program that contains information for the population of charter businesses on when fishing occurred during the year, the amount of fishing effort, the species of fish targeted, and clientele type. Since the auxiliary dataset provides information about both respondents and non-respondents, a logit regression model (a response propensity regression) can be used to estimate the likelihood of a charter business responding to the survey as a function of auxiliary variables collected in the logbooks.
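
A sketch of this response propensity regression is shown below, using Python and the statsmodels package. The auxiliary variable names and synthetic data are hypothetical stand-ins for the actual logbook fields:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(seed=0)
    n = 427  # businesses contacted

    # Synthetic stand-ins for logbook auxiliary variables; names and codings
    # are illustrative, not the actual ADF&G logbook schema.
    aux = pd.DataFrame({
        "trips": rng.poisson(40, size=n),                # annual effort level
        "targets_halibut": rng.integers(0, 2, size=n),   # species targeted
        "past_participant": rng.integers(0, 2, size=n),  # prior-wave response
    })
    responded = rng.binomial(1, 0.31, size=n)  # observed response indicator

    # Logit (response propensity) regression of response on auxiliary data.
    X = sm.add_constant(aux)
    fit = sm.Logit(responded, X).fit(disp=0)

    # Estimated propensities feed the non-response weights (inverse propensity).
    propensity = fit.predict(X)
    w_nr = 1 / propensity[responded == 1]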


4. Describe any tests of procedures or methods to be undertaken. Tests are encouraged as effective means to refine collections, but if ten or more test respondents are involved OMB must give prior approval.


For the original OMB approval, we conducted several focus groups with fewer than ten members of the target population, as well as a handful of cognitive interviews, during the survey design phase to test survey materials. Moreover, the survey design and implementation plan have benefited from review by individuals with expertise in fishing economic survey design and implementation.


5. Provide the name and telephone number of individuals consulted on the statistical aspects of the design, and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.


The following individuals were consulted on the statistical aspects of the design:


Dr. Dan Lew

Economist

NOAA Fisheries

Alaska Fisheries Science Center

(530) 554-1842

[email protected]


Dr. Amber Himes-Cornell

Social Scientist (formerly with NOAA Fisheries, Alaska Fisheries Science Center)


Dr. Dan Lew is responsible for analyzing the data.


The survey will be conducted in cooperation with the Pacific States Marine Fisheries Commission:





David Colpo

Pacific States Marine Fisheries Commission

205 SE Spokane Street, Suite 100

Portland, OR 97202

(503) 595-3100



References:


Ajzen, I., and M. Fishbein (1980) Understanding Attitudes and Predicting Social Behavior. Englewood Cliffs, New Jersey: Prentice-Hall.

Brick, J. and G. Kalton (1996). "Handling missing data in survey research." Statistical Methods in Medical Research 5(3): 215-238.

Church, A.H. (1993) “Estimating the Effect of Incentives on Mail Survey Response Rates: A Meta-Analysis.” Public Opinion Quarterly, 57(1): 62-79.

Dillman, D.A., J.D. Smyth, and L.M. Christian (2014) Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method. 4th Edition, Hoboken, New Jersey: John Wiley and Sons.

Dolsen, D. E. and G. E. Machlis (1991). "Response Rates and Mail Recreation Surveys - How Much is Enough?" Journal of Leisure Research 23(3): 272-277.

Dykema, J., J. Stevenson, B. Day, S.L. Sellers, and V.L. Bonham (2011) “Effects of Incentives and Prenotification on Response Rates and Costs in a National Web Survey of Physicians.” Evaluation and the Health Professions 34(4): 434-447.

Fisher, M. R. (1996). "Estimating the Effect of Nonresponse Bias on Angler Surveys." Transactions of the American Fisheries Society 125(1): 118-126.

Graham, J. W. (2012). Missing data: Analysis and design, Springer Science & Business Media.

Groves, R. M., D. A. Dillman, J. L. Eltinge and R. J. A. Little, Eds. (2002). Survey Nonresponse. New York, USA, Wiley.

Groves, R.M., E. Singer, and A. Corning (2000) “Leverage-Saliency Theory of Survey Participation: Description and an Illustration.” Public Opinion Quarterly, 64(3): 299-308.

Halpern, S.D., R. Kohn, A. Dornbrand-Lo, T. Metkus, D.A. Asch, and K.G. Volpp (2011) “Lottery-Based Versus Fixed Incentives to Increase Clinicians’ Response to Surveys.” Health Services Research 46(5): 1663-1674.

Holland, S. M., C.-O. Oh, S. L. Larkin and A. W. Hodges (2012). The Operations and Economics of the For-Hire Fishing Fleets of the South Atlantic States and the Atlantic Coast of Florida. University of Florida, Report prepared for the National Marine Fisheries Service, 150 p.

Hunt, K. M. and R. B. Ditton (2002). "Freshwater Fishing Participation Patterns of Racial and Ethnic Groups in Texas." North American Journal of Fisheries Management 22(1): 52-65.

James, J.M., and R. Bolstein (1992) “Large Monetary Incentives and Their Effect on Mail Survey Response Rates.” Public Opinion Quarterly 56(4): 442-453.

James, K.M., J.Y. Ziegenfuss, J.C. Tilburt, A.M. Harris, and T.J. Beebe (2010) “Getting Physicians to Respond: The Impact of Incentive Type and Timing on Physician Survey Response Rates.” Health Services Research 46(1): 232-242.

Knapp, G. (1996). "Alaska Halibut Captains' Attitude Toward IFQs." Marine Resource Economics 11: 43-55.

Knapp, G. (1997). "Initial Effects of the Alaska Halibut IFQ Program: Survey Comments of Alaska Fishermen." Marine Resource Economics 12: 239-248.

Lew, D. K., A. Himes-Cornell, and J. Lee (2015a). "Weighting and Data Imputation for Missing Data in a Cost and Earnings Fishery Survey." Marine Resource Economics 30(2): 219-230.

Lew, D. K., G. Sampson, A. Himes-Cornell, J. Lee, and B. Garber-Yonts. (2015b). Costs, earnings, and employment in the Alaska saltwater sport fishing charter sector, 2011-2013. U.S. Dep. Commer., NOAA Tech. Memo. NMFS-AFSC-299, 134 p.

Little, R. J. and S. Vartivarian (2003). "On weighting the rates in nonresponse weights." Statistics in Medicine 22(9): 1589-1599.

Lohr, S. (2010). Sampling: Design and Analysis, 2nd edition. Boston, MA: Cengage Learning.

Mercer, A., A. Caporaso, D. Cantor, and R. Townsend (2015) “How Much Gets You How Much? Monetary Incentives and Response Rates in Household Surveys.” Public Opinion Quarterly 79(1): 105-129.

Singer, E., and C. Ye (2013) “The Use and Effects of Incentives in Surveys.” The Annals of the American Academy of Political and Social Science, 645: 112-141.

Singer, E. (2002) “The Use of Incentives to Reduce Nonresponse in Household Surveys.” In Survey Nonresponse, eds. Robert M. Groves, Don A. Dillman, John L. Eltinge, and Roderick J. A. Little. New York, NY: Wiley, 163–178.

Tseng, Y.-P., Y.-C. Huang and R. Ditton (2012). "Developing a Longitudinal Perspective on the Human Dimensions of Recreational Fisheries." Journal of Coastal Research: 1418-1425.




1 We expect to get 2015 license data in the near future.

2 Correlation coefficients between each of these variables and effort level were about 0.90.

3 Minimum sample size calculations were conducted using formulas for stratified sampling of proportions in Lohr (2010).

4 Since the survey is lengthy and requires information that the respondent may not have readily available during the telephone call, the follow-up phone call is expected primarily to be a means of encouraging response and clarifying the purpose of and need for the study, not a means of collecting survey data.

5 Lew et al. (2015a) apply survey statistical methods commonly employed in the survey literature to adjust for unit non-response in the 2012 survey data described in this report. For more information about dealing with unit and item non-response in the survey statistics literature, see Brick and Kalton (1996), Groves et al. (2002), Little and Vartivarian (2003), Lohr (2010), and Graham (2012).


