SUPPORTING STATEMENT

Cook Inlet Beluga Whale Economic Survey

OMB CONTROL NO. 0648-xxxx



  B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS

1. Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g. establishments, State and local governmental units, households, or persons) in the universe and the corresponding sample are to be provided in tabular form. The tabulation must also include expected response rates for the collection as a whole. If the collection has been conducted before, provide the actual response rate achieved.


The potential respondent universe is all Alaska households (approximately 252,920, according to Census estimates). A stratified random sampling approach will be used, involving an initial mailing to 4,200 rural and urban Alaska households, and is expected to result in approximately 1,785 Alaska households completing the survey (based on the expected number of completed surveys in Part A, Question 12). Rural Alaska households are oversampled to ensure the inclusion of their preferences, which may differ from those of urban households.1 Half of the 4,200 households contacted to participate in the survey will be from rural areas of Alaska; the other half will be from urban areas.
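The expected yield follows directly from the sampling figures above, together with the invalid-address and response rates discussed below. The following is a minimal sketch of that arithmetic, assuming the 15% invalid-address rate (Question 2) and the 50% anticipated response rate; the variable names are illustrative only.

```python
# Expected survey yield under the assumed invalid-address and response rates.
mailed_per_stratum = 2100     # rural stratum and urban stratum (4,200 total)
invalid_rate = 0.15           # up to 15% of purchased addresses may be invalid
response_rate = 0.50          # anticipated response rate (see discussion below)

valid_per_stratum = mailed_per_stratum * (1 - invalid_rate)   # 1,785
completes_per_stratum = valid_per_stratum * response_rate     # ~893
total_completes = 2 * completes_per_stratum                   # ~1,785

print(f"Valid addresses per stratum: {valid_per_stratum:.0f}")
print(f"Expected completed surveys:  {total_completes:.0f}")
```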


For the collection as a whole, a response rate of 50% is anticipated for the mail survey. This estimate is based on the following:


  • Results from the $5 treatment of the formal pretest, which used a nearly identical survey protocol and survey instrument, suggest a response rate of 43.7%. However, problems in the administration of the pretest likely depressed the response rate relative to what we would expect in the final survey administration. Specifically, the survey contractor did not indicate NOAA sponsorship on the external mailing materials. While the advance letter and the cover letters for the mailings carried the NOAA logo and wording indicating sponsorship of the survey by a U.S. government agency, the envelopes containing them did not. As a result, we believe a substantial number of survey mailings went unopened or were otherwise ignored by sampled households. Our new survey contractor for the full survey implementation will use envelopes that clearly identify NOAA as a sponsor.

  • The survey literature suggests that government agency sponsorship of a survey leads to a significant boost in response rates. Scott (1961) found early evidence that government-sponsored surveys yield statistically higher response rates than university and commercial surveys. In a meta-analysis of the survey literature, Heberlein and Baumgartner (1978) supported this earlier finding, estimating a meta-regression that suggests response rates may be 10 percentage points or more higher for surveys sponsored by a government agency, all else being equal. Dillman et al. (2009) suggest this is because “a government survey often has greater legitimacy than a survey done by someone in the private sector, a nonprofit group, or a university” (page 389). Because the mailing envelopes indicated the pretest survey was from “CIC Research” (the survey firm hired to administer the pretest), many households likely did not realize the survey was government-sponsored, and the effect of government sponsorship was therefore probably not reflected in the pretest response rate. With envelopes that clearly identify government sponsorship, we expect a response rate higher than 43.7%.

  • The Steller sea lion economic survey was successfully implemented by NOAA in 2007, achieving an overall response rate of 62% across all samples. That survey is similar to the Cook Inlet beluga whale survey in content, length, and complexity, as well as in the survey protocols used in its administration. Notably, the Steller sea lion survey achieved a response rate reaching 70% for Alaska households.


Given these factors, we believe a response rate of 50% is a reasonable expectation.



2. Describe the procedures for the collection, including: the statistical methodology for stratification and sample selection; the estimation procedure; the degree of accuracy needed for the purpose described in the justification; any unusual problems requiring specialized sampling procedures; and any use of periodic (less frequent than annual) data collection cycles to reduce burden.


The full survey implementation will use a stratified random sample of approximately 4,200 households purchased from a professional sampling vendor. The population is stratified into rural and urban Alaska households, with each stratum consisting of approximately 2,100 households. The cover letter accompanying the initial mailing will solicit the participation of a male or female head of household to complete the survey.


For each stratum, a sample of households will be purchased from Marketing Systems Group (MSG). The purchased samples are based on address-based sampling (ABS) using the second generation of the U.S. Postal Service’s Delivery Sequence File (DSF2). MSG’s ABS sampling frame contains more than 135 million residential addresses, representing about 99% of residential addresses nationwide. The frame is updated monthly, and name and telephone number information, appended from several different sources (including TARGUS), is updated daily.2 While rural areas tend to be underrepresented in ABS samples, and there is some over-coverage from households that have both a physical address and a P.O. box, ABS samples have several recognized advantages over random-digit-dial and list-based samples for general population sampling (Iannacchione, 2011). Moreover, the sample purchased from MSG includes additional information about sampled households, including demographic information drawn from numerous sources (commercial vendors, Census data at the block level, etc.), that will be important for evaluating non-response (discussed below).


Up to 15% of the purchased addresses may be invalid, leaving valid samples of approximately 1,785 households in each of the two strata (2,100 × 0.85 = 1,785). The number of expected survey responses from these stratified samples will be sufficient for detailed analysis of individual question responses, as well as econometric analysis of the stated preference choice experiment questions.


3. Describe the methods used to maximize response rates and to deal with nonresponse. The accuracy and reliability of the information collected must be shown to be adequate for the intended uses. For collections based on sampling, a special justification must be provided if they will not yield “reliable” data that can be generalized to the universe studied.


Numerous steps have been, and will be, taken to maximize response rates and deal with non-response behavior. These efforts are described below.


Maximizing Response Rates


The first step in achieving a high response rate is to develop an appealing questionnaire that is easy for respondents to complete. Considerable effort has gone into developing the survey instrument. The research team developing the survey has extensive experience in economic survey design and testing, as well as stated preference techniques. The current survey instrument has also benefited from input on earlier versions from several focus groups and one-on-one interviews (verbal protocols and cognitive interviews), from peer review by experts in survey design and non-market valuation, and from scientists who study CIBWs and other marine mammals. In the focus groups and interviews, the information presented was tested to ensure key concepts and terms were understood, figures and graphics were tested for comprehension and appearance, and key economic and design issues were evaluated. In addition, cognitive interviews were used to ensure the survey instrument was not too technical, used words people could understand, and was a comfortable length and easy to complete. The result is a high-quality, professional-looking survey instrument.


The implementation techniques that will be employed are consistent with methods that maximize response rates. Implementation of the mail survey will follow the Dillman Tailored Design Method (Dillman et al., 2009), which consists of multiple contacts. The specific set of contacts that will be employed is the following (a schematic of the schedule appears after the list):


      1. An advance letter notifying respondents a few days before the questionnaire arrives. This will be the first contact for households in the sample.

      2. An initial mailing sent a few days after the advance letter. Each mailing will contain a personalized cover letter, questionnaire, and a pre-addressed stamped return envelope. The initial mailing will also include an incentive of $5.

      3. A postcard follow-up reminder to be mailed about a week after the initial mailing.

      4. A follow-up phone call to encourage response. Individuals needing an additional copy of the survey will be sent one with another cover letter and return envelope.

      5. A second full mailing sent about one week after the conclusion of the telephone interview effort.
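As an illustration only, the contact sequence can be represented as a schedule keyed to days elapsed since the advance letter. In this sketch the day offsets for the later contacts are assumptions inferred from the approximate timing described above, not fixed protocol dates.

```python
# Illustrative Dillman multi-contact schedule. Day offsets are approximate
# and inferred from the descriptions above (the phone-call and
# second-mailing timings in particular are assumptions for illustration).
CONTACT_SCHEDULE = [
    (0, "Advance letter"),
    (3, "Initial mailing: cover letter, questionnaire, return envelope, $5"),
    (10, "Postcard follow-up reminder"),
    (21, "Follow-up phone call; replacement survey mailed on request"),
    (35, "Second full mailing (about one week after phone effort concludes)"),
]

for day, contact in CONTACT_SCHEDULE:
    print(f"Day {day:>2}: {contact}")
```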


Non-respondents


Several steps will be taken to understand why non-respondents did not return the survey and to determine whether there are systematic differences between respondents and non-respondents. The first step is a follow-up non-response survey of a sample of non-respondents. This two-page survey will be sent by certified mail to a sample of 750 non-responding Alaska households. It includes several questions that can be used to gauge non-response to the mail survey: a direct question asking individuals to indicate which of several reasons underlie their decision not to participate in the mail survey, a few socioeconomic and demographic classification questions, and two attitudinal questions correlated with responses to the willingness-to-pay (WTP) questions (see attached analysis of pretest results). Respondents to the non-response survey will be compared to mail survey respondents and to the Alaska population using Census data and relevant demographic variables. Respondents and non-respondents will also be compared on the attitudinal and classification questions that are available for the Alaska population to assess differences.


Therefore, the specific steps that will be employed to assess the presence and extent of non-response bias are the following:


  • As a first step, demographic characteristics collected from respondents and from non-respondents (via the non-response survey) will be used in two comparisons: a comparison of respondents to non-respondents, and a comparison of respondents to Census data. For respondents, age, gender, and education information will be available from the completed survey. The same information will be available from non-respondents who participate in the non-response survey. Comparing these characteristics may indicate how respondents and non-respondents differ. We will also compare demographic information for survey respondents with Census data to evaluate sample representativeness on observable characteristics.


  • Similarly, Questions Q2 and Q4 from the questionnaire, and a question about membership or donations to a conservation or environmental organization, are also included in the non-response survey, and therefore can be used in a parallel fashion to compare respondents and non-respondents based on attitudinal dimensions and environmental involvement. The demographic and attitudinal question comparisons will enable us to assess how similar respondents and non-respondents are to each other and to the general population.


In addition, as noted in the previous section, the purchased sample includes not only name, mailing address, and telephone contact information, but also demographic information about the households in the sample. Specifically, the purchased sample includes the following information about each sampled household: head of household name, age, and gender; number of adults in the household; number of children; income; marital status; home ownership status (own/rent); education; and ethnicity. This information is a combination of data collected by third-party sources (e.g., consumer marketing vendors) and Census block-level data. Since these data are available for everyone in the sample, it is possible to analyze differences between respondents and non-respondents using these data as well. To this end, we will evaluate differences between all respondents and all non-respondents using the demographic information that has been assigned to each household in the sample based on its mailing address, and then compare these groups to the Alaska population. While data imputed from Census block- or tract-level sources are not necessarily the actual values for an individual household, Census blocks are the smallest geographic unit used to summarize population characteristics. Population sizes within a block vary widely, from 0 persons to several hundred. Census tracts are small subdivisions of a county that contain between 1,500 and 8,000 persons and were originally delineated to be “homogenous with respect to population characteristics, economic status, and living conditions” (http://www.census.gov/geo/www/cen_tract.html).
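A minimal sketch of the respondent/non-respondent comparisons described above appears below. It assumes one row per sampled household with a response indicator and the vendor-supplied demographics; the data here are simulated stand-ins, and the column names are hypothetical placeholders rather than the actual file layout.

```python
# Sketch: compare respondents to non-respondents on vendor-supplied
# demographics. Simulated stand-in data; column names are hypothetical.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
n = 3570  # valid addresses across both strata
sample = pd.DataFrame({
    "responded": rng.integers(0, 2, n),      # 1 = returned the survey
    "age": rng.normal(48, 14, n),
    "income": rng.normal(65, 20, n),         # $1,000s (imputed)
    "own_rent": rng.choice(["own", "rent"], n),
})

resp = sample[sample["responded"] == 1]
nonresp = sample[sample["responded"] == 0]

# Continuous characteristics: two-sample t-test (unequal variances).
for col in ["age", "income"]:
    t, p = stats.ttest_ind(resp[col], nonresp[col], equal_var=False)
    print(f"{col}: t = {t:.2f}, p = {p:.3f}")

# Categorical characteristics: chi-square test on the contingency table.
table = pd.crosstab(sample["responded"], sample["own_rent"])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"own_rent: chi2 = {chi2:.2f} (df = {dof}), p = {p:.3f}")
```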


The steps above are useful for identifying systematic differences between respondents and non-respondents with respect to observable characteristics. In stated preference applications, however, information about the primary variables of interest, the ones used to estimate WTP, is usually not available for non-respondents. This precludes measuring the amount of non-response bias attributable to differences in WTP between respondents and non-respondents. To the extent that such a difference exists, WTP estimates generated from data provided only by the sample respondents will exhibit a selection bias (Edwards and Anderson 1987; Mitchell and Carson 1989). That is, in the context of stated preference surveys, the concern regarding selection bias is that individuals who respond to the survey have a different WTP (and, more generally, different preferences) than non-respondents, irrespective of observable characteristics.


Thus, two types of bias associated with non-response are important for stated preference surveys: non-response bias and selection bias. Non-response bias is identified by comparing observable characteristics of respondents to non-respondents (as described above). If differences are found and the variable of interest is not one with an observable counterpart in the non-respondent data, the next step is to determine whether the variable of interest, such as WTP, depends upon the characteristics that differ between respondents and non-respondents. If it does, weights may be applied to the sample to generate estimates that better represent the full sample (see, for example, Leeworthy et al. 2001).3 Sample selection bias may exist in the presence or absence of non-response bias, and there is generally no method for testing for it.4 Instead, a number of endogenous selection models have been developed, based on Heckman (1979), that explicitly model the selection process as one potentially correlated with the outcome of interest. Endogenous selection models are discussed further below.
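To illustrate the weighting step mentioned above, the following is a minimal post-stratification sketch. The data, cell labels, and population shares are hypothetical placeholders; the rural share shown is the national figure from footnote 1, used purely for illustration.

```python
# Sketch: post-stratification weights so the weighted sample matches known
# population shares. Data, cell labels, and shares are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 1785  # expected completed surveys
respondents = pd.DataFrame({
    "area": rng.choice(["rural", "urban"], n, p=[0.5, 0.5]),  # rural oversampled
    "wtp": rng.gamma(2.0, 25.0, n),                           # stand-in WTP ($)
})

population_share = {"rural": 0.193, "urban": 0.807}   # illustrative shares
sample_share = respondents["area"].value_counts(normalize=True)
respondents["weight"] = respondents["area"].map(
    lambda cell: population_share[cell] / sample_share[cell]
)

# Weighted estimates re-balance the oversampled rural stratum toward its
# population share.
unweighted = respondents["wtp"].mean()
weighted = np.average(respondents["wtp"], weights=respondents["weight"])
print(f"Unweighted mean WTP: {unweighted:.2f}; weighted: {weighted:.2f}")
```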


Recent research in non-market valuation has used Census data to match characteristics of a sample’s respondents and non-respondents to evaluate non-response bias and to model sample selection jointly with the estimation of willingness to pay (Lee and Cameron, 2008; Cameron et al., 1999). The approach extends the general Heckman (1979) sample selection bias correction model to the specific case of mail survey non-response bias. It uses zip-code-level Census data (or finer resolution data) as explanatory variables in the sample selection decision to explain an individual’s propensity to respond to the survey. The econometric methods for jointly estimating sample selection and stated preference question responses have been developed for several contingent valuation question formats, including open-ended WTP questions (Edwards and Anderson, 1987), referendum (dichotomous choice) WTP questions (Whitehead et al., 1993), and referendum with follow-up (double-bounded dichotomous choice) WTP questions (Yoo and Yang, 2001). An extension of this framework to the analysis of stated preference choice experiment data is straightforward and builds upon work by Terza (2002).
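To make the selection-modeling idea concrete, here is a minimal two-step Heckman-style sketch on simulated data for the simplest case noted above (a continuous, open-ended WTP measure). The covariates and parameter values are invented for illustration; the actual analysis would use likelihood-based models adapted to the choice experiment response format.

```python
# Sketch: Heckman (1979) two-step selection correction on simulated data.
# All variable names and parameter values are illustrative only.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 5000

income = rng.normal(60, 15, n)   # hypothetical covariate ($1,000s)
age = rng.normal(45, 12, n)      # hypothetical covariate (years)

# Errors in the response and WTP equations are correlated (rho = 0.5),
# which is what induces selection bias.
u = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=n)
respond = (0.02 * income + 0.01 * age - 1.5 + u[:, 0]) > 0
wtp = 10 + 0.3 * income + 5 * u[:, 1]    # observed only for respondents

# Step 1: probit model of the response decision, estimated on everyone.
Z = sm.add_constant(np.column_stack([income, age]))
probit = sm.Probit(respond.astype(float), Z).fit(disp=0)
zb = Z @ probit.params
imr = norm.pdf(zb) / norm.cdf(zb)        # inverse Mills ratio

# Step 2: WTP regression for respondents, adding the inverse Mills ratio.
# A significant coefficient on the IMR signals selection on unobservables.
X = sm.add_constant(np.column_stack([income[respond], imr[respond]]))
result = sm.OLS(wtp[respond], X).fit()
print(result.summary(xname=["const", "income", "inv_mills"]))
```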


4. Describe any tests of procedures or methods to be undertaken. Tests are encouraged as effective means to refine collections, but if ten or more test respondents are involved OMB must give prior approval.


In addition to the pilot pretest survey, three focus groups with fewer than ten members of the general public (with different questions for each group) were conducted during the survey design phase to test concepts and the presentation of elements of the survey. These focus groups were conducted in Denver, Colorado, and in Sacramento and Marin County, California. The survey instrument was then further evaluated and revised using input from one-on-one interviews conducted in Salt Lake City, Utah. Verbal protocol (talk-aloud) and self-administered interviews were conducted, each with follow-up debriefing by team members. Moreover, the survey design and implementation plan have benefited from expert review by Dr. Kristy Wallmo of the Office of Science and Technology within NMFS, as well as from reviews by environmental economists Dr. Elizabeth Pienaar (University of Florida) and Dr. Kora Dabrowska (NOAA Knauss Fellow).


5. Provide the name and telephone number of individuals consulted on the statistical aspects of the design, and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.


Several individuals were consulted on the statistical aspects of the design:


Dr. Dan Lew

Economist

National Marine Fisheries Service

(206) 526-4252


Dr. Brian Garber-Yonts

Economist

National Marine Fisheries Service

(206) 526-6301


Dr. Kristy Wallmo

Economist

National Marine Fisheries Service

(301) 713-2328


Drs. Dan Lew and Brian Garber-Yonts will be involved in the analysis of the survey data.


The contractor who will collect the data is:


Zachary Lewis, Senior Study Director

Ipsos (formerly Synovate)

7600 Leesburg Pike

East Building, Suite 110

Falls Church, VA 22043

(703) 663-7235


References:


Berenguer, J., J.A. Corraliza, and R. Martin (2005) “Rural-Urban Differences in Environmental Concern, Attitudes, and Actions.” European Journal of Psychological Assessment, 21(2): 128-138.


Bergmann, A., S. Colombo, and N. Hanley (2008) “Rural Versus Urban Preferences for Renewable Energy Developments.” Ecological Economics, 65(3): 616-625.


Bosetti, V. and Pearce, D. (2003) “A study of environmental conflict: the economic value of Grey Seals in southwest England.” Biodiversity and Conservation. 12: 2361-2392.


Cameron, Trudy A., W. Douglass Shaw, and Shannon R. Ragland (1999). “Nonresponse Bias in Mail Survey Data: Salience vs. Endogenous Survey Complexity.” Chapter 8 in Valuing Recreation and the Environment: Revealed Preference Methods in Theory and Practice, Joseph A. Herriges and Catherine L. Kling (eds.), Northampton, Massachusetts: Edward Elgar Publishing.


Cummings, Ronald G. and Laura O. Taylor (1999) “Unbiased Value Estimates for Environmental Goods: A Cheap Talk Design for the Contingent Valuation Method.” American Economic Review, 89(3): 649-665.


Dillman, D.A. (2000) Mail and Internet Surveys: The Tailored Design Method. New York: John Wiley & Sons.


Dillman, D.A., J.D. Smyth, and L.M. Christian (2009). Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method. 3rd Edition, Hoboken, New Jersey: John Wiley and Sons.


Dunlap, Riley E., Kent D. Van Liere, Angela G. Mertig, and Robert Emmet Jones (2000) “Measuring Endorsement of the New Ecological Paradigm: A Revised NEP Scale,” Journal of Social Issues, 56(3): 425-442.


Edwards, Steven F., and Glen D. Anderson (1987). “Overlooked Biases in Contingent Valuation Surveys: Some Considerations.” Land Economics, 63(2): 168-178.


Fredman, P. (1995) “The existence of existence value: a study of the economic benefits of an endangered species.” Journal of Forest Economics. 1(3): 307-328.


Freudenburg, W.R. (1991) “Rural-Urban Differences in Environmental Concern: A Closer Look.” Sociological Inquiry, 61(2): 167-198.


Groves, Robert M. (2006). “Nonresponse Rates and Nonresponse Bias in Household Surveys.” Public Opinion Quarterly, 70(5): 646-675.


Groves, Robert M., Floyd J. Fowler, Jr., Mick P. Couper, James M. Lepkowski, Eleanor Singer, and Roger Tourangeau (2004). Survey Methodology. Hoboken, New Jersey: John Wiley and Sons.


Hagen, D., Vincent, J., and Welle, P. (1992) “Benefits of preserving old-growth forests and the spotted owl.” Contemporary Policy Issues. 10: 13-25.


Heberlein, T. A., and R. Baumgartner (1978) “Factors Affecting Response Rates to Mailed Questionnaires: A Quantitative Analysis of the Published Literature.” American Sociological Review, 43(4): 447-462.


Heckman, James J. (1979). “Sample Selection Bias as a Specification Error.” Econometrica, 47(1): 153-162.


Huddart-Kennedy, E., T.M. Beckley, B.L. McFarlane, and S. Nadeau (2009) “Rural-Urban Differences in Environmental Concern in Canada.” Rural Sociology, 74(3): 309-329.


Iannacchione, Vincent G. (2011). “The Changing Role of Address-Based Sampling in Survey Research.” Public Opinion Quarterly, 75(3): 556-575.


Jakobsson, K.M. and Dragun, A.K. (2001) “The worth of a possum: valuing species with the contingent valuation method.” Environmental and Resource Economics. 19: 211-227.


Korinek, Anton, Johan A. Mistiaen, and Martin Ravallion (2007). “An Econometric Method of Correcting for Unit Nonresponse Bias in Surveys.” Journal of Econometrics, 136: 213-235.


Langford, I.H., Skourtos, M.S., Kontogianni, A., Day, R.J., Georgiou, S., and Bateman, I.J. (2001) “Use and nonuse values for conserving endangered species: the case of the Mediterranean monk seal.” Environment and Planning A. 33: 2219-2233.


Lee, Jaesung J., and Trudy A. Cameron (2008). “Popular Support for Climate Change Mitigation: Evidence from a General Population Mail Survey.” Environmental and Resource Economics, 41: 223-248.


Leeworthy, Vernon R., Peter C. Wiley, Donald B.K. English, and Warren Kriesel (2001). “Correcting Response Bias in Tourist Spending Surveys.” Annals of Tourism Research, 28(1): 83-97.


Lesser, V., Dillman, D.A., Lorenz, F.O., Carlson, J., and Brown, T.L. (1999). “The influence of financial incentives on mail questionnaire response rates.” Paper presented at the meeting of the Rural Sociological Society, Portland, OR.


Lew, D., Layton, D., and Rowe, R. (2010) “Valuing Enhancements to Endangered Species Protection under Alternative Baseline Futures: The Case of the Steller Sea Lion.” Marine Resource Economics. 25: 133-154.


Loomis, J., and White, D. (1996) “Economic Benefits of Rare and Endangered Species: Summary and Meta-Analysis.” Ecological Economics, 18: 197-206.


Mitchell, Robert Cameron, and Richard T. Carson (1989). Using Surveys to Value Public Goods: The Contingent Valuation Method. Washington, DC: Resources for the Future.


Olar, M., Adamowicz, W., Boxall, P., and West, G. (2007) “Estimation of the Economic Benefits of Marine Mammal Recovery in the St. Lawrence Estuary.” Report to the Policy and Economics Branch, Fisheries and Oceans Canada, Regional Branch Quebec.


Pate, Jennifer, and John Loomis (1997) “The Effect of Distance on Willingness to Pay Values: A Case Study of Wetlands and Salmon in California.” Ecological Economics, 20: 199-207.


Peytcheva, Emilia, and Robert M. Groves (2009). “Using Variation in Response Rates of Demographic Subgroups as Evidence of Nonresponse Bias in Survey Estimates.” Journal of Official Statistics, 25(2): 193-201.


Richardson, L., and Loomis, J. (2009) “The Total Economic Value of Threatened, Endangered and Rare Species: An Updated Meta-analysis.” Ecological Economics, 68: 1535-1548.


Rolfe, John, and Jill Windle. (2012) “Distance Decay Functions for Iconic Assets: Assessing National Values to Protect the Health of the Great Barrier Reef in Australia.” Environmental and Resource Economics, In press. DOI 10.1007/s10640-012-9565-3.


Salka, W.M. (2001) “Urban-Rural Conflict Over Environment Policy in the Western United States.” American Review of Public Administration, 31(1): 33-48.


Scott, C. (1961) “Research on Mail Surveys.” Journal of the Royal Statistical Society, Series A. 124(2): 143-205.


Singer, E. (2002) “The use of incentives to reduce nonresponse in household surveys.” In Survey Nonresponse, ed. R. Groves, D. Dillman, J. Eltinge, and R. Little, pp. 163-178. New York: John Wiley & Sons.


Terza, Joseph V. (2002) “Alcohol Abuse and Employment: A Second Look.” Journal of Applied Econometrics, 17: 393-404.


Tiller, K.H., P.M. Jakus, and W.M. Park (1997) “Household Willingness to Pay for Dropoff Recycling.” Journal of Agricultural and Resource Economics, 22(2): 310-320.


Van Goor, H., and B. Stuiver (1998). “Can Weighting Compensate for Nonresponse Bias in a Dependent Variable? An Evaluation of Weighting Methods to Correct for Substantive Bias in a Mail Survey among Dutch Municipalities.” Social Science Research, 27: 481-499.


Whitehead, John C., Peter A. Groothuis, and Glenn C. Blomquist (1993). “Testing for Non-Response and Sample Selection Bias in Contingent Valuation.” Economics Letters, 41: 215-220.


Yoo, Seung-Hoon, and Hee-Jong Yang (2001). “Application of Sample Selection Model to Double-Bounded Dichotomous Choice Contingent Valuation Studies.” Environmental and Resource Economics, 20: 147-163.


1 Random samples of general populations used in non-market valuation studies tend to be dominated by urban households, largely reflecting the much smaller share of rural households in the overall population (approximately 19.3% for the U.S. as a whole, according to the U.S. Census). Survey estimates may therefore be skewed toward the preferences of urban households, which may differ systematically from those of rural households (e.g., Bergmann et al. 2008; Freudenburg 1991; Tiller et al. 1997; Salka 2001; Berenguer et al. 2005; and Huddart-Kennedy et al. 2009).

2 According to MSG, the typical phone match is 50-62%, while name matches are between 85 and 90%.

3 Korinek et al. (2007) propose an econometric approach for correcting for unit non-response, but the approach imposes a number of strong assumptions and data requirements.

4 In a meta-analysis of 23 studies, Peytcheva and Groves (2009) found that differences between respondent and non-respondent means on demographic variables do not predict differences between respondent and non-respondent means on a study’s substantive variables. The bias associated with the substantive variables is what we refer to here as selection bias. See also van Goor and Stuiver (1998).


