Steller Sea Lion Protection Economic Survey
Supporting Statement Part B
OMB Control No. 0648-0554
B. Collections of Information Employing Statistical Methods
1. Describe (including a numerical estimate) the potential respondent universe and any
sampling or other respondent selection method to be used. Data on the number of entities
(e.g. establishments, State and local governmental units, households, or persons) in the
universe and the corresponding sample are to be provided in tabular form. The tabulation
must also include expected response rates for the collection as a whole. If the collection has
been conducted before, provide the actual response rate achieved.
The potential respondent universe is all U.S. households (approximately 106 million according to
the 2000 Census). A stratified random sample of approximately 800 Alaska households and
4,200 non-Alaska U.S. households will be used. Alaskan households are oversampled to ensure
the inclusion of their preferences, since they are potentially more directly affected by actions to
protect Steller sea lions and are likely to be more familiar with them. The non-Alaska U.S. household sample is larger, reflecting the sample size needed to generate reliable national estimates.
For the collection as a whole, a response rate of approximately 57% is anticipated. This is the
response rate achieved for the pilot pretest implementation treatment employing a $10 monetary
incentive (see Appendix).
2. Describe the procedures for the collection, including: the statistical methodology for
stratification and sample selection; the estimation procedure; the degree of accuracy
needed for the purpose described in the justification; any unusual problems requiring
specialized sampling procedures; and any use of periodic (less frequent than annual) data
collection cycles to reduce burden.
The survey will use a stratified random sample of approximately 5,000 households purchased
from a professional sampling vendor (see footnote 4). The population is stratified into Alaska
and non-Alaska households with the Alaska household stratum consisting of approximately 800
households and the non-Alaska stratum consisting of approximately 4,200 households. The
advance letter and cover letter accompanying the initial mailing will solicit the participation of a
male or female head of household to complete the survey.
For each stratum, a sample of households will be purchased. Up to 15% of the purchased sample
may be invalid, leading to valid samples of 680 and 3,570, respectively, for the two strata.
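For reference, a minimal sketch of the sample-size arithmetic described above (the purchased sample sizes, the 15% invalid-address allowance, and the anticipated 57% response rate come from this document; the Python layout and variable names are illustrative only):

```python
# Sample-size arithmetic for the two strata (figures from this section).
purchased = {"Alaska": 800, "non-Alaska": 4200}
invalid_rate = 0.15   # up to 15% of purchased addresses may be invalid
response_rate = 0.57  # anticipated response rate from the pilot pretest

total_completes = 0
for stratum, n in purchased.items():
    valid = n * (1 - invalid_rate)        # 680 and 3,570 valid households
    completes = valid * response_rate
    total_completes += completes
    print(f"{stratum}: valid = {valid:.0f}, expected completes = {completes:.0f}")

# Total expected completes (~2,420) exceed the conservative planning figure of
# 2,100 used below, which yields 2,100 x 3 = 6,300 choice observations.
print(f"total expected completes = {total_completes:.0f}")
```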
Survey responses will be used to statistically estimate a valuation model using a random utility-based multinomial choice model to assess the statistical significance of the set of attributes as contributors to the respondent's preferences for protecting Steller sea lions. Given the expected response rates, the sample sizes described above should be sufficiently large for this modeling and for data analysis generally. Assuming a conservative estimate of 2,100 completed surveys, with three stated preference choice question responses per respondent (i.e., responses to Q10, Q11, and Q12), there will be 6,300 (non-independent) observations. This provides a very large number of observations with which to estimate the valuation function. To our knowledge, this sample size exceeds most, if not all, sample sizes for peer-reviewed public good valuation studies. Summary statistics (means, medians, standard deviations, minimums, and maximums) will be calculated for responses to survey questions as well.
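This document does not specify the estimation software or the exact functional form of the model. The following is only a minimal sketch of how a random utility-based multinomial (conditional logit) choice model of this kind can be estimated by maximum likelihood; the data are simulated placeholders, and the attribute count and coefficients are assumptions rather than the survey's actual design:

```python
"""Minimal conditional logit (random utility) sketch for the stated preference
choice questions. Data are simulated placeholders; the real analysis would use
the survey's choice-set attributes and account for the non-independence of the
three responses per respondent."""
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_obs, n_alts, n_attrs = 6300, 3, 4            # ~2,100 respondents x 3 questions
X = rng.normal(size=(n_obs, n_alts, n_attrs))  # placeholder attribute levels
beta_true = np.array([-1.0, 0.5, 0.3, 0.2])    # placeholder preference weights
utility = X @ beta_true + rng.gumbel(size=(n_obs, n_alts))
choice = utility.argmax(axis=1)                # simulated chosen alternatives

def neg_log_likelihood(beta):
    v = X @ beta                               # systematic utilities
    v = v - v.max(axis=1, keepdims=True)       # for numerical stability
    log_prob = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n_obs), choice].sum()

result = minimize(neg_log_likelihood, np.zeros(n_attrs), method="BFGS")
print("estimated attribute coefficients:", result.x.round(2))
# In models with a cost attribute, willingness to pay for another attribute is
# commonly computed as -beta_attribute / beta_cost.
```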

3. Describe the methods used to maximize response rates and to deal with nonresponse.
The accuracy and reliability of the information collected must be shown to be adequate for
the intended uses. For collections based on sampling, a special justification must be
provided if they will not yield “reliable” data that can be generalized to the universe
studied.
Numerous steps have been, and will be, taken to maximize response rates and deal with nonresponse behavior. These efforts are described below.
Maximizing Response Rates
The first step in achieving a high response rate is to develop an appealing questionnaire that is
easy for respondents to complete. Significant effort has been spent on developing a good survey
instrument. Experts on economic survey design and stated preference techniques were hired to
assist in the design and testing of the survey. The current survey instrument has also benefited
from input on earlier versions from several focus groups and one-on-one interviews (verbal
protocols and cognitive interviews), and peer review by experts in survey design and non-market
valuation, and by scientists who study Steller sea lions, other marine mammals, and fisheries. In
the focus groups and interviews, the information presented was tested to ensure key concepts and
terms were understood, figures and graphics (color and black and white) were tested for proper
comprehension and appearance, and key economic and design issues were evaluated. In
addition, cognitive interviews were used to ensure the survey instrument was not too technical,
used words people could understand, and was a comfortable length and easy to complete. The
result is a high-quality and professional-looking survey instrument.
The implementation techniques that will be employed are consistent with methods that maximize
response rates. Implementation of the mail survey will follow the Dillman Tailored Design
Method (2000), which consists of multiple contacts. The specific set of contacts that will be
employed is the following:
i. An advance letter notifying respondents a few days prior to the questionnaire
arriving. This will be the first contact for households in the sample.
ii. An initial mailing sent a few days after the advance letter. Each mailing will contain
a personalized cover letter, questionnaire, and a pre-addressed stamped return
envelope. The initial mailing will also include a $10 incentive.
iii. A postcard follow-up reminder to be mailed 5-7 days following the initial mailing.
iv. A follow-up phone call to encourage response. Individuals needing an additional
copy of the survey will be sent one with another cover letter and return envelope.
v. A second full mailing will be sent using USPS certified mail to all individuals who
have not returned the survey to date, including individuals whom we were unable to
contact in the follow-up phone call.
Non-respondents
To better understand why non-respondents did not return the survey and to determine if there are
systematic differences between respondents and non-respondents, those contacted in follow-up
phone call(s) and identified as non-respondents will be asked a few questions to gauge their
reasons for not responding to the mail survey. These include select socioeconomic and
demographic classification questions and a few attitudinal questions. Information collected from
non-respondents will aid in improving the survey implementation and in correcting for non-response bias.
Specific steps that will be employed to assess the presence and extent of non-response bias are
the following:
• As a first step, demographic characteristics collected from respondents and non-respondents will be used in two comparisons: a comparison of respondents to non-respondents and a comparison of respondents to U.S. Census data. For respondents, age, gender, income, and education information will be available from the completed survey. The same information will be available from non-respondents who participate in the telephone interview. A comparison of the demographic differences may indicate how respondents and non-respondents differ with respect to these characteristics. We will also compare demographic information for survey respondents with U.S. Census data to evaluate sample representativeness on observable data (a rough sketch of this comparison, along with the response-wave check described below, follows this list).

• A parallel type of comparison will be made with respect to answers to the attitudinal questions asked of respondents and non-respondents. One of these questions is the General Social Survey question (Q2 in the mail surveys and Q1 in the telephone interview). The distribution of responses to this question will be evaluated for the two groups and compared with the GSS results for the most recent occurrence of this question. Q1 in the mail surveys and Q2 in the telephone interview are the same and thus allow another means to compare respondents and non-respondents. The demographic and attitudinal question comparisons will enable us to assess how similar respondents and non-respondents are to each other and to the general population (except for the non-GSS attitudinal questions).

• Another step that will be taken to evaluate the potential for non-response bias will be the analysis of estimated values from the preference function as a function of time/sample size. This approach essentially seeks to assess whether the estimated economic values stabilize as additional sample is added over time. In some surveys, estimated economic values (i.e., willingness to pay) decrease for respondents who return the survey later, perhaps reflecting that early responders may be more interested in the topic and thus have higher values. By analyzing how WTP changes across response waves, we can evaluate the potential presence and significance of this effect on population-wide estimates.
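A rough sketch of two of the checks described in this list: the respondent versus non-respondent demographic comparison and the examination of willingness to pay by response wave. The data layout, column names, and use of pandas/scipy are assumptions for illustration only; the actual variable coding and software have not been fixed in this document.

```python
"""Illustrative non-response checks: (1) compare a categorical demographic
characteristic between respondents and non-respondents, and (2) summarize
respondent-level WTP estimates by response wave. Column names are assumptions."""
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def compare_categorical(respondents: pd.DataFrame, nonrespondents: pd.DataFrame,
                        column: str) -> float:
    """Chi-square test of independence between group membership and `column`."""
    values = pd.concat([respondents[column], nonrespondents[column]]).to_numpy()
    groups = np.array(["respondent"] * len(respondents)
                      + ["non-respondent"] * len(nonrespondents))
    table = pd.crosstab(values, groups)
    _, p_value, _, _ = chi2_contingency(table)
    return p_value  # a small p-value suggests the two groups differ on this item

def wtp_by_wave(responses: pd.DataFrame) -> pd.DataFrame:
    """Mean, spread, and count of estimated WTP by response wave (1 = earliest).

    Declining means in later waves would suggest early responders hold higher
    values, a pattern relevant to population-wide estimates."""
    return responses.groupby("wave")["wtp"].agg(["mean", "std", "count"])
```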

After taking the steps above, we will evaluate the potential magnitude of non-response bias on the valuation results. If it appears large, we will evaluate additional actions, such as employing the approach of Cameron, Shaw, and Ragland (1999) (or newer approaches along these lines) to explicitly account for sample selection in the model estimates. Their approach extends the general Heckman (1979) sample selection bias correction model to the specific case of mail survey non-response bias. The approach involves using zip-code-level Census data as explanatory variables in the sample selection decision to explain an individual's propensity to respond to the survey.
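The Cameron, Shaw, and Ragland (1999) estimator itself is not reproduced here. The following is only a rough sketch of the underlying Heckman-style two-step logic, using assumed variable names and a simple linear outcome equation in place of the choice model; embedding the correction in the multinomial choice model would be more involved.

```python
"""Heckman-style two-step selection sketch: (1) probit of response propensity on
zip-code-level Census covariates for all sampled households, (2) the inverse
Mills ratio for respondents added to the outcome (valuation) equation.
Variable names and the linear outcome equation are illustrative assumptions."""
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def two_step_selection(z_all, responded, x_resp, y_resp):
    # Step 1: response propensity for every sampled household.
    # z_all: zip-code-level covariates (n_sampled x k); responded: 0/1 indicator.
    Z = sm.add_constant(z_all)
    probit = sm.Probit(responded, Z).fit(disp=0)
    xb = Z @ probit.params
    mills = norm.pdf(xb) / norm.cdf(xb)        # inverse Mills ratio

    # Step 2: outcome equation for respondents only, with the Mills ratio added
    # to absorb the selection effect. x_resp/y_resp cover respondents only.
    resp_mask = responded.astype(bool)
    X = sm.add_constant(np.column_stack([x_resp, mills[resp_mask]]))
    return sm.OLS(y_resp, X).fit()
```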


4. Describe any tests of procedures or methods to be undertaken. Tests are encouraged as
effective means to refine collections, but if ten or more test respondents are involved OMB
must give prior approval.
Several focus groups with fewer than ten members of the general public were conducted during
the survey design phase (prior to the formal pretest) to test concepts and presentation of elements
of the survey. These focus groups were conducted in Seattle and Denver. The survey instrument
was then further evaluated and revised using input from one-on-one interviews conducted in
Anchorage, Denver, Sacramento, and Rockville (Maryland). Both verbal protocol (talk-aloud)
and self-administered interviews were conducted, each with follow-up debriefing by team
members. Moreover, the survey design and implementation plan have benefited from reviews
conducted by academics with expertise in economic survey design and implementation.
More recently, a focus group was conducted in Seattle to further evaluate the changes made
to the survey instrument since the formal pretest.
5. Provide the name and telephone number of individuals consulted on the statistical
aspects of the design, and the name of the agency unit, contractor(s), grantee(s), or other
person(s) who will actually collect and/or analyze the information for the agency.
Several individuals were consulted on the statistical aspects of the design:
Dr. David Layton
Associate Professor of Public Affairs
University of Washington
(206) 324-1885
Dr. Robert Rowe
Chairman of the Board
Stratus Consulting, Inc.
(303) 381-8000
Dr. Roger Tourangeau
Director, Joint Program in Survey Methodology
University of Maryland and
Senior Research Scientist, Survey Research Center
University of Michigan
Dr. Dan Lew
Economist
NOAA Fisheries
(206) 526-4252
Dr. David Layton, Dr. Robert Rowe, Dr. William Breffle (Stratus Consulting) and Dr. Dan Lew
will be involved in the analysis of the data.
PA Consulting conducted the pilot pretest implementation under OMB Control No. 0648-0511,
but no contractor has been selected for the full implementation yet.

References:
Bosetti, V. and Pearce, D. (2003) “A study of environmental conflict: the economic value of Grey Seals in
southwest England.” Biodiversity and Conservation. 12: 2361-2392.
Cameron, Trudy A., W. Douglass Shaw, and Shannon R. Ragland (1999). “Nonresponse Bias in Mail Survey Data:
Salience vs. Endogenous Survey Complexity.” Chapter 8 in Valuing Recreation and the Environment: Revealed
Preference Methods in Theory and Practice, Joseph A. Herriges and Catherine L. Kling (eds.), Northampton,
Massachusetts: Edward Elgar Publishing.
Dillman, D.A. (2000) Mail and Internet Surveys: The Tailored Design Method. New York: John Wiley & Sons.
Fredman, P. (1995) “The existence of existence value: a study of the economic benefits of an endangered species.”
Journal of Forest Economics. 1(3): 307-328.
Giraud, K., Turcin, B., Loomis, J., and Cooper, J. (2002). “Economic benefits of the protection program for the
Steller sea lion.” Marine Policy. 26(6): 451-458.
Hagen, D., Vincent, J., and Welle, P. (1992) “Benefits of preserving old-growth forests and the spotted owl.”
Contemporary Policy Issues. 10: 13-25.
Heckman, James J. (1979). “Sample Selection Bias as a Specification Error.” Econometrica, 47(1): 153-162.
Jakobsson, K.M. and Dragun, A.K. (2001) “The worth of a possum: valuing species with the contingent valuation
method.” Environmental and Resource Economics. 19: 211-227.
Langford, I.H., Skourtos, M.S., Kontogianni, A., Day, R.J., Georgiou, S., and Bateman, I.J. (2001) “Use and nonuse
values for conserving endangered species: the case of the Mediterranean monk seal.” Environment and Planning A.
33: 2219-2233.
Lesser, V., Dillman, D.A., Lorenz, F.O., Carlson, J., and Brown, T.L. (1999). “The influence of financial
incentives on mail questionnaire response rates.” Paper presented at the meeting of the Rural Sociological Society,
Portland, OR.
Singer, E. (2002). “The use of incentives to reduce nonresponse in household surveys.” In Survey Nonresponse, ed.
R. Groves, D. Dillman, J. Eltinge, R. Little, pp. 163-78. New York: John Wiley & Sons.
Turcin, B. (2001) “Dichotomous choice contingent valuation willingness to pay estimates across geographically
nested samples: case study of Alaskan Steller sea lion.” Master’s thesis, University of Alaska, Fairbanks.
Turcin, B. and Giraud, K. (2003) “Motivations in willingness to pay estimates across geographically nested
samples: case study of Alaskan Steller sea lion.” Working paper, Department of Resource Economics and
Development, University of New Hampshire.


