MDPE SS 5-30-13 Part B_REVISED_10-21-13

Pilot Project Assessing Economic Benefits of Marine Debris Removal

OMB: 0648-0681



SUPPORTING STATEMENT


PRELIMINARY CASE STUDY ASSESSING ECONOMIC BENEFITS OF MARINE DEBRIS REDUCTION


OMB CONTROL NO. 0648-xxxx



B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS


1. Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g. establishments, State and local governmental units, households, or persons) in the universe and the corresponding sample are to be provided in tabular form. The tabulation must also include expected response rates for the collection as a whole. If the collection has been conducted before, provide the actual response rate achieved.


The potential respondent universe consists of residents of Orange County, California who are 18 years old or older and living in non-institutional households that are listed in the United States Postal Service’s Computerized Delivery Sequence File (CDSF). In 2010, there were an estimated 2,273,573 adults living in Orange County, California.1


For the primary survey, a simple random sample of 4,000 non-institutional residential addresses will be drawn from the CDSF. Within each selected household, a single adult will be randomly selected to complete the survey using the next birthday method (Oldendick et al. 1988). The next birthday method involves including a statement in the cover letter that the survey should be completed by the adult living in the household whose birthday will be the next to occur after a particular date (in this case, September 1st). The anticipated response rate for the survey is 30 percent using the “RR3” response rate formula defined by the American Association for Public Opinion Research (AAPOR 2011), resulting in approximately 1,200 completed surveys.2 Link et al. (2008) achieved response rates of approximately 20 to 37 percent in an address-based mail survey using the CDSF.3


For the non-respondent follow-up survey, a simple random sample of 600 addresses will be drawn from the primary survey non-respondents. The anticipated response rate for the non-respondent follow-up survey is 20 percent (AAPOR RR3), resulting in approximately 120 completed surveys.


Table 3: Population and Sample Details: Primary Survey


Number of persons in universe:

2,273,573 adults

Number of persons in sample:

4,000 adults

Anticipated response rate:

30 percent (AAPOR RR3)

Anticipated number of completed surveys:

1,200 surveys



Table 4: Population and Sample Details: Non-Respondent Follow-Up Survey


Number of persons in universe:

6,412 adults4

Number of persons in sample:

600 adults

Anticipated response rate:

20 percent (AAPOR RR3)

Anticipated number of completed surveys:

120 surveys


2. Describe the procedures for the collection, including: the statistical methodology for stratification and sample selection; the estimation procedure; the degree of accuracy needed for the purpose described in the justification; any unusual problems requiring specialized sampling procedures; and any use of periodic (less frequent than annual) data collection cycles to reduce burden.


Sampling

Households will be selected using simple random sampling (no stratification) from the U.S. Postal Service’s CDSF. Within each selected household, a single adult will be randomly selected to complete the survey using the next birthday method (Oldendick et al. 1988).

As discussed in Part A, the primary research goal is to quantify the relationship between marine debris and beach visitation choices. Within the framework of this goal and contingent on finding an impact, a number of secondary research questions may be explored, including:


  • What specific types of marine debris have the greatest impact on beach choices (e.g., plastic, metal, glass, etc.)?

  • Does the impact of marine debris on beach choices vary in a systematic way across respondents (e.g., are visitors with children more sensitive to marine debris levels)?

  • Does the impact of marine debris on beach choices depend on the location of the debris on the beach (e.g., within the wrack line, on the foreshore, or on the backshore)?

  • Is there a threshold above which marine debris seems to influence beach choices and below which it does not?

  • What is the relationship between respondents’ perceptions of marine debris and actual marine debris levels?



The primary research goal will be accomplished by estimating the magnitude of the RUM model coefficient associated with marine debris. As no RUM studies in the literature have addressed this particular issue, a meaningful power analysis cannot be conducted: there are no available empirical data that provide information about the potential variance of the coefficient of interest. The sample size of 1,200 was therefore selected using professional judgment after reviewing sample sizes from recently published beach visitation RUM models. Parsons et al. (2009) recruited 1,012 respondents for a repeat-contact panel of beach visitors, with 601 participating in all waves of the survey; Parsons, Massey, and Tomasi (2000) used 565 responses from a general population mail survey of Delaware households to estimate a RUM model of beach visitation; and Hanemann et al. (2004) used data from 1,182 respondents (823 of whom reported taking beach trips) to estimate a RUM model of beach visitation in southern California.

The sample size was selected with the goal of answering the primary research question only. As the additional research questions are secondary, and as they are contingent upon finding an impact of marine debris on beach choice, they were not considered in selecting the sample size for the study.


Estimation and Modeling

Data on survey respondents’ trips to local beaches will be combined with data on travel costs and beach characteristics to estimate the parameters of a random utility maximization (RUM) model. Data on characteristics of local beaches will be gathered outside of the survey effort and include: actual marine debris measurements, beach width, beach length, beach amenities (including the presence/absence of volleyball nets, fire pits, piers, bike path/boardwalk, food concessions, and playgrounds), type of neighborhood, parking cost, presence/absence of cobbles, and beach raking. The model will be used to estimate the benefits to Orange County residents associated with reductions in marine debris at local beaches.

Following Haab and McConnell (2002), the utility an individual receives from visiting a particular site (call it site j) is a function of the full opportunity cost of visiting the site (c_j), the non-debris-related attributes of the site (x_j), and the amount of marine debris at the site (d_j). The utility of visiting site j can then be written as:

V(y − c_j, x_j, d_j)

where y is the individual’s income.

The value of a reduction in debris from the current level (d_0) to a reduced level (d_1) is the amount of income (WTP) the individual would be willing to give up in the improved state of the world such that he or she is indifferent between a world with less debris but less income and a world with no improvement in debris but current income. Because we cannot observe the beach choice after a hypothetical change in quality, the appropriate welfare measure is the amount of income (WTP) that equates the expected maximum utility (Emax) across all sites in the two states of the world:

Emax V(y − c_j, x_j, d_0) = Emax V(y − c_j − WTP, x_j, d_1)
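For a conditional logit specification, the expected maximum utility takes the familiar log-sum (inclusive value) form, and the WTP that solves the equality above has a closed form: the difference in log-sums divided by the marginal utility of income. The sketch below illustrates the calculation with hypothetical coefficients and site data; none of these numbers are estimates from this study.

```python
import numpy as np

# Illustrative conditional logit WTP calculation. All coefficient values,
# travel costs, and debris levels below are hypothetical.
beta_cost = -0.05    # marginal disutility of travel cost (per dollar)
beta_debris = -0.30  # marginal disutility of debris (per item per m^2)

cost = np.array([10.0, 25.0, 40.0])   # travel cost to each of three sites
debris0 = np.array([2.0, 1.0, 3.0])   # current debris levels (d_0)
debris1 = np.array([1.0, 0.5, 1.5])   # reduced debris levels (d_1)

def logsum(cost, debris):
    """Expected maximum utility (log-sum) across sites in a conditional logit."""
    v = beta_cost * cost + beta_debris * debris
    return np.log(np.sum(np.exp(v)))

# WTP per choice occasion equates expected max utility across the two states;
# -beta_cost is the marginal utility of income.
wtp = (logsum(cost, debris1) - logsum(cost, debris0)) / (-beta_cost)
```

Because debris enters utility negatively, the log-sum rises when debris falls, so the computed WTP for the improvement is positive.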

As the desire to recreate at debris-free beaches may vary across individuals, models will also be estimated that allow for preference heterogeneity. This can be accomplished by interacting individual characteristics (e.g., age, gender, number of children) with the debris variable, or by estimating mixed logit (McFadden and Train, 2000) or latent class models (Boxall and Adamowicz, 2002).

There are hundreds of beach access points in the Orange County area. In order to make the research manageable, the study will focus on modeling trips to sandy public beaches that have clear public access, lifeguards, restrooms, shower facilities, and dedicated parking areas. Thirty-one such beaches were identified during a site reconnaissance in February 2013, ranging from Zuma Beach in the north (within Los Angeles County) to San Onofre Beach in the south (within San Diego County).

Marine Debris Measurements

Marine debris data will be obtained through detailed measurements at each site conducted using methods similar to those specified in NOAA’s Marine Debris Shoreline Survey Field Guide (Opfer et al. 2012) for standing stock studies. This field guide was developed specifically to provide standardized methods for estimating the distribution and concentration of marine debris at beach sites. These sampling methods have previously been used at numerous sites along the coastlines of the United States by many different groups engaged in marine debris monitoring and assessment efforts (MD-MAP.net; www.marinedebris.noaa.gov/projects/monitoring.html). Two assessments will be completed at each beach: one in mid-July and one in mid-August.

Briefly, during each assessment, field personnel will count and categorize all observed macro-debris (debris larger than 2.5 cm on the longest dimension) along four randomly selected 5m wide transects within a 100m segment of beach. Macro-debris represents the size of debris that can be most easily observed by beach visitors, where 2.5 cm represents the length measurement distinguishing between micro- and macro-debris (Arthur et al., 2009; Opfer et al., 2012). Transects will be selected randomly in order to eliminate any bias in selecting sampling locations at individual beach sites. Four random transects have been previously established to be the optimal number of replicates needed to provide a representative sample of marine debris concentrations across the entire area of an individual beach site (Versar, 2012; Opfer et al. 2012). Each transect will span the beach from water’s edge to approximately 20m past mean high tide. The endpoints of the 100m segment and the four transects will be marked with flags and the coordinates will be recorded using a GPS device. Any unusual or unidentifiable debris will be photographed. The counts will provide a measure of marine debris concentration (number of items per square meter) that can be broken down by type of debris (e.g., separate concentration measures for plastic, metal, glass, rubber, processed lumber, and cloth/fabric).
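As a simple illustration of how the transect counts translate into the concentration measure described above, the sketch below computes items per square meter. The 5 m transect width follows the field guide; the counts and transect lengths are hypothetical.

```python
# Hypothetical example: macro-debris counts from four random 5 m wide
# transects, with transect lengths measured from the water's edge to
# roughly 20 m past mean high tide. All numbers are invented.
transect_width_m = 5.0
counts = [12, 7, 15, 9]               # macro-debris items per transect
lengths_m = [60.0, 55.0, 62.0, 58.0]  # transect lengths in meters

# Total sampled area and overall concentration (items per square meter).
area_m2 = sum(transect_width_m * length for length in lengths_m)
concentration = sum(counts) / area_m2
```

The same calculation can be repeated per debris category (plastic, metal, glass, etc.) by keeping separate count lists for each category.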



Recall Period for Beach Trips

As discussed in Part A, survey respondents will be asked to provide information about beach day trips taken during a three-month period, from June to August, 2013. While many past surveys designed to collect data on outdoor recreation activities have used recall periods as long as one year, recent government-sponsored survey efforts have typically used recall periods that range between two and four months in length. For example, the National Marine Fisheries Service’s ongoing Marine Recreational Information Program uses a two-month recall period for gathering data on saltwater angling trips, while the U.S. Department of the Interior’s National Survey of Fishing, Hunting, and Wildlife-Associated Recreation uses a four-month recall period. The Department of the Interior moved from an annual recall to a four-month recall period in 1991 because “Research found that the amount of activity and expenditures reported in 12-month recall surveys was overestimated in comparison with that reported using shorter recall periods” (Department of the Interior, 2013, pg. vii). As discussed below, results from the focus group discussions indicate that a three-month recall period will be reasonable for the current study.

Minor amounts of recall bias would be unlikely to have any impact on NOAA’s ability to use the study to evaluate the potential usefulness of RUM travel cost models in assessing benefits from reductions in marine debris, as any bias would likely impact reported trips to all sites in a similar manner (e.g., a small increase in reported trips to all sites). A proportional increase in trips to all sites would not impact the significance of the RUM parameter associated with marine debris.

3. Describe the methods used to maximize response rates and to deal with nonresponse. The accuracy and reliability of the information collected must be shown to be adequate for the intended uses. For collections based on sampling, a special justification must be provided if they will not yield "reliable" data that can be generalized to the universe studied.


A number of measures will be implemented to maximize the response rate, including:

  • An advance letter will be sent to all sampled households. The letter will notify the household that a survey is on the way, describe the purpose of the survey, and encourage the individual to respond.

  • All letters will include the NOAA logo and will be signed by the director of the NOAA Marine Debris Program.

  • The survey will be sent via first-class mail and will include a self-addressed, stamped envelope to facilitate response.

  • A $2 cash response incentive will be mailed with the initial survey.

  • One week after sending the initial survey, a thank you/reminder postcard will be mailed to all sampled households thanking them for responding and encouraging them to complete the survey if they have not already done so.

  • Three weeks after sending the initial survey, a replacement survey will be mailed to all sampled households who have not yet responded. The replacement survey will include a self-addressed, stamped envelope to facilitate response.

  • All survey materials were carefully crafted to provide a pleasing appearance that encourages response. Questions were kept short and the total number of questions was minimized, given the research needs. An attractive, color map of local beaches will be included with each survey instrument.

A non-respondent follow-up mail survey will be conducted with a sub-sample of 600 individuals who do not respond after the first two survey mailings in order to investigate the potential for non-response bias. The size of this sub-sample was selected based on professional judgment, given the anticipated project budget. To maximize the likelihood of response, the non-respondent follow-up survey will be extremely short (five questions) and it will be sent via two-day Federal Express. The questions in the non-respondent follow-up survey are a subset of the questions from the main survey, selected to characterize non-respondents with respect to number of beach trips and attitude towards marine debris. The potential for nonresponse bias will be assessed by comparing responses from this follow-up survey with the responses from the primary survey.

In addition, the potential for nonresponse bias will be assessed by comparing the demographic characteristics of the survey respondents with the demographic characteristics of the population of Orange County residents using data from the U.S. Census Bureau’s Current Population Survey (CPS). Specifically, comparisons will be conducted for age, ethnicity, race, gender, and income. If substantial differences are observed, sampling weights will be developed through sequential post-stratification (e.g., raking), so that the weighted demographic totals for the survey data align with corresponding totals for the CPS (Battaglia et al. 2004).
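Sequential post-stratification (raking) iteratively scales the weights so that the weighted sample margins match population totals on each demographic dimension in turn. A minimal sketch for two margins appears below; the cell counts and margin targets are purely illustrative, not CPS values.

```python
import numpy as np

# Hypothetical 2x2 sample cross-classification (e.g., gender x age group).
sample = np.array([[40.0, 60.0],
                   [80.0, 20.0]])

# Hypothetical population margin totals, scaled to the sample size.
row_targets = np.array([0.5, 0.5]) * sample.sum()
col_targets = np.array([0.4, 0.6]) * sample.sum()

# Iterative proportional fitting: alternately match row and column margins.
w = np.ones_like(sample)
for _ in range(50):
    weighted = sample * w
    w *= (row_targets / weighted.sum(axis=1))[:, None]  # match row margins
    weighted = sample * w
    w *= (col_targets / weighted.sum(axis=0))[None, :]  # match column margins

weighted = sample * w
```

After convergence, the weighted totals reproduce both sets of margins simultaneously, which is the property Battaglia et al. (2004) exploit for sample balancing.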


4. Describe any tests of procedures or methods to be undertaken. Tests are encouraged as effective means to refine collections, but if ten or more test respondents are involved OMB must give prior approval.


Two focus group discussions (total of nine participants across both groups) were held on February 6th and 7th, 2013 in Irvine, California to discuss topics related to beach visitation and to pretest a draft version of the survey instrument. The focus groups were moderated by Dr. Christopher Leggett. Participants were recruited by Adler Weiner Research, Inc., using demographic specifications that ensured diversity with respect to age, gender, and frequency of beach visitation. In addition, all participants were 18 years old or older and had lived in Orange County for at least four years. Each participant was provided an $85 honorarium for participation.

The focus groups were designed to gather information about respondents’ experiences at local beaches, including specific beaches frequented, desirable and undesirable beach characteristics, and potential concerns about marine debris on beaches. Throughout each discussion, the moderator followed a written outline, or script, that provided a structure for the discussion. The following topics were covered by the moderator, in sequence, during each focus group:

  1. Introduction

  2. Visits to local beaches

  3. Undesirable local beaches

  4. Characteristics of local beaches

  5. Garbage and manmade debris on local beaches

  6. Beaches known for garbage and manmade debris

The focus groups used a funneling technique: the initial topics were broad, and the topics gradually became narrower and more focused on marine debris on beaches. For each topic, the moderator began by having the participants provide written responses to a set of targeted questions in the discussion guide. He then asked each participant to describe his/her response to the questions verbally, using additional probes for clarification where necessary. This approach stimulated discussion while at the same time preserving participants’ initial reaction to each question. After completing the discussion guide, the participants were asked to review selected questions from a draft version of the survey instrument.

The following conclusions were drawn from the Irvine focus groups:

  • Many participants indicated that they were concerned about the presence of marine debris on beaches in the local area, and some participants indicated that they would refrain from visiting specific beaches due to concerns about marine debris. Participants indicated that they had observed marine debris on the sand and in the water at numerous local beaches.

  • Participants singled out cigarette butts as a particular concern. Other items observed on local beaches that were of concern to participants included plastic bottles; plastic bags; glass bottles; cans; six-pack rings; bottle caps; plastic cups; abandoned clothes, coolers, towels, and chairs; condoms; animal feces; and fast-food bags.

  • Two beach characteristics that were very important to many respondents were parking (availability and cost) and crowding, particularly during the summer months. In addition, several participants mentioned three beach activities that were not included in the draft survey questions: beach volleyball, partying (bonfires), and biking.

  • Participants generally seemed confident that they would be able to recall the number and destination of day trips taken to local beaches over a three-month period.

  • The list of potential beach destinations appeared to be comprehensive; participants did not mention local beaches that had been excluded from the list.

  • The wording of several survey questions was changed in response to comments from focus group participants, including a question about number of persons per vehicle on a typical beach trip (some participants said they walked or biked to the beach), a question about familiarity with specifically named beaches in the local area (participants indicated that it would be better to ask if they had “heard of” the beaches), and a yes/no question about whether or not observing marine debris on local beaches bothered the respondent (this was changed to a 1-5 scale, and the wording was changed to ask how “concerned” the respondent would be to observe marine debris on local beaches).

5. Provide the name and telephone number of individuals consulted on the statistical aspects of the design, and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.


The following individuals were consulted on the statistical aspects of the design:


  1. Dr. Timothy Haab, Department Chair and Professor, Department of Agricultural, Environmental, and Development Economics, The Ohio State University.

  2. Dr. Christopher Leggett, Consultant to Industrial Economics, Incorporated, Cambridge, Massachusetts.

  3. Dr. Adam Domanski, National Oceanic and Atmospheric Administration.


Industrial Economics, Incorporated will collect and analyze the information for the Marine Debris Program.






References


American Association for Public Opinion Research (AAPOR). Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys. The American Association for Public Opinion Research. 2011.

Arthur, C., J. Baker and H. Bamford (eds). 2009. Proceedings of the International Research Workshop on the Occurrence, Effects, and Fate of Microplastic Marine Debris. Sept 9-11, 2008. NOAA Technical Memorandum NOS-OR&R-30.

Battaglia M, Izrael D, Hoaglin DC and Frankel M. 2004. “Tips and Tricks for Raking Survey Data (Aka Sample Balancing).” American Association of Public Opinion Research.

Boxall, P.C. and W.L. Adamowicz. 2002. “Understanding Heterogeneous Preferences In Random Utility Models: A Latent Class Approach.”  Environmental and Resource Economics 23(4):421-446.

Brick, J.M., Williams, D., and Montaquila, J.M. 2011. “Address-Based Sampling For Subpopulation Surveys.” Public Opinion Quarterly, 75, 409-428.

Bureau of Labor Statistics (BLS) 2013. U.S. Department of Labor, Occupational Employment Statistics. Accessed May 13, 2013, www.bls.gov/oes/.

Dillman, D.A. 2000. Mail and Internet surveys: The Tailored Design Method. New York, NY: John Wiley & Sons.

Dykema, Jennifer, Kristen Cyffka, John Stevenson, Kelly Elver, and Karen Jaques. 2012. "SHOW Me the Money? Effects of Preincentives, Differential Incentives, and Envelope Messaging in an ABS Mail Survey." Paper presented at the annual meeting of the American Association for Public Opinion Research, May, Orlando, FL. 

Groves, R. M., M. P. Couper, S. Presser, E. Singer, R. Tourangeau, G. P. Acosta and L. Nelson. 2006. "Experiments in Producing Nonresponse Bias." Public Opinion Quarterly 70(5): 720-736.

Groves, Robert M., Eleanor Singer, and Amy Corning. 2000. "Leverage-Saliency Theory of Survey Participation: Description and an Illustration." Public Opinion Quarterly 64:299-308.

Groves, R. M., S. Presser and S. Dipko. 2004. "The Role of Topic Interest in Survey Participation Decisions." Public Opinion Quarterly 68(1): 2-31.

Iannacchione, V.G. 2011. The Changing Role of Address-Based Sampling in Survey Research. Public Opinion Quarterly, 75 (3):556-575.

Oldendick, Robert W., George F. Bishop, Susan B. Sorenson, and Alfred J. Tuchfarber. 1988. “A Comparison of the Kish and Last Birthday Methods of Respondent Selection in Telephone Surveys.” Journal of Official Statistics, 4: 307-318.

Opfer, S., C. Arthur, and S. Lippiatt. 2012. NOAA Marine Debris Shoreline Survey Field Guide. January.

Haab, T.C. and K.E. McConnell. 2002. Valuing Environmental and Natural Resources: The Econometrics of Non-Market Valuation. Cheltenham, UK: Edward Elgar.

Han, Daifeng, Jill M. Montaquila, Douglas Williams, and J. Michael Brick. 2010. “An Examination of the Bias Effects with a Two-Phase Address-Based Sample.” Paper Presented at Section on Survey Research Methods, Joint Statistical Meetings.

Hanemann, W. M., L. Pendleton, C. Mohn, J. Hilger, K. Kurisawa, D. Layton, C. Bush, and F. Vasquez. 2004. “Using Revealed Preference Models to Estimate the Effect of Coastal Water Quality on Beach Choice in Southern California. A Report from the Southern California Beach Valuation Project to the National Oceanic and Atmospheric Administration.”

Lew, D. K., and D. M. Larson. 2005. Valuing Recreation and Amenities at San Diego County Beaches. Coastal Management 33: 71-86.

Link, M., M. Battaglia, M. Frankel, L. Osborn, and A. Mokdad. 2008. “A Comparison of Address-Based Sampling (ABS) versus Random Digit Dialing (RDD) for General Population Surveys.” Public Opinion Quarterly 72: 6-27.

McFadden, D. and K. Train. 2000. “Mixed MNL Models for Discrete Response.” Journal of Applied Econometrics, 15(5), pp. 447-470.

Medway, Rebecca L. and Jenna Fulton. 2012. “When More Gets You Less: A Meta-Analysis of the Effect of Concurrent Web Options on Mail Survey Response Rates.” Public Opinion Quarterly 76(4): 733-746.

Messer, Benjamin L. and Don A. Dillman. 2011. “Surveying the general public over the Internet using addressed-based sampling and mail contact procedures.” Public Opinion Quarterly, 75(3):429-57.

Moore, S.L., D. Gregorio, M. Carreon, S.B. Weisberg, and M.K. Leecaster. 2001. “Composition and Distribution of Beach Debris in Orange County, California” Marine Pollution Bulletin, 42(3): 241-245.

Parsons, George. 2003. “The Travel Cost Model.” Chapter 9 in A Primer on Non-Market Valuation, Patricia Champ, Kevin Boyle, and Thomas Brown, eds. Kluwer Academic Publishers. Dordrecht.

Parsons, George R., Paul M. Jakus, and Theodore D. Tomasi. 1999. “A Comparison of Welfare Estimates from Four Models for Linking Seasonal Recreational Trips to Multinomial Logit Models of Site Choice.” J. Environmental Economics and Management, 38(2):143-157.

Parsons, G., Kang, A., Leggett, C., and K. Boyle, 2009. “Valuing Beach Closures on the Padre Island National Seashore.” Marine Resource Economics 24: 213-235.

Parsons, G. and D. Massey. 2003. “A Random Utility Model of Beach Recreation.” In: The New Economics of Outdoor Recreation, N. Hanley, W.D. Shaw, and R.E. Wright, eds. Cheltenham, UK: Edward Elgar.

Parsons, G., Massey, D., and Tomasi, T. 2000. “Familiar and Favorite Sites in a Random Utility Model of Beach Recreation.” Marine Resource Economics, 14, 299-315.

Rathbun, P.R. and R.M. Baumgartner. 1996. “Prepaid Monetary Incentives and Mail Survey Response Rates.” Paper presented at the 1996 Joint Statistical Meetings. Chicago, Illinois. June.

U.S. Department of the Interior, U.S. Fish and Wildlife Service, and U.S. Department of Commerce, U.S. Census Bureau. 2013. “2011 National Survey of Fishing, Hunting, and Wildlife-Associated Recreation.”

Versar, Inc. 2012. “Pilot Marine Debris Monitoring and Assessment Project.” Prepared for:

NOAA Marine Debris Division Office of Response & Restoration. April.

Warriner, K., J. Goyder, H. Gjertsen, P. Hohner, and K. McSpurren. 1996. "Charities, No; Lotteries, No; Cash, Yes: Main Effects and Interactions in a Canadian Incentives Experiment." Public Opinion Quarterly 60 (4): 542-562.

1 U.S. Census Bureau, 2010 Census.

2 AAPOR defines RR3 as I/(I+P+R+NC+O+e(UH+UO)) where: I is the number of completed surveys, P is the number of partially completed surveys, R is the number of refusals (e.g., household returns a blank survey or a refusal note), NC is the number of non-contacts (e.g., household provides notification that the respondent is temporarily unavailable), O is the number of other households (e.g., household provides indication of a language difficulty, literacy issue, or illness), e is the estimated proportion of households of unknown eligibility that are eligible, UH is the number of households with unknown occupancy status (e.g., no returned survey or other communication from the household), and UO is the number of other households where eligibility is unknown (e.g., mailing returned as “refused by addressee”) (AAPOR 2011).
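The RR3 formula can be illustrated with a direct calculation. The disposition counts below are hypothetical, chosen only to show how the formula behaves; they are not projections for this study.

```python
# Hypothetical case dispositions for a mail survey of 4,000 addresses.
I  = 1200   # completed surveys
P  = 30     # partially completed surveys
R  = 80     # refusals (blank survey or refusal note returned)
NC = 15     # non-contacts (respondent temporarily unavailable)
O  = 25     # other eligible households (language, literacy, illness)
UH = 2600   # unknown occupancy status (no response of any kind)
UO = 50     # other unknown eligibility (e.g., "refused by addressee")
e  = 0.9    # estimated proportion of unknown-eligibility cases eligible

# AAPOR RR3: completes over all known-eligible cases plus the estimated
# eligible fraction of unknown-eligibility cases.
rr3 = I / (I + P + R + NC + O + e * (UH + UO))
```

With these inputs RR3 is roughly 0.32, near the 30 percent rate anticipated for the primary survey; note that lowering the eligibility estimate e mechanically raises the computed rate.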

3 Ideally, the anticipated response rate would be based on mail surveys that sample from the CDSF and focus on beach visitation. However, recent general population surveys that focus on beach visitation (including the surveys described in Parsons et al. 2009, Hanemann et al. 2004, and Lew and Larson 2005) recruited participants via telephone or did not sample from the CDSF (e.g., Parsons, Massey, and Tomasi 2000).

4 Assumes 2,800 non-respondents and 2.29 adults per non-respondent household.

