

Supporting Statement A


NPS Visibility Valuation Study


OMB Control Number 1024-0255


Terms of Clearance: The earlier ICR was approved for a further focus group/pilot study of visibility improvement valuation in non-urban national parks and wilderness areas. The goal is to further refine the survey instruments and other aspects of the study design. Because this is a pilot study, the results of analyses using the data collected under this ICR are not suitable for policy purposes. When the agency submits a separate ICR to conduct the final survey valuing visibility improvements, it should include a demonstration, using the results from this focus group/pilot study ICR, that the results of the final survey are likely to be generalizable. Such a demonstration will be important for considering the practical utility of the final survey.


Response: As described in the pilot survey report, a comparison of benchmarking question responses to well-established public opinion survey results, and of respondent characteristics to Census data, indicates that survey respondents are similar to the general population. Analyses of valuation question responses using data weighted to reflect general population parameters yield willingness-to-pay estimates that are generally within 10 percent of unweighted estimates. These results suggest that the full survey results will be generalizable.



General Instructions


A completed Supporting Statement A must accompany each request for approval of a collection of information. The Supporting Statement must be prepared in the format described below, and must contain the information specified below. If an item is not applicable, provide a brief explanation. When the question “Does this ICR contain surveys, censuses, or employ statistical methods?” is checked "Yes," then a Supporting Statement B must be completed. OMB reserves the right to require the submission of additional information with respect to any request for approval.



1. Explain the circumstances that make the collection of information necessary. Identify any legal or administrative requirements that necessitate the collection.


On June 19, 2012, the Office of Management and Budget approved a pilot study of visibility improvement valuation in non-urban national parks and wilderness areas. The goal was to test and refine the survey instruments so that the final survey would have practical utility and produce generalizable results. The National Park Service (NPS) is requesting approval of this information collection request (ICR) to administer a national visibility valuation mail survey. This survey is the product of survey development and pre-testing activities conducted between 2008 and 2012; the pilot survey, implemented in two multi-state regions, is described in the attached report.

The NPS serves in an advisory capacity on regulatory measures to achieve Clean Air Act requirements (including the Regional Haze Rule, 40 CFR Part 51). Therefore, it is essential for the agency to evaluate and understand the benefits and costs associated with efforts to improve air quality where visual quality is fundamental to visitor experience (Meldrum et al., 2006).

Current evaluation of Federal and state air quality legislation or regulations, as well as regional plans or policies that impact NPS-managed areas, is based on visibility valuation information derived from a study that is nearly 25 years old (Chestnut and Rowe, 1990) and that has been criticized for limited sample coverage, among other issues (Leggett et al., 2004). It is for these reasons that the NPS is seeking current visibility valuation information that will permit accurate evaluation of programs and policies affecting visibility in NPS-managed areas.


This collection will provide information required by the following laws, regulations, policies and statutes:


  • NPS Organic Act, 16 U.S.C. §1a-1

  • The Clean Air Act (CAA), 42 U.S.C. §7475(d)(2)(B), Sections 169A, 169B, and 110(a)(2)(J)

  • Regional Haze Rule, 40 CFR Part 51


2. Indicate how, by whom, and for what purpose the information is to be used. Except for a new collection, indicate the actual use the agency has made of the information received from the current collection. Be specific. If this collection is a form or a questionnaire, every question needs to be justified.


The collection will be used to provide the NPS with information needed to evaluate the benefits of programs and policies that may improve visibility conditions in non-urban National Parks and wilderness areas. The pilot study was conducted to test the applicability and usefulness of the survey instrument, sample design, and data analysis. The pilot survey was mailed in two regions. Telephone and mail surveys were used as follow-up methods to contact non-respondents. Overall, the pilot study results indicate that the instrument functioned properly and is ready for full implementation with minor revisions:


  • The survey response rates for the Four Corners and Southeast regions were 39 and 32 percent, respectively; these rates are not high enough to rule out the possibility that survey nonresponse affects willingness-to-pay (WTP) estimates.

  • Comparisons with data from the mail and telephone follow-up surveys, and with national probability survey results, indicate that the characteristics of people who responded to the pilot survey are not fully representative of the populations from which the pilot samples were drawn. These results suggest that data analyses should consider weighted models that bring the sample into consistency with known population parameters. For the full survey we expect to rake to demographic characteristics (see the sketch following this list) and also to consider a nonresponse adjustment using follow-up survey data on, for example, respondent attitudes toward the environment and the government.

  • The fraction of respondents choosing 0, 1, 2, …, 6 programs in the valuation questions was well balanced in both pilot regions. For Four Corners, the percentages (from choosing no programs up to all six) were 22, 18, 17, 15, 10, 8, and 10 percent; in the Southeast they were 16, 17, 13, 18, 14, 9, and 13 percent. This is consistent with our experience in other high-quality choice experiment studies.

  • The estimated valuation question response equations differ between the Southeast and Four Corners regions, indicating that the final survey should be implemented separately in different regions of the country, with baseline visibility and visibility improvements calibrated to each survey region. The results from the full survey will be aggregated at the regional level, with separate benefit estimates for each of the eight regions.

  • The statistical results suggest that people are most concerned with reducing the number of lowest-visibility days and increasing the number of highest-visibility days. Prior to full implementation, the experimental design may be modified to increase variation across programs in the interior of the distribution. The visibility improvement levels used in the pilot are based on distributions consistent with a linear improvement in the 20 percent worst days, plus several perturbations in which mass (days) was moved from the endpoints to the interior of the distribution. Our intuition is that the predictive importance of the very best and worst days is not an artifact of the design, but we plan to consider potential modifications to the “off-path” improvement levels. This would affect only a handful of the levels and would not otherwise change the overall experimental design.

  • Weighting data to account for sample nonresponse decreased estimated WTP in the Four Corners region and generally increased estimates in the Southeast. These results indicate that it will be important to weight the final survey data to representative population characteristics for each implementation region.

  • Full details of the pilot study are provided in the report attached as Appendix A.
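
To make the weighting approach concrete, the following is a minimal sketch of raking (iterative proportional fitting) in Python with pandas. The demographic variables, categories, and population margins shown are hypothetical placeholders, not the study's actual weighting specification.

```python
import pandas as pd

def rake(df, margins, max_iter=100, tol=1e-8):
    """Iterative proportional fitting: repeatedly scale respondent weights
    so the weighted share of each demographic category matches its
    population target share."""
    n = len(df)
    w = pd.Series(1.0, index=df.index)  # start from equal weights
    for _ in range(max_iter):
        max_shift = 0.0
        for var, targets in margins.items():
            for level, share in targets.items():
                mask = df[var] == level
                current = w[mask].sum()
                if current > 0:
                    factor = (share * n) / current  # match target "count"
                    w.loc[mask] *= factor
                    max_shift = max(max_shift, abs(factor - 1.0))
        if max_shift < tol:
            break
    return w

# Hypothetical respondents and Census-style margins, for illustration only.
df = pd.DataFrame({
    "age":  ["18-34", "35-54", "55+", "18-34", "55+", "35-54"],
    "educ": ["hs", "college", "hs", "college", "hs", "hs"],
})
margins = {
    "age":  {"18-34": 0.30, "35-54": 0.35, "55+": 0.35},
    "educ": {"hs": 0.60, "college": 0.40},
}
print(rake(df, margins).round(3))
```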

A national survey will be administered to randomly selected households in the following eight regions to estimate region-specific valuation of visibility conditions:


Northeast: Maine, New Hampshire, Vermont, New York, Massachusetts, Rhode Island, Connecticut, New Jersey, Pennsylvania, Ohio, and Indiana

Southeast: Delaware, Maryland, Virginia, West Virginia, Kentucky, Tennessee, North Carolina, South Carolina, Georgia, Alabama, Mississippi, and Florida

Upper Midwest: Michigan, Illinois, Wisconsin, Iowa, and Minnesota

Central: Missouri, Arkansas, Louisiana, Texas, Oklahoma, and Kansas

Four Corners: Utah, Arizona, New Mexico, and Colorado

Northern Plains/Rockies: North Dakota, South Dakota, Nebraska, Montana, Wyoming, and Idaho

Sierra Nevada: California and Nevada

Northwest: Oregon and Washington


The regions were delineated based on analysis of current and expected future visibility data to establish areas with relatively homogeneous baseline and improved conditions.


The questionnaires will be identical in format and question content; however, they will differ by region in the accompanying maps/pictures and in the baseline/improved visibility conditions specified in the valuation questions. The questionnaire will contain the seven sections described below:


Section A: Background Questions


Questions 1 and 2 are intended to orient the respondent to the context of implementing and funding public programs and gauge their confidence in various institutions; they follow from the National Opinion Research Center (NORC) General Social Survey (GSS).


Section B: Provides information on haze and its effects on visibility

Question 3 engages the respondent regarding personal experiences with haze following the information and comparison photographs.


Section C: Provides background information on National Parks and Wilderness Areas


Questions 4 and 5 are intended to determine the respondent’s level of awareness regarding these areas. Focus group results suggested that individuals did not understand Wilderness Areas in particular, so the purpose of this information is to bring respondents to a common level of understanding regarding the locations where visibility improvements will occur.


Question 6 is intended to determine whether the respondent is aware of and has visited any of the parks or wilderness areas in the visibility improvement region specified on the enclosed map. This information may be relevant in explaining responses to the valuation questions.


Section D: Provides information on the sources of haze affecting the specified region


Question 7 is intended to gauge the salience of this issue to respondents.


Question 8 is intended to gauge the respondent’s level of knowledge regarding sources of haze.

Note that here and elsewhere in the survey, extensive background and technical information will be presented. This information is essential to establish the appropriate context for respondents to answer the valuation questions. Several of the questions following these information sections are intended to maintain the respondent’s attention and focus.


Section E: Provides information on improving visibility conditions, accompanying picture sets and example programs that will be evaluated in the valuation questions


Question 9 is intended to gauge the respondent’s reaction to and confidence in the information describing ways to reduce/control haze.


Question 10 is intended to gauge the respondent's appreciation of the different haze levels in the photograph sets that will be the basis for the alternative programs described in the valuation questions.

Section F: Provides information on each of the remaining attributes that comprise the valuation questions (ecosystem changes, health changes, program timing, and cost; visibility improvements are addressed in the previous section) and presents the set of valuation questions


Questions 11 to 14 are designed to encourage the respondent to reflect on the attribute information to provide appropriate context for answering the valuation questions.


Questions 15 to 20 are the set of valuation questions. Each is a “single-choice” question in which the respondent chooses between a specified visibility improvement program and the status quo. The levels of the attributes described above vary across questions according to a specified experimental design. Six replications of the question are administered to each respondent to maximize the efficiency of the information collected while limiting the potential for respondent fatigue.


The experimental design was developed by Dr. Barbara Kanninen of BK Econometrics, LLP in consultation with the study team. The design consists of four sets of six choice questions with varying attribute levels (Table 1) that will be randomly assigned to respondents (the same set of six questions will be answered by a quarter of the sampled persons in each region).


Table 1. Choice Question Attributes and Levels


Attribute | Description | Levels
Visibility Improvement | Bar chart depicting the number of days in the year associated with each photograph in the picture set | 25, 50, 75, or 100 percent progress toward natural haze conditions
Ecosystem Impacts | Particles that form haze can affect water quality, soil, and plants, and in turn the growth and variety of plants and animals | No Change or A Small Reduction
Health Impacts | Some park visitors who have respiratory problems may experience coughing or shortness of breath on days with high levels of human-caused haze | No Change or A Small Reduction
Timing | Number of years until specified program improvements are realized | 10 or 20 years
Cost | Recurring annual cost to household | $15, $35, $65, or $115
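
To illustrate the block structure and randomized assignment just described, here is a minimal Python sketch. The attribute levels are taken from Table 1, but the level combinations generated below are random placeholders; the actual combinations come from the statistically constructed experimental design, which is not reproduced here.

```python
import random

# Attribute levels from Table 1.
LEVELS = {
    "visibility_pct": [25, 50, 75, 100],  # progress toward natural conditions
    "ecosystem": ["No Change", "A Small Reduction"],
    "health": ["No Change", "A Small Reduction"],
    "timing_years": [10, 20],
    "cost_dollars": [15, 35, 65, 115],
}

rng = random.Random(2016)

def illustrative_block():
    """One block of six choice questions. Levels are drawn at random here
    for illustration; the real design fixes them statistically."""
    return [{attr: rng.choice(vals) for attr, vals in LEVELS.items()}
            for _ in range(6)]

blocks = [illustrative_block() for _ in range(4)]  # four designed blocks

def balanced_assignment(n_respondents):
    """Assign block indices 0-3 so that a quarter of sampled persons in a
    region receive each block, in random order."""
    assignments = [i % 4 for i in range(n_respondents)]
    rng.shuffle(assignments)
    return assignments

# Example: assign blocks to the 3,200 sampled households in one region.
print(balanced_assignment(3200)[:10])
```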


Section G: Contains the benchmarking and demographic questions, which will also serve as the questions used in the non-respondent follow-up survey.


Questions 21 to 25 are designed to elicit information regarding the credibility of the specified valuation scenario and respondent reactions to the valuation questions.


Questions 26 to 28 will be used to compare attitudes/characteristics of respondents to those of the general population from other large public opinion surveys.


Questions 29-34 are standard demographic questions that will be used to provide information on the representativeness of respondents with respect to the general population.


3. Describe whether, and to what extent, the collection of information involves the use of automated, electronic, mechanical, or other technological collection techniques or other forms of information technology, e.g., permitting electronic submission of responses, and the basis for the decision for adopting this means of collection. Also describe any consideration of using information technology to reduce burden and specifically how this collection meets GPEA requirements.


No automated or electronic techniques will be used. This information will be collected through mail administration of the questionnaires.

4. Describe efforts to identify duplication. Show specifically why any similar information already available cannot be used or modified for use for the purposes described in Item 2 above.


To our knowledge, no other agency is currently collecting visibility valuation data related to national parks and wilderness areas. Information currently used was collected 25 years ago and is limited in geographic scope.


5. If the collection of information impacts small businesses or other small entities, describe any methods used to minimize burden.


This information collection will only be sent to households and will not impact small businesses or other small entities.


6. Describe the consequence to Federal program or policy activities if the collection is not conducted or is conducted less frequently, as well as any technical or legal obstacles to reducing burden.


Failure to conduct this study would force the NPS to continue to rely on outdated information, potentially compromising the accuracy and reliability of policy evaluations.


7. Explain any special circumstances that would cause an information collection to be conducted in a manner:

* requiring respondents to report information to the agency more often than quarterly;

* requiring respondents to prepare a written response to a collection of information in fewer than 30 days after receipt of it;

* requiring respondents to submit more than an original and two copies of any document;

* requiring respondents to retain records, other than health, medical, government contract, grant-in-aid, or tax records, for more than three years;

* in connection with a statistical survey that is not designed to produce valid and reliable results that can be generalized to the universe of study;

* requiring the use of a statistical data classification that has not been reviewed and approved by OMB;

* that includes a pledge of confidentiality that is not supported by authority established in statute or regulation, that is not supported by disclosure and data security policies that are consistent with the pledge, or which unnecessarily impedes sharing of data with other agencies for compatible confidential use; or

* requiring respondents to submit proprietary trade secrets, or other confidential information, unless the agency can demonstrate that it has instituted procedures to protect the information's confidentiality to the extent permitted by law.


No special circumstances apply to this information collection.


8. If applicable, provide a copy and identify the date and page number of publication in the Federal Register of the agency's notice, required by 5 CFR 1320.8(d), soliciting comments on the information collection prior to submission to OMB. Summarize public comments received in response to that notice and in response to the PRA statement associated with the collection over the past three years, and describe actions taken by the agency in response to these comments. Specifically address comments received on cost and hour burden.

Describe efforts to consult with persons outside the agency to obtain their views on the availability of data, frequency of collection, the clarity of instructions and recordkeeping, disclosure, or reporting format (if any), and on the data elements to be recorded, disclosed, or reported.


Consultation with representatives of those from whom information is to be obtained or those who must compile records should occur at least once every three years — even if the collection of information activity is the same as in prior periods. There may be circumstances that may preclude consultation in a specific situation. These circumstances should be explained.


On November 13, 2013, a 60-day Federal Register notice (78 FR 68089) was published stating our intention to request OMB approval for the information collection described in this ICR, with public comment solicited for 60 days, ending on January 13, 2014. One comment was received: the commenter requested additional information on the survey, and in response we provided a summary of the study purpose and design. No other public comments were received.


In addition to our Federal Register notice, we solicited comments from survey research, non-market valuation, and visibility experts familiar with this study, as well as from outside peer reviewers. We asked them to provide feedback on the survey design and on the length and clarity of the questions. The individuals listed below provided editorial suggestions and feedback concerning the technical integrity and grammatical clarity of the instruments; their suggestions were incorporated. The reviewers found the instruments straightforward and the instructions clear and useful. We also asked them to estimate, based on their previous experience with similar collections, the time needed to complete the questionnaire. Based on their review of the final version of the survey instruments, they estimated approximately 25 minutes per respondent. We therefore estimate that it will take each respondent 25 minutes to read the instructions and complete the questionnaire.


Based on the reviewers’ comments some minor modifications to questions and survey wording were made to improve clarity. In general, however, comments were positive with respect to the chosen valuation methodology, the choice question design, and presentation of scenarios and information.


Dr. Richard Carson, Professor
Department of Economics
University of California, San Diego


Dr. Vic Adamowicz, Distinguished Professor
Department of Rural Economy
University of Alberta


Dr. Kevin Boyle, Professor
Agricultural and Applied Economics
Virginia Tech


Dr. William Schulze, Professor
Applied Economics and Management
Cornell University


John Molenar
Air Resource Specialists





9. Explain any decision to provide any payment or gift to respondents, other than remuneration of contractors or grantees.


A monetary incentive of $2 will be mailed with the survey materials; the use of modest monetary incentives has been shown to significantly increase survey response rates (Rathbun and Baumgartner, 1996; Warriner et al., 1996). A monetary incentive of $5 will be included with the non-respondent follow-up survey.


10. Describe any assurance of confidentiality provided to respondents and the basis for the assurance in statute, regulation, or agency policy.


We will not provide any assurance of confidentiality to any respondents. The anonymous nature of responses will be described in the initial contact and survey cover letters. Evaluation and statistical analysis of collected information will be kept independent of the identity of individual respondents. Any information that identifies individuals will be accessible only to the study team, except as required by law.


11. Provide additional justification for any questions of a sensitive nature, such as sexual behavior and attitudes, religious beliefs, and other matters that are commonly considered private. This justification should include the reasons why the agency considers the questions necessary, the specific uses to be made of the information, the explanation to be given to persons from whom the information is requested, and any steps to be taken to obtain their consent.


No questions of a sensitive nature will be asked.



12. Provide estimates of the hour burden of the collection of information. The statement should:

* Indicate the number of respondents, frequency of response, annual hour burden, and an explanation of how the burden was estimated. Unless directed to do so, agencies should not conduct special surveys to obtain information on which to base hour burden estimates. Consultation with a sample (fewer than 10) of potential respondents is desirable. If the hour burden on respondents is expected to vary widely because of differences in activity, size, or complexity, show the range of estimated hour burden, and explain the reasons for the variance. Generally, estimates should not include burden hours for customary and usual business practices.

* If this request for approval covers more than one form, provide separate hour burden estimates for each form and aggregate the hour burdens.

* Provide estimates of annualized cost to respondents for the hour burdens for collections of information, identifying and using appropriate wage rate categories. The cost of contracting out or paying outside parties for information collection activities should not be included here. Instead, this cost should be included under “Annual Cost to Federal Government.”


The survey will involve two components: a mail survey and a non-respondent follow-up survey. The samples for the eight multi-state regions will be drawn from the U.S. Postal Service (USPS) Computerized Delivery Sequence File. A sub-sample of non-respondents will be contacted to complete a short follow-up survey:


  • General Population Mail Survey – A total of 3,200 households per region will be contacted (3,200 × 8 = 25,600 households). Assuming a 35 percent response rate, based on the average of the pilot study results, we expect to receive 1,120 completed responses per region (1,120 × 8 = 8,960 responses).


  • Non-respondent Follow-Up Survey – We anticipate approximately 2,080 non-respondents per region. In four of the eight regions, we will re-contact non-respondents via FedEx to complete a short follow-up survey. Assuming a 17.5 percent response rate (half of the main survey rate), we expect roughly 360 responses per region (360 × 4 = 1,440 responses).


  • Based on the pilot study, we assume 1,120 completed responses from each region. Each respondent will spend about 25 minutes completing and returning the questionnaire (8,960 total respondents × 25 minutes = 3,733 hours). The non-respondent survey will take about 10 minutes to complete (1,440 respondents × 10 minutes = 240 hours). We estimate the total burden of this collection will be 3,973 hours (Table 3); the arithmetic is reproduced in the sketch below.
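
As a check on the figures above, the following minimal Python sketch reproduces the burden-hour and dollar-value arithmetic. Every constant is taken from this section (the $29.11 hourly value is introduced with Table 4 below).

```python
REGIONS = 8
HOUSEHOLDS_PER_REGION = 3200
MAIN_RESPONSE_RATE = 0.35            # average of pilot study response rates
FOLLOWUP_REGIONS = 4
FOLLOWUP_RESPONSES_PER_REGION = 360  # ~17.5% of 2,080 non-respondents
HOURLY_VALUE = 29.11                 # BLS wage incl. benefits (see Table 4)

main_responses = round(HOUSEHOLDS_PER_REGION * MAIN_RESPONSE_RATE) * REGIONS
followup_responses = FOLLOWUP_RESPONSES_PER_REGION * FOLLOWUP_REGIONS

main_hours = round(main_responses * 25 / 60)          # 25 minutes per response
followup_hours = round(followup_responses * 10 / 60)  # 10 minutes per response
total_hours = main_hours + followup_hours

print(main_responses, followup_responses)  # 8960 1440
print(main_hours, followup_hours)          # 3733 240
print(total_hours)                         # 3973
print(round(total_hours * HOURLY_VALUE))   # 115654 (Table 4)
```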




Table 3. Total Estimated Burden


Respondents | Total Number of Responses | Completion Time | Burden Hours
General Population Mail Survey | 8,960 | 25 minutes | 3,733
Nonrespondent Survey | 1,440 | 10 minutes | 240
TOTAL | 10,400 | | 3,973


We estimate the total annual dollar value of this collection to be $115,654 (Table 4). We multiplied the estimated burden hours by $29.11 (for individuals or households). This wage figure includes a benefits multiplier and is based on the Bureau of Labor Statistics National Compensation Survey: Occupational Wages in the United States and the BLS news release Employer Costs for Employee Compensation, June 2013 (USDL-10-1687, dated September 11, 2013; http://www.bls.gov/news.release/pdf/ecec.pdf).






Table 4. Estimated Dollar Value of Burden Hours


Activity | Sector | Annual Number of Responses | Total Annual Burden Hours | Dollar Value of Burden Hours (Including Benefits) | Total Dollar Value of Annual Burden Hours
Completing Survey | Private Individuals | 10,400 | 3,973 | $29.11 | $115,654



13. Provide an estimate of the total annual non-hour cost burden to respondents or recordkeepers resulting from the collection of information. (Do not include the cost of any hour burden already reflected in item 12.)

* The cost estimate should be split into two components: (a) a total capital and start-up cost component (annualized over its expected useful life) and (b) a total operation and maintenance and purchase of services component. The estimates should take into account costs associated with generating, maintaining, and disclosing or providing the information (including filing fees paid for form processing). Include descriptions of methods used to estimate major cost factors including system and technology acquisition, expected useful life of capital equipment, the discount rate(s), and the time period over which costs will be incurred. Capital and start-up costs include, among other items, preparations for collecting information such as purchasing computers and software; monitoring, sampling, drilling and testing equipment; and record storage facilities.

* If cost estimates are expected to vary widely, agencies should present ranges of cost burdens and explain the reasons for the variance. The cost of purchasing or contracting out information collection services should be a part of this cost burden estimate. In developing cost burden estimates, agencies may consult with a sample of respondents (fewer than 10), utilize the 60-day pre-OMB submission public comment process and use existing economic or regulatory impact analysis associated with the rulemaking containing the information collection, as appropriate.

* Generally, estimates should not include purchases of equipment or services, or portions thereof, made: (1) prior to October 1, 1995, (2) to achieve regulatory compliance with requirements not associated with the information collection, (3) for reasons other than to provide information or keep records for the government, or (4) as part of customary and usual business or private practices.


There are no non-hour cost burdens, recordkeeping requirements, or fees associated with this collection of information.


14. Provide estimates of annualized cost to the Federal government. Also, provide a description of the method used to estimate cost, which should include quantification of hours, operational expenses (such as equipment, overhead, printing, and support staff), and any other expense that would not have been incurred without this collection of information.


The total annual (one-time) cost to the Federal Government is $723,837. This includes the cost to the Federal Government for salaries and benefits for administering this information collection ($8,837) and operational expenses ($715,000). Table 5 below shows Federal staff and grade levels associated with this information collection. We used the Office of Personnel Management Salary Table 2011-DEN (http://www.opm.gov/flsa/oca/11tables/html/den_h.asp) to determine the hourly rate. We multiplied the hourly rate by 1.5 to account for benefits (as implied by the BLS news release USDL-10-1687). Operational expenses are listed in Table 6.


Table 5. Federal Employee Salaries and Benefits


Position | Grade/Step | Hourly Rate | Hourly Rate incl. Benefits (1.5 × hourly rate) | Estimated Time (hours) | Annual Cost
NPS ARD | 13/6 | $49.09 | $73.64 | 120 | $8,837



Table 6. Operational Expenses


Operational Expense | Estimated Cost
Contract Support (survey materials preparation, coordination, oversight of data collection, data analysis, and reporting) | $220,000
Monetary Incentives ($2 for each of 25,600 sampled households in the main survey; $5 for each of 8,320 sampled non-respondents in the follow-up survey) | $92,800
Survey Support (sample procurement, survey printing, postage, non-response survey, data entry, etc.) | $402,200
Total | $715,000


15. Explain the reasons for any program changes or adjustments in hour or cost burden.


This request is to conduct the final version of this proposed collection. Earlier versions were approved to conduct focus groups and a pilot test in two regions. The results of the pilot were used to refine the instruments that will be used in the eight-region sample described in Part B of this submission.


16. For collections of information whose results will be published, outline plans for tabulation and publication. Address any complex analytical techniques that will be used. Provide the time schedule for the entire project, including beginning and ending dates of the collection of information, completion of report, publication dates, and other actions.


The results of the main survey will be published in a report to the NPS. Data tabulation will include response frequencies and measures of central tendency, as appropriate. Responses to the valuation questions will be analyzed using standard discrete-choice modeling techniques (e.g., Louviere et al., 2000; Holmes and Adamowicz, 2003), as sketched below.
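
For concreteness, the following is a minimal sketch of the kind of binary logit analysis that fits the program-versus-status-quo choice format, applied to simulated data. The attribute names follow Table 1, but the data, coefficients, and the statsmodels-based estimation shown here are illustrative assumptions, not the study's specified model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000

# Simulated program attributes, loosely mirroring Table 1 levels;
# all values here are illustrative.
X = np.column_stack([
    rng.choice([25, 50, 75, 100], n),   # visibility improvement (% progress)
    rng.choice([0, 1], n),              # ecosystem: 1 = small reduction in impacts
    rng.choice([0, 1], n),              # health: 1 = small reduction in impacts
    rng.choice([15, 35, 65, 115], n),   # annual household cost ($)
])

# Simulate choices from an assumed utility function (for demonstration only).
true_beta = np.array([0.02, 0.4, 0.4, -0.03])
utility = X @ true_beta + rng.logistic(size=n)
y = (utility > 0).astype(int)           # 1 = chose the program over status quo

model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
b = model.params  # [const, visibility, ecosystem, health, cost]

# Marginal WTP for a one-point visibility improvement: the ratio of the
# attribute coefficient to the (negative of the) cost coefficient.
wtp_visibility = -b[1] / b[4]
print(f"Estimated WTP per percentage point of progress: ${wtp_visibility:.2f}")
```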


The estimated schedule for the full survey is as follows:


  • Final Material Preparation and Coordination: upon approval

  • Main Survey Implementation: February 1 to April 30, 2016

  • Data Analysis and Reporting: May 1 to July 30, 2016


17. If seeking approval to not display the expiration date for OMB approval of the information collection, explain the reasons that display would be inappropriate.


The OMB control number and expiration date will be displayed on each survey associated with this collection.


18. Explain each exception to the topics of the certification statement identified in "Certification for Paperwork Reduction Act Submissions."


There are no exceptions to the certification statement.




REFERENCES



Chestnut, L. G., and R. D. Rowe. 1990. Preservation Values for Visibility Protection at the National Parks. Prepared for the U.S. Environmental Protection Agency, Office of Air Quality Planning and Standards, and the National Park Service, Air Quality Management Division.


Holmes, T., and W. Adamowicz. 2003. "Attribute-Based Stated Preference Methods." Chapter 6 in A Primer on Nonmarket Valuation, edited by P. Champ, K. Boyle, and T. Brown. Dordrecht: Kluwer.


Leggett, C., K. Boyle, R. Carson, and R. Unsworth. 2004. Valuing Visibility in National Parks: An Overview of the Challenges. Final report prepared for the NPS Air Resources Division. July.


Louviere, J., D. Hensher, and J. Swait. 2000. Stated Choice Methods. Cambridge: Cambridge University Press.


Meldrum, B., S. Hollenhorst, L. Le, and M. Manni. 2006. Clean Air in the National Parks: A Report on Visitor Perceptions and Values. NPS Social Science Program. Draft, March.


Rathbun, P. R., and R. M. Baumgartner. 1996. "Prepaid Monetary Incentives and Mail Survey Response Rates." Paper presented at the 1996 Joint Statistical Meetings, Chicago, Illinois. June.


Warriner, K., J. Goyder, H. Gjertsen, P. Hohner, and K. McSpurren. 1996. "Charities, No; Lotteries, No; Cash, Yes: Main Effects and Interactions in a Canadian Incentives Experiment." Public Opinion Quarterly 60(4): 542-562.



