NASS Review & Forest Service Response
Alan Watson Response to Review by Carolyn Swan, Mathematical Statistician, USDA/NASS, Statistics Division/Methods Branch
June 21, 2007
NASS
This is a two-phase study whose initial (“front-end”) phase is an interview of park visitors for address gathering purposes. The second phase consists of a voluntary mail-out survey in two versions, one for overnight visitors to the park, the other for day visitors, based on the earlier address gathering. A major purpose of the study is to analyze trends in park use, demographics of visitors, and their attitudes toward wilderness areas, following earlier surveys in other national parks (1969, 1971).
Forest Service Response: This information collection is at a National Forest Wilderness in Minnesota, not a park. The reviewer did, however, describe the basic methodology correctly: an on-site voluntary contact at access points, covering both day users and overnight users, with a follow-up voluntary mailback questionnaire sent out within two weeks of contact. Many of the questions included replicate those asked of visitors to this same area in two previous studies, in 1967 and 1991. Many questions have also been asked of visitors to other places, and some questions relate to changes in management policies at this place since the most recent study of visitors there.
NASS
Some design issues warrant further discussion in the OMB package. The sample design is based on temporal clusters: sampling of heavy-use days in the park, about one fourth of the days in the peak season, on which visitor addresses are collected at 17 primary sampling points (departure or arrival points for excursions). The sampling days are bisected into morning and afternoon sessions and include half-day sampling sessions by Forest Service employees at cooperating outfitter concessions in surrounding local communities. The temporally defined clusters could incur a risk of bias from periodicity. One example: if most of the address collection turns out to be done in the morning, the interviewers may not be giving adequate coverage to visitors from more distant urban areas, who arrive later than those from communities closer to the park. Periodic phenomena could be analyzed and accounted for by stratifying half-day clusters and organizing the sampling accordingly.
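One way to probe for the periodicity described above is to compare the visitor-origin mix between morning and afternoon sessions before deciding how to stratify. The minimal Python sketch below illustrates such a check; the origin categories, the counts, and the 0.05 threshold are invented for illustration and are not study data.

    from scipy.stats import chi2_contingency

    # Hypothetical contact counts by half-day session and visitor origin;
    # the numbers are invented for this sketch, not study data.
    #            local  regional  distant-urban
    observed = [
        [120, 60, 20],   # morning sessions
        [70, 65, 55],    # afternoon sessions
    ]

    chi2, p_value, dof, expected = chi2_contingency(observed)
    print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("Origin mix differs by session; stratify half-day clusters.")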
Forest Service Response: The proposed sampling design for the current study is informed by the sample taken in 1991. It also builds on that example to develop a sampling design more representative of the current population (including adding day visitors, since changing day-use patterns are believed to be an important shift in use there). The reviewer was incorrect in believing that "heavy-use days" were targeted for the sample. In fact, the plan is a random sample through the heaviest-use season at the heaviest-use access points (accounting for 70% of all use), which defines the sampling plan for one half of each sample day; the other half day then provides sampling opportunity at lesser-used as well as heavily used access points.
Subsequent to this review and additional on-site discussions, this paragraph was expanded to explain the logic of random selection of morning and afternoon sampling blocks, avoiding the potential bias discussed above:
For each interviewer, 19 random days were chosen, and then the day before or the day after was alternately added to form sampling blocks of at least two days. Each of those days was then randomly assigned to one of the primary sampling locations, with the distribution of sampling across entry points adjusted by level of use. Specific site sampling plans have been developed to guide the interviewers on each day of the sampling season. For each day, a morning or afternoon sampling unit was randomly selected (7:30 – 11:30 am at on-site locations, 7:00 – 11:00 am at permit distribution centers; 2:00 – 6:00 pm at on-site locations, 1:00 – 5:00 pm at permit distribution centers), and then an afternoon location and time period were assigned to avoid bias toward selection based upon prevalent times of departure and arrival. In most cases, entry is limited to one or two specific launch points or trailheads, and that is where on-site sampling will occur. However, a few of the points have numerous entry locations that may differ by type of use. These have been identified and randomly chosen for on-site sampling.
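As an illustration of this randomization logic, the short Python sketch below generates such a schedule. The season length, entry-point names, and use weights are placeholder assumptions, not the study's actual configuration.

    import random

    SEASON_DAYS = list(range(1, 121))   # assumed 120-day peak season
    LOCATIONS = {                       # entry point -> assumed share of use
        "entry_point_A": 0.40,
        "entry_point_B": 0.35,
        "permit_center_C": 0.25,
    }

    def build_schedule(n_random_days=19, seed=None):
        """Pick random days, alternately pair each with the day before or
        after to form blocks of at least two days, then assign each day a
        use-weighted location and a random morning/afternoon session."""
        rng = random.Random(seed)
        schedule = []
        anchors = sorted(rng.sample(SEASON_DAYS, n_random_days))
        for i, day in enumerate(anchors):
            partner = day - 1 if i % 2 == 0 else day + 1
            if partner < SEASON_DAYS[0] or partner > SEASON_DAYS[-1]:
                partner = 2 * day - partner   # flip to the other side at an edge
            for d in sorted({day, partner}):
                location = rng.choices(list(LOCATIONS),
                                       weights=list(LOCATIONS.values()))[0]
                session = rng.choice(["morning", "afternoon"])
                schedule.append((d, location, session))
        return schedule

    for day, location, session in build_schedule(seed=1)[:4]:
        print(f"day {day:3d}: {session:9s} at {location}")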
NASS
In the write-up, there is an emphasis on achieving quotas to account for nonresponse, but how current are the assumed nonresponse rates? It is generally accepted that nonresponse to surveys is an increasing phenomenon. It should also be noted that achieved sample sizes are, in themselves, no guarantee of a data product's statistical reliability or utility. When the interviewers themselves carry the responsibility for listing (or sampling), there is a real risk of coverage or sampling bias unless interviewers are given comprehensive and very specific instructions on how to complete their statistical task, and unless they receive adequate supervision. Coverage may be problematic at the 17 sampling points and at the commercial establishments selected for address collection. If, as asserted in Part B, Section 2, the interviewers need only complete "an average of 2.2 visitor contacts" per sampling day to reach their quota, they might be tempted to select only those cases that they perceive as the easiest, injecting coverage and sampling bias into the survey. Further discussion of the interviewing methodology would be helpful. It appears that the subsequent address sampling is one-to-one.
Forest Service Response: We appreciate the reviewer's concern about the quota. When we indicated in the sampling plan that "This will require each interviewer to make an average of 2.2 visitor contacts per day to reach the target of 666 total intercepts," we meant only that this is the minimum anticipated number of contacts per day needed to reach the target contact numbers. It was not intended to establish a quota, but rather a gauge of how well our sampling plan is working. We anticipate some days with very high numbers of contacts and some days with perhaps none, depending upon weather, season, and other factors. If numbers are higher than anticipated, we will sub-sample the total pool of contacts to obtain our sample; if numbers appear to be low, sampling time may be intensified. We will ensure close supervision of the field data collectors by a senior scientist, with frequent on-site evaluations of the sampling process and its success.
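As a sketch of how this gauge might be monitored in practice: the 666-intercept target comes from the sampling plan, while the total interviewer-days and running counts below are assumed for illustration.

    import random

    TARGET_INTERCEPTS = 666
    TOTAL_INTERVIEWER_DAYS = 300   # assumed; implies roughly 2.2 contacts/day

    def required_daily_rate(contacts_so_far, days_elapsed):
        """Contacts per interviewer-day still needed to reach the target."""
        days_left = TOTAL_INTERVIEWER_DAYS - days_elapsed
        if days_left <= 0:
            return 0.0
        return max(TARGET_INTERCEPTS - contacts_so_far, 0) / days_left

    # Mid-season check: 250 contacts after 150 interviewer-days.
    print(f"needed rate: {required_daily_rate(250, 150):.2f} contacts/day")

    # If the season ends with more contacts than needed, randomly
    # sub-sample the address pool down to the target, as described above.
    pool = [f"visitor_{i}" for i in range(800)]   # assumed oversized pool
    subsample = random.sample(pool, TARGET_INTERCEPTS)
    print(f"sub-sampled {len(subsample)} of {len(pool)} contacts")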
We do acknowledge that, as the reviewer indicated, willingness to respond may be moving to a lower level across society, even though past response rates for similar surveys have been high: the Boundary Waters Canoe Area Wilderness (74%), Shining Rock Wilderness (75%), Desolation Wilderness (83%), and Gates of the Arctic National Park and Preserve (95%). At this time, within the study of visitors to public lands for recreation, we have not seen this decline.
Don A. Dillman, of Washington State University, published Mail and Internet Surveys: The Tailored Design Method in 2000, which documents the appropriate ways to assure high response rates in mail-back surveys in social research. Dillman's methods have been used in many dispersed-recreation visitor studies and have produced consistently high response rates. Dillman provides guidelines for writing initial and subsequent cover letters that justify the information collection effort and appeal for response by stressing the importance of each sampled individual responding on behalf of the larger population represented. Following this approach, there would typically be an initial mailing of information, a postcard reminder, and two follow-up mailings of the questionnaire with an appropriate cover letter.
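A minimal sketch of this contact sequence follows; the specific day offsets are assumptions chosen for illustration, since the study's actual mailing calendar is not specified here.

    from datetime import date, timedelta

    # Dillman-style contact sequence: initial mailing, postcard reminder,
    # and two follow-up questionnaire mailings. Offsets (in days) are
    # assumed for illustration.
    CONTACT_SEQUENCE = [
        ("initial questionnaire and cover letter", 0),
        ("postcard reminder", 7),
        ("first follow-up questionnaire", 21),
        ("second follow-up questionnaire", 42),
    ]

    def mailing_schedule(first_mailing):
        """Return (description, date) pairs for one sampled visitor."""
        return [(label, first_mailing + timedelta(days=offset))
                for label, offset in CONTACT_SEQUENCE]

    for label, when in mailing_schedule(date(2007, 7, 15)):
        print(f"{when:%Y-%m-%d}: {label}")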
Whether or not a minimum response rate of 70% is obtained using these methods, on-site responses for respondents and non-respondents will be compared. Enough basic information is being collected from all contacted visitors to help us determine whether respondents and non-respondents differ to a significant degree on basic demographic factors and area visitation patterns.
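The comparison could run along these lines; the group-size values below are invented placeholders standing in for the basic information recorded at the time of contact.

    from scipy import stats

    # Hypothetical group sizes recorded on-site at the time of contact;
    # invented values, not study data.
    respondents = [2, 4, 3, 2, 5, 2, 3, 4, 2, 6]
    nonrespondents = [2, 3, 2, 2, 4, 3, 2]

    t_stat, p_value = stats.ttest_ind(respondents, nonrespondents,
                                      equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
    if p_value < 0.05:
        print("Groups differ on this factor; weight or adjust for nonresponse.")
    else:
        print("No detectable difference on this factor.")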
NASS
In both survey questionnaires, a volley of demographic questions occurs on the final page: questions on occupation, employment status, household income by category, number of persons sharing in the household income, number of paid vacation days and respondents' use of vacation days, sex, race, and ethnicity. For voluntary respondents, this accumulation of personal questions may be an excessive burden; it could also induce nonresponse bias.
Forest Service Response: Subsequent to this review, we removed a small set of demographic questions to reduce burden, but most of these questions were retained either because they were asked of visitors in the two previous studies at this place or because recent literature suggests that better understanding of these answers may help us anticipate future trends in visitation patterns there. The Dillman approach suggests placing these potentially sensitive questions at the end so as not to influence overall response. All responses are voluntary. We hope respondents will be committed enough to complete the entire survey or, if the demographic information is not completed, at least provide answers to the other questions.