
Programmatic Review for NPS-Sponsored Public Surveys (1024-0224)

Response to Terms of Clearance

In response to the terms of clearance posed by OMB in 2008, we asked Dr. Don A. Dillman to conduct a comprehensive review of the Visitor Services Project questionnaires. Based on his review, Dr. Dillman raised two concerns: double response requests, and cognitive testing of long-list questions. We asked the researchers at the University of Idaho – Visitor Services Project (VSP) to provide the following feedback in response to his concerns.


Concern 1: Some questions used in the NPS Pool of Known Questions use a double-column format

The concern was the significantly lower response rate for the second column. To examine this issue, the VSP compared the number of answers in the first and second columns for questions that use a double-column format. The following is an example of one question from the Pool of Known Questions:


a) Prior to this visit, how did you and your personal group obtain information about Black Canyon of the Gunnison National Park (NP)? Please mark (•) all that apply in column (a).


b) If you were to visit Black Canyon of the Gunnison NP in the future, how would you and your personal group prefer to obtain information about the park? Please mark (•) all that apply in column (b).


| a) Prior to this visit | Source of information | b) Prior to future visits |
|---|---|---|
| O | Did not obtain information prior to visit | O |
| O | Black Canyon of the Gunnison NP website: www.nps.gov/blca | O |
| O | Other websites | O |
| O | Friends/relatives/word of mouth | O |
| O | Inquiry to park via phone, mail, or email | O |
| O | Local businesses (hotels, motels, restaurants, etc.) | O |
| O | Maps/brochures | O |


The table below shows the number of answers for survey questions with a double-column format and the difference in the number of answers between the first and second columns. The results confirm that, in many cases, the second column received fewer responses than the first. However, the difference was largest in questions that asked about future preferences, whereas in questions that contrasted intended with actual activity, the second column (which asked about the actual activity) sometimes received more responses. From this observation, it is plausible that the lower response rate in the second column reflects the content rather than the design of the question.
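
As a minimal sketch of how this column comparison can be computed, assuming a hypothetical CSV layout in which each item is coded 1 when marked and left blank otherwise (the file and column names here are invented for illustration):

```python
# Sketch of the first- vs. second-column comparison (hypothetical layout).
import pandas as pd

# One row per returned questionnaire; item columns hold 1 if marked, NaN if not.
df = pd.read_csv("vsp_responses.csv")  # hypothetical file name

col_a = [c for c in df.columns if c.startswith("info_past_")]    # column (a) items
col_b = [c for c in df.columns if c.startswith("info_future_")]  # column (b) items

# Count respondents who marked at least one item in each column.
answered_a = df[col_a].notna().any(axis=1).sum()
answered_b = df[col_b].notna().any(axis=1).sum()
print(answered_a, answered_b, answered_a - answered_b)
```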




Table 1: Difference in the number of responses between the first and second column


| Project/year | Question | # returned | First column | Second column | Difference |
|---|---|---|---|---|---|
| YOSE (2008) | Sources of information used/future preference | 563 (60%) | 484 | 428 | 56 |
| BLRI (2008) | Sources of information used/future preference | 826 (75%) | 979 | 704 | 275 |
| HOBE (2008) | Sources of information used/future preference | 231 (60%) | 188 | 158 | 30 |
| HOBE (2008) | Activity expected/activity conducted | | 192 | 207 | -15 |
| CARL (2008) | Sources of information used/future preference | 259 (77%) | 210 | 153 | 57 |
| FIIS (2008) | Activities past visit/this visit | 636 (56%) | 621 | 560 | 61 |
| HEHO (2008) | Sources of information used/future preference | 287 (72%) | 244 | 216 | 28 |
| HEHO (2008) | Activities expected/activity conducted | | 251 | 245 | 6 |
| CIRO (2008) | Sources of information used/future preference | 256 (73%) | 234 | 190 | 44 |
| CIRO (2008) | Rock climbing activity this visit/future visits | | 149 | 220 | -71 |
| CIRO (2008) | Activities this visit/future visit | | 245 | 218 | 27 |
| CARE (2008) | Sources of information used/future preference | 480 (78%) | 403 | 372 | 31 |
| CARE (2008) | Activity expected/activity conducted | | 456 | 418 | 38 |
| GRSM (Fall 2008) | Sources of information used/future preference | 781 (68%) | 657 | 554 | 103 |
| GRSM (Fall 2008) | Activity expected/activity conducted | | 738 | 713 | 25 |
| GRSM (Summer 2008) | Sources of information used/future preference | 748 (65%) | 645 | 554 | 91 |
| GRSM (Summer 2008) | Activity expected/activity conducted | | 708 | 665 | 43 |
| FOLS (2009) | Sources of information used/future preference | 261 (77%) | 186 | 188 | -2 |
| FOLS (2009) | Topic learning this visit/future visit | | 223 | 190 | 33 |
| FOLS (2009) | Activities this visit/future visit | | 260 | 202 | 58 |
| HOME (2009) | Sources of information used/future preference | 254 (75%) | 206 | 160 | 46 |
| HOME (2009) | Activities this visit/future visit | | 230 | 207 | 23 |
| MIMI (2009) | Sources of information used/future preference | 249 (73%) | 184 | 193 | -9 |
| MIMI (2009) | Activity expected/activity conducted | | 223 | 226 | -3 |
| MIMI (2009) | Topic learning this visit/future visit | | 230 | 174 | 56 |
| WORI (2009) | Sources of information used/future preference | 243 (72%) | 192 | 171 | 21 |
| KLSE (2009) | Sources of information used/future preference | 220 (65%) | 142 | 153 | -11 |
| KLSE (2009) | Topic learning this visit/future visit | | 206 | 101 | 105 |
| YOSE (2009) | Sources of information used/future preference | 689 (57%) | 595 | 543 | 52 |
| SLBE (2009) | Sources of information used/future preference | 696 (60%) | 623 | 478 | 145 |
| SLBE (2009) | Activity expected/activity conducted | | 669 | 635 | 34 |
| JAGA (2009) | Sources of information used/future preference | 241 (71%) | 187 | 162 | 25 |
| BOST (2009) | Sources of information/future preference | 603 (58%) | 452 | 426 | 26 |
| BRCA (2009) | Activity expected/activity conducted | 626 (73%) | 601 | 600 | 1 |
| INDU (2009) | Sources of information used/future preference | 499 (55%) | 410 | 353 | 57 |
| MAVA (2009) | Sources of information used/future preference | 267 (79%) | 213 | 181 | 32 |



Concern 2: Cognitive testing of long-list questions

During the previous review of the Pool of Known Questions, one concern was that respondents may tend to skip items toward the end of questions that contain a long list. In response to this concern, the Visitor Services Project conducted a pilot study to determine whether respondents tend to skip items at the end of a long list.


Two versions of the same questionnaire were used. The first version was numbered with an "odd" sequence (1, 3, 5, and so on) and listed items in alphabetical order. The second version was numbered with an "even" sequence (2, 4, 6, and so on). In the even version, questions with a long list of items (10 items or more), such as questions about activities, use of park services and facilities, and learning methods for a future visit, presented the items in the reverse of the order used in the odd version. For example, in a 14-item question, the first item in the odd questionnaire is the 14th item in the corresponding even questionnaire.


The questionnaires were distributed to visitors at the park in alternating fashion, so that the first recipient received an odd questionnaire, the second an even questionnaire, and so on. This was an attempt to minimize respondent effects. The hypothesis tested was that the position of an item in a long-list question is independent of the response rate for that item. To test this hypothesis, we used the Wilcoxon signed-rank test for two related samples to compare the response rate of the same item in the odd versus the even questionnaire. If item order affects response rates, then the response rate should systematically decline as an item's position increases. For example, in a 10-item question, the 1st through 5th items in the odd questionnaire should have higher response rates than the same items in the even questionnaire; conversely, the 6th through 10th items in the odd questionnaire should have lower response rates than the same items in the even questionnaire.
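
A minimal sketch of this paired comparison in Python, using SciPy's signed-rank test; the item response rates below are invented for illustration and do not come from the VSP data:

```python
# Sketch of the order-effect test (illustrative data only).
from scipy.stats import wilcoxon

# Hypothetical response rates for the same 14 items, matched by content:
# odd[i] and even[i] are the rates for item i in the alphabetical (odd)
# and reverse-order (even) versions of the questionnaire.
odd = [0.62, 0.55, 0.48, 0.51, 0.40, 0.37, 0.33, 0.30,
       0.28, 0.25, 0.22, 0.20, 0.18, 0.15]
even = [0.60, 0.57, 0.50, 0.49, 0.42, 0.36, 0.35, 0.29,
        0.27, 0.26, 0.23, 0.19, 0.17, 0.16]

# H0: item position does not affect response rate, i.e. the paired
# differences are symmetric about zero.
stat, p = wilcoxon(odd, even)
print(f"W = {stat}, p = {p:.3f}")  # p < 0.05 would indicate an order effect
```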



RESULTS


Figure 1 below shows a hypothetical scenario in which the response rate to each item is perfectly correlated with item order. In this scenario, response rates depend on item position, with items toward the end of the long list receiving lower response rates than those at the beginning. Figure 2 shows a typical scenario from a VSP survey questionnaire. The response rate to each item does not follow any particular pattern but rather is content-dependent. For example, the first item in the odd questionnaire received a lower response rate than the corresponding last item in the even questionnaire.


Table 2 shows the results of the Wilcoxon signed-rank test comparing the response rate for each item in the odd questionnaire with the same item in the even (reverse-order) questionnaire. Of all tests, only two cases showed a significant difference due to item order (p < 0.05). This is empirical evidence that, in VSP questionnaires, the response rate for question items is more likely content-dependent than order-dependent.



Figure 1: Scenario when response rate to each item has a perfect correlation with item order.




Figure 2: Visitor awareness from George Washington Carver NM data





Table 2: Wilcoxon signed-rank test results


| Project | Year | Question | Number of items | p-value |
|---|---|---|---|---|
| Everglades NP | April 2008 | Activity - self-guided | 14 | 0.041 |
| Everglades NP | April 2008 | Activity - guided | 14 | 0.187 |
| Everglades NP | February 2008 | Activity - self-guided | 14 | 0.65 |
| Everglades NP | February 2008 | Activity - guided | 14 | 0.071 |
| Everglades NP | April 2008 | Visitor services and facilities used | 14 | 0.124 |
| Everglades NP | February 2008 | Visitor services and facilities used | 14 | 0.022 |
| Everglades NP | April 2008 | Methods of learning about park | 12 | 0.06 |
| Everglades NP | February 2008 | Methods of learning about park | 12 | 0.30 |
| Horseshoe Bend NMP | 2008 | Sources of information used | 14 | 0.131 |
| Horseshoe Bend NMP | 2008 | Activities expected | 14 | 0.064 |
| Horseshoe Bend NMP | 2008 | Activities participated | 14 | 0.079 |
| Horseshoe Bend NMP | 2008 | Visitor services and facilities used | 13 | 0.249 |
| Horseshoe Bend NMP | 2008 | Interpretive methods | 12 | 0.480 |
| Little River Canyon NPres | 2010 | Sources of information | 15 | 0.173 |
| Little River Canyon NPres | 2010 | Activities | 14 | 0.133 |
| Little River Canyon NPres | 2010 | Visitor services and facilities used | 11 | 0.131 |
| Chattahoochee River NRA | 2010 | Information used | 14 | 0.875 |
| Chattahoochee River NRA | 2010 | Activities | 16 | 0.426 |
| Chattahoochee River NRA | 2010 | Site visited | 19 | 0.888 |
| Chattahoochee River NRA | 2010 | Services used | 11 | 0.062 |
| George Washington Carver NM | 2010 | Information sources used | 14 | 0.683 |
| George Washington Carver NM | 2010 | Activities this visit | 14 | 0.221 |
| George Washington Carver NM | 2010 | Visitor awareness | 14 | 0.925 |









Concern 3: Item nonresponse analysis to help determine specific problematic questions

The VSP reviewed a sample of item nonresponse data to determine whether there were any specific problematic questions or concerns. The review found that many questions in VSP surveys are tailored to each park's situation and thus vary greatly across questionnaires. In addition, some questions target only certain audiences; for example, questions about the type of accommodations used apply only to visitors who stayed overnight in the area surrounding the park. Those questions are not comparable across the board. We identified some questions that are similar in content in order to determine the nonresponse effect due to question type, level of complexity, and sensitivity.


Table 3: Question attributes


| Question | Complexity | Sensitivity | Location in the questionnaire |
|---|---|---|---|
| Information used to plan visit | Low (check all that apply) | Low (no personal information) | Beginning |
| Activity conducted at the park | Low (check all that apply) | Low (no personal information) | Middle (first half) |
| Awareness of park management | Low (Yes/No) | Medium (visitor's knowledge) | Beginning |
| Primary reason for visiting the area/park | Low (check one) | Low | Middle (first half) |
| Length of visit | Medium (requires some memory recall) | Low | Middle (first half) |
| Evaluation of park services and facilities | High (matrix format, requires memory recall) | Medium (evaluation of public services) | Middle |
| Group type | Low | Low | Second half |
| Group size | Low | Low | Second half |
| Age/zip code/number of times visited | High (matrix format, requires memory recall) | High (personal information) | One of the last 5 questions |
| Race/ethnicity | Medium (matrix format) | High (personal information) | One of the last 5 questions |
| Expenditure | High (requires substantial memory recall) | High (personal information) | One of the last 5 questions |
| Overall quality rating | Low | Medium (evaluation of public service) | One of the last 5 questions |



Table 4 shows the percentage of respondents who answered each question. On average, there was no significant difference in the response rate of any question due to length, complexity, or sensitivity, with the exception of the expenditure question.


Table 4: Response rate for each question



| Question | Number of questionnaires | Min | Max | Mean | Std. Dev. |
|---|---|---|---|---|---|
| Information | 45 | 94% | 100% | 99% | 1% |
| Activity | 46 | 77% | 100% | 93% | 6% |
| Awareness | 30 | 72% | 100% | 97% | 5% |
| Primary reason for visiting | 38 | 76% | 100% | 93% | 6% |
| Length of visit | 46 | 86% | 100% | 97% | 3% |
| Evaluation of park services and facilities | 44 | 68% | 98% | 90% | 5% |
| Group type | 47 | 95% | 100% | 98% | 1% |
| Group size | 47 | 85% | 100% | 98% | 2% |
| Age/zip code/number of times visited | 47 | 93% | 100% | 99% | 1% |
| Race/ethnicity | 29 | 86% | 99% | 93% | 3% |
| Expenditure | 20 | 69% | 95% | 84% | 6% |
| Overall quality | 46 | 90% | 100% | 98% | 1% |
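
A minimal sketch of how the summary statistics in Table 4 can be produced, assuming a hypothetical long-format table with one response rate per questionnaire/question pair (file and column names are invented for illustration):

```python
# Sketch of the per-question response-rate summary (hypothetical layout).
import pandas as pd

# Columns: questionnaire, question, response_rate (a proportion from 0 to 1).
rates = pd.read_csv("question_response_rates.csv")  # hypothetical file name

# Count of questionnaires plus min/max/mean/std of the response rate,
# computed across questionnaires for each question type.
summary = rates.groupby("question")["response_rate"].agg(
    n="count", min="min", max="max", mean="mean", std="std"
)
print(summary.round(2))
```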


The question asking about visitors' expenditures while visiting an area is often complicated (requiring visitors to remember how much they spent in each category) and somewhat sensitive, as it relates to personal spending habits. These questions were designed by the authors of the MGM2 model and have been used in other questionnaires outside the scope of VSP surveys. However, we observed that some visitors (especially day users) who did not spend any money in any category chose to skip the question instead of writing "0" in every category. To improve this question, we added a "no money spent" option as a screener to distinguish between a skipped question and a true zero response. We will revisit this question after the survey season to determine whether the nonresponse issue has improved.
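
A minimal sketch of the screener logic, with hypothetical field names, showing how a checked "no money spent" box lets a blank expenditure grid be recoded as a true zero rather than counted as item nonresponse:

```python
# Sketch of the "no money spent" screener recode (hypothetical fields).
import pandas as pd

df = pd.read_csv("expenditure_items.csv")         # hypothetical file name
spend_cols = ["lodging", "food", "gas", "other"]  # hypothetical categories

no_spend = df["no_money_spent"] == 1       # screener box was checked
blank = df[spend_cols].isna().all(axis=1)  # every category left blank

# Checked screener + blank grid -> a true zero response in every category.
df.loc[no_spend & blank, spend_cols] = 0

# Unchecked screener + blank grid -> genuine item nonresponse.
df["expenditure_nonresponse"] = ~no_spend & blank
```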





