The National Recreational Boating Survey

OMB: 1625-0089


Supporting Statement for the

National Recreational Boating Survey

Part B

Submitted by:

Philippe Gwet

United States Coast Guard

Department of Homeland Security

November 2012


1 Collection of Information Employing Statistical Methods

In the service of promoting public safety and contributing to policy decisions, the United States Coast Guard (USCG) is undertaking a National Recreational Boating Survey (NRBS). The NRBS concerns two populations: (A) the population of recreational boats and (B) the population of recreational boating participants.

The purposes of the NRBS in order of priority are to measure:

  1. Exposure:

    1. Boat and boater hours on the water

    2. Boat hours in docked recreation

  2. Boating participation and boat ownership

    1. Total annual participation overall

    2. Total annual participation by boat type

    3. Total boat ownership

  3. Boating safety awareness and behaviors

    1. Lifejacket use

    2. Reasons for life jacket use

    3. Alcohol use and boat operation

  4. Economic impact of recreational boating

    1. Money spent on boats

    2. Money spent in communities on boat trips

  5. Negative event incidence and risk

    1. Actual and reported accidents that cause injury and boat damage

  6. Boat statistics

    1. Features of boats such as hull material and propulsion systems

The NRBS includes three surveys designed to comprehensively measure boats and boaters in the United States. The NRBS design is driven by a desire to achieve comprehensive population coverage and high-quality measurement of a variety of boat features and boating activities. Exhibit 1 contains an introduction to the structure of the NRBS. We refer to this structure several times in this document.

There are two core surveys in the program: the Boat Survey, supplemented by a regular Trip Survey of a panel of boats, and the Participant Survey. The Boat Survey is an annual survey of recreational boats (registered and unregistered) in the United States. This survey is supplemented by a panel of boats used to collect boating trip data. A monthly Trip Survey drawing on a panel sample is the best way to collect accurate data about the number and duration of actual trips taken on recreational boats. The Participant Survey is an annual population-based survey of U.S. residents regarding their boating behaviors.

The Coast Guard will use these survey data for the sole purpose of producing statistical summaries at the state and national levels. At the end of the data collection phase, personal data such as names, addresses, telephone numbers, and e-mail addresses will no longer be used by the Coast Guard or by any other party involved in the NRBS, including the contractor and the Coast Guard's boating partners. Only individual-level data related to boat types and boating participation will be shared with the Coast Guard's boating partners. Moreover, all recreational boating participants who are contacted will be given the opportunity to decline to participate in this survey.

Exhibit 1: Survey Questionnaires and Data Collection

| Survey | Mode(s) | Sample source(s) | Universe | Respondent | Analytic Goals |
| --- | --- | --- | --- | --- | --- |
| Boat Survey | Phone | RDD | Privately owned recreational boats: unregistered boats; registered boats in all states | Member of boat-owning household | 2-Ownership and participation; 4-Economic impact of boating; 6-Boat statistics |
| Boat Survey | Mail | Registry lists | Privately owned recreational boats: registered boats in states sharing lists | Registered boat owner | 2-Ownership and participation; 4-Economic impact of boating; 6-Boat statistics |
| Trip Survey | Web, Phone | Panel | Privately owned recreational boats | Boat owner panelist | 1-Exposure; 3-Safety awareness and behaviors; 4-Economic impact of boating; 5-Negative events |
| Participant Survey | Phone | RDD | Boating participants: U.S. households | Any adult household member | 2-Ownership and participation; 3-Safety awareness and behaviors |
| Participant Survey | Phone | RDD | U.S. child (<16) boating population | Any adult household member (proxy) | 2-Ownership and participation; 3-Safety awareness and behaviors |
| Participant Survey | Phone | RDD | U.S. adult boating population | Adult boater | 2-Ownership and participation; 3-Safety awareness and behaviors |
| Participant Survey | Phone | RDD | Rented boats | Adult boater: rented boat | 1-Exposure; 3-Safety awareness and behaviors; 4-Economic impact of boating; 5-Negative events |

Respondent Universe and Sampling Methods

We have divided this section into two parts to describe two universes: the universe of boats and the universe of boating participants. In each section, we provide (a) a definition of the universe, (b) sample sources, (c) sampling and respondent selection methods, (d) population and sample sizes, and (e) expected response rates.

The Universe of Recreational Boats

The universe for the Boat Survey and the Trip Survey supplement is all recreational boats stored in the U.S. on January 1 of the survey year, where a “recreational boat” is a boat not used for any commercial purpose.

The universe of recreational boats includes:

  • Privately owned, registered boats;

  • Privately owned, unregistered boats; and

  • Rented boats (such as canoes) that are captained by private citizens1—people who are not professional boat captains.

The universe does not include:

  • Boats owned or captained by professional captains—people who captain boats for commercial purposes.

The survey of recreational boats (as opposed to boaters or participants) consists of two components: (a) a Boat Survey, collecting specific information about a selected boat, and (b) a Trip Survey, covering information about events associated with the boat while it was in use.

We have added a panel design to support trip-level data collection for measures such as exposure. The most important goal of the NRBS is to collect accurate information about the amount of time that boats are on the water. This exposure estimate will serve as the denominator in safety measures such as accident incidence. Because people estimate averages of quantities like trip duration inaccurately (see Section 1.2.4), encounter- or trip-based sampling is the best way to obtain this information. In particular, the Trip Survey that panelists complete will ask for an estimate of the number of hours on the water during a particular, very recent trip.

1.1 Sample Sources

For the Boat Survey, the universe is all recreational boats. There are two sources for selecting a sample of recreational boats:

Boat registration databases cover:

    • Registered boats in states where lists are available, and

    • Boats documented with the United States Coast Guard.



A stratified national RDD (landline and cell phone) sample covers:

    • Registered and documented boats,

    • Registered boats in states where lists are not available,

    • Unregistered boats, and

    • Rented boats.

Most states make their registration databases available to the public. For efficiency, where boat registration databases are available, the Boat Survey will be a dual-frame, dual-mode survey using mail and telephone RDD with a cell phone component. Where databases are not available, the Boat Survey will be telephone RDD with a cell phone component. The boat registration database and the RDD frame overlap and require dual-frame adjustments.

At the end of both the mail and RDD Boat Surveys, we will recruit respondents to the panel of boats. Unless a panelist opts out of the panel explicitly via the telephone and e-mail contacts provided, they will remain in the available sample for up to 12 months from their initial opt-in. The sample for each panel survey administration will be selected from the panel, excluding recent respondents (within 3 months), and stratified to achieve targets by state and by boat type.
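The panel-management rules in this paragraph can be summarized in a short sketch. This is an illustrative outline only: the record fields, the 3-month and 12-month windows expressed in days, and the per-stratum target structure are assumptions for the example, not the production specification.

```python
import random
from datetime import timedelta

def eligible_panelists(panel, today):
    """Apply the stated panel rules: drop explicit opt-outs, drop panelists more
    than 12 months past their opt-in, and exclude respondents from the last
    3 months (windows approximated in days)."""
    keep = []
    for p in panel:
        if p["opted_out"]:
            continue
        if today - p["opt_in_date"] > timedelta(days=365):
            continue
        if p["last_response_date"] and today - p["last_response_date"] < timedelta(days=91):
            continue
        keep.append(p)
    return keep

def draw_panel_sample(panel, targets, today, seed=1):
    """Select panelists to meet state-by-boat-type targets,
    e.g. targets = {("MI", "pontoon"): 25}."""
    rng = random.Random(seed)
    pool = eligible_panelists(panel, today)
    sample = []
    for (state, boat_type), n in targets.items():
        stratum = [p for p in pool if p["state"] == state and p["boat_type"] == boat_type]
        sample.extend(rng.sample(stratum, min(n, len(stratum))))
    return sample
```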

1.2 Sampling and Respondent Selection Methods

Sampling

Respondent Selection

Boat Survey

The Boat Survey will be conducted at the beginning of the survey year and will collect information on registered and unregistered, owned recreational boats. Boats will also be recruited to provide information about specific trips at a later time in the year. There are two sampling frames for the Boat Survey: boat registration lists for a mail survey of registered boat owners, and RDD landline and cell phone numbers for a telephone survey of owners of unregistered or unlisted boats.

Boats on the registration database will be stratified by state and boat type. Rare boat types will have a higher selection probability than common boat types. The measure of size is provided in Section 1.2.2. Mail surveys will be addressed directly to the registered boat owner. A systematic sample of boats will be drawn from each stratum (state and boat type) with equal probability. Within each stratum, boats will be sorted (implicit stratification) by ZIP code and owner. Registered boat owners are thus selected with probability proportional to size, based on the weighted number of registered boats they own.
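A minimal sketch of the within-stratum selection described above, assuming a simple record layout (ZIP code, owner ID) and a fractional-interval systematic draw; it is illustrative rather than the production sampling program.

```python
import random

def systematic_sample(stratum_records, n, seed=1):
    """Equal-probability systematic sample within one state-by-boat-type stratum.
    Records are sorted by ZIP code and owner first (implicit stratification)."""
    frame = sorted(stratum_records, key=lambda r: (r["zip"], r["owner_id"]))
    n = min(n, len(frame))
    if n == 0:
        return []
    random.seed(seed)
    interval = len(frame) / n            # fractional skip interval
    start = random.uniform(0, interval)  # random start within the first interval
    return [frame[int(start + k * interval)] for k in range(n)]
```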

An RDD survey will be conducted for states without registry information and to reach unregistered boaters. The sample will consist of a list-assisted RDD sample of telephone numbers. To build the list-assisted frame, all possible telephone numbers are divided into blocks (or banks) of 100 numbers. A 100-block is the series of 100 phone numbers defined by the last two digits of a 10-digit phone number. For phone numbers with the first eight digits in common, there are 100 possible combinations of the last two digits (ranging from 00-99). To enhance efficiency and reduce costs, the frame excludes zero-blocks, i.e., those 100- blocks with zero listed phone numbers.
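The list-assisted frame construction can be sketched as follows; the input is assumed to be a collection of directory-listed 10-digit numbers, which is an illustrative simplification.

```python
import random
from collections import defaultdict

def list_assisted_frame(listed_numbers):
    """Return the 100-blocks (first 8 digits of a 10-digit number) that contain
    at least one directory-listed number; zero-blocks are excluded."""
    counts = defaultdict(int)
    for number in listed_numbers:
        digits = "".join(ch for ch in number if ch.isdigit())
        counts[digits[:8]] += 1
    return [block for block, listed in counts.items() if listed > 0]

def draw_rdd_numbers(frame_blocks, k, seed=1):
    """Generate k RDD numbers by appending random last two digits (00-99)
    to randomly chosen retained blocks."""
    rng = random.Random(seed)
    return [rng.choice(frame_blocks) + f"{rng.randrange(100):02d}" for _ in range(k)]
```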

Telephone numbers will be stratified into state-based strata according to the primary state served by the area code and prefix. Upon reaching a household, an adult household member will provide a roster of the boats owned by members of the household. If the respondent is not capable of providing the details necessary for the roster, another adult will be requested. Boats will be stratified into the categories listed in Section 1.2.2. Rare boat types will be subsampled at a higher rate than common boat types. This double sampling for stratification is designed to increase the number of rare boats in the sample.

The owner of each selected boat will be recruited to participate in the trip panel, that is, the panel of boats that will be contacted to complete the Trip Survey.

1.3 Population and Sample Sizes

The population for the Boat Survey is all registered and unregistered boats. The estimated boat ownership percentage in the U.S. is 18 percent of households2, but it varies by state. To estimate the variation in boat ownership by state, we used the number of registered boats3 to calculate registered boats per capita in each state. We then translated this to households based on an average of 2.6 persons per household.4 Since the registered boats represent approximately 60 percent of the boat owners, we multiplied the per capita registered boat rate by 1.67 to create an estimate of per capita boat ownership (60% registered + 40% unregistered). Finally, we bounded the state-level calculations by 9 percent (0.5 national average) and 36 percent (twice the national average). The resulting population estimates are provided in Exhibit 2.
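In compact form, the state-level estimate described above is the per-capita registered-boat rate converted to households and scaled up for unregistered boats, bounded at half and twice the national average; Alabama is shown as a check against Exhibit 2.

```latex
\text{rate}_s \;=\; \min\!\left(\max\!\left(1.67 \times 2.6 \times \frac{R_s}{\text{pop}_s},\; 0.09\right),\; 0.36\right)
```

For Alabama, 1.67 × 2.6 × 274,176 / 4,662,000 ≈ 0.255, matching the 25 percent shown in Exhibit 2.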

The sample sizes in the table below include the number of owners contacted for the Boat Survey, the number of boat owners that we expect to agree to participate in the trip panel, and the expected number of trip responses annually. The methods for determining these sample sizes appear below in Section 1.2.2.



Exhibit 2: Population of Boats and Anticipated Sample Sizes for the Boat Survey and the Trip Survey



| State | 2008 Population (000s) | Registered Vessels | Est. Boat-Owning HHs | Boat Owners | Boat Panelists | Trip Responses |
| --- | --- | --- | --- | --- | --- | --- |
| Total U.S. | 304,056 | 12,801,021 | 18% | 30,000 | 19,500 | 36,163 |
| AL  Alabama | 4,662 | 274,176 | 25% | 663 | 431 | 1,120 |
| AK  Alaska | 686 | 47,548 | 30% | 200 | 130 | 169 |
| AZ  Arizona | 6,500 | 144,570 | 10% | 301 | 196 | 508 |
| AR  Arkansas | 2,855 | 206,195 | 31% | 520 | 338 | 879 |
| CA  California | 36,757 | 964,881 | 11% | 1,389 | 903 | 2,347 |
| CO  Colorado | 4,939 | 98,055 | 9% | 201 | 131 | 170 |
| CT  Connecticut | 3,501 | 108,539 | 13% | 243 | 158 | 205 |
| DE  Delaware | 873 | 61,569 | 31% | 200 | 130 | 169 |
| DC  Dist of Columbia | 592 |  | 9% | 200 | 130 | 169 |
| FL  Florida | 18,328 | 991,680 | 23% | 2,355 | 1,531 | 3,980 |
| GA  Georgia | 9,686 | 344,597 | 15% | 749 | 487 | 1,265 |
| HI  Hawaii | 1,288 | 15,094 | 9% | 200 | 130 | 338 |
| ID  Idaho | 1,524 | 91,612 | 26% | 200 | 130 | 169 |
| IL  Illinois | 12,902 | 379,354 | 13% | 791 | 514 | 668 |
| IN  Indiana | 6,377 | 241,474 | 16% | 532 | 346 | 449 |
| IA  Iowa | 3,003 | 213,767 | 31% | 538 | 349 | 454 |
| KS  Kansas | 2,802 | 93,900 | 15% | 200 | 130 | 169 |
| KY  Kentucky | 4,269 | 176,716 | 18% | 420 | 273 | 355 |
| LA  Louisiana | 4,411 | 301,249 | 30% | 699 | 455 | 1,182 |
| ME  Maine | 1,316 | 112,818 | 36% | 311 | 202 | 263 |
| MD  Maryland | 5,634 | 202,892 | 16% | 342 | 222 | 289 |
| MA  Massachusetts | 6,498 | 145,496 | 10% | 303 | 197 | 256 |
| MI  Michigan | 10,003 | 830,743 | 36% | 2,157 | 1,402 | 1,823 |
| MN  Minnesota | 5,220 | 866,496 | 36% | 2,250 | 1,463 | 1,901 |
| MS  Mississippi | 2,939 | 180,356 | 27% | 440 | 286 | 743 |
| MO  Missouri | 5,912 | 321,782 | 24% | 765 | 497 | 646 |
| MT  Montana | 967 | 79,651 | 36% | 219 | 142 | 185 |
| NE  Nebraska | 1,783 | 83,722 | 20% | 205 | 133 | 173 |
| NV  Nevada | 2,600 | 59,895 | 10% | 200 | 130 | 338 |
| NH  New Hampshire | 1,316 | 100,261 | 33% | 246 | 160 | 208 |
| NJ  New Jersey | 8,683 | 183,147 | 9% | 236 | 154 | 200 |
| NM  New Mexico | 1,984 | 38,100 | 9% | 200 | 130 | 338 |
| NY  New York | 19,490 | 494,020 | 11% | 997 | 648 | 843 |
| NC  North Carolina | 9,222 | 375,815 | 18% | 840 | 546 | 1,420 |
| ND  North Dakota | 641 | 53,519 | 36% | 200 | 130 | 169 |
| OH  Ohio | 11,486 | 415,226 | 16% | 905 | 588 | 765 |
| OK  Oklahoma | 3,642 | 223,758 | 27% | 546 | 355 | 922 |
| OR  Oregon | 3,790 | 184,147 | 21% | 427 | 278 | 361 |
| PA  Pennsylvania | 12,448 | 342,427 | 12% | 505 | 328 | 426 |
| RI  Rhode Island | 1,051 | 43,665 | 18% | 200 | 130 | 169 |
| SC  South Carolina | 4,480 | 442,040 | 36% | 1,132 | 736 | 1,913 |
| SD  South Dakota | 804 | 53,570 | 29% | 200 | 130 | 169 |
| TN  Tennessee | 6,215 | 274,914 | 19% | 626 | 407 | 1,057 |
| TX  Texas | 24,327 | 599,567 | 11% | 1,203 | 782 | 2,033 |
| UT  Utah | 2,736 | 76,921 | 12% | 200 | 130 | 169 |
| VT  Vermont | 621 | 31,482 | 22% | 200 | 130 | 169 |
| VA  Virginia | 7,769 | 251,440 | 14% | 535 | 348 | 905 |
| WA  Washington | 6,549 | 270,789 | 18% | 607 | 395 | 513 |
| WV  West Virginia | 1,814 | 63,064 | 15% | 200 | 130 | 169 |
| WI  Wisconsin | 5,628 | 617,366 | 36% | 1,603 | 1,042 | 1,355 |
| WY  Wyoming | 533 | 26,956 | 22% | 200 | 130 | 169 |
| PR  Puerto Rico | 3,954 | N/A | 10% | 200 | 130 | 338 |



1.4 Expected Response Rates



The Boat Survey has two components: the mail survey and the RDD survey. Based on the results of the 2011 National Recreational Boating Survey, we anticipate a response rate of about 40 percent from the mail survey after initial and replacement questionnaire mailings. We also expect a response rate of 40 percent to the RDD telephone surveys which will be used to address coverage issues relating to registration laws and availability of registry lists.

The Trip Survey will be conducted by internet with a telephone follow-up to non-respondents and individuals who do not provide e-mail addresses. Based on ICF Macro’s experience from the 2011 Boat Survey, we expect that approximately 73% of invited boat owners will agree to participate in the panel. Panelists will be surveyed up to four times annually. In any given survey administration, we expect 45 percent of the invited panelists to complete the Trip Survey. Panelists will remain in the panel for subsequent months even if they do not respond in any given month, unless they explicitly request removal. We anticipate panel attrition of about 10 percent over the course of the survey year.

As a whole, the field of survey research has been experiencing a decline in response rates over recent years. Lower response rates are often used as an indicator for the risk of non-response bias in a study, where those who respond are substantially different from non-respondents. In the absence of high response rates, a non-response analysis helps to justify the accuracy of the survey data.

We intend to evaluate the extent of non-response bias by comparing NRBS results to external data sources. Many of these comparisons are inherent in the weighting for non-response, such as differential non-response by boat type. The use of repeated measures in the boat panel permits an analysis using key survey items collected during the baseline survey. The baseline data include those who refuse to participate in the panel and those who agree but then do not respond when asked. Both forms of non-response can be evaluated using comparisons to the baseline data.
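One concrete form of the baseline comparison described above is sketched below: weighted boat-type shares for all Boat Survey respondents are compared with the shares for the subset who later completed Trip Surveys. Field names and the choice of comparison item are illustrative.

```python
from collections import defaultdict

def weighted_shares(records, key="boat_type", weight="design_weight"):
    """Weighted distribution of a baseline item such as boat type."""
    totals = defaultdict(float)
    for r in records:
        totals[r[key]] += r[weight]
    grand_total = sum(totals.values()) or 1.0
    return {k: v / grand_total for k, v in totals.items()}

def nonresponse_gap(baseline_respondents, trip_respondents):
    """Difference in baseline boat-type shares between all Boat Survey respondents
    and the subset that later completed Trip Surveys; large gaps flag potential
    non-response bias."""
    all_shares = weighted_shares(baseline_respondents)
    resp_shares = weighted_shares(trip_respondents)
    return {k: resp_shares.get(k, 0.0) - v for k, v in all_shares.items()}
```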

The Universe of Boating Participants

The second universe of interest to the NRBS is boating participants. The base for the universe of boating participants is the U.S. household population. It is a goal of the survey to determine the proportion of all Americans who have participated in recreational boating during the reference year. A boating participant is defined as someone who has spent time on a recreational boat, docked or on the water, during the reference period.

Within this universe, the NRBS is concerned with two further populations:

  • Children who boat, and

  • Adults who boat.

Adult is defined as being at least 16 years old. Including all adults ages 16 or older is important for safety estimates since people who are 16 years old are permitted to operate motor boats unsupervised in many states (United States Coast Guard, 2008).

1.5 Sample Source

The Participant Survey sample will be a national RDD telephone sample with landline and cell phone components. More details on the approach to data collection appear in Section 1.3.5.

1.6 Sampling and Respondent Selection Methods

Sampling

The Participant Survey sample will consist of a list-assisted RDD sample of telephone numbers. To build the list-assisted frame, all possible telephone numbers are divided into blocks (or banks) of 100 numbers. A 100-block is the series of 100 phone numbers defined by the last two digits of a 10-digit phone number. For phone numbers with the first eight digits in common, there are 100 possible combinations of the last two digits (ranging from 00-99). To enhance efficiency and reduce costs, the frame excludes zero-blocks, i.e., those 100- blocks with zero listed phone numbers.

Telephone numbers will be stratified into state-based strata according to the primary state served by the area code and prefix. A sample of cell phone numbers will also be included in each state. The RDD sample will be structured to achieve up to 40% of the total phone sample by cell. These proportions are based on the anticipated proportion of the US adult population that will be reachable mostly or only by cell phone during the 2014 survey cycle.

Respondent Selection

The RDD survey has a somewhat complex respondent selection. The survey has been designed to maximize data quality and to minimize handoffs. For the Participant Survey portion, there are up to three survey subjects:

  • Any household member (boating or non-boating),

  • A boating child, or

  • A boating adult.

The qualified household member is anyone age 16 or older. That person is required to enumerate the household members and provide information on whether each has boated recently. He or she need not personally have boated in the target period. The enumeration data support participation estimates and also subject selection for the next two phases.

Data for the child will be collected by proxy. There are only two questions in this section, and both concern topics that an adult household member should be able to report on: safety behaviors. A random child is selected from the set of household boating participants who are under 16 years of age. The household member answers questions regarding this child.

A boating adult is randomly selected from the set of household boating participants who are 16 and over. If the household member and the boating adult are not the same person, the survey requires a transfer to the boating adult.
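The within-household selection steps described in this section can be sketched as follows; the roster field names are illustrative.

```python
import random

def select_household_subjects(roster, seed=1):
    """Illustrative within-household selection: one random boating child (<16)
    for the proxy questions and one random boating adult (16 or older) for the
    adult interview."""
    rng = random.Random(seed)
    boaters = [m for m in roster if m["boated_in_reference_period"]]
    children = [m for m in boaters if m["age"] < 16]
    adults = [m for m in boaters if m["age"] >= 16]
    child = rng.choice(children) if children else None
    adult = rng.choice(adults) if adults else None
    return child, adult
```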

1.7 Population and Sample Sizes

The population for the boating Participant Survey is all children and adults in the United States. To survey the population, we will conduct household interviews with one adult in the household.

Exhibit 3: Population of Participants and Anticipated Sample Sizes for the Participant Survey

| State | 2008 Population (000s) | Sample Size |
| --- | --- | --- |
| Total U.S. | 304,056 | 17,000 |
| AL  Alabama | 4,662 | 327 |
| AK  Alaska | 686 | 327 |
| AZ  Arizona | 6,500 | 327 |
| AR  Arkansas | 2,855 | 327 |
| CA  California | 36,757 | 327 |
| CO  Colorado | 4,939 | 327 |
| CT  Connecticut | 3,501 | 327 |
| DE  Delaware | 873 | 327 |
| DC  Dist of Columbia | 592 | 327 |
| FL  Florida | 18,328 | 327 |
| GA  Georgia | 9,686 | 327 |
| HI  Hawaii | 1,288 | 327 |
| ID  Idaho | 1,524 | 327 |
| IL  Illinois | 12,902 | 327 |
| IN  Indiana | 6,377 | 327 |
| IA  Iowa | 3,003 | 327 |
| KS  Kansas | 2,802 | 327 |
| KY  Kentucky | 4,269 | 327 |
| LA  Louisiana | 4,411 | 327 |
| ME  Maine | 1,316 | 327 |
| MD  Maryland | 5,634 | 327 |
| MA  Massachusetts | 6,498 | 327 |
| MI  Michigan | 10,003 | 327 |
| MN  Minnesota | 5,220 | 327 |
| MS  Mississippi | 2,939 | 327 |
| MO  Missouri | 5,912 | 327 |
| MT  Montana | 967 | 327 |
| NE  Nebraska | 1,783 | 327 |
| NV  Nevada | 2,600 | 327 |
| NH  New Hampshire | 1,316 | 327 |
| NJ  New Jersey | 8,683 | 327 |
| NM  New Mexico | 1,984 | 327 |
| NY  New York | 19,490 | 327 |
| NC  North Carolina | 9,222 | 327 |
| ND  North Dakota | 641 | 327 |
| OH  Ohio | 11,486 | 327 |
| OK  Oklahoma | 3,642 | 327 |
| OR  Oregon | 3,790 | 327 |
| PA  Pennsylvania | 12,448 | 327 |
| RI  Rhode Island | 1,051 | 327 |
| SC  South Carolina | 4,480 | 327 |
| SD  South Dakota | 804 | 327 |
| TN  Tennessee | 6,215 | 327 |
| TX  Texas | 24,327 | 327 |
| UT  Utah | 2,736 | 327 |
| VT  Vermont | 621 | 327 |
| VA  Virginia | 7,769 | 327 |
| WA  Washington | 6,549 | 327 |
| WV  West Virginia | 1,814 | 327 |
| WI  Wisconsin | 5,628 | 327 |
| WY  Wyoming | 533 | 327 |
| PR  Puerto Rico | 3,954 | 327 |



1.8 Expected Response Rates

The call attempt protocols for the NRBS have been intentionally designed to maximize contact with potential respondents. Given the general trend toward lower response rates for household surveys (Groves, 2004), the anticipated response rate for the NRBS Participants Survey is approximately 20 percent.

Procedures for the Collection of Information

The NRBS consists of three data collection components:

  1. A national Boat Survey, conducted by RDD phone and mail to collect information about owned boats and recruit boats to the panel;

  2. A multi-mode Trip Survey of boats to collect information about individual trips; and

  3. A national Participant Survey conducted by RDD phone to collect information from all boating participants.

The design of each of these elements is complex. They contribute in different ways to information about boats and boating participants, and a general outline of the structure of the system appears in Exhibit 1. In this section, we begin with an overview of the survey program, then proceed to each survey component, and finally discuss the sampling and specific data collection details.

Exhibit 4 shows the schedule of data collection. The NRBS is a biennial survey designed to collect data about boating participation and boat activities for 2012, 2014, and beyond.



Exhibit 4: Survey program schedule



| Year | Quarter | Boat Survey | Trip Survey | Participant Survey |
| --- | --- | --- | --- | --- |
| 2011 | Q1-Q3 |  |  |  |
| 2011 | Q4 | X |  |  |
| 2012 | Q1-Q4 |  | X |  |
| 2013 | Q1 |  |  | X |
| 2013 | Q2-Q3 |  |  |  |
| 2013 | Q4 | X |  |  |
| 2014 | Q1-Q4 |  | X |  |
| 2015 | Q1 |  |  | X |



There are three survey components, depicted in Exhibit 1 at the beginning of this document:

  • The Boat Survey (see Appendix A) collects information about how many and what kinds of boats are owned and some information about how much money boat owners spend on their boats. The survey will be conducted in the fourth quarter of the year preceding the target year. This staggered data collection schedule will ensure that the panel of boats to participate in the Trip Survey is established before the target year begins.

  • The Trip Survey (see Appendix B) will proceed monthly during the survey year. This survey samples individual trips that boats have taken and collects information about what happened on those trips: how long they lasted, what safety events occurred, and what money was spent. The sample for the Trip Survey will be boats that have responded to the Boat Survey.

  • The Participant Survey (see Appendix C) collects information about who has spent time boating during the year. We will conduct this survey in the first quarter of the year subsequent to the target year.

This section provides separate discussions of the following elements of data collection for the Boat and Participant populations:

  • Statistical methodology for stratification and sample selection

  • Estimation and statistical testing procedures, and

  • Survey instrument

Where processes converge in the actual data collection, we offer a separate section to discuss ICF Macro’s approach to data collection and quality assurance for RDD, mail, and panel phone/web data collection.

Recreational Boats

1.9 Survey Development

The surveys are based on the Coast Guard’s research objectives as well as questions provided by an Advisory Committee including researchers at Michigan State University who have conducted many similar surveys.

We cognitively tested each of the survey instruments with 27 people (9 for each instrument) in 2010. For the most part, respondents (including less experienced boaters and boat owners) understood the questions. During testing, we identified issues with:

  • Flow of the household enumeration section of the Participant Survey;

  • Comprehension of the lifejackets questions (specifically the ability to distinguish between types); and

  • Comprehension of terms associated with boats in use and not in use (“docked”, “parked”, etc.)

After approximately half of the interviews had been completed, the cognitive testing team met to discuss results and modify the surveys. Modifications were tested in the remaining interviews and found to resolve the identified problems with comprehension. The boat and participant survey instruments were successfully used during the 2011 National Recreational Boating Survey, which was limited in scope and did not include the monthly trip panel survey scheduled to be conducted in 2012 and 2014.

Key to the goals of the project is accurate self-reporting of time on the water and number of trips. Respondents were able to report time on the water and number of trips for the one month reference periods in the Trip Survey. Without a benchmark (actual knowledge of numbers of trips or time), it is impossible to test whether these responses were accurate, but respondents were relatively certain. When probed, some respondents said they might have been on the water for an hour more or less, for instance, but not for several hours more or less. During survey administration, we will be able to evaluate the consistency of responses across trip surveys within respondents. If within-respondent error were not lower than between-respondent error, that would be evidence that the questions were “noisy”, and we could revisit their content or structure.

The cognitive testing report is included with this submission.



1.10 Statistical Methodology for Stratification and Sample Selection

The sample for the Boat Survey was designed to meet objectives at state and national levels. To support these objectives, the stratification for the boat sample is based on state and boat type. Sample sizes for the Boat and Trip Surveys appear above in Exhibit 2.

To support state-level estimates, we instituted a minimum sample size for states with small boating populations. In support of national-level estimates, we optimally allocated sample to states based on the estimated cost of conducting a survey with a boating household. The result is a design that balances state-level and national-level objectives.

The estimated state-level cost of conducting a survey with a boating household varies based on the percentage of boating households in the state and whether the state provides a boat registration database.

Cost per interview for registry states

In registry states, the Boat Survey is a mail-phone dual frame for registered boaters and telephone for unregistered boaters. We estimated that, on average, a telephone survey with an unregistered boater costs 10 times as much as a mail survey with a registered boater. In states with higher percentages of boating households, the cost ratio is lower, since fewer households are screened out in the telephone survey. In states with lower percentages of boating households, the cost ratio is higher, since more household screening is required. We adjusted the average cost ratio for the telephone survey based on the estimated state-level boating household percentages (described in Section 1.1.1.3). We then determined the optimal ratio of registered boaters to unregistered boaters for the final sample in each state. The optimal allocation was calculated based on the state-level cost ratios and the assumption that 40 percent of boat owners will be unregistered and 60 percent will be registered.5
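For reference, a textbook cost-based optimum allocation between the registered (mail) and unregistered (telephone) components takes the form below, assuming roughly equal unit variances in the two groups; this illustrates the calculation described above rather than reproducing the production formula.

```latex
\frac{n_{r,s}}{n_{u,s}} \;=\; \frac{W_r}{W_u}\,\sqrt{\frac{c_{u,s}}{c_{r,s}}},
\qquad W_r = 0.60,\quad W_u = 0.40
```

where c_{r,s} and c_{u,s} are the state-level costs per completed mail and telephone interview, respectively.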

Based on the optimal allocation of sample to registered (mail) and unregistered (phone) boaters in each state, we used a weighted average of the two components' costs to estimate the state-level average cost per interview.

Cost per interview for non-registry states

In non-registry states, there is no mail component so the average cost per interview is based on the cost to conduct a telephone interview and the estimated percentage of boating households in the state.

State sample allocation

Next, we optimally allocated sample to states based on the total number of registered boats in the state (Rs) and the state-level average cost per interview. States allocated fewer than 200 boats were increased to a minimum of 200 boats. The minimum sample size is based on achieving boating estimates with an error margin of +/-8 percent at the 95 percent confidence level. The sample sizes to meet this requirement were based on an assumed prevalence of 50 percent and a design effect of 1.3. The design effect is anticipated based on the oversampling of registered boaters relative to unregistered boaters.
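The 200-boat minimum follows from the standard sample-size calculation with the inputs stated above (p = 0.50, error margin e = 0.08, 95 percent confidence, design effect 1.3):

```latex
n_{\min} \;=\; \mathit{deff}\times\frac{z^{2}\,p(1-p)}{e^{2}}
\;=\; 1.3 \times \frac{1.96^{2}\times 0.25}{0.08^{2}} \;\approx\; 195
```

rounded up to the minimum of 200 boats per state.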

Sample sizes in states where the minimum was not imposed were adjusted down to maintain the overall sample size.

The final allocation is presented above in Exhibit 2.

1.11 Estimation and Justification of Sample Size

The sample sizes at the national level are based on achieving acceptable precision for

  1. Overall boat estimates with an error margin of +/-1 percent at the 95 percent confidence level, and

  2. Boat type estimates with error margin of at least +/-3 percent at the 95 percent confidence level.

The boat types of interest are presented in the table below along with anticipated ownership rates (note that some owners own more than one boat type). The expected design effect at the national level is 1.4, which includes the effect of disproportionate sampling of states (estimated deff = 1.07) and of disproportionate sampling of registered and unregistered boaters (deff = 1.3). Based on this design effect, the minimum sample size for achieving the desired level of precision for boat types is 1,500. With a total sample of 30,000 boats, we expect this minimum to be met for all relevant boat types except power boats larger than 28 feet, sailboats 25 feet or less, and sailboats over 25 feet. For these boat types, we will oversample relative to other boat types to achieve the desired sample size. The oversampling ratios are presented in the table below.
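The 1,500-boat minimum per boat type follows from the same calculation with a +/-3 percent margin and the national design effect of 1.4; the overall +/-1 percent target implies roughly 13,400 boats, comfortably within the 30,000-boat sample. This back-of-the-envelope check is illustrative:

```latex
n_{\text{type}} \;=\; 1.4 \times \frac{1.96^{2}\times 0.25}{0.03^{2}} \;\approx\; 1{,}494
\qquad\text{and}\qquad
n_{\text{overall}} \;=\; 1.4 \times \frac{1.96^{2}\times 0.25}{0.01^{2}} \;\approx\; 13{,}446
```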

Exhibit 5: Anticipated Sample Sizes by Boat Type


| Watercraft Type/Size | Boat Ownership Rate | Oversampling Ratio | Anticipated Sample (n = 30,000) |
| --- | --- | --- | --- |
| Power boat, <16 ft | 25.8% | 1.0 | 6,900 |
| Power boat, 16-20 ft | 21.0% | 1.0 | 5,600 |
| Power boat, 21-28 ft | 9.6% | 1.0 | 2,600 |
| Power boat, >28 ft | 3.6% | 1.6 | 1,500 |
| Sail, <25 ft | 5.0% | 1.1 | 1,500 |
| Sail, >25 ft | 5.0% | 1.1 | 1,500 |
| Pontoon boat | 15.0% | 1.0 | 4,000 |


1.12 Estimation and Statistical Testing Procedures


The boat sample will be weighted to the total number of boats (registered and unregistered) in each state. The number of registered boats is known for each state (Rs). This total is the inflation factor for the state-level sample of registered boats: each registered boat's design weight is the inverse of its selection probability, 1/Pri, scaled so that the weights sum to Rs within the state. Pri is calculated differently for registry and non-registry states. For registry states, boats are stratified by boat type, with rare boat types sampled at a higher probability than common boat types. Within each stratum, boats are selected with equal probability.

To reach registered boats in non-registry states and unregistered boats in every state, boats are selected through RDD, which is a two-stage cluster sample with households as the primary sampling unit and boats as the secondary sampling unit. For each state, the probability that a telephone number is selected from the RDD frame is the number of selected telephone numbers (n) from the RDD frame divided by the number of possible numbers on the frame (N). Households are selected with a probability proportionate to the number of telephone lines in the household. To adjust for the unequal probabilities in households with more than one telephone line, the probabilities are multiplied by the number of telephone lines as recorded during the survey (Li). Within each household we select one boat from all eligible boats in the household (Bi). For household i, the probability of a boat being selected for the sample is Pri = (n/N) × Li × (1/Bi).

Since the Boat Survey is a cell and landline dual-frame with overlap, we combine the two samples with a weighted average of the dual users (overlapping group).

For efficiency in registry states, registered boats are selected from the registry and through RDD. To account for the increased probability of selection for registered boaters, we use a dual-frame weighting adjustment. The adjustment is based on a weighted average of the two independent samples of registered boaters, both of which are inflated to the total number of registered boats (known from the registry). The weighted average is derived from the sample sizes for the two components.

The number of unregistered boaters in the state is unknown. We estimate the total number of unregistered boaters (Us) with a ratio estimator. The estimator is based on the design-weighted estimate of unregistered boaters (us) from the RDD sample relative to the design-weighted estimate of registered boaters (rs): Us = Rs × (us/rs). This estimated total is the inflation factor for the state-level sample of unregistered boats: each unregistered boat's design weight is the inverse of its selection probability, 1/Pri, scaled so that the weights sum to Us within the state, where Pri is calculated as described above for RDD samples. Note that all boat weighting will be done within boat type within a state when the sample size is adequately large.
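A brief sketch of the RDD selection probability and the ratio estimate described above; function and field names are illustrative.

```python
def rdd_boat_probability(n_selected, frame_size, phone_lines, eligible_boats):
    """Pri = (n/N) x Li x (1/Bi): phone-number draw, adjusted for multiple
    telephone lines, times the within-household boat selection."""
    return (n_selected / frame_size) * phone_lines * (1.0 / eligible_boats)

def estimated_unregistered_total(registered_total, rdd_boats):
    """Ratio estimate Us = Rs x (us / rs), where us and rs are the design-weighted
    unregistered and registered boat counts from the RDD sample."""
    u_s = sum(1.0 / b["prob"] for b in rdd_boats if not b["registered"])
    r_s = sum(1.0 / b["prob"] for b in rdd_boats if b["registered"])
    return registered_total * (u_s / r_s)
```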

Each month, the boats selected for reporting trips will be weighted to match the total number of boats in each state: Rs for registered boats and Us for unregistered boats. For boats that took more than one trip, two trips are randomly selected for the survey. The weighting factor for the trip sampling is the number of trips taken in the month (Ti) divided by the number of trips selected (ti, maximum of 2), Ti/ti. An estimate of the total number of monthly trips is the sum of the trip weights. The monthly Trip Surveys will estimate exposure in hours (h) and days (d). Total monthly exposure in hours and in days is a weighted estimate from the monthly trip sample: the sum over boats of the boat weight times Ti/ti times the hours (or days) reported for the selected trips.

Annual exposure estimates are obtained by totaling the individual monthly estimates of trips, exposure hours, and exposure days.
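The monthly and annual exposure estimation described above can be sketched as follows; the record layout is illustrative.

```python
def monthly_exposure(boat_reports):
    """Weighted monthly totals of trips, exposure hours, and exposure days.
    Each report carries the boat weight w_i, trips taken T_i, trips sampled t_i
    (at most 2), and the hours/days recorded for each sampled trip."""
    totals = {"trips": 0.0, "hours": 0.0, "days": 0.0}
    for r in boat_reports:
        totals["trips"] += r["w_i"] * r["T_i"]
        if r["t_i"] == 0:
            continue                      # boat took no trips this month
        trip_weight = r["w_i"] * (r["T_i"] / r["t_i"])
        totals["hours"] += trip_weight * sum(r["trip_hours"])
        totals["days"] += trip_weight * sum(r["trip_days"])
    return totals

def annual_exposure(monthly_results):
    """Annual totals are the sums of the monthly estimates."""
    return {k: sum(m[k] for m in monthly_results) for k in ("trips", "hours", "days")}
```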

1.13 Survey Instrument

Design Considerations

In response to comments received from the initial OMB submission, we have revised the survey system schedule and content to more effectively elicit accurate reports of boating experiences. The key driver of the modifications is the importance of exposure. Much of the design is based on the pursuit of a single analytic goal: the measurement of boating ‘exposure’, or the time people spend on boats. This measure will serve as the denominator in a measure of boating safety that takes the general form:

safety risk = (number of negative boating events) / (hours of boating exposure)

Thus, the most important goal of the NRBS is to collect accurate information about the number and durations of boat trips. We are challenged in achieving this goal by the fact that we must rely on boaters’ recall of their experiences, and recall is notoriously inaccurate. However, there is substantial research on the best ways to structure surveys and ask questions to maximize accuracy.

Measuring numbers

The accuracy of recall of a number of specific events in response to a survey question depends on (Tourangeau, Rips, & Rasinski, 2000):

  • Number of events in the reference period. The more events (e.g., boat trips) there are, the more difficult it is to manage the amount of information to be recalled and provide an accurate count.

  • Ability to recall events. To say that recall is influenced by ability is another way of saying that events may simply be forgotten. The likelihood that an event will be remembered is higher if the event is distinctive or ‘memorable’. Ability to recall specific events also decreases with distance in time.

  • Proximity to temporal boundaries of the reference period. Asking about a one-week period beginning last Monday is more effective than asking about a one-week period beginning last January because it is difficult for respondents to map their personal experiences to dates in the distant past.

Size and proximity of the reference period could have a substantial impact on the accuracy of NRBS estimates. Respondents’ self-reports of days spent fishing in the same time period increase with longer recall periods. That is, when a diary study was used to measure angling behavior over three months, the total days spent fishing was lower than when a mail study asked about the same three months (Tarrant & Manfredo, 1993). A similar pattern was found for angling data collected on the National Survey of Fishing, Hunting, and Wildlife-Associated Recreation (Cahoon, Riker, & Moore, 1993). Having a reference period that is too long or too distant in the past could mean overestimating boating trips. Referencing a trip that is too distant or too many trips in the past could mean inaccurate duration estimation. The reference periods used in the NRBS for questions that measure numbers of events are:

  • One month for total number of trips taken by privately owned recreational boats. Using a single month ensures that the total number of trips will be less than 31; our research suggests that the median will be slightly over 1. Using the calendar month gives respondents a discrete, real sense of the boundaries of the reference period and minimizes the potential for telescoping.

  • One year for total number of trips taken on rented boats. Because their use is recreational but their ownership is not private, rented boat trips must be captured in the participant survey, which is conducted in Q1 and references the prior year.

Measuring duration

In addition to the number of trips, it is necessary to determine the total number of hours boated on each trip. These two numbers (number of trips and hours per trip) will be used to calculate exposure hours. To answer duration questions like ‘how long were you on the boat last week’, respondents call to mind a story or sequence of events to anchor the time estimate (Tourangeau, Rips, & Rasinski, 2000).

We could ask respondents for the ‘average’ number of hours their boats were on the water when they boated, but this question is most useful for understanding how much Participants boat on a normative trip. That is, respondents will tell themselves a story about a normal or ‘average’ trip in order to answer the question. The goal for this survey is to enumerate every hour spent in recreation on a boat regardless of whether the trip is normative. To do this, we collect the numbers of hours spent on individual, actual trips.

To sample these trips, we need to ask about trips in a period short enough that respondents can remember each trip clearly and discretely. Our research suggests that most boaters take about one trip in a given month. Taking more than four trips in a month is relatively rare. Using a month also provides a straightforward way to enumerate trips so that we can select one randomly (‘tell us about your third trip in January’).


Boat Survey

The Boat Survey is designed to fulfill the following three primary analytic goals:

  • Overall boat ownership,

  • Economic impact of boating, and

  • Boat statistics.

To meet the first goal, the Boat Survey contains two elements: (a) an enumeration of all boats owned in the household and (b) a detailed survey regarding one randomly selected household boat.

The Boat Survey is the same for all sample sources. The two sample frames for this survey, (a) the American household population and (b) the set of listed, registered boat owners, overlap. We will correct for this overlap in weighting. It is important to collect boat ownership data from all eligible survey respondents in every mode because the anticipated incidence of certain types of unregistered boats in the household population (RDD) survey is quite low.

Trip Survey

The trip report is the source of much of the most important NRBS data including:

  • Exposure (Goal 1),

  • Safety behaviors (Goal 3),

  • Expenditures (Goal 4), and

  • Negative events (Goal 5).

These data need to be collected for all boats. To collect them for privately-owned recreational boats, we will use either a telephone survey or a web survey with telephone non-response follow-up depending on the mode preference of panelists. The script for this questionnaire appears in Appendix B. Both of these protocols allow us to capture data dynamically that we otherwise would miss in a mail survey.

Recreational Boating Participants

1.14 Statistical Methodology for Stratification and Sample Selection

The sample for the Participant Survey was based on achieving state-level participation estimates with an error margin of +/-5 percent at the 95 percent confidence level. The sample sizes to meet this requirement were based on a boating participation rate of 30 percent.6 To meet this objective, we will have state-level stratification with 327 household interviews per state.
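The 327-interview allocation is consistent with the standard sample-size formula for a proportion, using the stated inputs (p = 0.30, e = 0.05, 95 percent confidence); this back-of-the-envelope check is illustrative:

```latex
n_{\text{state}} \;=\; \frac{z^{2}\,p(1-p)}{e^{2}}
\;=\; \frac{1.96^{2}\times 0.30 \times 0.70}{0.05^{2}} \;\approx\; 323
```

which is in line with the 327 household interviews allocated per state.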

1.15 Estimation and Justification of Sample Size

Sample sizes were estimated based on the required samples to achieve the precision described above in Section 1.2.2.1.

1.16 Estimation and Statistical Testing Procedures

The Participant Survey is administered to a sample of households selected via RDD. For each state stratum, the probability that a telephone number is selected from the RDD frame is the number of selected telephone numbers (n) from the RDD frame divided by the number of possible numbers on the frame (N). Households are selected with a probability proportionate to the number of telephone lines in the household. To adjust for the unequal probabilities in households with more than one telephone line, the probabilities are multiplied by the number of telephone lines as recorded during the survey (Li). The design weight is the inverse of the selection probability.

Since the Participant Survey is a cell and landline dual-frame with overlap, we average the dual users (overlapping group) based on the sample sizes of the two components. The combined dual-frame sample is calibrated to the total number of households in each state.
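One conventional way to write the compositing step for dual users is shown below; the mixing factor based on relative sample sizes matches the description above, though the exact production notation may differ.

```latex
w_i^{\mathrm{dual}} \;=\; \lambda\, w_i^{\mathrm{LL}} + (1-\lambda)\, w_i^{\mathrm{cell}},
\qquad
\lambda \;=\; \frac{n_{\mathrm{LL}}}{n_{\mathrm{LL}} + n_{\mathrm{cell}}}
```

The combined weights are then calibrated to the total number of households in each state, as stated above.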

1.17 Survey Instrument

The Participant Survey has several parts. The questionnaire script appears in Appendix C.

Household enumeration

The household enumeration questions are designed to measure the overall incidence of recreational boating participation (Goal 2). Survey questions are asked of any household member age 16 or older. That person reports the gender, age, and boating participation of all household members.

This survey also contains a few household-level questions regarding safety (Goal 3). These questions include two questions regarding a randomly selected, boating child from the household.

Participant survey

The Participant Survey is a survey of one randomly selected boating adult in the household. It collects information about:

  • Recreational boating participation by boat type (Goal 2), and

  • Participation in boating activities in the reference period (Goal 2).

Rented boat trip report

Some recreational boat trips are taken on rented boats, especially canoes and kayaks. To ensure that some account is made of exposure on these boats, we will include a trip report in the telephone Participant Survey. Again, the contents of this trip report are the same as the contents of the Trip Survey.

Data Collection Procedures

The actual procedures for data collection will be similar for the Boat and Participant populations. These procedures are determined by sample source and mode rather than by survey content. ICF Macro has procedures in place for the data collection systems necessary for the conduct of the NRBS. In this section, we describe the approach to:

  • RDD data collection

    • Boat Survey (unregistered/unlisted boats)

    • Participant Survey

  • Mail data collection

    • Boat Survey (registered & listed boats)

  • Panel data collection

    • Web and telephone trip survey

1.18 RDD Data Collection

Loading the Sample: The sample will be loaded and resolved once. Landline sample records will have been pre-screened to exclude business and non-working numbers.

Managing Call Attempts: Each call attempt will be given a minimum of five rings. Careful management of the sample allocation and scheduling of interview sessions will ensure adequate coverage of residential households, with a minimum of 10 attempts for unresolved telephone numbers. Persistent “ring, no answer” numbers will be attempted a minimum of four times at different times and days of the week. Each number will be called a minimum of 10 times over six calling periods or until a completed interview is achieved. If a respondent is contacted on the last call and an interview cannot be completed, another attempt will be made. A six-attempt protocol will be used for the cell phone sample. A lower attempt protocol is recommended for the cell phone sample for two reasons: first, because a random-respondent selection is not conducted for the cell phone sample, more interviews are completed on the first contact; second, refusal conversion will be limited to one additional attempt after an initial refusal. Therefore, fewer attempts are needed to obtain completed interviews from the cell phone sample than from the landline sample.
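The attempt rules stated in this paragraph can be summarized in a small illustrative function; the disposition codes and field names are assumptions for the example.

```python
def keep_calling(record):
    """Illustrative retry rule based on the protocol above: up to 10 attempts
    for landline sample and 6 for cell sample, stop on final dispositions, and
    allow only one refusal-conversion attempt for cell sample."""
    if record["disposition"] in ("complete", "ineligible", "final_refusal"):
        return False
    if record["is_cell"] and record["refusals"] >= 2:   # one conversion attempt only
        return False
    max_attempts = 6 if record["is_cell"] else 10
    return record["attempts"] < max_attempts
```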

Dealing with Busy and No-Answer: Lines that are busy will be called back a minimum of five times at 10-minute intervals. If the line is still busy after the fifth attempt, the number will be attempted again on different calling occasions until the record is resolved.

Attempting Call-backs: The NRBS calling system optimizes queuing for definite call-backs by continuously comparing station sample activity and the index of definite call-back records. When a definite appointment time arrives, the system finds the next available station and delivers the record as the next call. The call history screen that accompanies each record informs the interviewer that the next call is a definite appointment and describes the circumstances of the original contact. The handling of call-backs to respondents is crucial to the success of any telephone survey project. The effective management of call-backs will increase the response rate and population coverage. Perhaps more importantly, scheduling an appointment that is convenient for the respondent, and ensuring that the appointment is kept, offers a basic courtesy to someone who has agreed to assist us with a study. Callbacks to cell phone users will be limited to one additional refusal attempt after an initial refusal.

Managing Interrupted Interviews: Interrupted interviews with receptive respondents will be restarted using a definite call-back strategy. A definite call-back for an exact time can be set and the interview can begin where it left off. If the interviewer who began the survey is available at the prescribed time, the system will send the call back to that station. This is especially important for the NRBS surveys which involve survey handoffs between household members.

Recording Call Dispositions: Dispositions of each call attempt on all records in the sample will be automatically stored in the CATI system. This provides a complete call history for each record in the sample. The call history is displayed on the interviewer’s screen during each new attempt.



1.19 Mail Data Collection



Sample Processing and Management: As described in Section 1.1.1, Boat Survey data collection for a) registered boats in states where lists are available, and b) boats documented with the United States Coast Guard will be accomplished via a mail survey. ICF Macro will obtain registration databases of these vessels via a third party vendor which has access to state registration and Coast Guard documentation lists. To facilitate mailing, sample records obtained from the vendor will be provided in standardized format, and will contain at least the following variables: owner name, owner mailing address, and vessel type. The lists will reflect the most accurate contact information available at the time, and will have been updated via the National Change of Address Database (NCOA).

As sample is obtained, it will be assigned a unique identifying number (the masterID) that will be used to track all survey mailings and each record’s disposition during fielding.

Document Preparation: ICF Macro will format all survey documents including initial mailings, survey instruments, cover letters, and reminders. The mail survey text will be carefully designed to include clear, user-friendly instructions that encourage respondent cooperation and increase response rates. Skip patterns will be clearly marked with explanatory text to guide the respondent to the next appropriate point in the survey. The mail survey instrument will be designed to accurately capture the data reported by respondents. Questions will be numbered and sections marked to provide an intuitive path for the respondent.

Research has shown that attractive, personal mailings promote survey response. We will employ the following strategies to promote respondent engagement with the NRBS mail survey:

  • Personalize the notification, cover letters, and reminders.

  • Use a contrasting color to print logos and signatures on notifications, cover letters, and reminders.

  • Make the survey mailing envelope stand out by printing the Coast Guard logo with the return address and mailing in an attention-getting envelope.

  • Include attractive artwork on the cover of the survey booklet.

Personalization: Every piece of outgoing mail, including notifications and reminder postcards, will be inkjet-printed with the masterID in order to track sample disposition throughout data collection. Notification, cover letters, and reminders will be personalized with the boat owner’s name. This personalization will be executed in the electronic documents sent to the printer and will integrate seamlessly with the rest of the document text.

Data Collection Protocol: The mail survey protocol will consist of an initial contact, a mail survey packet, and two reminders (one postcard and a second survey packet). The steps for mail survey administration and the specifications of the materials included in each step are described below.

All sampled records will be mailed an advance letter introducing the survey. This letter will identify the Coast Guard as the sponsor of the survey, explain how the data will be used, and encourage respondent cooperation. The letter will communicate the importance of the survey for improving boating safety and the benefits of survey participation. Potential respondents will be informed that their participation is voluntary. These letters will be:

  • Personalized with respondent name, address, and date.

  • Printed with a signature in contrasting ink.

  • Printed with contact information and a toll-free number to ICF Macro or the Coast Guard for respondents who have questions about the survey.

  • Mailed first class in a business-sized envelope.

Within three days of the advance letter, the first survey packet will be mailed to all contacts. The first survey mailing consists of three elements:

  • A cover letter reiterating the purpose and importance of the survey with the signature and logo printed in a contrasting ink.

  • A survey booklet printed on high quality paper.

  • A business reply envelope (BRE).

Within five to seven days of the first survey packet mailing, ICF Macro will send a postcard reminder to each sampled respondent.

Within four weeks of the first survey mailing, ICF Macro will send a second survey mailing to all sampled respondents who have not returned a complete survey and for whom no survey component has been returned as undeliverable.

Return Mail Processing and Data Entry: Returned mail will be opened and processed by hand by ICF Macro staff. MasterIDs from returned documents will be entered into a Data Collection Tracking System within 48 hours of receipt. Data from returned surveys will be entered within five business days.

Data Collection Tracking System: ICF Macro will create a SQL database to store and track sample information, dispositions, and survey data. One benefit of our integrated tracking system is its capability to provide our clients with information about study progress on demand. With the Macro Portal, CG-5422 will be able to log in to see the most up-to-date sample information, disposition reports, and even custom summaries of data elements.

1.20 Panel Data Collection

Enrollment in the panel: Respondents will provide their contact information and preferred mode of contact at the time of recruitment during the Boat Survey. We will collect a telephone number and e-mail address from recruited respondents. For respondents who prefer e-mail contact, we will send an e-mail to confirm their opt-in. Respondents will need to simply click on a link in this e-mail to confirm their participation. This practice is consistent with CASRO guidelines designed to prevent spamming.

Respondents who say they prefer telephone contact will only be contacted by telephone. Respondents who prefer e-mail contact will be contacted by e-mail first and then telephone if they are non-responsive for seven days.

Frequency of contact: Recruited panelists will be contacted approximately every three months for a period of 12 months. In Northern states, boating is rare from October through March. In these states, one Trip Survey will be conducted in April with a reference period ‘from January through March’ and one in January with a reference period ‘from October through December’. It will not be necessary to use the entire available panel in these cases, since the incidence of trips is low, so some panelists may be contacted only two or three times during the year.

Invitation to participate: Panelists will be invited to participate by e-mail or telephone. See Appendix B for the e-mail invitation and the telephone script for invitation.

Screening for eligibility: We will screen on the web and the phone survey for continued ownership of the boat. Respondents who no longer own the panel boat will be removed from the panel.

Incentives: We will offer an incentive of $5 for completion of each Trip Survey. Respondents who have sold their boats and are ineligible will not receive the incentive because they will not complete a survey. Respondents who have not taken an eligible trip will still receive the incentive. This will help minimize panel attrition.

Follow-up with non-respondents: We will dial respondents up to 10 times on different days and at different times of day. The procedures for panel sample will be the same as those for RDD sample described above in Section 1.2.3.1.

Informed Consent

Before each telephone interview, the interviewer will read an informed consent statement to the respondent. These statements appear in telephone surveys in Appendices A and C. The consent form describes the interview, the types of questions that will be asked on the actual survey, the risks and benefits of participation, and participants’ rights, and it provides information on whom to contact with questions about any aspect of the study. The consent form also indicates that participation is completely voluntary and that participants can refuse to answer any question or discontinue the interview at any time without penalty or loss of benefits. The interviewer will enter a code via the keyboard to signify that the participant was read the informed consent script and agreed to participate.

Quality Control

Exhibit 6 lists the major means of quality control.

Exhibit 6: Quality Control Procedures

For each survey step, the quality control procedures applied at that step are listed below.

Testing of CATI program

  • Test each response to each question, and each path through the survey (100%)

  • Review frequencies from randomly generated data to ensure that the program is organizing data properly and recording values according to the survey specification (100%)

  • Develop skip check program to check data against defined conditions specified in the Microsoft Word version of the questionnaire (100%); a sketch of one such check appears after this exhibit

  • Provide USCG with an electronic test version of the programmed survey (100%)

CATI pretest

  • Pretest of 100 interviews to ensure the CATI program is working properly and to verify questionnaire content, skip patterns, value verification, consistency of answers across questions, interviewer and supervisor training, and sample management procedures

CATI quality assurance

  • Monitor at least 10% of all interviews (10% sample)

  • Monitor each interviewer at least once per week (100%)

  • Assign supervisors to manage a team of no more than 10 interviewers (100%)

  • Participate in daily briefing call with Command Center (100%)

  • Review call center shift reports and internal project tracking reports daily (100%)

Preparation of data files

  • Identify incomplete interviews and merge back into the main data file (100%)

  • Clean and, when applicable, back-code open-ended responses (100%)

  • Assign a final disposition to each record (100%)

  • Produce frequency tabulations of every question and variable to detect missing data or errors in skip patterns (100%)

Printing of Mail Surveys

  • Printing will utilize state-of-the-art software and hardware that will print large volumes very quickly, at low cost, and with outstanding image quality

  • Accuracy of insertion (i.e., matching of MasterIDs and address information on all mailed pieces) will be checked by hand for at least 10 percent of the total outgoing pieces

Receipt of Mail Returns

  • MasterIDs from returned documents will be entered into the Data Collection Tracking System within 48 hours of receipt

Input of Mail Data

  • Data entered by data entry staff will be verified at 100 percent through the use of double data entry as well as custom range and logic checks incorporated into the data entry system

  • Data from returned surveys will be entered within five business days

Programming Web Surveys

  • Visual review of every question (100%)

  • Develop skip check program to check data against defined conditions specified in the Microsoft Word version of the questionnaire (100%)

  • Provide USCG with an electronic test version of the programmed survey (100%)

Panel enrollment

  • Double opt-in process for panelists providing e-mail addresses (100%)
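
The skip, range, and logic checks referenced in Exhibit 6 will be generated from the conditions defined in the questionnaire documents. The sketch below is a minimal illustration with hypothetical variable names (num_trips, trip_hours, overnight); the actual checks will follow the final questionnaire specification.

```python
# Minimal sketch of automated skip, range, and logic checks, assuming
# hypothetical variable names; the real checks would be generated from the
# conditions defined in the questionnaire document.

def check_record(rec):
    """Return a list of human-readable problems found in one survey record."""
    problems = []

    # Range check: number of trips in the reference period should be non-negative.
    if rec.get("num_trips") is not None and rec["num_trips"] < 0:
        problems.append("num_trips is negative")

    # Skip check: trip-detail items should be blank when no trips were reported.
    if rec.get("num_trips") == 0 and rec.get("trip_hours") not in (None, ""):
        problems.append("trip_hours answered although num_trips is 0")

    # Logic check: hours on the water should not exceed 24 for a non-overnight trip.
    if (rec.get("trip_hours") is not None and rec["trip_hours"] > 24
            and rec.get("overnight") == "no"):
        problems.append("trip_hours exceeds 24 for a non-overnight trip")

    return problems

# Example usage on a fabricated test record.
print(check_record({"num_trips": 0, "trip_hours": 3, "overnight": "no"}))
```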



Methods to Maximize Response Rates and Deal with Non-response

We have planned the data collection frequency to keep respondent burden at the lowest possible level. To maximize response rates, we have designed survey systems for RDD, mail, and panel data collection that seek to obtain every possible response. To address the problem of non-response, we further plan to weight the data to represent the initial samples.

Modifications to data collection frequency to reduce respondent burden

The NRBS is designed as a biannual survey to collect the data USCG needs with minimal respondent burden. Further, we have designed the survey to include a representative panel of respondents to provide much of the most detailed boat and trip information. This design serves both data quality and survey response.

Maximizing Response Rates

Procedures to promote survey response

For the RDD telephone survey, we will distribute calls across days of the week and times of day. See Section 1.2.3 for details of how the protocol will be customized by telephone type (cell vs. landline). We will complete one refusal conversion attempt for each initial telephone refusal. We have also structured the survey so that handoffs (transfers from one respondent to another) are placed close to the end of the survey.

We will offer panelists a $5 incentive for each survey they complete. This incentive will minimize panel and survey attrition and promote prompt survey response. Prompt response is important because accurate recall of hours on the water is critical to achieving accurate exposure estimates.



Dealing with Non-response

Addressing survey non-response goes beyond eliciting as many responses as we can. In the modern survey environment, some non-response is inevitable, and it is accompanied by a risk of nonresponse bias. To account for this, we will weight the final data to reflect the population of boats and the population of participants.
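The final weighting procedures will be specified during analysis planning. One common approach, sketched below with invented adjustment classes and weights, is a weighting-class nonresponse adjustment in which respondents' base weights are inflated so that each class carries the weight of its full initial sample.

```python
from collections import defaultdict

def nonresponse_adjust(records):
    """Weighting-class adjustment: records are dicts with 'cls' (adjustment
    class), 'base_weight', and 'responded' keys. Respondents' weights are
    inflated so each class carries the weight of its full initial sample."""
    total = defaultdict(float)   # sum of base weights, all sampled cases
    resp = defaultdict(float)    # sum of base weights, respondents only
    for r in records:
        total[r["cls"]] += r["base_weight"]
        if r["responded"]:
            resp[r["cls"]] += r["base_weight"]

    adjusted = []
    for r in records:
        if r["responded"]:
            factor = total[r["cls"]] / resp[r["cls"]]
            adjusted.append({**r, "weight": r["base_weight"] * factor})
    return adjusted

# Invented example: the responding power boat carries the nonresponding one.
sample = [
    {"cls": "power",  "base_weight": 1.0, "responded": True},
    {"cls": "power",  "base_weight": 1.0, "responded": False},
    {"cls": "paddle", "base_weight": 2.0, "responded": True},
]
print(nonresponse_adjust(sample))
```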

We will also conduct nonresponse bias analyses using the information that we can get from sample frames and other sources about what the sample distributions should be. One key variable on which nonresponse bias is possible is boat type (and/or registration status). We know that people to whom surveys are more personally relevant or interesting are more likely to respond (e.g., Groves & Couper, 1998). It follows that people who have invested more money and time in their boats will be more likely to respond to the surveys, so that some boat types (e.g., power boats) may be overrepresented compared to less expensive boats (e.g., kayaks, canoes, or inflatable boats).

1.21 Panel

The survey panel will provide trip information. Panelists will all be recruited in the Boat Survey at the beginning of the survey cycle, and attrition is inevitable. We have planned for attrition in the sample sizes, so it will not threaten our power to draw conclusions, but differential attrition could affect the quality of those conclusions. Each month, we will compare the responding panel to the invited panel with respect to boat type.

We will weight the obtained data to be representative of the boat population measured in the Boat Survey. These comparisons may shed light on the variables that need to be included in the weighting scheme.

After the second month, panelists will begin to be removed for nonresponse or by request. Quarterly, we will compare the remaining panel to the original panel. The purpose of this analysis will be to compare trip survey results between panelists who have and have not left the panel. We can compare:

  • Total number of trips reported in the period (people who quit may be more or less avid boaters)

  • Total number of trips selected for in-depth analysis (people who quit may have been asked to report on two trips)

  • Total number of overnight trip reports

  • Whether an overnight trip was selected for reporting (overnight trips have more questions associated with them)

  • Total number of questions asked

These comparisons may shed light on questionnaire modifications that may help improve panel retention.
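
As a simple illustration of one such quarterly comparison, the sketch below uses invented trip counts to contrast the average number of trips reported in earlier waves by panelists who remained in the panel with the average for panelists who left.

```python
from statistics import mean

# Invented trip counts from earlier waves for panelists who remained in the
# panel and for panelists who later left.
trips_stayed = [2, 0, 5, 3, 1]
trips_left = [0, 1, 0, 2]

print("mean trips, remained:", mean(trips_stayed))
print("mean trips, left:    ", mean(trips_left))
print("difference:          ", mean(trips_stayed) - mean(trips_left))
```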

1.22 Listed sample of boat owners

The mail Boat Survey will use listed samples of boat owners from registration frames. These frames should contain information about boat type, length, and propulsion. After the completion of the Boat Survey, we will evaluate the representativeness of the final sample against these frames. We will also evaluate the representativeness of the recruited survey panel against these frames.
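
As an illustration of this frame comparison, the sketch below uses invented counts to contrast the boat-type distribution on the registration frame with the distribution among completed surveys.

```python
# Invented counts: boat-type distribution on the registration frame versus
# among completed surveys.
frame_counts = {"power": 7000, "sail": 1000, "paddle": 2000}
respondent_counts = {"power": 800, "sail": 90, "paddle": 110}

frame_total = sum(frame_counts.values())
resp_total = sum(respondent_counts.values())

for boat_type in frame_counts:
    expected = frame_counts[boat_type] / frame_total
    observed = respondent_counts[boat_type] / resp_total
    print(f"{boat_type}: frame {expected:.1%}, respondents {observed:.1%}, "
          f"difference {observed - expected:+.1%}")
```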



We will also evaluate response to the mail survey as it relates to the boating, socioeconomic, and demographic environment. The environmental variables will include zip code information concerning race/ethnicity, educational status, urbanicity, tenure, and other related neighborhood descriptors. In addition, we will evaluate nonresponse based on the boating culture. Boating culture will be measured as boats per capita in the zip code.
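
The exact analytic approach will be determined during analysis. As one simple possibility, the sketch below uses invented cases and arbitrary cut points to group sampled cases into low, medium, and high boating-culture strata based on zip-code boats per capita and to compare response rates across strata.

```python
from collections import defaultdict

# Invented sampled cases with zip-level boats per capita attached from an
# external source; the stratum cut points are arbitrary for illustration.
cases = [
    {"zip": "05401", "boats_per_capita": 0.12, "responded": True},
    {"zip": "20001", "boats_per_capita": 0.01, "responded": False},
    {"zip": "33040", "boats_per_capita": 0.20, "responded": True},
    {"zip": "10001", "boats_per_capita": 0.01, "responded": False},
]

def stratum(bpc):
    """Classify a zip code into a boating-culture stratum."""
    if bpc < 0.05:
        return "low"
    if bpc < 0.15:
        return "medium"
    return "high"

counts = defaultdict(lambda: [0, 0])   # stratum -> [respondents, sampled cases]
for c in cases:
    s = stratum(c["boats_per_capita"])
    counts[s][0] += int(c["responded"])
    counts[s][1] += 1

for s, (r, n) in sorted(counts.items()):
    print(f"{s}: response rate {r / n:.0%} ({r}/{n})")
```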

This analysis may affect how we weight the boat data. It may also affect how we draw panel samples each month during the year.

1.23 RDD Samples

Some boat surveys and all participant surveys will be completed with landline and cell phone RDD samples. For the RDD boat surveys, we do not have control totals for boat types; we do not know what the proportions of power vs. nonpower boats should be in the final samples. However, we can compare the responses of early responders to those of late responders in the 10-attempt protocol to determine whether there is evidence that owners of certain types of boats are more difficult to reach or to convince to participate. This analysis might lead to protocol modifications or additional interviewer training in establishing rapport and overcoming refusals from particular types of respondents.
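
As a minimal illustration of the early versus late responder comparison, the sketch below uses invented completed interviews and an assumed cut point of three call attempts to tabulate the boat-type mix among completes obtained early versus late in the 10-attempt protocol.

```python
from collections import Counter

# Invented completed interviews with the call attempt on which each was
# obtained; 'early' is defined here, arbitrarily, as the first three attempts.
completes = [
    {"boat_type": "power", "attempt": 1},
    {"boat_type": "power", "attempt": 2},
    {"boat_type": "kayak", "attempt": 7},
    {"boat_type": "power", "attempt": 9},
]

early = Counter(c["boat_type"] for c in completes if c["attempt"] <= 3)
late = Counter(c["boat_type"] for c in completes if c["attempt"] > 3)
print("early responders:", dict(early))
print("late responders: ", dict(late))
```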

For the RDD participant surveys, the population is the population of people in the United States rather than the population of boats, so control totals for key demographics—gender and age—are available from Census data. We will conduct a nonresponse analysis comparing the sample to the population by gender, age, and region. This analysis will inform the weighting approach.
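
One standard way to carry out this weighting is raking (iterative proportional fitting) to the control totals. The sketch below is a simplified illustration with invented control totals and respondent records.

```python
# Invented control totals (percent of adults) and respondent records.
control_gender = {"female": 51.0, "male": 49.0}
control_age = {"18-44": 45.0, "45+": 55.0}

respondents = [
    {"gender": "female", "age": "18-44", "weight": 1.0},
    {"gender": "female", "age": "45+",   "weight": 1.0},
    {"gender": "male",   "age": "18-44", "weight": 1.0},
    {"gender": "male",   "age": "45+",   "weight": 1.0},
    {"gender": "male",   "age": "45+",   "weight": 1.0},
]

def rake(records, margins, iterations=20):
    """Iteratively scale weights so that the weighted totals match each set of
    control totals in turn; margins maps a variable to its control totals."""
    for _ in range(iterations):
        for var, targets in margins.items():
            current = {cat: 0.0 for cat in targets}
            for r in records:
                current[r[var]] += r["weight"]
            for r in records:
                r["weight"] *= targets[r[var]] / current[r[var]]
    return records

for r in rake(respondents, {"gender": control_gender, "age": control_age}):
    print(r)
```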

Similar to the boat sample, we will evaluate response to the RDD samples as it relates to the boating, socioeconomic, and demographic environment. The environmental variables will include zip code information concerning race/ethnicity, educational status, urbanicity, tenure, and other related neighborhood descriptors. In addition, we will evaluate nonresponse based on the boating culture, measured as boats per capita in the zip code. This analysis will be limited to landline samples since cell phone numbers cannot accurately be associated with geographic areas below the county level.

Tests of Procedures or Methods to be Undertaken

Tests of cell phone incentives

Incentives are widely used in two of the survey types undertaken with the NRBS: (A) cell phone interviews and (B) panel surveys. Proposed incentives for both are $5 per completed interview. However, in a recent experiment, ICF Macro eliminated a cell phone incentive on a population-based RDD survey and found no impact on response rates. These results suggest that cell phone incentives may not be necessary: they are quite expensive, and they may not promote response.





Individuals Consulted on Statistical Aspects and Individuals Collecting or Analyzing the Data

Statistical Review

Statistical aspects of the study have been reviewed by the individuals listed below.

Philippe Gwet, Ph.D.

U.S. Department of Homeland Security

United States Coast Guard

2100 Second St. SW

Washington, DC 20593

[email protected]

(202) 372 – 1102


Randal ZuWallack, M.S.

ICF Macro

126 College Street

Burlington, VT 05401

[email protected]

(802) 863 – 9600

Agency Responsibility

Within the agency, the following individual will be responsible for receiving and approving contract deliverables and will have primary responsibility for data analysis:

Philippe Gwet, Ph.D.

U.S. Department of Homeland Security

United States Coast Guard

2100 Second St. SW

Washington, DC 20593

[email protected]

(202) 372 – 1102

Responsibility for Data Collection

The representative of the contractor responsible for conducting the planned collection is:

Heather Driscoll, M.S.

ICF International

126 College St.

Burlington, VT 05401





References

Cahoon, L., Riker, C., & Moore, T. F. (1993). Recall Bias in the National Survey of Fishing, Hunting, and Wildlife Associated Recreation. Retrieved August 18, 2009, from Proceedings of the American Statistical Association: http://www.amstat.org/sections/SRMS/Proceedings/papers/1993_083.pdf

Griffin, D. H., Fischer, D. P., & Morgan, M. (2001, May 17-20). Testing an Internet response option for the American Community Survey. Paper presented at the annual conference of the American Association for Public Opinion Research.

Groves, R. M., & Couper, M. P. (1998). Nonresponse in Household Interview Surveys. New York: Wiley.

Groves, R. (2004). Survey Errors and Survey Costs. Wiley-Interscience.

Tarrant, M. A., & Manfredo, M. J. (1993). Digit preference, recall bias, and nonresponse bias in self-reports of angling participation. Leisure Sciences, 231-238.

United States Coast Guard. (2008). Reference Guide to State Boating Laws. Retrieved October 5, 2009, from Boating Safety Resource Center: http://www.uscgboating.org/state_boating_laws.aspx



1 Note that rented boats are in the universe of interest for analytic goals 1, 3, and 5. USCG desires to know about events that occur on these boats but does not require information about number, expenditures, or ‘statistics’ such as size for these boats.

2 Based on internal boat ownership data.

3 Based on internal USCG information.

4 ACS 3-year estimates.

5 This assumption is based on internal survey data on boat ownership.

6 This assumption is based on internal survey data on boating participation.


