
Cognitive Testing and Pilot Testing for the National Center for Chronic Disease Prevention and Health Promotion

Multimode Pilot Final Report

OMB: 0920-1291


Submitted to:
Centers for Disease Control and Prevention



Submitted by:
Robynne Locke, ICF

Matthew D. McDonough, ICF

Kelly Diecker, ICF



August 15, 2018

Attachment 7:

CDC Behavioral Risk Factor Surveillance System

New York Multimode Pilot:

Final Report





Table of Contents

Executive Summary

I. Introduction
  1. Purpose of Experiment
  2. Supplemental Drop Point Experiment
  3. Report Organization

II. Pilot Instruments
  1. Instrument Development
    1.1 Creating Equivalent Stimuli in Multi-mode Instruments
  2. Preparation for Data Collection

III. Sampling
  1. Sample Frame
  2. Stratification

IV. Pilot Experimental Design
  1. Experiment 1: RDD Sample
  2. Experiment 2: ABS Sample

V. Data Collection Process
  1. RDD Experiment Mailing Protocol
  2. ABS Experiment
  3. Drop Point Experiment
  4. Addressing Respondents
  5. Return Address
  6. IVR Help Line

VI. Data Management
  1. Mail Questionnaire Scanning
  2. Data Processing
  3. Weighting

VII. Analysis
  1. Response Rates
    1.1 RDD Response Rates
    1.2 ABS Response Rates
  2. Accuracy of Address Matching
    2.1 Accuracy of RDD sample
    2.2 Accuracy of ABS
  3. Number of Web Connections
  4. Times of day web-based questionnaires are completed
  5. Number of CATI Attempts
  6. Data Quality
    6.1 Differences in Item Refusal
    6.2 Navigation Errors (Skip Inconsistencies)
    6.3 Differences in Demographic Comparisons
      6.3.1 Demographic Differences by Mode (includes RDD and ABS)
      6.3.2 Demographic Differences by Frame (includes web, mail, and CATI modes)
    6.4 Differences in Health Outcomes
      6.4.1 Differences by Survey Mode (includes RDD and ABS)
      6.4.2 Differences by Survey Frames (includes web, mail, and CATI modes)
  7. Drop Point Experiment

VIII. Discussion

Appendix A: Citations

Appendix B. Mail Instrument




Executive Summary

The BRFSS has been a random digit dialing (RDD) telephone survey since its inception. Over the past decade, however, response rates to telephone surveys have dropped substantially, leading to cost increases and questions about sample representativeness. These concerns have been compounded by the increased cost of conducting surveys on cell phones, which are needed to ensure high population coverage. As the landscape changes (technologically, financially, and methodologically), BRFSS must continue to innovate to remain a trusted source of behavioral health information.

The objective of the BRFSS Multimode Pilot study was to assess the cost effectiveness and data quality from sequential modes of data collection based on telephone and address-based sampling methodologies, in order to inform potential alternative approaches to BRFSS data collection. The pilot study was conducted in the state of New York using the 2017 BRFSS Core instrument.

In the first experiment, the ICF research team explored whether a random digit dialing (RDD) sample (N=2,813) could be used to gather data through multiple modes. To test the performance of an address-matched RDD sample, the ICF research team administered two sequential, self-administered survey modes: web and mail. In total, ICF received 57 web completes and 164 mail completes.

In the second experiment, the ICF research team explored the possibility of using an alternate sampling approach, an address-based sample (ABS) (N=6,097), to collect BRFSS data. To test the performance of an ABS, the ICF research team again administered two sequential, self-administered survey modes: web and mail. Furthermore, ABS non-respondents with matched cell phone or landline numbers received CATI follow-up. In total, ICF received 173 web completes, 525 mail completes, and 184 CATI completes.

Response Rates: ICF examined the response rates for each phase of the study, and for each sample frame. For both sample frames, the web + mail phase of the protocol had a higher response rate than the web-only phase.

Questionnaire Data Quality: ICF also examined the quality of the data collected by examining navigation errors and refusals, and by comparing response distributions for selected key variables by frame and mode. Response distributions for most demographic questions were similar between the two sample frames, with some exceptions (marital status, education, and income). There were larger differences when comparing demographic questions by mode. Similarly, response distributions for key health outcome and behavior questions were similar between the two frames, but the differences were larger when comparing the three modes. For all but two variables (asthma and e-cigarette use), web respondents reported more positive health outcomes than CATI or mail respondents. One emerging conclusion from looking at the effects of sample frame and mode separately is that mode is probably a more important factor than frame. This finding is consistent with the findings of a similar BRFSS mode and frame experiment conducted by ICF. In addition, the mail mode had a high level of item missingness, whether from refusals or navigation errors.

Cost Considerations: ICF considered the costs associated with conducting this pilot study, as well as a calculated cost per complete that is comparable to the current annual BRFSS computer-assisted telephone interview (CATI) cost per complete for data collection from RDD NY cell phones and landlines. Both the web and mail modes are projected to cost less than CATI data collection (when averaging landline and cell costs). This is consistent with the findings of similar BRFSS pilots conducted by ICF. When comparing costs per complete by sample frame, ABS web completes were more expensive than RDD web completes. However, the reverse was true for the mail mode, where ABS completes were less expensive than RDD completes.

  I. Introduction

    1. Purpose of Experiment

The Behavioral Risk Factor Surveillance System (BRFSS) is the nation's premier system of health-related telephone surveys that collect state data about U.S. residents regarding their health-related risk behaviors, chronic health conditions, and use of preventive services. Established in 1984 with 15 states, BRFSS now collects data in all 50 states, as well as the District of Columbia and participating U.S. territories. BRFSS completes more than 400,000 adult interviews each year, making it the largest continuously conducted health survey system in the world.

The BRFSS has traditionally relied on Random Digit Dial (RDD) telephone sample for data collection. However, recent advancements in telephone sampling have improved the match of cell phone numbers to physical addresses. Concurrently, there have been improvements in matching phone numbers to addresses in address-based samples (ABS). Whether it is more efficient to maintain a telephone sample that has been matched to an address, or to use an address-based sample that has been matched to landline and cell phones, had not previously been researched.

In addition to questions about the efficiency of phone versus address-based samples, there is ongoing research on the use of sequential modes of response (by telephone interview, mailed questionnaire, and/or web survey) to collect health information. Current changes in telecommunications (such as call screening, caller ID, and call blocking) have made telephone interviewing an increasingly expensive mode of data collection. Other methods which are less expensive (such as web surveys) result in lower response rates, thereby potentially introducing nonresponse bias. Studies have shown that offering respondents different modes in sequence (rather than concurrently) results in higher response rates. However, again, the existing research on the costs and effectiveness of sequential modes using different samples is sparse.

In order to address these questions, CDC and ICF designed a BRFSS Pilot study—consisting of two experiments—to assess the cost effectiveness and data quality from sequential modes of data collection based on telephone and address-based sampling methodologies. The pilot study was conducted in the state of New York using the 2017 BRFSS Core questionnaire.

In the first experiment, the ICF research team explored whether a random digit dialing (RDD) sample, the kind of sample typically used in BRFSS data collection, could be used to gather data through multiple modes. Whereas an RDD sample had previously been able to facilitate only telephone data collection, recent developments in the ability to match telephone numbers to corresponding addresses have opened up RDD samples to multiple-mode survey administration. To test the performance of an address-matched RDD sample, the ICF research team administered two sequential, self-administered survey modes: web and mail. The benefits of web and mail data collection over interviewer-administered modes, like the telephone interviews used in the standard BRFSS cell phone and landline protocol, are generally known to include more accurate reporting of sensitive behaviors and characteristics due to additional privacy, as well as potential cost savings.

In the second experiment, the ICF research team explored the possibility of using an alternate sampling approach, an address-based sample (ABS), to collect BRFSS data. The geographic nature of the sampling frame allows accurate local-area targeting that is impossible with RDD frames. Additional geographic and demographic descriptors from external data sources can easily be appended to make sampling more efficient as well. After data have been collected, any data with a geographic link (e.g., tobacco and alcohol sales, pollution sensors, traffic, access to parks, food deserts) can be appended to conduct health surveillance analyses that were difficult or impossible to conduct in the past. To test the performance of an ABS, the ICF research team again administered two sequential, self-administered survey modes: web and mail. Furthermore, just as an RDD sample can be matched to addresses, an ABS can be matched to phone numbers to allow some CATI data collection. Consequently, ABS non-respondents with matched cell phone or landline numbers received CATI follow-up.

    2. Supplemental Drop Point Experiment

A third experiment conducted by ICF research staff took place within the ABS experiment. In this experiment, ICF set aside from the ABS a number of drop point addresses: addresses for which no specific unit or apartment numbers are listed for a particular building, leaving the residents themselves responsible for identifying mail addressed to individuals within the building. Such drop point addresses have historically had low response rates. ICF sought to increase the response rate at drop point addresses by researching building unit numbers in advance of the mailing.

    3. Report Organization

The report of ICF’s pilot test and its findings proceeds in the following manner. Section II discusses the questionnaires used, how they were adapted from the NY BRFSS computer-assisted telephone interviewing (CATI) questionnaire, and how the logistical components of the pilot were prepared. Section III discusses the sampling approaches in detail. Section IV explains the design of each experiment. Section V includes the technical details of data collection (including costs per complete). Section VI documents data management processes and procedures. Section VII contains statistical results of the pilot, including response rates and data quality measures (such as differences in item refusal, navigation errors, and differences in demographic and health outcomes by survey mode and frame). Finally, Section VIII provides a summary discussion of key pilot findings.

  II. Pilot Instruments

    1. Instrument Development

ICF converted the 2017 Core BRFSS telephone interview instrument into self-administered web and mail instruments. This entailed significant attention to question wording, instructions, response options, and other issues to ensure a stimulus (i.e., question meaning) identical to the telephone interview. Specifically for the web-based questionnaires, screen layout, programming, and testing on multiple devices and screen sizes were important steps in the process.

      1.1 Creating Equivalent Stimuli in Multi-mode Instruments

In order to keep question meaning as equivalent as possible in self-administered modes, ICF addressed the following issues:

Phrasing and pronouns – Interviewer-administered response options are often presented in the second person singular (e.g., “you”), while self-administered surveys often use the first person singular (e.g., “I”).

Interviewer instructions – In interviewer-administered surveys, instructions are often included to help respondents answer correctly. These instructions are not read to all respondents, but only to those who struggle to provide a response. Instructions considered essential for respondents to answer accurately were carried forward into the self-administered questions.

Explicit and implicit “don’t know” and “refuse” responses – In some interviewer-administered questions, respondents are explicitly presented with the option “don’t know,” or “don’t know” is embedded in the question stem. In others, “don’t know” is not read to the respondent, but it is an option that the interviewer can record. Where “don’t know” is explicitly read to the respondent, the option must be explicitly displayed for respondents in other modes. In other cases, care should be taken as to whether to include “don’t know” for self-administered-mode respondents.

Soft and hard edit checks – Edit checks (i.e., ways of identifying potentially inappropriate responses to questions) can be programmed into questionnaires with digital administration (online and CATI). However, data cannot be checked in real time when respondents complete the survey on paper. In these cases, it is possible to build in “don’t know” captures to minimize the instances of guessing.

Skip patterns, question fills, and question numbering – Computerized questionnaires can contain programmed skip patterns based on responses to previous questions. Parts of questions can also be populated based on previous responses (for example, the number of drinks considered binge drinking can be filled based on the respondent’s sex); a sketch of this kind of logic follows this list. This cannot be done in mail surveys, but design elements can make navigating through the instrument simple. This includes numbering the questions in a clear and meaningful way, and placing navigation instructions in-line with response options to ensure they are seen.

Open-ended questions – In telephone surveys, some questions can be “field coded”: rather than reading response options to the respondent, the interviewer asks the question, listens to the response, and then identifies the best response from a long list of options. When these questions are asked in web format, a pre-fill option can be used that allows respondents either to use a drop-down box or a look-up feature (whereby the respondent begins typing the answer as if it is open-ended, but the response fills based on the first few letters). For mail, decisions must be made whether to include the field-coded options, taking up space on an already long instrument, or to ask the questions as open-ended, which may lead to responses that do not match the field-coding categories.
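To illustrate the skip and fill logic described in the list above, the sketch below shows how a programmed instrument routes past a question and fills its text from earlier answers, using the binge-drinking item as an example. This is a minimal sketch in Python with hypothetical field names, not the actual ICF instrument code; on paper, the same routing must instead be carried by question numbering and in-line navigation instructions.

```python
# Minimal sketch of a programmed skip pattern and question fill
# (hypothetical field names; not the actual ICF instrument code).

def binge_drinking_question(sex: str, drank_past_30_days: bool) -> str | None:
    """Return the question text, or None when the skip pattern bypasses it."""
    if not drank_past_30_days:
        return None  # skip pattern: non-drinkers never see this question
    threshold = 5 if sex == "male" else 4  # question fill based on sex
    return ("Considering all types of alcoholic beverages, how many times "
            f"during the past 30 days did you have {threshold} or more "
            "drinks on an occasion?")

print(binge_drinking_question("female", drank_past_30_days=True))
```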

In addition to the wording and presentation of the instrument, there are programming challenges that need to be considered when moving a survey instrument from CATI-only to mixed mode implementation. Primary among these is making sure that the web questionnaire can be accessed on any web-enabled device anywhere.

Mobile first design – Current web survey best practice is to design web-administered surveys for the smallest possible screen on which they may be completed (for the most part, this is smartphones) (Barlas, Thomas, and Graham 2015). This design strategy includes avoiding placing questions in grids in any mode wherever possible (Revilla, Toninelli, and Ochoa 2015). Other design choices, such as placing the ‘next’ button on the right and the ‘previous’ button on the left (Bergstrom et al. 2016) for web surveys, and correspondingly adding page numbers on the bottom of paper surveys, train respondents in how to navigate through the instrument to lessen the burden of response (Couper et al. 2011). Overall, all attempts were made to keep the stimulus for respondents as similar as possible across modes, including layout and color schemes and other formatting considerations.

    2. Preparation for Data Collection

Once the instrument was adapted for web and paper administration, the questionnaire was programmed into ICF’s printing and web survey platforms. This included using the same fonts, images, and color scheme where possible. The paper survey was designed using the Dillman Tailored Design Method where appropriate (see Dillman et al. 2014) and closely mirrored the mobile-first design of the web survey. The web instrument was programmed into ICF’s survey deployment platform for an integrated approach to dissemination and data collection. The paper instrument underwent independent review from multiple survey research staff at ICF, while the online survey was reviewed by staff specialized in testing web and phone surveys.

  III. Sampling

    1. Sample Frame

The sample frame for Experiment 1 (RDD sample, with mailing addresses matched to the phone numbers drawn) consisted of 51,549,300 telephone numbers in New York State.

The sample frame for Experiment 2 (address-based sample (ABS), with phone numbers matched to the addresses drawn, where available) consisted of 7,449,434 USPS residential-only or primarily residential addresses in NY State.

    2. Stratification

The RDD experiment was stratified by landline and cell. The sample within each stratum was selected using Marketing Systems Group’s (MSG) Virtual Genesys sample generation system. For the landline stratum, Virtual Genesys’s “Remove Known Businesses” option was used to purge records with phone numbers associated with businesses.

After the sample was selected, it was sent to MSG for address matching. Of the 12,542 records drawn, 2,813 (or 22.4%) were matched to NY-based addresses.

Table 1 RDD Sample Stratification

| Strata | Frame (RDD Max Sample Yield) | Sample Drawn | # Matched to NY Addresses (included in pilot) | # Matched to non-NY Addresses (excluded from pilot) | NY Address Match Rate |
|---|---|---|---|---|---|
| Landline | 19,879,300 | 6,092 (6,450 records selected, with 358 "Known Business" records removed during the sample draw) | 1,784 | 12 | 29.3% |
| Cell | 31,670,000 | 6,450 | 1,029 | 148 | 16.0% |
| Total | 51,549,300 | 12,542 | 2,813 | 160 | 22.4% |



The ABS experiment consisted of only one stratum (NY State). As with the RDD pilot, the ABS was selected using Marketing Systems Group’s (MSG) Virtual Genesys sample generation system. The sample included drop point addresses and P.O. boxes designated as Only Way to Get Mail (OWGM).

After the sample was selected, it was sent to MSG for telephone matching. Of the 8,700 addresses drawn, 6,097 (or 70.1%) were matched to telephone numbers.


Table 2 ABS Stratification

| Region | Stratum | Frame (Total USPS Residential Addresses) | Sample Drawn | # Matched to Phone | Phone Match Rate | # Matched to Landline | # Matched to Cell |
|---|---|---|---|---|---|---|---|
| NY State | Total | 7,449,434 | 8,700 | 6,097 | 70.1% | 3,369 | 2,728 |
| NY State | Non-Drop Point | 7,128,609 | 8,341 | 5,772 | 69.2% | 3,258 | 2,514 |
| NY State | Drop Point | 320,825 | 359 | 325 | 90.5% | 111 | 214 |



After the phone matching was completed, the sample was split between the non-drop point addresses and the drop point addresses (where multiple housing units share a mail receptacle).

Additional processing was completed for the drop point sample sub-experiment. First, drop point records were split into two experimental groups. For Group 1, research (e.g., internet searching of realty websites) was conducted to determine the naming conventions associated with the units at a given address (such as Unit 1, Apt. B, Suite 3, etc.). After these naming conventions were determined, each record was copied to create new records, each containing the original drop point street address plus a unique secondary (unit) address.

For Group 2, a record’s “drop count” variable (i.e., the number of units associated with a given drop point address) was used to determine the number of copies to make of that record. The table below provides an example of the output from each experimental condition.
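The sketch below illustrates the Group 2 inflation step just described: each drop point record is copied once per unit in its drop count. This is a minimal sketch with a hypothetical record layout (the actual sample files differ); Table 3 below shows the corresponding output from both conditions.

```python
# Minimal sketch of the Group 2 "automatic inflation" step: copy each
# drop point record once per unit in its drop count (record layout is
# hypothetical; the actual sample files differ).

def inflate_drop_points(records: list[dict]) -> list[dict]:
    inflated = []
    for rec in records:
        for _ in range(rec["drop_count"]):
            # identical street address, with no unit number, as in Table 3
            inflated.append({"address": rec["address"]})
    return inflated

sample = [{"address": "123 Main Street, New York City, New York", "drop_count": 3}]
print(len(inflate_drop_points(sample)))  # 3 mailing records
```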

Table 3 Drop Point Naming Conventions

| Experimental Group | Original Drop Point Address | Updated Addresses Included in Final Mailing |
|---|---|---|
| Group 1 | 123 Main Street, New York City, New York | 123 Main Street, Apt 4A, New York City, New York |
| | | 123 Main Street, Apt 4B, New York City, New York |
| | | 123 Main Street, Apt 4C, New York City, New York |
| Group 2 | 123 Main Street, New York City, New York (drop count = 3 units) | 123 Main Street, New York City, New York |
| | | 123 Main Street, New York City, New York |
| | | 123 Main Street, New York City, New York |



The table below provides the number of records from each group that were included in the final experiment. In the cases where a drop point record was matched to a telephone number, only the first record in the set of related addresses was associated with the listed number.

Table 4 Drop Point Sample

| Drop Point Experiment Group | # Records in Original Sample | # Records in Final Sample | # Records with Phone Match |
|---|---|---|---|
| Group 1 (Manual Research of Unit Naming Convention) | 180 | 427 | 167 |
| Group 2 (Automatic Inflation of Records per Drop Count) | 179 | 406 | 158 |
| Total | 359 | 833 | 325 |



  IV. Pilot Experimental Design

    1. Experiment 1: RDD Sample

The RDD experiment began with a standard BRFSS RDD sample including cell phones and landlines. Following an address matching effort, the survey was deployed through a web phase and mail phase. The goal was to achieve at least 200 completes from each mode. RDD CATI data collection could not be conducted within this pilot due to budgetary restrictions.

    2. Experiment 2: ABS Sample

The ABS experiment started with an ABS, which was then split into two subsamples: one of addresses that were matched to a phone number and one of addresses that were not. Following this matching effort, the survey was deployed to both subsamples through a web phase and mail phase. The goal was to achieve at least 200 completes from each mode in each subsample. Additionally, the phone-matched sample underwent a CATI phase with the goal of achieving an additional 200 completes.

The Drop Point Experiment followed the same protocol as the other ABS experiment.

Figure 1 Pilot Experiment Design

  V. Data Collection Process

Data collection began February 2, 2018, and ended May 29, 2018. Data were collected via web, mail, and CATI; these procedures are described in detail below. All mailings originated from the ICF survey operations center in Martinsville, VA, and all completed mail surveys were returned there. All phone calls also originated from the Martinsville survey operations center.

The following sections outline the protocols for each experiment and mode. Costs per complete are provided for each mode of data collection in the summary table following each section. Costs per complete were calculated by totaling the production costs for each data collection event and dividing the total by the number of completes. For data collection by web, production costs include printing letters, postage, and processing of digital data. For data collection by mail, production costs include printing surveys, postage for surveys sent out and returned by participants, scanning of returned surveys, and processing of scanned data. For both web and mail data collection, costs per complete were calculated using the actual number of items printed, mailed, and received during the pilot. For data collection by phone, production costs include interviewing and data processing efforts. Whereas the web and mail data collection was done specifically for the pilot, ICF is continuously involved in BRFSS CATI data collection and as such has established costs based on whether calls are being made to landlines or cell phones. Costs per complete do not include programming or maintenance costs that might be included in a longer-term effort. Costs per complete also do not include ICF project management staff time, as such costs can be highly variable based on the hours needed and the differing rates of the staff members involved. Excluding such factors from all modes allows for a more straightforward calculation and presentation of the costs per complete.
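As a concrete check on the calculation just described, the sketch below reproduces one reported figure from the production-cost definition. The dollar total shown is back-calculated for illustration only; it is not a figure from the pilot's accounting.

```python
# Cost per complete = total production costs / number of completes.
# The dollar total below is back-calculated for illustration only.

def cost_per_complete(production_costs: float, completes: int) -> float:
    return production_costs / completes

# The RDD web phase yielded 57 completes (Table 5); a production total
# of $3,195.42 would reproduce the reported $56.06 per complete.
print(round(cost_per_complete(3195.42, 57), 2))  # 56.06
```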

Further analysis of costs per complete by sample and mode is provided in Section VIII.

    1. RDD Experiment Mailing Protocol

On February 2, all participants were mailed a one-page invitation letter. This letter included information about the study, instructions for accessing the survey online, and contact information for questions or concerns. On March 1, survey non-respondents were mailed a packet which included a one-page cover letter that outlined the purpose of the research, a copy of the printed survey instrument, and a business reply envelope (BRE) to return the completed instrument.

Respondents who completed the survey by web were removed from the second mailing. In addition, respondent mailings that were returned as “undeliverable” were removed from the second mailing. However, a few respondents could not be removed, as completed or undeliverable surveys were received after the mail room had started processing the next mailing.

The contents, dates, population, and quantities of all mailings are further described in Table 5.



Table 5 Experiment 1 (RDD) Data Collection Process and Cost per Complete

| Mailing | Mailing Description | Mailing Contents | Mail Drop Date | Population | Quantity Mailed | Completes | Cost per Complete |
|---|---|---|---|---|---|---|---|
| 1 | Mailed letter offering response by web | One-page letter offering response by web only | 2/2/18 | All sampled addresses | 2,813 | 57 | $56.06 |
| 2 | Mailed package offering response by mail | One-page cover letter offering response by web or mail only; eight-page printed survey; postage-paid business reply envelope (BRE) | 3/1/18 | All sampled addresses, excluding 35 completes, 1 refusal, and 17 undeliverables | 2,722 | 164 | $51.67 |



    2. ABS Experiment

On February 2, all participants were mailed a one-page invitation letter. This letter included information about the study, instructions for accessing the survey online, and contact information for questions or concerns. On March 1, survey non-respondents were mailed a packet which included a one-page cover letter that outlined the purpose of the research, a copy of the printed survey instrument, and a business reply envelope (BRE) to return the completed instrument. Additionally, on April 5, CATI data collection began for web and mail non-respondents.

Respondents who completed the survey were removed from the subsequent data collection, either mailing or calling. In addition, respondent mailings that were returned as “undeliverable” were removed from the second mailing but not the CATI follow-up as the matched phone number may still have been valid. However, a few respondents could not be removed, as completed or undeliverable surveys were received after the mail room or call center had started processing the next data collection effort.

The contents, dates, population, and quantities of all data collections are further described in Table 6. Completes and cost per complete in this table include the Drop Point sample, as this was a sub-sample of the full ABS experiment.

Table 6 Experiment 2 (ABS) Data Collection Process and Cost per Complete

| Contact | Contact Description | Contents | Date (Mail Drop/CATI) | Population | Quantity | Completes | Cost per Complete |
|---|---|---|---|---|---|---|---|
| 1 | Mailed letter offering response by web | One-page letter offering response by web only | 2/2/18 | All sampled addresses | 9,224 | Overall: 173; Unlisted: 48; Listed: 125 | Overall: $63.81; Unlisted: $59.01; Listed: $64.26; Listed (w/o Drop Point): $53.31 |
| 2 | Mailed package offering response by mail | One-page cover letter offering response by web or mail only; eight-page printed survey; postage-paid business reply envelope (BRE) | 3/1/18 | All sampled addresses, excluding 107 completes, 6 refusals, and 262 undeliverables | 8,799 | Overall: 525; Unlisted: 87; Listed: 438 | Overall: $46.45; Unlisted: $78.07; Listed: $40.17; Listed (w/o Drop Point): $34.73 |
| 3 | CATI | CATI survey in English and Spanish | CATI began 4/5/18 | All sampled addresses, excluding 525 completes and 38 refusals | 5,270 (2,945 landline; 2,325 cell) | Overall: 184; Landline: 80; Cell: 104 | Landline: $52.57; Cell: $92.06 |



    3. Drop Point Experiment

The Drop Point experiment started several weeks after the other experiments to allow time to research drop point addresses. On February 28, all records in the drop point sample were mailed a one-page invitation letter. This letter included information about the study, instructions for accessing the survey online, and contact information for questions or concerns. On March 15, all participants were mailed a one-page reminder letter. This letter again included information about the study, instructions for accessing the survey online, and contact information for questions or concerns. On March 30, survey non-respondents were mailed a packet which included a one-page cover letter that outlined the purpose of the research, a copy of the printed survey instrument, and a business reply envelope (BRE) to return the completed instrument. Additionally, on May 4, CATI data collection began for web and mail non-respondents with a matched telephone number.

Respondents who completed the survey were removed from the subsequent data collection, either mailing or calling. In addition, respondent mailings that were returned as “undeliverable” were removed from the second mailing but not the CATI follow-up as the matched phone number may still have been valid. However, a few respondents could not be removed, as completed or undeliverable surveys were received after the mail room or call center had started processing the next data collection effort.

The contents, dates, population, and quantities of all data collections are further described in Table 7.

Table 7 Drop Point Sub-Experiment Data Collection Process

| Contact | Contact Description | Contents | Date | Population | Quantity | Completes |
|---|---|---|---|---|---|---|
| 1 | Initial mailed letter offering response by web | One-page letter offering response by web only | 2/28/18 | All sampled addresses | 833 | 3 |
| 2 | Mailed reminder letter offering response by web | One-page letter offering response by web only | 3/15/18 | All sampled addresses, excluding 3 completes | 830 | 10 |
| 3 | Mailed package offering response by mail | One-page cover letter offering response by web or mail only; eight-page printed survey; postage-paid business reply envelope (BRE) | 3/30/18 | All sampled addresses, excluding 5 completes, 0 refusals, and 43 undeliverables | 785 | 22 (+4 respondents who also completed by web after the mailing sample was drawn) |
| 4 | CATI | CATI survey in English and Spanish | CATI began 5/4/18 | All sampled addresses with a listed phone number (n = 325); among these, ICF excluded 3 web completes, 8 paper completes, and 0 refusals | 314 | 6 (4 completes + 2 partial completes) |


    4. Addressing Respondents

As names were not provided with the addresses for the vast majority of records, mailings were addressed to “Current Resident.”

    5. Return Address

Completed surveys and returned mailings were delivered to the ICF survey operations center in Martinsville, VA.

    6. IVR Help Line

A dedicated toll-free phone number was created for respondents who needed assistance with the survey. The help line was monitored by ICF Monday through Friday from 9 a.m. until 5 p.m. Eastern Time and was available to respondents throughout the entire fielding period. The IVR helpline received a total of 9 calls. Of these calls, six individuals asked that a paper survey be mailed to them (mostly because they did not have computers), one individual asked about content in the survey, one called to inform ICF they had received the reminder but had already completed the survey, and one asked to be removed from the mailing list.

  VI. Data Management

    1. Mail Questionnaire Scanning

All returned mail was reviewed and manually entered into a database with a tracking code and receipt date. Completed surveys were reviewed and prepared for scanning. The prepared surveys were then scanned. ICF scanners collected optical marks (bubbles and checkboxes), barcodes (used for quality assurance procedures), and handwriting. In-process exception correction allowed data entry staff to view problems and make corrections on-screen, alleviating the need to review the paper forms. Change logs tracked all changes to data files for security. The scanner captured all mail responses, regardless of whether they were appropriately marked.

    2. Data Processing

2.1 Mail Survey Data Cleaning

After scanning, SAS programs were used to identify mail surveys with consistency and range checks; those cases in error were reviewed manually and standard edits were applied. The review and manual cleaning of the mail surveys took approximately 8 hours. The cleaned mail responses conformed to the skip patterns and ranges in the web survey, allowing the mail data to be merged with the data collected from the web instrument.
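The checks themselves were written in SAS; as an illustration of their logic only, the sketch below shows a range check and a consistency check of the kind described, in Python with hypothetical field names.

```python
# Sketch of the range and consistency checks applied to scanned mail data
# (the pilot used SAS; field names here are hypothetical).

def flag_errors(record: dict) -> list[str]:
    errors = []
    # Range check: days of poor physical health must fall within 0-30.
    if not 0 <= record.get("phys_health_days", 0) <= 30:
        errors.append("phys_health_days out of range")
    # Consistency check: current smoking conflicts with never having smoked.
    if (record.get("smoked_100_cigs") == "no"
            and record.get("smoke_now") in ("every day", "some days")):
        errors.append("smoke_now inconsistent with smoked_100_cigs")
    return errors

print(flag_errors({"phys_health_days": 45,
                   "smoked_100_cigs": "no", "smoke_now": "some days"}))
```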

2.2 Combining Data from Different Modes

ICF initially collected NY BRFSS Pilot data in three separate raw datasets, one for each mode (web, mail, and CATI). Although all three datasets were based on the same survey, the variables in the web, CATI, and mail raw datasets had different names, and some also had different types, structures, and coding schemes.

To facilitate the analysis of web, CATI, and mail records, ICF processed and combined the raw datasets so that all variables had the same names, types, structures, and coding schemes. For example, the web, CATI, and mail surveys had different ways of asking about the counties where respondents live. The web and CATI variables (s8q9) were categorical variables with many options, while the mail variable (q47) was an open-ended question. During data processing, the mail variable was renamed to match the web/CATI variable, the correct county code for each open-ended mail response was added, and other-specify fields were populated for any mail responses that could not be categorized. The web and CATI variables also had different codes for nonresponses, so the web codes were recoded to match CATI. In this way, variables from other modes were transformed to match the CATI variables. As this example suggests, the variables in the combined dataset tended to resemble their web and CATI versions more than their mail versions, both because the web and CATI names were more descriptive and because the web and CATI data were more structured, with cleaner categories and fewer open-ended questions. Along with the final combined file of web, CATI, and mail data, ICF developed a codebook listing the variable names, labels, and topline frequencies.
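A minimal sketch of the county harmonization example above: the mail variable is renamed to the web/CATI name and the open-ended text is mapped to a county code. The code values and the fallback handling are illustrative assumptions, not the actual processing code.

```python
# Sketch of harmonizing the mail county question (q47, open-ended) to the
# web/CATI variable (s8q9, coded); code values here are illustrative.

def harmonize_county(mail_record: dict, county_codes: dict[str, int]) -> dict:
    text = mail_record.pop("q47", "").strip().title()
    if text in county_codes:
        mail_record["s8q9"] = county_codes[text]
    else:
        mail_record["s8q9"] = 99           # hypothetical "other" code
        mail_record["s8q9_other"] = text   # preserve the uncodable response
    return mail_record

print(harmonize_county({"q47": "albany"}, {"Albany": 1}))  # {'s8q9': 1}
```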

2.3 Definition of a Complete and Partial Complete Questionnaire

Completed questionnaires were identified as those for which the participant responded to the final question in the HIV section (“Do any of the following situations apply to you? You do not need to indicate which one”), which is the last substantive question asked of all participants. This is question S16Q3 on the web/CATI survey and question 92 on the mail survey. Eligible partial completes consist of records where the participant responded to the question “Because of a physical, mental, or emotional condition, do you have difficulty doing errands alone such as visiting a doctor’s office or shopping?” (question S8Q27 on the web/CATI survey and question 65 on the mail survey) or, in the case that this particular question was skipped, to at least one item that comes after it on the survey. In this document, references to “completes” or “completed surveys” include both completed questionnaires and eligible partial completes unless otherwise stated.
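The classification rule above can be summarized as the following sketch. The question identifiers come from the web/CATI instrument as described; the answered() helper and the abbreviated question ordering stand in for the real missing-data logic.

```python
# Sketch of the complete / eligible-partial classification described above.

def classify(record: dict, question_order: list[str]) -> str:
    answered = lambda q: record.get(q) is not None
    if answered("s16q3"):                  # final HIV-section question
        return "complete"
    tail = question_order[question_order.index("s8q27"):]  # errands item onward
    if any(answered(q) for q in tail):
        return "eligible partial complete"
    return "incomplete"

order = ["s8q27", "s9q1", "s16q3"]  # abbreviated, illustrative ordering
print(classify({"s9q1": "yes"}, order))  # eligible partial complete
```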

2.4 Duplicate Completes

ICF analysts applied the following rule to eliminate any duplicate completes:

  • CATI: Has response to s1q1

  • Web: Has response to s1q1

  • Mail: All returns

    3. Weighting

The goal of the BRFSS Multimode Pilot was not to provide population estimates, but rather to identify potential differences in populations and health outcomes by survey mode and frame. Unweighted data were used in the analysis in order to identify the effects of survey mode and frame.

  VII. Analysis

    1. Response Rates

This section provides the response rates within each sample frame, for each phase of the data collection protocol.

1.1 RDD Response Rates

Within the RDD Sample frame, the response rate for web was 2.1% and the response rate for web and mail was 8.4%, as shown below.

Table 8 RDD Web and Mail Response Rates

| | Web Only | Web + Paper |
|---|---|---|
| Total Records | 2,813 | 2,813 |
| Undeliverable | 169 | 206 |
| Web Screen Out | 0 | 0 |
| Refusal | 6 | 8 |
| Web Drop Outs | 5 | 5 |
| Total Eligible (Denominator) | 2,644 | 2,607 |
| Completes | 55 | 219 |
| Response Rate | 2.1% | 8.4% |

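As a check on Table 8, the reported rates follow directly from completes divided by total eligible records:

```python
# Response rate = completes / total eligible (counts from Table 8).
print(f"web only:   {55 / 2644:.1%}")   # 2.1%
print(f"web + mail: {219 / 2607:.1%}")  # 8.4%
```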


1.2 ABS Response Rates

For the ABS that was not matched to phone numbers, the response rate for web was 1.9%, and the response rate for web + mail was 5.6%.

Table 9 ABS Unmatched Sample

| | Web Only | Web + Paper |
|---|---|---|
| Total Records | 3,077 | 3,077 |
| Undeliverable | 289 | 336 |
| Web Screen Out | 1 | 1 |
| Refusal | 4 | 5 |
| Web Drop Outs | 4 | 4 |
| Total Eligible (Denominator) | 2,787 | 2,740 |
| Completes | 53 | 154 |
| Response Rate | 1.9% | 5.6% |

For the ABS that was matched to phone numbers, the response rate for web was once again 1.9%, and the response rate for web + mail was 9.3%.

Table 10 ABS Matched Sample

| | Web Only | Web + Paper |
|---|---|---|
| Total Records | 6,097 | 6,097 |
| Undeliverable | 244 | 271 |
| Web Screen Out | 0 | 0 |
| Refusal | 7 | 18 |
| Web Drop Outs | 13 | 13 |
| Total Eligible (Denominator) | 5,853 | 5,826 |
| Completes | 112 | 540 |
| Response Rate | 1.9% | 9.3% |



For the ABS web+mail+CATI sample, the response rate was 11.7%.

Table 11 ABS Web+Mail+CATI

| | Web + Mail + CATI |
|---|---|
| Total Records | 6,097 |
| Ineligible (has at least one undeliverable mailing + an ineligible phone disposition) | 120 |
| Web Screen Out | 0 |
| Refusals | 44 |
| Web Drop Outs | 13 |
| Total Eligible (Denominator) | 5,977 |
| Completes | 701 |
| Response Rate | 11.7% |

For both sample frames, the web + mail phase of the protocol had a higher response rate than the web-only phase. In addition, the ABS CATI phase had a higher response rate than the other two phases. In comparing the response rates of the three modes analyzed, it should be considered that web respondents received only one invitation to the study, compared to mail (2 contacts) and CATI (at least 3 contacts). Alternative protocols (e.g., beginning with CATI and then moving to mail or web, or offering multiple invitations to participate by web) would likely impact the response rates of each mode.

    2. Accuracy of Address Matching

2.1 Accuracy of RDD sample

Of the total RDD sample selected, 2,813 records (or 22.4% of the sample drawn) were matched to addresses and used in the experiment. Of these, 173 records (or 6% of the matched sample) were returned as undeliverable.

2.2 Accuracy of ABS

Of the 8,700 ABS addresses drawn, 6,097 (or 70.1%) were matched to telephone numbers. Of these, 536 records (or 9% of the sample) were returned as undeliverable.

In the mail survey, respondents were asked whether anyone in their household had the phone number listed in the sample file. Forty-eight percent (48%) of ABS respondents said that it was their correct phone number (compared to 59% of RDD mail respondents). An additional 7% of ABS respondents reported that the number was correct for another person in their household, compared to 9% of RDD mail respondents.

Table 12 Phone Confirmation

“The number for this address listed in the telephone directory is [Phone Number]. Does anyone in your household have this phone number?”

| Response | RDD # | RDD % | ABS # | ABS % | Total # |
|---|---|---|---|---|---|
| Yes, this is my phone number | 19 | 59.4 | 41 | 47.7 | 60 |
| Yes, someone else in my household has this phone number | 3 | 9.4 | 6 | 7.0 | 9 |
| No one living here has this phone number | 10 | 31.2 | 36 | 41.9 | 46 |
| Refused | 0 | 0.0 | 3 | 3.5 | 3 |
| Total | 32 | | 86 | | 118 |



In addition, ABS respondents who completed the survey by phone were asked whether they remembered receiving a letter and paper survey in the mail. A large majority of respondents (73%) indicated that they did not remember receiving the survey. One possible explanation for this could be that the phone and address for that record were mismatched. However, there could be alternative explanations as well (the survey was never opened, the survey was misplaced by another member of the household, or the respondent forgot that they received the survey).

Table 13 Receipt of Mail Confirmation

“Several weeks ago, we sent you a letter and a paper survey in the mail. Do you remember receiving these?”

| Value | Landline: Count | Landline: Percent | Cell: Count | Cell: Percent |
|---|---|---|---|---|
| Yes | 17 | 25.4% | 9 | 5.6% |
| No | 40 | 59.7% | 78 | 48.1% |
| Don't Know/Not Sure | 4 | 6.0% | 2 | 1.2% |
| Refused | 6 | 9.0% | 6 | 3.7% |
| Total | 67 | 100.0% | 162 | 100.0% |



ABS CATI respondents were also asked to confirm the address in the sample file. While the majority of respondents (60%) confirmed their address (indicating a correct phone and address match in the sample file), 28% of ABS CATI respondents answered that the address in the sample file was NOT their current address. This indicates that, in addition to the 9% of addresses that were returned as undeliverable, a relatively high percentage of surveys may have been delivered to unintended recipients due to an incorrect or out-of-date phone match.

Table 14 Current Address Confirmation

“I have [ADDRESS] listed as your current residence. Is this correct?”

| Value | Count | Percent |
|---|---|---|
| Yes | 97 | 60.2% |
| No | 45 | 28.0% |
| Don't Know/Not Sure | 0 | 0.0% |
| Refused | 19 | 11.8% |
| Total | 161 | 100.0% |



Finally, 11 records (or 3% of the ABS CATI sample) were not in New York, and 6 records (or 1.7% of the sample) were non-working.

    3. Number of Web Connections

Almost all respondents who completed the survey by web did so in one web connection (96%). Ten respondents (or 4% of completes) completed the survey in two or more web connections.

Table 15 Number of Web Connections

| Number of Connections | Count | Percent |
|---|---|---|
| 1 | 227 | 95.8% |
| 2 | 8 | 3.4% |
| 3 | 1 | 0.4% |
| 5 | 1 | 0.4% |
| Total | 237 | 100.0% |



    4. Times of day web-based questionnaires are completed

Table 16 shows the times of day web surveys were completed. Times of day are divided into two-hour increments. No web surveys were completed between 12 a.m. and 6 a.m.

Table 16 Times of day web surveys were completed

| Time Completed | Frequency |
|---|---|
| 06:00-07:59 | 5 |
| 08:00-09:59 | 20 |
| 10:00-11:59 | 26 |
| 12:00-13:59 | 41 |
| 14:00-15:59 | 39 |
| 16:00-17:59 | 30 |
| 18:00-19:59 | 34 |
| 20:00-21:59 | 35 |
| 22:00-23:59 | 7 |



    5. Number of CATI Attempts

Table 17 presents the frequency of CATI completes for each number of call attempts.

Table 17 Number of CATI Attempts

| # of Attempts | Landline: Count | Landline: Percent | Cell: Count | Cell: Percent |
|---|---|---|---|---|
| 1 | 13 | 16.5% | 18 | 17.3% |
| 2 | 4 | 5.1% | 18 | 17.3% |
| 3 | 9 | 11.4% | 15 | 14.4% |
| 4 | 10 | 12.7% | 16 | 15.4% |
| 5 | 2 | 2.5% | 10 | 9.6% |
| 6 | 8 | 10.1% | 9 | 8.7% |
| 7 | 7 | 8.9% | 3 | 2.9% |
| 8 | 7 | 8.9% | 12 | 11.5% |
| 9 | 6 | 7.6% | 1 | 1.0% |
| 10 | 12 | 15.2% | 2 | 1.9% |
| 15 | 1 | 1.3% | 0 | 0.0% |
| Total | 79 | 100.0% | 104 | 100.0% |



    6. Data Quality

6.1 Differences in Item Refusal

As further described in Section 6.2 (below), the mail mode had a relatively high level of item missingness, although it is not possible to determine whether this missingness is due to refusals or respondent error. CATI and web, however, had very few refusals; most CATI and web completes had only one or two. Although CATI and web had a low number of refusals overall, there were generally more CATI refusals than web refusals: for 33 questions, there were more CATI refusals than web refusals, while only 12 questions had more web refusals than CATI refusals. Questions with more than three refusals (in total) are broken out by mode in Table 18 below. Due to the high number of missing mail items (and the inability to distinguish between mail refusals and navigation errors), the mail mode has been excluded from this table.

Table 18 Web and CATI Questions with >3 Refusals

| Question | CATI Refusals | Web Refusals | Total Refusals |
|---|---|---|---|
| In what county do you currently live? | 1 | 5 | 6 |
| During the past 30 days, how many days per week or per month did you have at least one drink of any alcoholic beverage such as beer, wine, a malt beverage or liquor? | 2 | 51 | 53 |
| During the past 30 days, on the days when you drank, about how many drinks did you drink on the average? | 0 | 4 | 4 |
| Considering all types of alcoholic beverages, how many times during the past 30 days did you have [s11q3_ins] drinks on an occasion? | 0 | 4 | 4 |
| During the past month, how many times per week or per month did you do physical activities or exercises to strengthen your muscles? | 2 | 2 | 4 |
| Which of the following best represents your annual household income from all sources? | 21 | 7 | 28 |
 



More sensitive questions (alcohol and income) had the highest rates of refusals. Interestingly, while CATI had more refusals for household income, web had more refusals for alcohol consumption.

6.2 Navigation Errors (Skip Inconsistencies)

Unlike computer-assisted telephone surveys or self-administered web surveys, for which skip patterns can be programmed into the survey instrument, the mail survey required the respondent to follow skip instructions on the paper form. In an effort to minimize errors, ICF used visual cues to help guide the respondent through the skip patterns (see Dillman et al. 2014). Despite these visual cues, however, all mail respondents had at least one error, and most had multiple errors. In most cases, mail respondents provided a response to a question that they should not have responded to had skip logic been followed. Table 19 shows questions with the greatest frequency of these types of errors. While there were also cases where respondents left questions blank, it is not possible to distinguish whether these were intentionally refused or unintentionally skipped due to respondent error; as a result, these instances are included in Section 6.1, Differences in Item Refusal.
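The sketch below shows how such an error can be flagged, using the asthma skip as an example: a respondent who reported never having had asthma should not have answered the follow-up. This is a minimal sketch with hypothetical field names, not the actual flagging code.

```python
# Sketch of flagging a response answered in error (inconsistent with skip
# logic), using the asthma follow-up; field names are hypothetical.

def answered_in_error(record: dict) -> bool:
    should_have_skipped = record.get("ever_asthma") == "no"
    return should_have_skipped and record.get("still_asthma") is not None

print(answered_in_error({"ever_asthma": "no", "still_asthma": "yes"}))  # True
```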

Table 19 Questions answered in error (inconsistent with base logic)

| # | Question | Answered in Error |
|---|---|---|
| 34 | Do arthritis or joint symptoms now affect whether you work, the type of work you do, or the amount of work you do? Please think about work for pay. | 212 |
| 35 | During the past 30 days, to what extent has your arthritis or joint symptoms interfered with your normal social activities, such as going shopping, to the movies, or to religious or social gatherings? | 178 |
| 36 | On a scale of 0 to 10 where 0 is no pain or aching and 10 is pain or aching as bad as it can be, during the past 30 days, how bad was your joint pain on average? | 164 |
| 72 | Do you now use e-cigarettes or other electronic "vaping" products every day, some days, or not at all? | 154 |
| 80 | How many times per week or per month did you take part in this activity during the past month? | 116 |
| 12 | Do you have more than one person you think of as your personal doctor or health care provider? | 110 |
| 24 | Do you still have asthma? | 110 |
| 79 | What type of physical activity or exercise did you spend the most time doing during the past month? For example, playing basketball, gardening, hiking, or weight lifting at a gym. | 106 |
| 75 | Considering all types of alcoholic beverages, how many times during the past 30 days did you have 5 or more drinks on an occasion? | 104 |
| 76 | Considering all types of alcoholic beverages, how many times during the past 30 days did you have 4 or more drinks on an occasion? | 103 |
| 2 | Do you live in college housing? | 97 |
| 81 | And when you took part in this activity, for how many minutes or hours did you usually keep at it? | 87 |
| 87 | During what month and year did you receive your most recent flu shot injected into your arm or flu vaccine that was sprayed in your nose? (year) | 79 |
| 87 | During what month and year did you receive your most recent flu shot injected into your arm or flu vaccine that was sprayed in your nose? (month) | 76 |
| 19 | Are you currently taking medicine prescribed by a doctor or other health professional for your blood cholesterol? | 73 |
| 69 | How long has it been since you last smoked a cigarette, even one or two puffs? | 58 |
| 50 | How many of these telephone numbers are residential numbers? | 55 |
| 67 | Do you now smoke cigarettes every day, some days, or not at all? | 55 |
| 59 | To your knowledge, are you now pregnant? | 49 |
| 68 | During the past 12 months, have you stopped smoking for one day or longer because you were trying to quit smoking? | 33 |
| 32 | How old were you when you were told you have diabetes? | 24 |
| 77 | During the past 30 days, what is the largest number of drinks you had on any occasion? | 21 |



6.3 Differences in Demographic Comparisons

The following section discusses pilot test results for 10 demographic variables (displayed in Table 20). Demographic variables are first compared by survey mode (CATI, web, and mail) and then by survey frame (RDD and ABS).



Table 20 Key Demographic Variables

Age; sex; marital status; children in household; race; education; employment; income; active duty military; internet use.



6.3.1 Demographic Differences by Mode (includes RDD and ABS)

ICF compared key demographic variables by three modes: self-administered web, self-administered mail, and telephone. Comparisons by mode are provided below.

Age. Mail respondents were older than CATI and web respondents, with 75% of respondents reporting that they were 55 or older, and 50% of respondents reporting that they were 65 or older. Only 5% of mail respondents were under the age of 34. The CATI and web modes were able to capture younger respondents (18% of CATI respondents and 13% of web respondents were 34 or younger).

Sex. Mail and web respondents were more likely to be female than male. Mail had the highest proportion of female respondents at 63%, compared to 55% for web and 49% for CATI. Of the three modes, CATI had the most equal distribution of males (51%) and females (49%).

Marital Status. Web respondents were the most likely to be married (56%), followed by mail (51%) and CATI (44%). Mail respondents were more likely to be widowed (16%) than CATI (10%) or web (8%) respondents. Of the three modes, more CATI respondents reported that they had never been married (23%) than web (18%) or mail (14%).

Children at home. Respondents were asked how many children less than 18 years of age live in their household. Mail respondents were the least likely to have children living at home, with 84% reporting that they had no children under 18 living in their household. CATI respondents were more likely to have 2 children living at home (10%) than web (6%) or mail (5%) respondents. Other responses were similar across the three modes.

Ethnicity. Respondents were asked if they were of Hispanic, Latino/a, or of Spanish origin. CATI respondents were more likely to be of Hispanic, Latino/a, or Spanish origin (11%) than mail (8%) or web (7%).

Race. Web had a slightly higher proportion of white, non-Hispanic respondents (84%) than mail respondents (81%) and CATI respondents (72%). CATI had the highest proportion of respondents who selected “other” or more than one race (9%, compared to 1% for web and mail).

Education. Web respondents completed more years of school than respondents in the other two modes, with 77% reporting that they were college graduates and 92% reporting that they had attended at least 1 year of college. In comparison, 49% of CATI respondents and 47% of mail respondents reported that they were college graduates.

Employment. Mail had a lower proportion of respondents who were employed for wages (35%) than either web (50%) or CATI (54%). Instead, mail respondents were more likely to be retired (45%) compared to 34% of web respondents, and 20% of CATI respondents.

Income. Web respondents had higher income than the other two modes, with 49% of respondents reporting an annual household income of $75,000 or more, compared to 38% of CATI respondents and 35% of mail respondents. The mail mode had a higher proportion of lower income respondents than CATI or web modes.

Active Duty Military. Mail respondents were more likely to have served on active duty in the United States Armed Forces (13%) than web (11%) or CATI respondents (9%).

Internet Use. As would be expected, web respondents had the highest proportion of respondents who reported that they had used the internet in the past 30 days (99%), followed by CATI (85%) and mail (84%).



Table 21 Demographics by Mode

| Demographics | CATI # | CATI % | Web # | Web % | Mail # | Mail % | Total |
|---|---|---|---|---|---|---|---|
| Age | | | | | | | |
| 18-24 | 7 | 3.7 | 6 | 2.7 | 6 | 0.9 | 19 |
| 25-34 | 27 | 14.4 | 24 | 10.6 | 25 | 3.7 | 76 |
| 35-44 | 33 | 17.7 | 30 | 13.3 | 42 | 6.3 | 105 |
| 45-54 | 32 | 17.1 | 30 | 13.3 | 93 | 13.9 | 155 |
| 55-64 | 38 | 20.3 | 54 | 23.9 | 168 | 25.2 | 260 |
| 65+ | 47 | 25.1 | 79 | 35.0 | 334 | 50.0 | 460 |
| Gender | | | | | | | |
| Male | 95 | 50.8 | 101 | 44.7 | 255 | 37.3 | 451 |
| Female | 92 | 49.2 | 125 | 55.3 | 428 | 62.7 | 645 |
| Marital Status | | | | | | | |
| Married | 82 | 44.1 | 127 | 56.2 | 342 | 50.9 | 551 |
| Divorced | 23 | 12.4 | 25 | 11.1 | 88 | 13.1 | 136 |
| Widowed | 19 | 10.2 | 17 | 7.5 | 110 | 16.4 | 146 |
| Separated | 4 | 2.2 | 1 | 0.4 | 19 | 2.8 | 24 |
| Never married | 43 | 23.1 | 40 | 17.7 | 90 | 13.4 | 173 |
| A member of an unmarried couple | 12 | 6.5 | 16 | 7.1 | 23 | 3.4 | 51 |
| Children in Household | | | | | | | |
| 0 | 132 | 72.1 | 175 | 77.4 | 540 | 84.0 | 847 |
| 1 | 21 | 11.5 | 26 | 11.5 | 51 | 7.9 | 98 |
| 2 | 19 | 10.4 | 13 | 5.8 | 34 | 5.3 | 66 |
| 3 | 7 | 3.8 | 6 | 2.7 | 10 | 1.6 | 23 |
| 4 | 3 | 1.6 | 6 | 2.7 | 6 | 0.9 | 15 |
| 6 | 0 | 0.0 | 0 | 0.0 | 2 | 0.3 | 2 |
| Ethnicity | | | | | | | |
| No, not of Hispanic, Latino/a, or of Spanish origin | 165 | 88.2 | 209 | 92.5 | 595 | 92.1 | 969 |
| Yes, of Hispanic, Latino/a, or of Spanish origin | 21 | 11.2 | 15 | 6.6 | 51 | 7.9 | 87 |
| Race | | | | | | | |
| White, Non-Hispanic | 134 | 71.7 | 188 | 83.9 | 541 | 81.2 | 863 |
| Black/African American | 14 | 7.5 | 9 | 4.0 | 43 | 6.5 | 66 |
| American Indian/Alaska Native | 3 | 1.6 | 0 | 0.0 | 2 | 0.3 | 5 |
| Asian | 4 | 2.1 | 11 | 4.9 | 21 | 3.2 | 36 |
| Hispanic/Latino/Spanish | 13 | 7.0 | 13 | 5.8 | 50 | 7.5 | 76 |
| Other (including multiple selections) | 16 | 8.6 | 3 | 1.3 | 9 | 1.4 | 28 |
| Education | | | | | | | |
| Never attended school or only attended kindergarten | 2 | 1.1 | 0 | 0.0 | 3 | 0.4 | 5 |
| Grades 1 through 8 (Elementary) | 2 | 1.1 | 1 | 0.4 | 8 | 1.2 | 11 |
| Grades 9 through 11 (Some high school) | 2 | 1.1 | 3 | 1.3 | 23 | 3.4 | 28 |
| Grade 12 or GED (High school graduate) | 41 | 22.0 | 15 | 6.6 | 133 | 19.6 | 189 |
| College 1 year to 3 years (Some college or technical school) | 46 | 24.7 | 34 | 15.0 | 191 | 28.2 | 271 |
| College 4 years or more (College graduate) | 92 | 49.5 | 173 | 76.6 | 320 | 47.2 | 585 |
| Employment | | | | | | | |
| Employed for wages | 100 | 54.4 | 114 | 50.4 | 241 | 35.5 | 455 |
| Self-employed | 19 | 10.3 | 15 | 6.6 | 52 | 7.7 | 86 |
| Out of work for 1 year or more | 3 | 1.6 | 1 | 0.4 | 9 | 1.3 | 13 |
| Out of work for less than 1 year | 6 | 3.3 | 4 | 1.8 | 8 | 1.2 | 18 |
| A homemaker | 6 | 3.3 | 7 | 3.1 | 22 | 3.2 | 35 |
| A student | 3 | 1.6 | 6 | 2.7 | 7 | 1.0 | 16 |
| Retired | 37 | 20.1 | 76 | 33.6 | 306 | 45.1 | 419 |
| Unable to work | 8 | 4.4 | 3 | 1.3 | 34 | 5.0 | 45 |
| Income | | | | | | | |
| Less than $10,000 | 8 | 4.4 | 4 | 1.8 | 32 | 5.2 | 44 |
| $10,000 to less than $15,000 | 3 | 1.7 | 6 | 2.7 | 34 | 5.5 | 43 |
| $15,000 to less than $20,000 | 3 | 1.7 | 3 | 1.3 | 33 | 5.3 | 39 |
| $20,000 to less than $25,000 | 9 | 5.0 | 5 | 2.2 | 39 | 6.3 | 53 |
| $25,000 to less than $35,000 | 13 | 7.2 | 8 | 3.5 | 82 | 13.3 | 103 |
| $35,000 to less than $50,000 | 20 | 11.1 | 33 | 14.6 | 77 | 12.5 | 130 |
| $50,000 to less than $75,000 | 24 | 13.3 | 49 | 21.7 | 102 | 16.5 | 175 |
| $75,000 or more | 69 | 38.1 | 111 | 49.1 | 219 | 35.4 | 399 |
| Active Duty Military | | | | | | | |
| Yes | 16 | 8.7 | 24 | 10.6 | 85 | 12.5 | 125 |
| No | 168 | 90.8 | 202 | 89.4 | 593 | 87.5 | 963 |
| Internet Use | | | | | | | |
| Yes | 152 | 84.9 | 223 | 98.7 | 567 | 83.8 | 942 |
| No | 26 | 14.5 | 3 | 1.3 | 110 | 16.3 | 139 |
 



6.3.2 Demographic Differences by Frame (includes web, mail, and CATI modes)

ICF compared key demographic variables by sample frame (RDD compared to ABS). Survey frame did not appear to have an effect on demographics. For most variables, the differences between the two sample frames were between 1 and 3 percentage points. There were three variables where larger differences between the sample frames were observed.

  • Marital status: 49% of ABS respondents reported that they were married, compared to 39% of RDD respondents. In addition, 13% of ABS respondents reported that they were divorced, compared to 9% of RDD respondents, and 17% of ABS respondents reported that they had never married, compared to 12% of RDD respondents.

  • Education: 83% of RDD respondents reported that they had completed at least some years of college, compared to 77% of ABS respondents.

  • Income: 46% of RDD respondents reported that their annual income was $75,000 or more, compared to 37% of ABS respondents.

One emerging conclusion from looking at the effects of sample frame and mode separately is that mode is probably a more important factor than frame. This finding is consistent with the findings of a similar BRFSS mode and frame experiment conducted by ICF.
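
To make the frame-versus-mode comparison concrete, the short Python sketch below computes percentage-point gaps between the two frames for the marital-status figures quoted above. This is an illustrative fragment rather than the pilot's analysis code, and the helper name is ours.

    # Percentage-point gaps between two response distributions (illustrative).
    def pct_point_gaps(dist_a, dist_b):
        """Absolute percentage-point gap for each response category."""
        return {k: abs(dist_a[k] - dist_b[k]) for k in dist_a}

    abs_frame = {"married": 49, "divorced": 13, "never_married": 17}
    rdd_frame = {"married": 39, "divorced": 9,  "never_married": 12}

    gaps = pct_point_gaps(abs_frame, rdd_frame)
    print(gaps)                # {'married': 10, 'divorced': 4, 'never_married': 5}
    print(max(gaps.values()))  # 10 -- well above the 1-3 point gaps seen elsewhere

Applying the same comparison to the mode-by-mode distributions in Table 21 produces larger gaps for most variables, which is the basis for the conclusion above.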

6.4 Differences in Health Outcomes

The following section discusses pilot test results for 20 key health outcome and behavior variables (displayed in Table 22). These key variables were chosen because they reflect high-profile topics that appear in official BRFSS reports and are of interest to many BRFSS data users and policy makers. Analysis of responses to these key variables helps clarify the effects of data collection mode. Health variables are first compared by survey mode (CATI, web, and mail) and then by survey frame (RDD and ABS).

Table 22 Key Health Outcome/Behavior Variables

General Health
Physical Health
Mental Health
Days where poor health interfered
Healthcare coverage
Heart Attack
Angina or Coronary Heart Disease
Stroke
Ever had Asthma
Currently have Asthma
Arthritis
Have been told have diabetes
Deafness/Difficulty Hearing
Blindness/Difficulty seeing with Glasses
Cigarette smoker
e-cigarette smoker
Physical activity
Flu Vaccine
Pneumonia vaccine
HIV test


6.4.1 Differences by Survey Mode (includes RDD and ABS)

ICF compared key outcome variables by three modes: self-administered web, self-administered mail, and telephone. Comparisons by mode are provided below.

General Health

Respondents were asked to rate their general health as excellent, very good, good, fair, or poor. Comparing general health by mode, web respondents reported the best general health, with 68% reporting “Excellent” or “Very good” health; by comparison, 58% of CATI respondents and 52% of mail respondents selected these categories. Only 6% of web respondents reported fair or poor health, compared to 12% of mail respondents and 17% of CATI respondents. Overall, web respondents reported better general health than CATI or mail respondents. This is consistent with the analysis of demographic differences by mode (Section 6.3.1), which found that web respondents are also younger, have higher incomes, and have more years of education.
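
As a minimal worked check of the figures above, the Python sketch below recomputes the combined “Excellent or Very good” shares from the Table 23 counts; the variable names are ours.

    # Share of respondents reporting Excellent or Very good health, by mode.
    counts = {              # mode: (excellent, very good, total completes)
        "CATI": (42, 86, 219),
        "Web":  (45, 113, 233),
        "Mail": (98, 257, 679),
    }

    for mode, (exc, vg, total) in counts.items():
        share = 100 * (exc + vg) / total
        print(f"{mode}: {share:.0f}%")   # CATI: 58%, Web: 68%, Mail: 52%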

Table 23 General Health by Mode

Self-Reported General Health    CATI # (%)    Web # (%)     Mail # (%)    Total
Excellent                       42 (19.2)     45 (19.3)     98 (14.4)     185
Very good                       86 (39.3)     113 (48.5)    257 (37.9)    456
Good                            52 (23.7)     60 (25.8)     241 (35.5)    353
Fair                            28 (12.8)     10 (4.3)      69 (10.2)     107
Poor                            10 (4.6)      5 (2.2)       14 (2.1)      29
Don't know / Not sure           1 (0.5)       0 (0.0)       0 (0.0)       1
Total                           219           233           679           1131



Physical and Mental Health

Adults in poor physical health are defined as those reporting 14 or more days within the past 30 on which their physical health was “not good.” Adults in poor mental health are defined analogously for mental health. CATI respondents were the most likely to report poor physical health (13%), followed by mail (11%) and web (9.5%) respondents. As with physical health, CATI respondents were the most likely to report poor mental health (14%), followed by mail (10%) and web (9%).

Respondents were also asked on how many of the past 30 days poor physical or mental health kept them from doing their usual activities, such as self-care, work, or recreation. The findings for this question are consistent with those for the poor physical/mental health questions: CATI respondents were the most likely to report 14 or more such days (15%), followed by mail (6.4%) and web (5.7%) respondents.
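
A minimal sketch of the 14-or-more-days recode described above, assuming the raw items store the number of “not good” days in the past 30; the function name is ours.

    # Recode a days-not-good count into the poor-health indicator (illustrative).
    def poor_health_flag(days_not_good):
        """True if 14+ of the past 30 days were 'not good'; None if missing."""
        if days_not_good is None:    # refused or don't know
            return None
        return days_not_good >= 14

    print(poor_health_flag(20))  # True  -> counted as poor physical/mental health
    print(poor_health_flag(5))   # False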

Table 24 Physical and Mental Health by Mode

Physical and Mental Health                      CATI # (%)    Web # (%)     Mail # (%)    Total
Poor physical health (14+ days in past 30)      27 (12.7)     22 (9.5)      72 (11.1)     121
Poor mental health (14+ days in past 30)        29 (13.9)     21 (9.1)      67 (10.4)     117
Limited activities (14+ days in past 30)        16 (15.0)     13 (5.7)      41 (6.4)      70
 



Healthcare Coverage

Respondents were asked if they had any kind of health care coverage, including health insurance, prepaid plans such as HMOs, government plans such as Medicare, or Indian Health Service. Reports of health care coverage were similar across the three modes of data collection, with a high percentage of respondents reporting coverage in each. Web respondents reported the highest rate of coverage (99%).

Table 25 Health Care Coverage by Mode

Do you have any kind of health care coverage, including health insurance, prepaid plans such as HMOs, government plans such as Medicare, or Indian Health Service?

Response                  CATI # (%)    Web # (%)     Mail # (%)    Total
Yes                       188 (92.6)    226 (98.7)    655 (97.5)    1069
No                        15 (7.4)      2 (0.9)       17 (2.5)      34
Don't know / Not sure     0 (0.0)       1 (0.4)       0 (0.0)       1
Total                     203           229           672           1104



Chronic Conditions

Heart Attack. BRFSS respondents were asked if they were ever told they had a heart attack. Responses to this question were consistent across the three modes: 3% of CATI and mail respondents reported having had a heart attack, compared to 2% of web respondents.

Angina or Coronary Heart Disease. BRFSS respondents were asked if they were ever told they had angina or coronary heart disease. For this question, the percentage of mail respondents reporting heart disease (7%) was slightly higher than CATI (4%) and web (3%).

Stroke. BRFSS respondents were asked if they were ever told they had a stroke. For this question, responses across the three modes were again similar. Once again, web respondents reported slightly better health outcomes with 2% of respondents reporting they have had a stroke, compared to 3% of CATI and 4% of mail.

Asthma. Respondents were asked if a doctor or health professional had ever told them they had asthma. CATI respondents reported higher rates of a history of asthma (15%) than web (13%) or mail (11%) respondents. This is the only one of the 20 health variables analyzed where web respondents reported worse health outcomes than mail respondents. Respondents who answered that a medical professional had told them they have asthma were then asked if they currently have asthma. As with the previous question, more CATI respondents reported currently having asthma (64%) than web (55%) or mail (26%) respondents.

Arthritis. Respondents were asked if they were ever told they had some form of arthritis, rheumatoid arthritis, gout, lupus, or fibromyalgia. For this question, 43% of mail respondents reported having some form of arthritis, rheumatoid arthritis, gout, lupus, or fibromyalgia, compared to 33% of web respondents and 27% of CATI respondents.

Diabetes. Respondents were asked if they had ever been told they had diabetes. Women with diabetes during pregnancy were coded as not having diabetes. Mail respondents were the most likely to report that they had ever had diabetes (12%) followed by CATI (7%) and web (5%).

Deafness/Difficulty Hearing. Respondents were asked if they are deaf or have serious difficulty hearing. Ten percent (10%) of mail respondents reported that they were deaf or have serious difficulty hearing, compared to 7% of CATI respondents and 4% of web respondents.

Blindness/Difficulty seeing with Glasses. Respondents were asked if they are blind or have serious difficulty seeing, even when wearing glasses. CATI and mail respondents responded similarly to this question, with 3% reporting blindness or serious difficulty seeing. Less than 1% of web respondents answered that they are blind or have difficulty seeing.

Table 26 Chronic Conditions by Mode

Chronic Conditions                     CATI # (%)    Web # (%)     Mail # (%)    Total
Heart Attack                           6 (3.1)       4 (1.8)       21 (3.1)      31
Angina or Coronary Heart Disease       7 (3.7)       7 (3.1)       49 (7.3)      63
Stroke                                 7 (3.7)       5 (2.2)       23 (3.4)      35
Ever had Asthma                        28 (14.7)     29 (12.8)     75 (11.1)     132
Currently have Asthma                  18 (64.3)     16 (55.2)     46 (25.7)     80
Arthritis                              51 (26.8)     74 (32.7)     293 (43.1)    418
Diabetes                               14 (7.4)      12 (5.3)      84 (12.4)     110
Deafness/Difficulty Hearing            12 (6.8)      8 (3.5)       68 (10.0)     88
Blindness/Difficulty Seeing            6 (3.3)       2 (0.9)       23 (3.4)      31
 



Cigarette Smoker

Respondents were asked if they had smoked at least 100 cigarettes in their life. Those who had were then asked if they currently smoke every day, some days, or not at all. The proportion of CATI respondents who smoke cigarettes every day (19%) was higher than mail (14%) and web (8%). Web respondents were the most likely to report not smoking at all (91%), compared to mail (81%) and CATI (67%) respondents.
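
The two-step classification can be sketched as follows. This is an illustrative Python fragment assuming the skip logic described above; the response labels follow the instrument, but the function name is ours.

    # Two-step smoking classification (illustrative).
    def smoking_status(smoked_100_cigarettes, frequency=None):
        """Only respondents reporting 100+ lifetime cigarettes get the follow-up."""
        if not smoked_100_cigarettes:
            return "not a current smoker"   # follow-up question is skipped
        return frequency                    # "every day", "some days", or "not at all"

    print(smoking_status(True, "every day"))  # classified as a daily smoker
    print(smoking_status(False))              # never asked the frequency item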

Table 27 Cigarette Smoker, by Mode

Do you now smoke cigarettes every day, some days, or not at all?

Response       CATI # (%)    Web # (%)     Mail # (%)    Total
Every day      12 (18.8)     6 (8.1)       51 (14.1)     69
Some days      9 (14.0)      1 (1.4)       18 (5.0)      28
Not at all     43 (67.2)     67 (90.5)     293 (80.9)    403
Total          64            74            362           500

e-cigarettes

Respondents were asked if they now use e-cigarettes or other electronic vaping products every day, some days, or not at all. While web respondents were the least likely to smoke cigarettes every day, they were the most likely to report using e-cigarettes every day (13%), although only 15 web respondents answered this item. Mail respondents were the least likely to use e-cigarettes at all, with 95% reporting that they do not use them, compared to 80% of CATI and web respondents.

Table 28 e-cigarettes, by Mode

Do you now use e-cigarettes or other electronic vaping products every day, some days, or not at all?

Response       CATI # (%)    Web # (%)     Mail # (%)    Total
Every day      2 (6.7)       2 (13.3)      3 (1.5)       7
Some days      4 (13.3)      1 (6.7)       8 (3.9)       13
Not at all     24 (80.0)     12 (80.0)     192 (94.6)    228
Total          30            15            203           248



Physical Activity

Respondents were asked to report whether they had participated in any physical activities or exercises such as running, calisthenics, golf, gardening or walking, other than for their job. More web respondents (81%) reported engaging in physical activity than CATI respondents (80%) or mail respondents (67%).

Table 29 Physical Activity, by Mode

During the past month, other than your regular job, did you participate in any physical activities or exercises such as running, calisthenics, golf, gardening, or walking for exercise?

Response                  CATI # (%)    Web # (%)     Mail # (%)    Total
Yes                       132 (79.5)    180 (81.1)    451 (67.4)    763
No                        32 (19.3)     42 (18.9)     218 (32.6)    292
Don't know / Not sure     1 (0.6)       0 (0.0)       0 (0.0)       1
Refused                   1 (0.6)       0 (0.0)       0 (0.0)       1
Total                     166           222           669           1057



Vaccines

Respondents were asked if during the past 12 months they had either a flu shot or a flu vaccine that was sprayed in their nose. Web respondents reported higher rates of receiving the flu vaccine in the past 12 months (66%) than either mail respondents (54%) or CATI respondents (44%). Respondents were also asked if they had ever had a pneumonia shot. As with the flu vaccine, more web respondents (47%) reported that they had received the pneumonia vaccine than either mail (45%) or CATI (35%) respondents.

Table 30 Vaccines by Mode

Vaccines               CATI # (%)    Web # (%)     Mail # (%)    Total
Flu Vaccine            72 (44.4)     145 (65.6)    367 (54.3)    584
Pneumonia vaccine      56 (34.6)     104 (47.1)    299 (44.8)    459
 



Human Immunodeficiency Virus (HIV) Testing

Respondents were asked if they had ever been tested for HIV, not counting tests given as part of a blood donation. The proportion of respondents who reported ever having received an HIV test was slightly higher for web (43%) than for CATI (41%). Mail respondents were the least likely to report ever having received an HIV test, with only 31% responding affirmatively.

Table 31 HIV Test, by Mode

Have you ever been tested for HIV? Do not count tests you may have had as part of a blood donation. Include testing fluid from your mouth.

Response                  CATI # (%)    Web # (%)     Mail # (%)    Total
Yes                       66 (40.7)     95 (43.0)     206 (30.7)    367
No                        87 (53.7)     97 (43.9)     413 (61.6)    597
Don't know / Not sure     8 (4.9)       29 (13.1)     52 (7.8)      89
Refused                   1 (0.6)       0 (0.0)       0 (0.0)       1
Total                     162           221           671           1054



6.4.2 Differences by Survey Frames (includes web, mail, and CATI modes)

ICF compared key outcome variables by sample frame (RDD compared to ABS). Survey frame did not appear to have an effect on the key variables for analysis. For most variables, the differences between survey frames were between 1 and 2 percentage points. There were three variables where ABS respondents reported slightly better health outcomes/behaviors than RDD respondents:

  • Stroke (6% of RDD respondents reported having had a stroke, compared to 2% of ABS respondents)

  • Arthritis (42% of RDD respondents reported that they have some form of arthritis, rheumatoid arthritis, gout, lupus, or fibromyalgia, compared to 37% of ABS respondents)

  • E-cigarettes (7% of RDD respondents reported that they use e-cigarettes or other electronic vaping products every day, compared to 2% of ABS respondents)

As with the comparison of demographics across survey modes and frames, comparisons of health outcomes and behaviors once again suggest that survey frame has less of an impact than survey mode.

    7. Drop Point Experiment

As described in Section IV, ICF conducted a sub-experiment with drop point addresses in the ABS sample. The objective of the experiment was to determine whether response rates at drop point addresses could be improved by researching building unit numbers in advance. First, the drop point records were randomly split into two experimental groups. For Group 1, research (e.g., internet searches of realty websites) was conducted to determine the naming conventions for the units at a given address. If no information was available online, units were labelled and numbered consecutively (Unit 1, Unit 2, Unit 3) up to the number of units at that address. After these naming conventions were determined, each record was copied to create new records containing the original drop point street address and a unique secondary address.

For Group 2, each record’s “drop count” variable (i.e., the number of units associated with a given drop point address) was used to determine the number of copies to make of that record, with no unit designations added. Table 32 provides completes by mode for each experimental condition.
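
The two treatments amount to different record-expansion rules, sketched below. The fragment is illustrative only; the record structure and field names are ours, not the pilot sample file's.

    # Expand a drop point record into one mailing record per housing unit.
    def expand_group1(record, unit_labels):
        """Group 1: one record per researched unit name (e.g., 'Apt A')."""
        return [{**record, "unit": label} for label in unit_labels]

    def expand_group2(record):
        """Group 2: drop-count copies of the record, with no unit designation."""
        return [dict(record) for _ in range(record["drop_count"])]

    base = {"address": "12 Main St", "drop_count": 3}
    print(expand_group1(base, ["Apt A", "Apt B", "Apt C"]))  # researched units
    print(expand_group2(base))                               # three identical copies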

Table 32 Drop Point Completes

Sample                                                           Sample n    Web Completes    Mail Completes    Bounce Backs
Group 1: Units specified (Experimental Group)                    427         6                8                 52
Group 2: Units not specified (Traditional Drop Point Approach)   406         3                14                3

Results indicate that Group 1 performed slightly better on web than Group 2, although the number of web completes in both groups was very low. For mail, however, the group that was not researched (Group 2) performed better than the researched group. As shown in Table 33, this is most likely due to the number of undeliverables within the Group 1 sample (52 undeliverables, or 6% of the Group 1 sample). However, of those undeliverables, 85% had some other address error unrelated to the assignment of drop point units (e.g., the primary address was incorrect). These findings suggest that conducting advance research on drop point addresses could still be a cost-effective way of increasing response rates for these addresses, but more research is needed.

Table 33 Drop Point Experiment Undeliverables

Drop Point Group    Invitation Letter Bounceback Reason                          Count
Group 1             101 = Letter Undeliverable - Other reason                    10
Group 1             110 = Letter Undeliverable - Insufficient Address            22
Group 1             125 = Letter Undeliverable - No Such Number                  9
Group 1             130 = Letter Undeliverable - Not Deliverable As Addressed    6
Group 1             135 = Letter Undeliverable - No Mail Receptacle              3
Group 1             140 = Letter Undeliverable - Vacant                          2
Group 2             120 = Letter Undeliverable - Temporarily Away                2
Group 2             140 = Letter Undeliverable - Vacant                          1



  VIII. Discussion

Key findings of the BRFSS Multimode Pilot are summarized below:


Response Rates:
For both sample frames, the web + mail phase of the protocol had a higher response rate than the web-only phase. In addition, the ABS CATI phase had a higher response rate than the other two phases. In comparing the response rates of the three modes, it should be kept in mind that web respondents received only one invitation to the study, compared to mail (2 contacts) and CATI (at least 3 contacts). Alternative protocols (e.g., beginning with CATI and then moving to mail or web, or offering multiple invitations to participate by web) would likely change the response rates of each mode.

Accuracy of Address Matching: For RDD, 22.4% of the sample drawn could be matched to addresses. Of these, 6% of the matched sample were returned as undeliverable. Of the 8,700 ABS addresses drawn, 6,097 (or 70.1%) were matched to telephone numbers. Of these, 536 records (or 9% of the matched sample) were returned as undeliverable.
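
A quick arithmetic check of the ABS figures above, assuming (as the wording “of these” implies) that the undeliverable percentage is taken over the matched records:

    # Match and undeliverable rates for the ABS sample (figures from the text).
    abs_drawn, abs_matched, undeliverable = 8700, 6097, 536

    print(f"{100 * abs_matched / abs_drawn:.1f}% of ABS addresses matched")       # 70.1%
    print(f"{100 * undeliverable / abs_matched:.1f}% of matched undeliverable")   # 8.8%, i.e. ~9%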

In the mail survey, respondents were asked whether anyone in their household had the phone number listed in the sample file. Forty-eight (48) percent of ABS respondents said it was their own phone number (compared to 59% of RDD mail respondents). An additional 7% of ABS respondents reported that the number was correct for another person in their household, compared to 9% of RDD mail respondents. This finding indicates that the RDD sample had a more accurate phone number/address match than the ABS sample.

In addition, ABS respondents who completed the survey by phone were asked whether they remembered receiving a letter and paper survey in the mail. The majority of respondents (73%) indicated that they did not remember receiving the survey. One possible explanation is that the phone number and address for that record were mismatched. However, there are alternative explanations as well: the mailing was never opened, it was misplaced by another member of the household, or the respondent forgot that they received it.

ABS CATI respondents were also asked to confirm the address in the sample file. While the majority of respondents (60%) confirmed their address (indicating a correct phone and address match in the sample file), 28% of ABS CATI respondents answered that the address in the sample file was NOT their current address. This indicates that in addition to the 9% of addresses that were returned as undeliverable, a relatively high percentage of surveys may have been delivered to unintended recipients due to an incorrect or out-of-date phone match.

Questionnaire Data Quality: ICF also examined the quality of the data collected by examining navigation errors and refusals, and by comparing response distributions for selected key variables by frame and mode. Response distributions for most demographic questions were similar between the two sample frames, with some exceptions (marital status, education, and income). There were larger differences when comparing demographic questions by mode. Similarly, response distributions for key health outcome and behavior questions were similar between the two frames, but the differences were larger when comparing the three modes. In all but two variables (asthma and e-cigarette use), web respondents reported more positive health outcomes than CATI or mail respondents. One emerging conclusion from looking at the effects of sample frame and mode separately is that mode is probably a more important factor than frame. This finding is consistent with the findings of a similar BRFSS mode and frame experiment conducted by ICF. In addition, the mail mode had a high level of item missingness, from either refusals or navigation errors.

Cost Considerations: ICF considered the costs associated with conducting this pilot study and calculated a cost per complete that is comparable to the current annual BRFSS computer-assisted telephone interview (CATI) cost per complete for data collection from New York RDD cell phones and landlines. Both the web and mail modes are projected to cost less than CATI data collection (when averaging landline and cell costs). This is consistent with the findings of similar BRFSS pilots conducted by ICF. When comparing costs per complete by sample frame, ABS web completes were more expensive than RDD web completes. However, the reverse was true for the mail mode, where ABS completes were less expensive than RDD completes.




Appendix A: Citations

Barlas, FM, Thomas, RK, and Graham, P. 2015. Purposefully mobile: Experimentally assessing device effects in an online survey. Paper presented at the annual conference of the American Association for Public Opinion Research, May 14-17, Hollywood, FL.


Bergstrom, RJC, Erdman, C, and Lakhe, S. 2016. Navigation buttons in web-based surveys: Respondents’ preferences revisited in the laboratory. Survey Practice, 9(1).


Couper, MP, Baker, R, and Mechling, J. 2011. Placement and design of navigation buttons in web surveys. Survey Practice, 4(1).


Dillman, D, Smyth, JD, and Christian, LM. 2015. Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method. Hoboken, NJ: Wiley & Sons.


Izrael, D, Battaglia, MP, and Frankel, MR. 2009. Extreme survey weight adjustment as a component of sample balancing (a.k.a. raking). Paper 274-2009, SAS Global Forum 2009.


Oh, HL and Scheuren, F. 1978. Some unresolved application issues in raking ratio estimation. 1978 Proceedings of the Section on Survey Research Methods, Washington, DC: American Statistical Association, pp. 723-728.


Revilla, M, Toninelli, D, and Ochoa, C. 2015. An experiment comparing grids and item-by-item formats in web surveys completed through PC and smartphones (RECSM Working Paper No. 46). Research and Expertise Centre for Survey Methodology: Universitat Pompeu Fabra, Barcelona.




Appendix B. Mail Instrument









1 CATI cases with an undeliverable mailing were not excluded, as long as they had a matched phone number.

2 Note: There were a total of 508 records in the drop point group that were not matched to a phone number.
