
Behavioral Risk Factor Surveillance System (BRFSS)





OMB No. 0920-1061, Exp. Date 3/31/2021



Revision



Supporting Statement



Part B





Submitted by:

Division of Population Health

National Center for Chronic Disease Prevention and Health Promotion

Centers for Disease Control and Prevention

Atlanta, Georgia


March 8, 2019









Attachments


  1. Authorizing Legislation: Public Health Service Act



  2. List of BRFSS Awardees

3a. Even-numbered Year Core Questionnaire

3b. Odd-numbered Year Core Questionnaire


4. Approved Optional Modules


5. 2018 BRFSS Questionnaire


6a. Federal Register Notice

6b. Summary of Public Comments


7a. Adult Population and BRFSS Sample Size By State, 2016

7b. BRFSS Sampling Geostrata by State

7c. BRFSS Weighting Process

7d. BRFSS Overview


8. Landline and Cell Phone Screener Script/ Verbal Consent


9. Optional Modules by State


10. Data Collectors’ Protocol


11. Summary Data Quality Report for 2015


12. BRFSS Questionnaire Development Process


13. Example of Annual Field Test Supplement








B. Collection of Information Employing Statistical Methods

The Behavioral Risk Factor Surveillance System (BRFSS) is a collaborative project of the Centers for Disease Control and Prevention (CDC) and U.S. states, the District of Columbia, and U.S. territories (collectively called “states” or “jurisdictions” in this document). The BRFSS is a coordinated series of interviews that collects information about preventive health practices and behavioral risk factors that are linked to chronic diseases, injuries, and preventable infectious diseases. Respondents are adults aged 18 years or older. Information collection is conducted annually.

The BRFSS is administered through cooperative agreements with state health departments. A representative sample of respondents is drawn for each state. Each state administers a state-tailored questionnaire which consists of (1) a standard core administered by all states, and (2) optional modules selected by the states. The state-tailored questionnaires and samples are designed individually by each state with technical assistance provided by the CDC.

BRFSS is a unique collaboration between the federal government and states. It is highly responsive to the diverse needs and priorities of states, the federal government, and non-governmental agencies. Although federal funding is essential to its continued operation, the BRFSS includes funding from state and local government sources as well as from nonprofit agencies in some instances. Thus, decisions about the BRFSS encompass consideration of the needs of all of these partners. All BRFSS collaborators recognize the need for a high level of data standardization while remaining flexible enough to meet the particular public health surveillance needs of individual states. For example, regular training programs and contact with the data collectors and state BRFSS coordinators help to establish agreed-upon protocols for data collection.


However, it may be difficult in some cases to find balance and consensus among the needs of CDC, other federal agencies, and individual states. Specific methodological challenges may arise from this unique partnership, but it also provides great opportunity for innovation. BRFSS provides guidance to both collaborators and users of BRFSS data as necessary to address methodological issues. BRFSS is addressing specific issues as follows:


1. Sampling

a) Strengths: Drawing samples at the state level produces datasets that are actionable for their principal use: managing public health programs at the state/jurisdiction level. States may determine that substate regions should be drawn by health districts, counties, or groups of counties, as determined by state health policy implementation and service delivery.

b) Challenges: Given the diverse public health needs and focus of the BRFSS, a major challenge is producing reliable and valid estimates at differing geographic levels, including sub-state, state, and national levels.

c) Comments: The BRFSS has undertaken a number of steps to make sampling more efficient and to recruit respondents to take part in the survey. These efforts include enhancements to increase cell phone participation and response, geographic targeting of cell phone numbers, overlapping sampling of landline and cell phone respondents, and inclusion of respondents who live in college housing. To ensure that reduced response rates do not result in bias, enhancements have also been made in post-data-collection processes. Studies are being conducted to provide guidance on both national and sub-state estimates from the state-based BRFSS estimates.

d) Recent pilots: Using the CDC/ATSDR Formative Research and Tool Development generic package (OMB #0920-1154), a pilot of sampling methods was conducted in 2018. The BRFSS Preliminary Experiment for Sample and Mode tested several hypotheses regarding differences between Address-Based Sampling (ABS) and Random Digit Dialing (RDD) samples. The project also tested the effectiveness of different modes of data collection (telephone interview, mailed questionnaire, and web-based survey) using the two samples. Results of the preliminary experiment indicated that ABS did not improve response rates over RDD and that respondents were not likely to complete web-based surveys when contacted by phone or mail. In addition, ABS samples that included apartment buildings were likely to yield partial addresses (without specific apartment numbers), lowering contact and response rates. Such “drop point” addresses introduced bias into the resulting respondent pool.


2. Telephone-based mode of survey administration

a) Strengths: Telephone surveys have been shown to be relatively agile, low-cost methods for population studies. The BRFSS remains current in terms of population data collection methods. The BRFSS follows standards of the American Association for Public Opinion Research (AAPOR) and presents results of internal research on BRFSS paradata at annual conferences. Validity checks are conducted by comparing data from the BRFSS against similar questionnaire items from other surveys (usually the NHIS, NHANES, and/or NSDUH). Methods are also kept current through constant review of all field methods, questionnaire design, and item analyses.

b) Challenges: In recent years, response rates for phone surveys have declined, although the BRFSS performs well when compared to other telephone surveys. Assessment of the quality of telephone surveys requires assessment of alternative modes of data collection.

c) Comments: The Division of Population Health will pursue assessment studies of complementary modes of data collection as well as continue to examine nonresponse bias to ensure data quality and representativeness of the population.

d) Recent pilots: As noted above, the PHSB conducted a pilot of samples and modes in 2018. Although this initial test indicated that a change to ABS would not improve administration of the survey, testing will continue as personal communication patterns change. In 2019 a feasibility test will be conducted to determine whether some respondents can be moved to the web. Persons selected for this test will be cell phone respondents who live in states other than the state in which they were sampled; after completing the core questionnaire, these respondents will be asked to complete the state-specific portions of the survey on the web. Three methods of recruitment (texted link, emailed link, and mailed invitation) will be tested. It is not yet known whether cooperating respondents can be moved from one mode to another, or by which method of invitation respondents are most likely to complete web-based surveys.


3. Overall complexity of the system

a) Strengths: The BRFSS platform is a source of innovation in data collection and has been a leader in understanding how population surveys can contribute to understanding of state-level health data.

b) Challenges: There is a constant need to maintain data quality standards, provide training and technical assistance, and to keep up with new personal communication technologies.

c) Comments: The DPH conducts field studies and provides training, technical assistance, and quality improvement services to the states at annual meetings and through bimonthly webinars.

d) Recent and planned changes: The Division of Population Health is currently working on a redesign of the core questionnaire to reduce the number of questions and streamline content. In addition, research is being conducted to determine the impact of removing the landline portion of the sample. The use of a single, cell-phone-based sample of phone numbers could simplify the design and weighting process and reduce the administrative burden of incorporating two samples into a single weighting process. If adopted, the cell-phone-only sample could be implemented in the 2021 BRFSS data collection. All changes to sampling methodologies and data collection protocols will be reviewed by an expert panel of statisticians, methodologists, and public health professionals prior to implementation. Representatives from federal surveys, including the National Health Interview Survey, will be included in the expert panel.

1. Respondent Universe and Sampling Methods

Respondent Universe

The target population for BRFSS information collection is adults (18 years of age or older) living in private households or college housing. An eligible household is defined as a housing unit that has a separate entrance, where occupants eat separately from other persons on the property, and that is occupied by its members as their principal or secondary place of residence. The following are non-eligible households: vacation homes not occupied by household members for more than 30 days per year, group homes, institutions, and (in the landline telephone sample) households in states other than the one conducting the particular BRFSS questionnaire.

Eligible household members include all related adults (aged 18 years or older), unrelated adults, boarders/roomers, and domestic workers who consider the household their home, even though they may not be home at the time of the call. Household members do not include adult family members who are currently living elsewhere. Since 2012, adult students living in college housing have been included as eligible respondents. Persons living in college housing are treated as single adult households.

Proxy interviews are not conducted within the BRFSS.

State-tailored Samples

An independent sample is drawn for each state by the state health department. The size of each state sample is estimated according to the number of completed interviews for the previous year, but may be adjusted depending on state objectives and funding. States may subdivide their state by geographic region/geostrata (such as public health districts, counties, or groups of counties). States may also target population groups within their sample. States with sufficient sample size may choose to “split” samples into versions in order to obtain information on a broader array of topics. A state may field up to three versions of its annual questionnaire, where each version is comprised of the standard core and a different set of optional modules. To ensure an adequate number of responses for weighting purposes, the state must conduct at least 2,500 interviews for each version of the BRFSS questionnaire; a state fielding three versions must therefore complete at least 7,500 interviews. The minimum number is set to allow for comparisons by sex, age, and/or racial groups. The minimum sample size for the split versions was set in response to a recommendation from a working group of the American Statistical Association when versions were first used in 2004. See Attachment 7a for a summary of the U.S. adult population and estimated size of the current BRFSS sample, by state.

For the BRFSS as a whole, the total number of respondents varies from year to year but will not exceed 450,000 in 2018-2020. As was noted earlier, the size of the sample for each state is determined by state needs and resources and not by the CDC. For example, in every third year, Florida receives county-level funding from local governments to greatly expand its sample size in order to provide local estimates. Sample sizes for a single state may therefore vary across years. Samples in any given year also vary by state, according to the resources that states have, their particular needs in that year, and their needs to target specific groups. CDC provides technical assistance to the states in drawing their samples and in sampling design as needed or requested, but determines neither the sample design nor the size of the sample. Thus the final data set is an aggregate of state-level samples, not a national sample.

States draw their samples based on substate regions (with a few exceptions of states which do not have substate regions and draw only a state-wide sample). Attachment 7b is a summary of state geostrata which illustrates the variability in the samples drawn by the states. Such variability has an impact on the design effect.

Sampling Frame

To provide rapid and flexible access to respondents and contain costs, BRFSS data collection is conducted through telephone interviews (except in very limited circumstances in which interviews must be conducted in-person). A sample record is one telephone number in the list of all telephone numbers selected for dialing. To meet the BRFSS sample design standards, sample records must be justifiable as a probability sample of all persons with telephones in the state. The BRFSS sample is randomly selected from working phone numbers within each jurisdiction. CDC uses an overlapping sample of landline and cell phone numbers. No direct method of accounting for non-telephone coverage is employed for the BRFSS.

  • The landline sample for each state is based on a disproportionate stratified sample (DSS) design in which telephone numbers are assigned to two separately sampled strata based on the presumed density of residential (non-business) telephone numbers. The high-density and medium-density strata contain telephone numbers that are expected to belong mostly to households. Whether a telephone number goes into the high-density or medium-density stratum is determined by the number of listed residential numbers in its hundred block, or set of 100 telephone numbers with the same area code, prefix, and first two digits of the suffix, across all possible combinations of the last two digits. Numbers that come from hundred blocks with one or more listed household numbers (“1+ blocks,” or “banks”) are put in either the high-density stratum (“listed 1+ blocks”) or the medium-density stratum (“unlisted 1+ blocks”). The sampling ratio between the listed 1+ block and not-listed 1+ block household density strata in a DSS design is 1.5:1; the listed stratum is sampled at 1.5 times the rate of the not-listed stratum (a brief sketch appears after this list). Geographic stratification for landline telephones within a state is defined by counties, health districts, cities, zip codes, and/or census tracts.



  • The cell phone sample for each state is randomly selected from lists of all working cell phone numbers. Cellular telephone interviews are conducted with respondents who answer the number called, and respondents are treated as one-person households. Persons who have moved to other states and who have cell phone numbers with area codes/prefixes from other states are eligible for interview. Data collected from persons who have moved into a state with cell phone prefixes from other states will be transferred to the appropriate state data file at the end of the calendar year. In order to allow for weighting, the BRFSS has a target of 45 percent of the total number of completed interviews to be conducted with persons who are cell phone only (i.e., those who do not have access to personal landline phones). The geographic stratification for cellular phones is by county/set of counties as defined by rate centers (billing areas).



  • The field test sample is a smaller sample designed to produce a minimum number of completed responses (300-500) per field test. The field test is conducted after the BRFSS annual meeting, when new questions are voted on by the states; field tests generally take place annually in late spring or early summer. The sample is taken from a single state with some landline stratification. The state that conducts the field test is selected from among those states that have in-house capacity to conduct surveys using their own Computer Assisted Telephone Interview (CATI) systems. States may volunteer to conduct the test on behalf of the BRFSS and report back to their BRFSS colleagues and partners. The field test sample is not designed to be representative and is not intended for population estimation, but is used to test the questionnaire and related software prior to the launch of the annual survey.

CDC obtains the lists of phone numbers from a private vendor (currently GENESYS Sampling, Marketing Systems Group [MSG]) and distributes them to the states on a monthly or quarterly basis, according to each state's preference. Within the calling centers, samples are released in a controlled fashion by replicate; each replicate is a sub-sample containing 30 telephone numbers. The BRFSS list-assisted method (using numbers that are present in a directory of household phone numbers) has been shown to improve efficiency and reduce unproductive calling among landline numbers. Samples of landline telephone numbers purchased using this method have been prescreened to ensure that they are residential phone numbers. Dedicated fax and computer lines, ported cell phone numbers, business numbers, and other ineligible numbers have been identified prior to implementation of the survey. The list-assisted design reduces the number of unproductive screening calls while maintaining sampling weights that are roughly equal.
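
A brief sketch may help fix ideas for the stratum assignment and replicate release described above. The following Python fragment is illustrative only, not BRFSS production code; the function names and the listed_numbers and listed_counts structures are assumptions introduced for the example.

    # Illustrative sketch of DSS stratum assignment and replicate release.
    # Assumes 10-digit number strings; listed_numbers is a set of
    # directory-listed residential numbers, and listed_counts maps each
    # hundred block (the first 8 digits) to its count of listed numbers.
    def assign_stratum(number, listed_numbers, listed_counts):
        block = number[:8]  # area code + prefix + first 2 suffix digits
        if listed_counts.get(block, 0) < 1:
            return None  # zero-listed blocks are excluded from the frame
        # Listed numbers fall into the high-density ("listed 1+ blocks")
        # stratum; unlisted numbers in 1+ blocks fall into the
        # medium-density ("unlisted 1+ blocks") stratum.
        return "high" if number in listed_numbers else "medium"

    # The 1.5:1 DSS ratio: the high-density stratum is sampled at 1.5
    # times whatever rate is applied to the medium-density stratum.
    def sampling_rates(medium_rate):
        return {"high": 1.5 * medium_rate, "medium": medium_rate}

    # Sample is released to calling centers in replicates of 30 numbers.
    def replicates(sampled_numbers, size=30):
        for i in range(0, len(sampled_numbers), size):
            yield sampled_numbers[i:i + size]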



Summary of Similarities and Differences in Sampling by State

Similarities

  • Targets for proportions of cell/landline phone numbers

  • Single sampling vendor

Differences

  • Sampling designs/geostrata

  • Oversampling of targeted populations

  • Quarterly/monthly sampling

  • Split sampling to allow for optimal number of modules in some states

  • One territory with low telephone coverage uses a geographic sample to conduct personal interviews

The BRFSS sample is weighted to state populations and is intended for use as separate statewide datasets. However, many researchers aggregate the state data to estimate prevalence at the national level. The PHSB recognizes that there may be differences between national estimates drawn from a national sample and nationwide estimates drawn from a set of state-level samples. For example, although each state's population is appropriately weighted, the estimated percentage of Hispanics in the aggregated data set for 2015 was 15.5%, while a national weighting method would reduce that proportion to 15%, a more accurate representation of national percentages. The potential demographic biases in the aggregated method, therefore, may have implications for health outcomes that show variations across demographic groups. To control for this potential bias, the national weights could be raked at the national level using as many of the raking dimensions used at the state level as possible for convergence and stability. In addition, national raking methods could use states as an additional weighting margin to preserve state totals and to reproduce state estimates. PHSB therefore examined a series of reweighting methods using a range of raking margins. Six methods for adjustment in the aggregated dataset were tested using estimates from the National Health Interview Survey (NHIS) as a benchmark. A number of prevalence estimates for chronic conditions, health behaviors, and demographic characteristics were tested using these six methods. Although differences in estimates were minor across the tested indicators, the research resulted in a methodology for national weights for the state-based BRFSS. Data users who aggregate data from all states would benefit from the use of these new national weights. However, researchers using data from only a few states would find that the weights associated with state-level populations are better suited to their analyses. Likewise, an analysis that uses data from a BRFSS module administered to residents of only a few states should use state-level weights rather than a national weight. This research resulted in a peer-reviewed article entitled “National Weighting of Data from the Behavioral Risk Factor Surveillance System (BRFSS),” published in BMC Medical Research Methodology. It is available using a link from the BRFSS website or at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5109644/.
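
Raking, as referenced above, repeatedly adjusts the weights so that the weighted totals match the population control totals on each margin in turn, cycling until the adjustments converge. The sketch below is a minimal illustration of that idea, not the production BRFSS weighting code; the function name and data layout are assumptions for the example, and using states as an additional margin simply means adding a (state_codes, state_totals) pair to the margins list.

    import numpy as np

    # Minimal raking (iterative proportional fitting) sketch.
    # weights: design weights, one per respondent.
    # margins: list of (cats, targets) pairs, where cats[i] is
    #   respondent i's cell on that dimension (integers 0..K-1) and
    #   targets[c] is the population control total for cell c.
    def rake(weights, margins, max_iter=100, tol=1e-8):
        w = np.asarray(weights, dtype=float).copy()
        for _ in range(max_iter):
            max_shift = 0.0
            for cats, targets in margins:
                cats = np.asarray(cats)
                targets = np.asarray(targets, dtype=float)
                # Current weighted total in each cell of this margin
                totals = np.bincount(cats, weights=w,
                                     minlength=len(targets))
                factors = np.divide(targets, totals,
                                    out=np.ones_like(targets),
                                    where=totals > 0)
                w *= factors[cats]  # scale every weight in each cell
                max_shift = max(max_shift, np.abs(factors - 1.0).max())
            if max_shift < tol:  # all margins matched; stop early
                break
        return w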

2. Procedures for the Collection of Information

Procedures are constructed to produce a coordinated series of state-tailored surveys that are unified by a common purpose, a common reference set of questions (divided into core questions and optional modules), and common protocols for sampling and questionnaire administration. Some flexibility in the content of each state-tailored questionnaire and in operations management is allowed within parameters established by the BRFSS cooperative agreement. Participants in the design and implementation of the data collection process include CDC, state health departments, and data collection contractors.

Summary of Steps, Roles, and Responsibilities

  1. CDC and BRFSS awardees participate in an annual discussion (generally in late March or early April) to determine the exact wording of the questions in any part of the BRFSS and vote to adopt (or reject) changes or new questions. Changes to the core, other than editorial changes, are rare, but must also be approved by the states. In some instances, changes are required when questions become outdated (for example, when preventive screening questions refer to testing that is no longer recommended). Discussions that precede voting include cognitive testing results, field testing results from previous years, question validity testing from other surveys, and other validity testing if available. States may question the programs or agencies that propose new items or changes in wording, based on their own experiences. In most cases, questions that are adopted for inclusion as optional modules have been used on other surveys or as state-added questions. Changes to the reference set of approved BRFSS questions may be submitted by BRFSS awardees or CDC programs for consideration. The state BRFSS coordinators may also vote to add questions on emerging issues (such as the H1N1 flu questions added in 2009). Once voting is complete, states may not change the wording of any questions in the approved reference set of core questions and optional modules. This discussion also determines the content of the next (calendar) year's core survey.

  2. A field test of new questions is conducted by a single state with the oversight of the CDC (generally in June or early July). Field testing will be conducted in a manner that mimics the full-scale project protocol, to the degree that is feasible, but with only those parts of the questionnaire which are new, have had substantive change or which are proximal to changed/new sections. Field tests are used to identify problems with instrument documentation or instructions, problems with conditional logic (e.g., skip patterns), or other implementation and usability issues. Approximately 500 respondents will be recruited by phone for the field test. Field tests are not cognitive tests, but may identify some problems with question wording or response sets that have not been noted previously. Following the field tests, suggestions for change may be forwarded to the states for review prior to finalization of the questionnaire. Suggestions for question order, interviewer instructions, CATI programming or other protocols may also be made as a result of field testing.

  3. CDC annually compiles the revised reference set of approved BRFSS core questions and approved changes to the optional modules. CDC also produces data processing layouts, while taking state priorities, potential funding, and other practical aspects into consideration. The new BRFSS materials for the next surveillance (calendar) year are then sent to the states, which then may add their own state-added questions that they have designed or acquired. Because states determine which of the optional modules they will include and may also add state added questions, the final questionnaire produced for each state is unique to that state, although all states must include questions from the common core.

  4. Information collection is conducted by telephone interview. CDC provides Computer-Assisted Telephone Interviewing (CATI) programming to states for their use. States may opt to use their own CATI programming software. States may send advance letters to households/persons using address matching to inform potential respondents of their selection and increase response rates.

  5. CDC works with each state to determine its sample size and then communicates sample specifications to the vendor. The vendor will release lists of telephone numbers to each state on a monthly or quarterly basis, as requested. All calls for a given survey month (or quarter) should be completed within the same sample month (or quarter).

  6. BRFSS awardees are responsible for field operations and determine how their data will be collected within BRFSS guidelines. States may collect data using in-house calling centers, hire vendors using RFP procedures, or contract with universities. Data collectors must develop and maintain procedures to ensure respondents’ privacy, assure and document the quality of the interviewing process, and supervise and monitor the interviewers. Files containing phone numbers must be maintained separately from any files containing responses.

  7. States submit de-identified data files to CDC on a monthly basis for cleaning and weighting. CDC returns clean, weighted data files to the state of origin for its use. Through the BRFSS website, CDC also makes cleaned subsets of state data files available for public use, along with information about data quality and analysis, weighting (see Attachment 7c, BRFSS Weighting), and any restrictions on publication or use of the data (see Attachment 7d, BRFSS Overview). Detailed information about selected steps in the process is provided below and in Attachment 12, BRFSS Questionnaire Development Process.

Content and Construction of the Annual BRFSS Questionnaire(s)

The BRFSS questionnaire is comprised of an annual standard core, which includes questions asked of respondents each year; a biennial rotating core, which includes questions asked only in even- or odd-numbered years; optional modules, which include standardized questions adopted verbatim by the states; and state-added questions, which can be individually customized by states. Attachment 4a provides a list of all optional modules that are currently approved, and Attachment 4b provides the questions for each approved optional module. All questions included in the BRFSS core and optional modules, with the exception of state-added questions, are cognitively tested prior to inclusion in the questionnaire.

  1. Fixed Core Questions: The portion of the questionnaire that is included each year and must be asked by all states. Each year, the core includes questions about emerging or “late-breaking” health issues. After one year, these questions are either discontinued or incorporated into the standard core, rotating core, or optional modules. (See Attachments 3a and 3b.)

  2. Rotating Core Questions: The portion of the questionnaire asked by all states on an every-other-year basis. These are questions that regularly appear in either even- or odd-numbered years. (See Attachments 3a and 3b.) A small number of questions may appear only in three-year cycles. These include questions on immunization (Tdap, shingles, pneumonia, and place of flu vaccination).

  3. Optional Modules: Sets of standardized questions on various topics that each state may select and include in its questionnaire. Once selected, a module must be used in its entirety and asked of all eligible respondents. If an optional module is modified in any way (e.g., if a question is omitted), then the questions will be treated as state-added questions (see below). (See Attachment 4.)

  4. State-added Questions: States may choose to gather data on additional topics related to their specific health priorities through the use of extra questions they choose to add to their questionnaire. The CDC neither reviews, makes suggestions for, nor processes data related to state-added questions.

All versions of the BRFSS questionnaire include the fixed core. An updated version of each year’s questionnaire will be uploaded to Reginfo.gov. Attachment 5 provides a final version of the 2018 BRFSS Questionnaire including even-numbered year questions and optional modules available in 2018. Not all modules are used in every year. A list of optional modules used, by state, is provided in Attachment 9. Each year, the previous year’s approved list of optional modules will be used for planning purposes, and the list will be updated as plans are finalized. This listing also illustrates how the states split samples in order to include a wider range of topics on optional modules.

CDC takes care to ensure that all new questions are adequately cognitively tested. The DPH recognizes the need to enhance cognitive testing methods. In 2019 and 2020, the DPH will submit proposals to enlarge the recruitment of cognitive testing subjects and increase evaluative capacity for all new questions and modifications of questions for use on the BRFSS. The current practice of using a limited number of subjects for cognitive testing does not allow for complete evaluation of question format, wording, and order on the questionnaire. The PHSB will determine whether it will be more effective to submit a separate package for cognitive testing, use the current generic mechanism, or submit questions to the National Center for Health Statistics cognitive testing lab for review. Whichever of these methods is adopted, an annual change request will be submitted to OMB for each wave of cognitive testing, outlining the number of subjects, the topics and questions to be tested, and the program narrative for inclusion in the survey. Any and all additional information collection activities will be submitted to OMB in separate requests, and may be requested under current generic approvals.


In addition, the DPH works with other CDC programs and federal agencies to harmonize question format, wording, and response sets whenever possible. In cases where new topics are under development (as is the case with opioid use surveillance), the DPH is an active partner in cross-agency committees and workgroups working on the development of new questions.



Call/Interview Guidelines

Data collection follows a suggested BRFSS interviewing schedule. The protocol suggests up to 15 calling attempts for each landline phone number and up to 8 for each cell phone number in the sample, depending on state regulations for calling and the outcomes of previous calling attempts. Some states make more calling attempts than the totals suggested by the BRFSS protocol. Although states may have some flexibility in the distribution of calling times, in general, surveys are conducted using the following calling occasions:

  • Conduct 20% of the landline interviews on weekdays (prior to 5:00 pm)

  • Conduct 80% of the landline interviews on weeknights (after 5:00 pm) and weekends

  • Conduct cell phone interviews during all three calling occasions (weekday, weeknight, and weekend), with approximately 30% of cell phone calls made on weekend calling occasions (see the sketch after this list)

  • Change schedules to accommodate holidays and special events

  • Make weeknight calls just after 5:00 pm

  • Make callbacks during hours that are not scheduled for other interviews, generally on weekdays

  • With the exception of verbally abusive respondents, eligible persons who initially refuse to be interviewed may be contacted at least one additional time and given the opportunity to be interviewed. Preferably, this second contact will be made by a supervisor or a different interviewer. Some states have regulations on whether refusals should be called again.

  • Adhere to respondents’ requests for specific callback times whenever possible
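
As a minimal illustration of the occasions above, the fragment below classifies a call attempt's timestamp into weekday, weeknight, or weekend; the 5:00 pm boundary follows the protocol, and the function names are illustrative, not part of any BRFSS system.

    from datetime import datetime

    # Classify a call attempt into one of the three calling occasions.
    def calling_occasion(ts: datetime) -> str:
        if ts.weekday() >= 5:  # Saturday (5) or Sunday (6)
            return "weekend"
        return "weekday" if ts.hour < 17 else "weeknight"

    # Check a (non-empty) landline call log against the 20%/80% split.
    def landline_shares(timestamps):
        occasions = [calling_occasion(t) for t in timestamps]
        weekday = occasions.count("weekday") / len(occasions)
        return {"weekday": weekday, "weeknight/weekend": 1.0 - weekday}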

Because response rates depend on how efficiently calling protocols are executed, there is an incentive to work the sample as efficiently as possible over the course of the month/quarter to which it is allocated. Poor use of the sample would be noted by the CDC in the year-to-date quarterly reports of sample use, which are provided to the states on the upload site throughout the year, and in the annual summary data quality report on the public website.

Calling Dispositions

States are required to give a final disposition for every number in the sample, usually within the same month of the sample. Each telephone number in the CDC-provided sample must be assigned a final disposition code to indicate a particular result of calling the number:

  • A completed or partially completed interview or

  • A determination that:

    • A household was eligible to be included but an interview was not completed or

    • A telephone number was ineligible or could not have its eligibility determined.


An interview is considered a partial complete if respondents are asked the questions used in weighting (approximately halfway through the core BRFSS questionnaire). These variables include race, ethnicity, sex, marital status, education, home ownership, type of phone ownership (i.e., landline only, cell phone only, or dual user), and geographic/(sub)state region. If values for weighting variables are not entered due to respondent refusal, imputed values will be generated and used only to assign weights.
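
A minimal sketch of this rule follows, assuming responses is a dictionary of items that were asked (with refusals recorded as None); the variable names mirror the list above but are not the actual BRFSS field names.

    # Weighting variables that must have been asked for a record to
    # count as a partial complete; refused values are imputed later and
    # the imputed values are used only to assign weights.
    WEIGHTING_VARS = ("race", "ethnicity", "sex", "marital_status",
                      "education", "home_ownership", "phone_type",
                      "region")

    def is_partial_complete(responses):
        # "Asked" includes refusals, which appear here with value None.
        return all(v in responses for v in WEIGHTING_VARS)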

The final disposition codes are then used to calculate response rates, cooperation rates, and refusal rates. The distribution of individual disposition codes and the rates of cooperation, refusal, and response are published annually in the Summary Data Quality Reports. The BRFSS uses standards set by the American Association for Public Opinion Research (AAPOR) to determine disposition codes and response rates. All BRFSS disposition codes and rules for assigning disposition codes are provided in Attachment 10: Data Collectors’ Protocol. Data collectors must adhere to the rules for assigning disposition codes and must train and monitor interviewers in the use of specific dispositions.

Procedures to Promote Data Quality and Comparability

In order to maintain consistency across states and allow for state-to-state comparisons, the BRFSS sets standard protocols for data collection which all states are encouraged to adopt with technical assistance provided by CDC. The following items are included in the BRFSS survey protocol:

  1. All states must ask the core questions without modification. States may choose to add any, all, or none of the optional modules and state-added questions after the core component. Interviewers may not offer information to respondents on the meaning of questions, words or phrases beyond the interviewer instructions provided by CDC and/or the state BRFSS coordinators.

  2. Interviewers should be trained specifically for the BRFSS and retrained each year.

  3. Systematic, unobtrusive electronic monitoring is a routine and integral part of monthly survey procedures for all interviewers. States may also use callback verification procedures to ensure data quality. Unless electronic monitoring of 10% of all interviews is being routinely conducted, a 5% random sample of each month’s interviews must be called back to verify selected responses for quality assurance.
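
The monitoring rule in item 3 reduces to a simple selection step; the sketch below is hypothetical (the names are illustrative, not part of any BRFSS system).

    import random

    # Select a 5% random callback-verification sample of the month's
    # interviews unless at least 10% of interviews are already being
    # electronically monitored.
    def callback_sample(interview_ids, monitored_share):
        if monitored_share >= 0.10:
            return []  # electronic monitoring satisfies the requirement
        k = max(1, round(0.05 * len(interview_ids)))
        return random.sample(interview_ids, k)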

General calling rules, listed below, are established by the BRFSS and states are encouraged to adhere to them whenever possible. It is understood that the calling rules are not universally applicable to each state.

  1. All cellular telephone numbers must be hand-dialed.

  2. If possible, calls made to non-English-speaking households and assigned the interim disposition code of 5330 (household language barrier) should be attempted again with an interviewer who is fluent in the household language (e.g., Spanish).

  3. States should maximize calling attempts as outlined in Attachment 10. The maximum number of attempts (15 for landline telephone and 8 for cellular telephone) may be exceeded if formal appointments are made with potential respondents.

  4. Calling attempts should allow for a minimum of 6 rings and up to 10 rings if not answered or diverted to answering devices.

The BRFSS produces a Summary Data Quality Report which is published annually on its websites. The report includes information on calling attempts, sample quality, rates of completion and response. The 2015 Summary Data Quality Report is provided in Attachment 11.

3. Methods to Maximize Response Rates and Deal with Nonresponse

The vendor prescreens landline phone numbers to determine whether they are assigned to businesses or are nonworking.

The sampled phone numbers are called by each state or its data collector and screened to determine whether the numbers dialed are residential numbers; for those that are, eligible individuals are identified.

As noted in the preceding section, the BRFSS uses a number of techniques to deal with nonresponse. These include providing the interview in languages other than English, creating a number of callback protocols designed to convert refusals, and alternating the times and days of calling attempts. In addition, the BRFSS advises states to make use of caller ID to inform potential respondents that state health departments are making the calls. States may also use address matching to send advance letters informing respondents that their household or cell phone number has been selected to participate in the BRFSS. Potential respondents are informed of the purpose of the call and the importance of their response early in the screening script. Experienced interviewers are used for callbacks when respondents initially refuse to take part in the survey. Hard refusals (where potential respondents state that they are not interested in completing the interview) are not called back.

States must maintain training for all interviewers involved in the BRFSS. Issues related to response rates are discussed in large annual meetings of the data collectors. Data collectors also participate in bimonthly conference calls organized by the CDC to discuss best practices and share experiences.

The CDC and the states have conducted a number of pilot studies in recent years to identify methods that might improve response rates and alleviate potential nonresponse bias. These have included mailing questionnaires to nonresponding households in the landline sample using address matching, and inviting respondents to participate online using mailed web links. These pilot tests indicated that the persons who were likely to respond to mailed questionnaires or web invitations were demographically similar to those who were likely to respond by phone (higher income, white, and older than the general population). As noted above, a pilot of sampling methods, the BRFSS Preliminary Experiment for Sample and Mode, was conducted in 2018 using the CDC/ATSDR Formative Research and Tool Development generic package (OMB #0920-1154). It tested several hypotheses regarding differences between Address-Based Sampling (ABS) and Random Digit Dialing (RDD) samples, as well as the effectiveness of different modes of data collection (telephone interview, mailed questionnaire, and web-based survey) using the two samples. Results indicated that ABS did not improve response rates over RDD, that respondents were not likely to complete web-based surveys when contacted by phone or mail, and that ABS samples including apartment buildings often yielded partial “drop point” addresses (without specific apartment numbers) that lowered contact and response rates and introduced bias into the resulting respondent pool. Positive results indicated that RDD-sampled cell phone numbers can be matched to addresses in sufficient numbers to allow for advance letters to potential respondents, a change that was incorporated into the Data Collection Protocol (see Attachment 10) for 2019.

In 2019, a feasibility test is planned to determine whether cell phone respondents who are sampled by one state but who live in another can be diverted to the web in order to complete state-specific modules that would otherwise have resulted in missing data. (Currently these respondents are interviewed, but only core questions are asked.) Other pilots and changes under consideration to keep BRFSS methods current include exclusive use of the cell phone sample (thereby eliminating the landline portion of the sample), testing the feasibility of diverting respondents to the web at the outset of the questionnaire, piloting mobile-based applications of the BRFSS questionnaire, and using wearable-device and other data to supplement the BRFSS. Research is currently underway to test the effects of moving to a cell-phone-only sample, similar to the research done prior to the introduction of cell phone respondents in 2011. The PHSB is also examining the effects of removing the landline sample from the previous three years of data, reweighting cell phone interviews, and comparing prevalence estimates with those from the combined landline/cell phone sample. In addition, in 2021 the DPH will undertake a redesign of the core questionnaire. The purpose of the redesign is to establish standards for core questions, reduce the length of the core, and update question formats. Discussions with data users and BRFSS coordinators will be held at the annual meeting of the grantees.

Comparisons of unweighted demographics indicate that nonresponse bias among younger, male, and/or minority respondents was reduced when cell phones were introduced in 2011. CDC staff present findings of research on response rates among demographic groups at professional meetings, including weighting methods that are appropriate to correct, to the extent possible, for demographic differences between those who participate in the survey and the population of the state or substate area represented. To date, the most effective adjustments for nonresponse bias have been the inclusion and increasing proportion of cell phone interviews and the change of weighting method to raking, both of which were introduced in 2011.

Although response rates for telephone surveys are declining overall, the BRFSS maintains a relatively high level of response compared to other telephone-based surveys. The table below compares BRFSS response rates with those of other, similar surveys.

Table 1.
Examples of Survey Response Rates by Cellular Telephone and Landline Telephone

                                                           Response Rates
Survey                                         Year(s)     Landline   Cell Phone
California Health Interview Survey (CHIS) a    2015        12.3%      9.5%
National Immunization Survey (NIS) b           2014        62.6%      33.5%
Pew Research Center Library Survey c           2013        10.0%      13.0%
PSRAI Omnibus Survey d                         2015        5.0%       4.0%
National Adult Tobacco Survey (NATS) e         2012-2013   47.2%      36.3%
BRFSS f                                        2017        45.3%      44.5%

a CHIS 2015 Methodology Report Series. (2016) http://healthpolicy.ucla.edu/chis/design/Documents/chis-2015-short-methodology-report-4_response-rates_2016-12-13.pdf

b Unlike the BRFSS, the NIS does not include household sampling in the landline portion of the study but interviews the adult who self-identifies as the most knowledgeable about household immunization information. http://www.cdc.gov/mmwr/preview/mmwrhtml/mm6433a1.htm

c http://www.pewinternet.org/2014/04/03/methods-28/

d http://www.pewinternet.org/2015/04/01/appendix-a-about-the-december-week-1-and-week-3-omnibus-survey/

e http://www.cdc.gov/tobacco/data_statistics/surveys/nats/pdfs/2012-2013-nats-methodology-final.pdf

f BRFSS response rates are presented here as median rates for all states and territories.

Response rates, cooperation rates, and refusal rates for the BRFSS are calculated and published annually using standards set by the American Association for Public Opinion Research (AAPOR) [1]. The BRFSS calculates response rates using AAPOR Response Rate #4, which is in keeping with rates provided by the BRFSS prior to 2011 using rates from the Council of American Survey Research Organizations (CASRO) [2].

Based on the guidelines of AAPOR, response rate calculations include assumptions about eligibility among potential respondents/households that are not interviewed. The BRFSS calculates “likely eligible” phone numbers using the proportion of eligible households among all phone numbers for which eligibility has been determined. This “eligibility factor” appears in the calculations of the refusal and response rates.

The calculations of calling outcome rates are based on final disposition codes that are assigned after all calling attempts have been exhausted. The BRFSS may make up to 15 attempts to reach respondents prior to assigning a final disposition code. The BRFSS uses a single set of disposition codes for both landline and cell phones, adapted from standardized AAPOR disposition codes for telephone surveys. A few disposition codes apply only to landline telephone or cellular telephone sample numbers. For example, answering-device messages may confirm household eligibility for landline telephone numbers but are not used to determine eligibility of cellular telephone numbers. Disposition codes reflect whether interviewers have completed or partially completed an interview (1000-level codes), determined that the household was eligible without completing an interview (2000-level codes), determined that a household or respondent was ineligible (4000-level codes), or were unable to determine the eligibility of a household and/or respondent (3000-level codes). The BRFSS uses an overlapping sample in that persons who are reached on cell phones are eligible even if they also have landline phones, and vice versa. If a number in the landline sample reaches a person on a cell phone, that person is not within the landline sample and is therefore not eligible; if the same number appears in the cell phone sample, the person would be eligible to be interviewed. Attachment 10 provides the disposition codes used by the BRFSS and notes the instances where codes are used only for landline telephone or cellular telephone sample numbers. Assignment of disposition codes may vary slightly from one state to another, but all numbers must be assigned a final (not an interim) disposition code prior to data submission. Factors affecting the distribution of disposition codes by state include differences in telephone systems, sample designs, surveyed populations, and data collection processes.
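
Because the code levels are defined by their thousands digit, the categorization can be expressed as a short, hypothetical sketch (the function name is illustrative, not part of any BRFSS system):

    # Map a final disposition code to its category by thousands digit.
    def code_category(code: int) -> str:
        return {1: "completed or partially completed interview",
                2: "eligible; interview not completed",
                3: "unknown eligibility",
                4: "ineligible"}[code // 1000]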

The table below illustrates the categories of disposition codes that are used to calculate outcome rates. The totals for each category are then used in the formulae below to determine rates of completion, cooperation, refusal, and response according to AAPOR standards and guidelines. Response rates are provided by landline and cell phone. An overall response rate is also calculated for each state, weighted by the proportions of the sample that are landline and cell phone. Details for the calculations of these rates are published annually for each state on the BRFSS website at http://www.cdc.gov/brfss/annual_data/annual_data.htm.

Table 3
Landline and Cellular Telephone BRFSS Disposition Codes

Category                     Disposition Code Definitions                      Formulae Abbreviation
Completed interviews         1100+1200                                        COIN
Eligible                     1100+1200+2111+2112+2120+2210+2220+2320+2330     ELIG
Contacted eligible           1100+1200+2111+2112+2120+2210+2320+2330          CONELIG
Terminations and refusals    2111+2112+2120                                   TERE
Ineligible phone numbers     All 4000-level disposition codes                 INELIG
Unknown whether eligible     All 3000-level disposition codes                 UNKELIG
Eligibility factor           ELIG/(ELIG + INELIG)                             E


Eligibility Factor

E = ELIG / (ELIG + INELIG)

The Eligibility Factor is the proportion of eligible phone numbers from among all sample numbers for which eligibility has been determined. The eligibility factor, therefore, provides a measure of eligibility that can be applied to sample numbers with unknown eligibility. The purpose of the eligibility factor is to estimate the proportion of the sample that is likely to be eligible. The eligibility factor is used in the calculations of refusal and response rates. Separate eligibility factors are calculated for landline telephones and cellular telephone samples for each state and territory.

Resolution Rate

((ELIG + INELIG) / (ELIG + INELIG + UNKELIG)) * 100

The Resolution Rate is the percentage of numbers in the total sample for which eligibility has been determined. The total number of eligible and ineligible sample phone numbers is divided by the total number of phone numbers in the entire sample. The result is multiplied by 100 to calculate the percentage of the sample for which eligibility is determined. Separate resolution rates are calculated for landline telephone and cellular telephone samples for each state and territory.

Interview Completion Rate

(COIN / (COIN + TERE)) * 100

The Interview Completion Rate is the rate of completed interviews among all respondents who have been determined to be eligible and selected for interviewing. The numerator is the number of completed and partially completed interviews. This number is divided by the number of completed interviews, partially completed interviews, and all break-offs, refusals, and terminations. The result is multiplied by 100 to provide the percentage of completed interviews among eligible respondents who are contacted by interviewers. Separate interview completion rates are calculated for landline telephone and cellular telephone samples for each state and territory.

Cooperation Rate

(COIN / CONELIG) *100

The AAPOR Cooperation Rate is the number of completed and partially completed interviews divided by the number of contacted eligible respondents. The BRFSS Cooperation Rate follows the guidelines of AAPOR Cooperation Rate #2. Separate cooperation rates are calculated for landline telephone and cellular telephone samples for each state and territory.

Refusal Rate

(TERE / (ELIG + (E * UNKELIG))) * 100

The BRFSS Refusal Rate is the proportion of all eligible respondents who refused to complete an interview or terminated an interview prior to the threshold required to be considered a partial interview. Refusals and terminations (TERE) are in the numerator, and the denominator includes all eligible numbers and a proportion of the numbers with unknown eligibility. The proportion of numbers with unknown eligibility is determined by the eligibility factor (E; described above). The result is then multiplied by 100 to provide a percentage of refusals among all eligible and likely to be eligible numbers in the sample. Separate refusal rates are calculated for landline telephone and cellular telephone samples for each state and territory.

Response Rate

(COIN / (ELIG + (E * UNKELIG))) * 100

A Response Rate is an outcome rate with the number of complete and partial interviews in the numerator and an estimate of the number of eligible units in the sample in the denominator. The BRFSS Response Rate calculation assumes that the unresolved numbers contain the same percentage of eligible households or eligible personal cell phones as the records whose eligibility or ineligibility has been determined. The BRFSS Response Rate follows the guidelines for AAPOR Response Rate #4. It is also similar to the BRFSS CASRO rates reported prior to 2011. Separate response rates are calculated for landline telephone and cellular telephone samples for each state and territory, and a combined response rate for landline telephone and cellular telephone is also calculated. The combined landline telephone and cellular telephone response rate is generated by weighting by the respective sizes of the two samples. The total sample equals the landline telephone sample plus the cellular telephone sample. The proportion of each sample is calculated using the total sample as the denominator. The formulae for the proportions of the sample are found below:


P1 = TOTAL LANDLINE SAMPLE /
(TOTAL LANDLINE SAMPLE + TOTAL CELL PHONE SAMPLE);



P2 = TOTAL CELL PHONE SAMPLE /
(TOTAL LANDLINE SAMPLE + TOTAL CELL PHONE SAMPLE);

The formula for the Combined Landline Telephone and Cellular Telephone Weighted Response Rate, therefore, is described below:

COMBINED RESPONSE RATE =
(P1 * LANDLINE RESPONSE RATE) + (P2 * CELL PHONE RESPONSE RATE).
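
Once the category totals in Table 3 have been tallied from final disposition codes, all of the outcome rates above reduce to simple arithmetic. The following Python sketch implements the formulas exactly as written; the dictionary layout and function names are assumptions for illustration (and the sketch assumes non-zero denominators), not BRFSS production code.

    # counts holds category totals aggregated from final disposition
    # codes: COIN, ELIG, CONELIG, TERE, INELIG, UNKELIG (see Table 3).
    def outcome_rates(counts):
        coin, elig = counts["COIN"], counts["ELIG"]
        conelig, tere = counts["CONELIG"], counts["TERE"]
        inelig, unkelig = counts["INELIG"], counts["UNKELIG"]

        e = elig / (elig + inelig)  # eligibility factor
        return {
            "eligibility_factor": e,
            "resolution": 100 * (elig + inelig) / (elig + inelig + unkelig),
            "completion": 100 * coin / (coin + tere),
            "cooperation": 100 * coin / conelig,
            "refusal": 100 * tere / (elig + e * unkelig),
            "response": 100 * coin / (elig + e * unkelig),
        }

    # Combined landline/cell response rate, weighted by sample sizes
    # (P1 and P2 in the formulas above).
    def combined_response_rate(ll_sample, cp_sample, ll_rate, cp_rate):
        total = ll_sample + cp_sample
        return (ll_sample / total) * ll_rate + (cp_sample / total) * cp_rate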


4. Tests of Procedures or Methods to be Undertaken

BRFSS protocols have been adapted over time to meet the needs of the data collection process and to maximize response rates while minimizing respondent burden. The BRFSS continually assesses its methods and procedures through comparisons with industry standards, consultation with BRFSS coordinators and other experts in the field, and real-world experience and feedback from the BRFSS data collectors. The PHSB convenes an Expert Panel meeting approximately every three years; the last Expert Panel meeting was held in 2017. All questions included in the BRFSS are cognitively tested, with the exception of state-added questions. Field tests are also conducted each year to ensure that the questionnaire is ready for fielding. New questions are tested each year before adoption, mostly as changes to existing optional modules or as additional optional modules. The methods by which questions are adopted are provided in Attachment 12. As that document indicates, subject matter experts (SMEs) from CDC and other federal agencies, state health department representatives, and survey experts are involved in the process of question development. Many of the questions included in the BRFSS appear on other surveys, including the National Health Interview Survey (NHIS), the National Adult Tobacco Survey (NATS), the National Health and Nutrition Examination Survey (NHANES), the National Immunization Survey (NIS), and others. The use of identical or similar questions is advantageous in that it allows researchers to make comparisons across different samples, across different geographic areas, or over time. The BRFSS website maintains listings of validity and reliability studies of the BRFSS, its methods, and its questions, conducted by CDC staff (http://www.cdc.gov/brfss/publications/methodology/data_qvr.htm) and by other researchers (http://www.cdc.gov/brfss/publications/methodology/mvr.html).

5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

CDC personnel are responsible for all statistical aspects of the BRFSS, including data analyses and reporting. The following staff members are primarily responsible for BRFSS data reporting.

Name                 Title                                          Phone          E-mail
Machell Town         Population Health Surveillance Branch Chief    770-488-2533   [email protected]
William Garvin       Team Lead, Survey Operations                   770-488-2459   [email protected]
Carol Pierannunzi    Senior Survey Methodologist                    770-488-4609   [email protected]



In addition, staff members attend the annual meetings of the American Association for Public Opinion Research (AAPOR) and the Joint Statistical Meetings (JSM), at which experts provide guidance, comments, and suggestions on staff methodological research presentations.

References

1. The American Association for Public Opinion Research. 2011. Standard Definitions: Final dispositions of case codes and outcome rates for surveys. 7th edition.
http://www.aapor.org/AM/Template.cfm?Section=Standard_Definitions2&Template=/CM/ContentDisplay.cfm&ContentID=3156

2. The Council of American Survey Research Organizations. 2013. Code of Standards and Ethics for Market, Opinion, and Social Research. http://c.ymcdn.com/sites/www.casro.org/resource/resmgr/code/september_2013_revised_code.pdf?hhSearchTerms=%22casro+and+response+and+rate%22


