Behavioral Risk Factor Surveillance System (BRFSS)

November 2016 Interim Report on Terms of Clearance

OMB: 0920-1061




Review of Information on OMB Terms of Clearance for BRFSS 0920-1061



Background


In 2015, OMB approved the initial review of the Behavioral Risk Factor Surveillance System (BRFSS) with terms of clearance that included calculating and communicating effective sample sizes, ensuring calculation of appropriate standard errors, and research on non-response bias and benchmarking (including some discussion of a national weight for the BRFSS). In 2016, as part of the change request for the 2017 questionnaire, additional terms of clearance were added; these included cognitive testing of new questions and working toward harmonizing question wording for core and rotating modules. This document briefly outlines the work done by the Population Health Surveillance Branch (PHSB) under each of those topics. To date, progress has been made in standardizing annual internal data quality review activities, in publishing two studies on non-response bias, and in producing a peer-reviewed paper on national weighting of BRFSS data. Sampling strategies have also been updated, and new online tools for prevalence estimation and production of standard errors are in process.


Working toward the goals of bias reduction, extensive cognitive testing, and improved data quality has been, and remains, an ongoing process. In the subsections below, information is provided for each of the terms of clearance.


Original terms of clearance


Calculating and communicating effective sample sizes


The PHSB has a leadership role with states on sampling. As landline samples become less stable and additional calling is required to obtain the same number of interviews, the BRFSS has changed the standards for the landline and cell phone frames. Moreover, working with the vendor that supplies the states' samples, the PHSB is refining the sampling process to more accurately identify eligible numbers. The PHSB has also advised states participating in oversampling of minority populations in 2016. All states must have their sampling designs approved by the PHSB prior to implementation of the survey. The PHSB has also produced a number of technical documents and online tools for prevalence estimation, which are available on the public website (http://www.cdc.gov/brfss/).


Each state designs a sampling strategy for its landline and cell phone samples. As the proportion of cell-phone-only users has increased, so has the percentage of interviews conducted with cell phone respondents. In 2017, states are required to complete a minimum of 35% of all interviews by cell phone; most states have a much larger percentage, in some cases over 70%. Because of the geographic information tied to landline numbers, geographic distribution within a state is maintained by retaining a portion of landline interviews. Moreover, the PHSB, in which the BRFSS is housed, has published a number of studies [1] indicating that the landline-only population differs significantly in terms of chronic disease and that these differences vary by state.

States prepare sample designs based on their specific health surveillance needs. Often sub-state regions (representing public health districts, political jurisdictions, metropolitan statistical areas (MSAs), or other geographic areas) are created for samples. Minimum sample sizes [2] are set at the state level (n = 4,500) and for each sub-state region (n = 500). If a state uses split samples, in which the core questionnaire is paired with different subsets of optional modules, a minimum sample of 2,500 is set for each split. Each state submits its sampling plan to the PHSB, where it is reviewed for feasibility of implementation (geographic areas must be large enough to produce sufficient responses within demographic groups). States may also request oversampling to examine the health status of specific groups; in 2017, ten states will oversample Native American populations. States may opt to follow a pattern of sampling over time; for example, some states oversample counties every three years in order to assess county-by-county public health needs. The PHSB provides technical support to ensure that the sample is large enough within subgroups, that each state collects appropriate proportions of landline and cell phone interviews, and that oversampling strategies are realistic. In most cases, states have sufficient expertise to design their own samples with minimal oversight; for states with fewer resources, the PHSB may take a leading role in sample design.
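As an illustration of the "calculating and communicating effective sample sizes" term, the snippet below is a minimal sketch in Python with hypothetical weights; it is not PHSB production code. It computes the effective sample size of a weighted sample, which is at most the nominal number of respondents.

    import numpy as np

    def effective_sample_size(weights):
        # Effective sample size under unequal weighting:
        # n_eff = (sum of weights)^2 / (sum of squared weights).
        w = np.asarray(weights, float)
        return w.sum() ** 2 / (w ** 2).sum()

    # Hypothetical final weights for six respondents:
    w = [1.0, 2.5, 0.8, 3.2, 1.1, 0.9]
    print(effective_sample_size(w))  # <= 6; equal weights would give exactly 6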


A vendor contracted by the PHSB produces the phone numbers that make up both the landline and cell phone samples on a monthly basis. Procedures for handling the sample within the month are described in the Data Collectors' Protocol (available upon request) and are discussed at annual meetings with contracted data collectors and BRFSS state coordinators.


Ensuring calculation of appropriate standard errors


All BRFSS users are instructed to analyze the data using the complex survey design procedures provided by statistical software (such as SAS, Stata, and SPSS); analyses of BRFSS data that ignore the complex survey design will result in biased standard errors. The BRFSS website provides statistical briefs and comparability documentation for each annual data release [3], as well as sample SAS code that illustrates an appropriate analysis plan. In addition, training on the use of BRFSS data is provided to states at the annual meeting and through PHSB webinars.
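To make the point concrete, the sketch below (in Python; our illustration, not official BRFSS code) computes a weighted prevalence estimate with a Taylor-linearized, design-based standard error. The variable names _LLCPWT (final weight), _STSTR (sampling stratum), and _PSU (primary sampling unit) follow the BRFSS public-use file conventions; the function itself is an assumption-laden sketch.

    import numpy as np
    import pandas as pd

    def survey_prevalence(df, y, weight="_LLCPWT", stratum="_STSTR", psu="_PSU"):
        # Weighted prevalence of a 0/1 indicator column `y`, with a
        # with-replacement Taylor-linearized standard error of the kind
        # the complex survey procedures in standard packages produce.
        w = df[weight].to_numpy(float)
        yv = df[y].to_numpy(float)
        p = np.sum(w * yv) / np.sum(w)              # weighted prevalence
        d = df.assign(_z=w * (yv - p) / np.sum(w))  # linearized scores
        var = 0.0
        for _, s in d.groupby(stratum):
            totals = s.groupby(psu)["_z"].sum().to_numpy()  # PSU totals
            n_h = len(totals)
            if n_h > 1:  # single-PSU strata contribute no variance here
                var += n_h / (n_h - 1) * np.sum((totals - totals.mean()) ** 2)
        return p, np.sqrt(var)

Treating the records as a simple random sample, by contrast, typically misstates this standard error.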


The PHSB also provides oversight of the publication of BRFSS data in summary form. These publications adhere to CDC standards for the stability of estimates: estimates are not published if the relative standard error (RSE), the standard error divided by the estimate, is 0.30 or higher. States are provided guidance on publishing summary data that contain unstable estimates, and state publications reviewed by the PHSB also adhere to these standards.
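Expressed as code, the suppression rule amounts to a simple check (a sketch; the function names are ours, and the 0.30 threshold is the CDC standard cited above):

    def relative_standard_error(estimate, se):
        # RSE = SE / estimate; treat a zero estimate as unstable.
        return se / estimate if estimate else float("inf")

    def publishable(estimate, se, threshold=0.30):
        # Suppress the estimate when the RSE is 0.30 or higher.
        return relative_standard_error(estimate, se) < threshold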

Many states have limited resources for data analysis. The PHSB has therefore updated its online tools: in 2017, new data tools for the updated data sets will be available on the site through the BRFSS Web Enabled Analysis Tool (WEAT). These online tools allow for the production of state-level estimates and standard errors for a number of health indicators, which is particularly helpful for states with limited analytical resources.


Research on non-response bias and benchmarking


Internal data analyses

The PHSB produces state-by-state data to inform BRFSS coordinators of demographic differences in the unweighted data. Data are provided to the states quarterly, on the upload site, through the YTD Data Quality Reports. These reports present unweighted age, race, and sex information by sample type (landline and cell phone). In addition, states are provided YTD information on the distribution of the sample throughout the state, by the sub-state regions defined during sampling.

Recently the PHSB has begun to produce annual internal documentation that combines state information, allowing for a closer examination of national benchmarks and BRFSS data. In 2017, the BRFSS will present a paper [4] at the American Association for Public Opinion Research (AAPOR) annual conference comparing BRFSS unweighted and weighted responses, by state and sampling frame, against American Community Survey (ACS) demographic profiles (age, race/ethnicity, sex, education level, and insurance coverage) of state residents. The Data Quality Tables Comparing MSE have been developed to illustrate how the unweighted and weighted samples compare with ACS benchmarks using the mean squared error (MSE). Results indicate that the landline MSE has increased over time, while the cell phone MSE has decreased. MSE also differs greatly by demographic characteristics within each sampling frame, usually illustrating that the two samples have unweighted bias in opposite directions: cell phone samples are biased toward younger respondents, and landline samples are biased toward older respondents. As expected, weighting controls for much of the variation in MSE by demographics, and combining data from the two sampling frames improves it further. As the percentage of cell phone respondents in the BRFSS sample increases, unweighted MSE has declined for most, but not all, variables; bias related to respondent education level, in particular, needs additional scrutiny. State-to-state comparisons also show that the potential for bias differs by the geographic locations of the samples. As noted in the Data Quality Tables Comparing MSE, weighting reduces the MSE of BRFSS estimates when compared against ACS demographic characteristics.
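As an illustration of the benchmarking idea (our formulation in Python, not the exact PHSB procedure), the MSE of a sample's demographic distribution against the ACS can be computed as the mean of squared deviations across categories; the proportions below are hypothetical.

    import numpy as np

    def benchmark_mse(sample_props, acs_props):
        # Mean squared deviation of sample demographic proportions
        # from the corresponding ACS benchmark proportions.
        s = np.asarray(sample_props, float)
        a = np.asarray(acs_props, float)
        return float(np.mean((s - a) ** 2))

    # Hypothetical age-group shares: unweighted cell phone sample vs. ACS.
    cell_unweighted = [0.28, 0.24, 0.20, 0.16, 0.12]  # skews young
    acs_benchmark   = [0.18, 0.20, 0.21, 0.21, 0.20]
    print(benchmark_mse(cell_unweighted, acs_benchmark))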


The BRFSS also ensures data quality by comparing vendors/data collectors on an annual basis. Standards of data collection are set by the Data Collectors' Protocol (available upon request). Project officers make site visits in part to observe data collection for individual states and for data collectors who have contracted with one or more states. Individual states receive YTD Data Quality Reports (also available upon request), and a Summary Data Quality Report is provided annually on the BRFSS website. Comparison Data Quality Reports (DQRs) are produced internally to illustrate data collection patterns by vendor. Outlying results that may indicate problems with data collection are investigated by the project officers and the Survey Operations Team. Conference calls with state coordinators and data collectors include discussions on alleviating potential bias in the data collection process.


Literature review of published validity and reliability studies of the BRFSS

As a standard practice, the BRFSS maintains and regularly updates a bibliography of peer-reviewed studies of the reliability and validity of BRFSS questions. Both an annotated version and a reference listing are published on the BRFSS website (http://www.cdc.gov/brfss/publications/index.htm). In addition, BRFSS staff produce meta-analyses of these publications [5], as well as statistical comparisons of BRFSS prevalence estimates with those of other surveys [6]. An annotated bibliography of recently published peer-reviewed studies not listed in the most recent meta-analysis is attached. An update of the meta-analysis is currently in production.


National weighting methodology for the BRFSS

In 2016, partly as a result of recommendations from OMB, the BRFSS began working on a methodology for a national weight to be added to the public use dataset. Data users often aggregate BRFSS state samples for national estimates without accounting for state-level sampling, a practice that could introduce bias because the weighted distributions of the state samples do not always adhere to national demographic distributions. We examined six methods of reweighting and compared the resulting estimates of key health indicators to those of the National Health Interview Survey (NHIS), using 2013 data. Some of the methods tested reduced the variance of the weights, and one reduced both the variance of weights and the design effect at the national level; however, estimates based on aggregated state weights did not differ significantly from those based on national weights, so the practice of aggregating state samples did not show a bias related to state weighting. This research has been accepted for publication in BMC Medical Research Methodology and should appear in print early in 2017.
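For context, one common summary of the "variance of weights and design effect" comparison is Kish's approximation, deff = 1 + CV^2(w), where CV is the coefficient of variation of the weights. The sketch below is our illustration with hypothetical weight vectors, not the method used in the paper.

    import numpy as np

    def kish_deff(weights):
        # Kish's approximate design effect due to unequal weighting:
        # deff = 1 + CV^2(w). Effective sample size is roughly n / deff.
        w = np.asarray(weights, float)
        return 1.0 + w.var() / w.mean() ** 2

    aggregated_state  = np.array([0.6, 1.2, 3.5, 0.4, 2.8])  # hypothetical weights
    national_reweight = np.array([0.9, 1.1, 1.6, 0.8, 1.3])  # hypothetical weights
    print(kish_deff(aggregated_state), kish_deff(national_reweight))
    # The method with the smaller deff loses less effective sample size to weighting.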



Terms of clearance added in 2016


Cognitive testing of new questions


The PHSB requires all programs that sponsor new questions to provide information on prior use and cognitive testing, as well as references to any validation studies that may have been completed on the proposed questions. If the questions need additional testing, the PHSB has contracted with a vendor to conduct testing in person and by phone with up to nine people. In many instances, several versions of a question may be tested. The PHSB acknowledges that testing questions with such a limited number of persons is not optimal; once a generic OMB package for CDC cognitive testing has been approved, the PHSB will increase the number of persons included in the cognitive testing process.


All new questions are also included in the field test. Since the field test sample is approximately 300 persons, and is included in the OMB approval of the BRFSS, this activity has allowed for refinement of questions prior to implementation. Sponsoring programs are informed during the field test of any problems associated with question wording, skip patterns, or interviewer instructions. In 2016, the field test was instrumental in final changes to the wording of the fruit and vegetable questions, even after cognitive testing and a year of review by programs and states.


Harmonizing question wording for core and rotating modules


In 2016 the BRFSS undertook a comprehensive comparison of all questions on the 2015 and 2016 BRFSS questionnaires, including the core, rotating core, and optional modules, against a number of other surveys. Because the BRFSS optional modules cover a range of health-related topics, a large number of questionnaires from other surveys were included in the crosswalk; the crosswalk provides an in-depth comparison of the questions, as well as supporting information on current phrasing, and is available upon request. The surveys include the most recent publicly available versions of the National Health Interview Survey (NHIS), the National Health and Nutrition Examination Survey (NHANES), the National Survey on Drug Use and Health (NSDUH), the Health Information National Trends Survey (HINTS-FDA), the National Immunization Survey (NIS), the National Adult Tobacco Survey (NATS), the National Youth Tobacco Survey (NYTS), the Youth Risk Behavior Surveillance System (YRBSS), the Medicare Current Beneficiary Survey (MCBS), the Pregnancy Risk Assessment Monitoring System (PRAMS), the Health and Retirement Study (HRS), the National Crime Victimization Survey (NCVS), the Motor Vehicle Occupant Safety Survey (MVOSS), the American Housing Survey (AHS), the Current Population Survey (CPS), and the National Survey of Family Growth (NSFG). The crosswalk presents questions from these surveys alongside the 2015 and 2016 BRFSS questionnaires, and it will be the basis of a review of differences between the BRFSS and other surveys that will be presented to the programs.


Each time a question or module is proposed for the BRFSS, the sponsoring program is required to provide information on previous use, cognitive testing, and any available validation studies. The information in the crosswalk will provide a basis for question comparison, which may be useful to programs and states as new questions are considered for adoption by the BRFSS.

This work is in its infancy, and there are no plans at this time to change extant BRFSS questions simply to conform to those of other surveys, especially since the mode of data collection for many of these surveys differs from that of the BRFSS. The crosswalk document is a guide to inform future question wording.




[1] Pierannunzi C, Chowdhury P, Town M. The Effects of Overlapping Sampling on the BRFSS. Paper presented at the American Association for Public Opinion Research (AAPOR) annual conference, 2015.

[2] Some territories, such as Guam and Palau, are exempt from these requirements.

[3] See the BRFSS Comparability of Data documentation for each public use dataset. For 2015, the document is located at http://www.cdc.gov/brfss/annual_data/2015/pdf/compare_2015.pdf.

[4] Pierannunzi C, Xu F, Chowdhury P, Garvin W. Comparison of Survey Response and Sampling Bias by Sample Frame. Paper to be presented at the American Association for Public Opinion Research (AAPOR) annual conference, May 2017.

[5] Pierannunzi C, et al. A systematic review of publications assessing reliability and validity of the Behavioral Risk Factor Surveillance System (BRFSS), 2004-2011. BMC Medical Research Methodology 2013;13:49.

[6] Li C, et al. A comparison of prevalence estimates for selected health indicators and chronic diseases or conditions from the Behavioral Risk Factor Surveillance System, the National Health Interview Survey, and the National Health and Nutrition Examination Survey, 2007-2008. Preventive Medicine 2012;54(6):381-387.
