RANDS - 10 Day Letter


DEPARTMENT OF HEALTH & HUMAN SERVICES Public Health Service

Centers for Disease Control and Prevention

National Center for Health Statistics

3311 Toledo Road

Hyattsville, Maryland 20782

October 30, 2018


Margo Schwab, Ph.D.

Office of Management and Budget

725 17th Street, N.W.

Washington, DC 20503


Dear Dr. Schwab:


The staff of the NCHS Collaborating Center for Questionnaire Design and Evaluation Research (CCQDER) (OMB No. 0920-0222, exp. 07/31/2018), in collaboration with the Division of Research and Methodology, plans to conduct a methodological study exploring whether and how data from non-opt-in web panels may be linked to existing NCHS datasets, and quantifying the associated measurement error. We are requesting permission to administer two rounds of a short web survey (to be known as the NCHS Research and Development Survey, or RANDS) using a pre-existing, non-opt-in, commercial survey panel (in this case, NORC’s AmeriSpeak panel). We propose to administer the first round of data collection in the final quarter of calendar year 2018. The second round would be administered in either the second or third quarter of calendar year 2019.


We propose to begin administration of the first round as soon as we receive clearance.


Please note that in this package, we are requesting approval only for the web survey itself; you previously approved the cognitive evaluation of the RANDS questionnaire on February 19, 2018.


Proposed Project: The NCHS Research and Development Survey (RANDS)


As the nation’s principal health statistics agency, the National Center for Health Statistics (NCHS) is responsible not only for producing high-quality statistics on the health and wellbeing of the American public and the state of the country’s health care system, but also for contributing to the development of survey methodologies that will allow the agency to continue producing these health statistics in the future.


To this end, NCHS’ Division of Research and Methodology (DRM) proposes a methodological survey that will allow the agency to continue the long process of studying whether, and how, web panels may be integrated into NCHS’ existing survey systems. The overarching goal of this information collection is to discover new methods, and to improve existing ones, that will increase data quality in the midst of declining response rates and increasing costs. It is well documented that the American public is becoming less likely to respond to surveys. While government surveys as a whole enjoy higher response rates than those conducted by private organizations, the overall trend is undeniable. Methodological research into new ways to reach respondents while minimizing their burden is necessary to ensure that the government is adapting to this changing survey environment in a proactive and efficient manner. The scope of this project involves both measurement error and sampling strategies. In particular, the project will research how measurement error can be examined in order to inform better-constructed questionnaires, as well as lay the foundation for how recruited, web-based samples might be integrated alongside traditional sampling methods.


This proposal is for two new rounds of data collection that follow from the findings of the previous rounds of RANDS, which were conducted in late 2015 and early 2016. The first two rounds of RANDS used the Gallup Panel and allowed NCHS to begin studying both how estimates from web panels can be linked to those from address-based sample surveys and how web probing can supplement the existing questionnaire evaluation process at NCHS. Progress has been made on both fronts.


On the estimation side, DRM staff have studied a number of modelling techniques that attempt to estimate target variables using data from both the NHIS and RANDS. The largest successes to this point have come from the estimation of chronic conditions; as such, the proposed RANDS questionnaire for these new rounds will include a larger proportion of chronic condition questions in order to aid this effort. On the question evaluation front, CCQDER has shown that web probes are a viable way to explore not only the distribution of patterns of interpretation across a larger sample, but also how sub-groups within that sample think about and answer questions. For example, we have found that web probing can be used to examine whether out-of-scope patterns of interpretation are more likely to occur for particular respondent groups (i.e., a method for investigating measurement bias). In addition, methodological work has suggested that the optimal format for web probes is a forced-choice question with a small number of answer categories. The proposed rounds of RANDS will allow CCQDER to expand upon these results and continue to explore the most efficient and effective ways to incorporate web probes into questionnaire evaluation studies. For instance, we plan to incorporate split-panel experiments (see below) into RANDS that will further explore the optimal format of the web probes. We are also including questions that we have previously found to have low construct validity in order to better develop web probing as a tool for quantifying measurement error and measurement bias.


Linking Dissimilar Datasets. Non-probability sampling and the use of web-based panels are survey methods that have been used extensively in business and marketing. The American Association for Public Opinion Research (AAPOR), a professional association that focuses on the practical uses of surveying, has released two relevant reports: a report on online panel surveys in 2010 [1] and a report outlining the advantages and challenges of using non-probability samples in 2013 [2]. Methodological questions regarding reliability and potential utility for the collection of official statistics remain, even for non-opt-in, recruited panels that claim national representativeness. In an effort to respond to these methodological issues, RANDS will examine how the results of a web-administered survey using current National Health Interview Survey (NHIS) questions (OMB No. 0920-0214, Exp. Date 12/31/20) on NORC’s AmeriSpeak Panel compare to the results of the current production NHIS. Furthermore, NCHS will attempt to adjust the AmeriSpeak Panel results to make them more comparable to the NHIS results, using a variety of modelling techniques. This analysis will move beyond the various attempts in the statistical literature that use demographic or socioeconomic variables to link datasets, and will instead test whether topically-relevant variables (in this case, health conditions and outcomes) are more successful.
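
To make the planned adjustment concrete, the sketch below illustrates one family of techniques of the kind alluded to above: calibrating panel weights to known control totals via raking (iterative proportional fitting). The margins, category codes, and control totals are hypothetical, and the letter does not commit NCHS to this particular method; this is a minimal illustration only.

    import numpy as np

    def rake(weights, margins, targets, n_iter=100, tol=1e-10):
        """Raking (iterative proportional fitting): repeatedly rescale the
        weights until weighted category totals match the control totals
        on every margin."""
        w = np.asarray(weights, dtype=float).copy()
        for _ in range(n_iter):
            max_dev = 0.0
            for codes, totals in zip(margins, targets):
                for cat, target in totals.items():
                    mask = codes == cat
                    current = w[mask].sum()
                    if current > 0:
                        factor = target / current
                        w[mask] *= factor
                        max_dev = max(max_dev, abs(factor - 1.0))
            if max_dev < tol:
                break
        return w

    # Hypothetical example: five panelists raked to NHIS-style control
    # totals on two margins (an age-group code and a chronic-condition flag)
    base_w = np.ones(5)
    age = np.array([0, 0, 1, 1, 1])    # 0 = under 45, 1 = 45 and over
    cond = np.array([0, 1, 0, 1, 1])   # 1 = reports a chronic condition
    adjusted = rake(base_w, [age, cond],
                    [{0: 40.0, 1: 60.0}, {0: 55.0, 1: 45.0}])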


Measurement Error. Construct validity, or the degree to which a question actually measures the concept it purports to measure, is one of the key determinants of a survey’s data quality and its potential response burden. If a questionnaire’s items all have strong construct validity, the questionnaire will tend to have lower respondent burden and higher data quality, whereas items with low construct validity will increase respondent burden and typically produce less reliable data. Construct validity is typically examined via cognitive interviewing methodology, since this qualitative method can determine the phenomena that respondents consider when formulating responses. Thus, cognitive interviewing, with its ability to reveal the substantive meaning behind a survey statistic, can provide critical insight into question performance. Nonetheless, cognitive interviewing has its own limitations and cannot provide a complete picture of question performance. While cognitive interviewing can show that a particular interpretive pattern does indeed exist, it cannot determine the extent or magnitude to which that pattern would occur in a survey sample. Nor can cognitive interviewing studies reveal the extent to which interpretive patterns vary across groups of respondents. Additionally, the method cannot fully determine the extent to which respondents experience difficulty when attempting to answer a question. In short, as a qualitative methodology, cognitive interviewing lacks the ability to provide quantitative assessment, a component particularly essential to the field of survey methodology. Likewise, strictly quantitative methods of question and questionnaire evaluation, using metrics such as item non-response and missing rates, can indicate, but not explain, sources of response error. Recent work has attempted to bridge this gap by integrating qualitative and quantitative question evaluation methods through the use of targeted probing and follow-up questions.


RANDS/NORC Panel


The 2018 RANDS itself will be conducted for NCHS by NORC (subcontracted through CCQDER’s contractor Swan Solutions), using NORC’s standing, proprietary, recruited panel (branded as the “AmeriSpeak Panel”). The survey will be administered in two rounds, capturing a total of 4,000 complete cases, and will include current NHIS questions as well as an interspersed set of CCQDER-developed structured web probe questions.


Background Information about the AmeriSpeak Panel: The sample for completing the 4,000 web surveys will be drawn from the AmeriSpeak Panel. NORC recruits panel members using address-based sampling (ABS) to contact U.S. households at random. During recruitment, respondents take a short demographic survey and are asked if they would be interested in participating in additional surveys as members of the AmeriSpeak Panel. Unlike opt-in panels, the recruitment process for the AmeriSpeak Panel starts with a random sample of addresses; as a result, it is possible to derive the selection probability, and hence the sampling weight, for each respondent on the panel. There is no time commitment to membership in AmeriSpeak; rather, households and individuals are encouraged to remain members as long as they are willing and interested. Surveys are self-administered. As with any longitudinal design, AmeriSpeak is affected by attrition; NORC makes significant efforts to retain panelists for as long as possible.
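
As a minimal illustration of the weighting logic described above (not NORC’s actual procedure), the base weight for a panelist is simply the inverse of that panelist’s selection probability; the probabilities below are invented for the example.

    import numpy as np

    # Hypothetical address-level selection probabilities for three panelists
    p_select = np.array([1.2e-4, 3.0e-4, 8.0e-5])

    # Base sampling weight: the inverse of the selection probability, i.e.,
    # the number of frame households each sampled address stands in for
    base_weight = 1.0 / p_select   # -> [8333.33, 3333.33, 12500.0]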


As mentioned above, AmeriSpeak will be used as the sample source for completing the surveys. The sampling frame information will be protected under Section 308(d) of the Public Health Service Act [42 U.S.C. 242m(d)] and the Confidential Information Protection and Statistical Efficiency Act, or CIPSEA (Title V of Pub. L. 107-347).


All panel participants have been fully screened, and a substantial amount of background data (e.g., health and well-being, socio-economic and occupational status, media usage, political views, age, gender, race, and ethnicity) has already been collected; these data will be attached to the final files delivered by NORC to NCHS, allowing for extensive non-response bias analysis. Following the delivery of the second dataset and the final methodological report, NORC will remove all RANDS data from its servers, including backups. This will include not only the responses to the survey itself, but also the metadata associated with RANDS (including response, non-response, participation, and sampling flags identifying the 2018 RANDS sample). NORC has extensive cyber and physical security in place, including a CIPSEA Information Protection Plan approved by the NCHS Confidentiality Officer and the NCHS Information Systems Security Officer, in order to protect both the security of the front-end survey interface and the back-end storage of the survey’s data. Additionally, all NORC employees working on the 2018 RANDS will complete NCHS confidentiality training, sign the NCHS affidavit of nondisclosure (see Attachment 2), and become NCHS designated agents via the Designated Agent Agreement between Swan Solutions, LLC and NCHS.


Specific Plans for the Proposed Study: The 2018 RANDS web survey will be conducted over two separate rounds, with each round potentially using a slightly different questionnaire. The first round of RANDS will administer a set of NHIS questions (adapted for a self-report web mode by NCHS, with advice from NORC) and an interspersed set of structured probe questions. These embedded web probes are included to better understand and quantify measurement error for certain RANDS questions. The questionnaire for the first round can be seen in Attachment 1; no questionnaire is currently included for Round 2, as DRM may make changes between the rounds based on analytic needs. If substantive changes are made to the first-round version for the second round, they will be shared with OMB as a separate GenIC.


The questions, in the form in which they will actually be used on RANDS, will be pre-tested by NCHS after NORC has programmed the questionnaire into its Computerized Self-Administered Questionnaire (CSAQ) software. Please note that this usability pre-test was already approved by OMB on 02/09/18 as part of the approval for the cognitive evaluation of the RANDS questionnaire.


As noted above, we plan to include a series of split-sample experiments in the RANDS web survey in order to better explore the distribution of response errors and patterns of interpretation. A brief summary of each of the five planned experiments follows:


1. General health rating: A split ballot experiment will be conducted in which a randomly selected half of the web sample will receive the standard NHIS general health rating item with its 5 response options [PHSTATA], while the other half of the sample will receive a similar question with the balanced response options typically found on European censuses [PHSTATB]. We will examine whether the two versions of the question function similarly via quantitative comparisons of their response distributions (see the sketch following this list). Additionally, we will examine whether responses to each version capture similar types of people and similar cognitive response processes, as indicated by answers to a series of follow-up probe questions.


2. Emphysema, chronic obstructive pulmonary disease (COPD), and chronic bronchitis: We will conduct a split ballot experiment in which half of the sample receives 3 separate items assessing emphysema [EPHEV], COPD [COPDEV], and chronic bronchitis [CBRCHYR], while the other half of the sample receives a question that combines all three conditions into one survey item [NEWLUNG]. To examine whether the combined item provides prevalence estimates similar to those from the 3 separate items, we will present a follow-up probe asking respondents to specify which condition they have (emphysema, COPD, or chronic bronchitis). In addition to comparing prevalence estimates between the two split halves of the sample, we are interested in whether the combined item might reduce previously observed error associated with the chronic bronchitis question.


3. Pain: A split ballot experiment will be conducted to assess the effect of reference period on responses to survey questions about pain frequency and pain limitations. Half of the web sample will receive pain frequency and pain limitation items with a six-month reference period [CHPAIN6M, PAINLMT6], while the other half of the sample will receive the items with a three-month reference period [PAIN_2, PAINLMT3]. All respondents will receive an identical item assessing pain intensity [PAIN_4], and will be presented with an identical series of follow-up probe questions.


4. E-cigarettes: A split ballot experiment will test comprehension of the term “e-cigarette.” Prior cognitive testing has suggested that the term is widely used and understood; thus, we suspect burden can be reduced by eliminating the long definition of “e-cigarette” from the standard question stem. We will test this by presenting half of the web sample with the standard question with the long definition [ECIGEV_AE] and the other half with a significantly shortened version of the question that assumes comprehension of the term “e-cigarette” [ECIGEV_AF]. We will examine the comparability of prevalence estimates generated by the two versions of the question and will test comprehension of the shortened version [ECIGEV_AF] with a follow-up probe question.


5. Affect: A split ballot experiment will examine three psychological affect scales that the NHIS plans to begin rotating on an annual basis in 2018: the K6, the GAD7, and the PHQ9. The GAD7 and PHQ9 will be administered to half of the sample, and the results will then be compared with those of the K6 and the Washington Group anxiety and depression questions, respectively. An identical probe will be administered after each of these scales in order to examine the constructs each is capturing.


In order to facilitate these experiments, all respondents will be assigned to one of two instruments (an “A” form and a “B” form). Assignment will be random at the form level; respondents will not be randomly assigned to each of the five experiments separately. The two forms of the questionnaire are included as Attachment 1.
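
A minimal sketch of this form-level assignment, together with the kind of distributional comparison described in experiment 1, is shown below. The data are invented placeholders and the variable names are illustrative only; they are not part of the actual RANDS instrument or analysis plan.

    import numpy as np
    from scipy.stats import chi2_contingency

    rng = np.random.default_rng(2018)
    n = 4000

    # A single randomization at the form level drives all five experiments
    form = rng.choice(["A", "B"], size=n)

    # Placeholder 1-5 general health ratings standing in for fielded data
    rating = rng.integers(1, 6, size=n)

    # 2 x 5 contingency table of form by response category
    table = np.array([[np.sum((form == f) & (rating == r))
                       for r in range(1, 6)]
                      for f in ("A", "B")])

    # Chi-square test of whether the two question versions yield
    # similar response distributions
    chi2, p_value, dof, expected = chi2_contingency(table)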


As noted above, a slightly different set of NHIS items and web probes may be used in the second round of the 2018 RANDS. The second round will also include both NHIS questions and web probes. Changes to the second-round questionnaire will be based on the needs of the DRM research teams, and may include dropping or adding variables or focusing web probes on different NHIS questions. If the content areas of this revised questionnaire differ substantively from what is included in Attachment 1, the second-round questionnaire will be submitted to you for review.


The RANDS survey itself will begin with an introduction screen, similar to what is seen at the beginning of Attachment 1, explaining the general purpose of the survey and providing the confidentiality and Paperwork Reduction Act language. The NCHS Ethics Review Board (ERB) has agreed to a waiver of signed informed consent for this project, as signed consent is not possible for internet surveys where the population of respondents is anonymous to NCHS, as in a commercial panel. The introduction page will require the respondent to manually click through to the first page of questions; this action therefore constitutes implied consent.


Following each round of the 2018 RANDS, NORC will process the survey data and prepare data files. The data files will not include the respondents’ names, addresses, or any other primary personally identifiable information (PII), including any ISP data NORC has about the computer from which the respondent completed the survey. All metadata tying the respondents to their inclusion in the RANDS sample will be eliminated from the NORC servers, including backups, following final delivery. The data files will be transferred to NCHS either via a secure File Transfer Protocol (FTP) web portal or by loading them directly onto an encrypted memory stick. Following confirmation that the second-round transfer is complete and successful, NORC will delete the data files from its secured servers and will provide a certificate of destruction certifying that all RANDS-related data and metadata have been removed from its servers and backups. All NORC staff working on the RANDS data must successfully complete NCHS’ confidentiality training (https://www.cdc.gov/nchs/training/confidentiality/training/) and sign the NCHS Contractor Non-Disclosure Affidavit (Attachment 2).


Respondents will not receive an incentive for participating in RANDS.


In total, the maximum respondent burden for this project will be 1,333 hours. A burden table for this project is shown below:



Form Name       Number of      Number of Responses/   Average Hours   Response Burden
                Participants   Participant            per Response    (in Hours)

Questionnaire   4,000          1                      20/60           1,333

Total                                                                 1,333
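
The total follows directly from the table entries: 4,000 respondents × 1 response each × 20/60 hour (20 minutes) per response ≈ 1,333.3 hours, reported here as 1,333.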




Attachments (2)

cc:

V. Buie

T. Richardson

DHHS RCO


[1] https://www.aapor.org/Education-Resources/Reports/Report-on-Online-Panels

[2] https://www.aapor.org/Education-Resources/Reports/Non-Probability-Sampling.aspx


