
Supporting Statement B for Request for Emergency Clearance:

NATIONAL CENTER FOR HEALTH STATISTICS RESEARCH AND DEVELOPMENT SURVEY


OMB No. 0920-XXXX

Expiration Date: XX/XX/XXXX


Contact Information:


Paul J. Scanlon Jr., Ph.D.

Senior Behavioral Scientist

Collaborating Center for Questionnaire Design and Evaluation Research

Division of Research and Methodology

National Center for Health Statistics/CDC

3311 Toledo Road

Hyattsville, MD 20782

301-458-4649

[email protected]




May 08, 2020

Table of Contents


B. Collections of Information Employing Statistical Methods


B.1. Respondent Universe and Sampling Methods

B.2. Procedures for the Collection of Information

B.3. Methods to Maximize Response Rates and Deal with Nonresponse

B.4. Tests of Procedures or Methods to be Undertaken

B.5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data


LIST OF ATTACHMENTS


Attachment A - Public Health Service Act

Attachment B - Intro Screen and Questionnaire

Attachment C - Telephone Screening Script

Attachment D - Cognitive Interview Telephone/Video Introduction Script

Attachment E - NORC CIPESA Protection Plan

Attachment F - NCHS Non-Disclosure Affidavit for NORC Organization Staff working on RANDS

Attachment G - CCQDER Data Storage and Access Policy

Attachment H - NCHS ERB Approval Notice

Attachment I - Recruitment Advertisement

Attachment J - Sample Script of CCQDER Voice Mail

Attachment K - Respondent Data Collection Sheet

Attachment L – Abstracts of Previous RANDS Research on Estimation and Calibration

Attachment M - RANDS during COVID-19 Variable Crosswalk, Uses, and Power Calculation

Attachment N - Health and Health Care Access Questions Across Federal Surveys During COVID-19


B. STATISTICAL METHODS


As explained in detail in Supporting Statement A, this emergency information collection request has two purposes: a) generation of data that can help explain health-related experiences of the US population during the pandemic and b) continuation of developmental survey methods research.

These purposes encompass three distinct, but related, activities:

  1. The RANDS-COVID-19 survey, which will be conducted by NORC using its commercially available probability-based AmeriSpeak survey panel and whose data will be calibrated to the National Health Interview Survey (NHIS) based on previous NCHS RANDS research findings (production of estimates)

  2. Evaluation and calibration of the differences between AmeriSpeak, the NHIS, and the nonprobability panel (estimation research)

  3. The evaluation and validation of the RANDS during COVID-19 questions via cognitive interviewing and probing (measurement research).

The statistical methods for these three activities are largely different, and the responses in Part B are separated to reflect this.


In short, for the first and second activities, production and estimation research, the two rounds of the RANDS during COVID-19 survey will use both a statistically-sampled, recruited panel (NORC’s AmeriSpeak) and an opt-in supplementary panel (NORC’s TrueNorth) as their frames. The survey will be administered by NORC in two modes (web for the majority of responses and telephone for the rest) using their web survey and CATI applications respectively, and will use reminder emails and telephone calls to maximize response. For the third activity, measurement research, the cognitive interviewing study of the RANDS-COVID-19 questionnaire will follow the typical procedures laid out in CCQDER’s existing question evaluation generic clearance (OMB # 0920-0222, current expiration date: 8/31/2021) using a purposive sample recruited by CCQDER staff via advertisements and social media posts. It will be conducted either in person, over the phone, or via video conferencing software (as the situation permits).


1. Respondent Universe and Sampling Methods


RANDS during COVID-19 Surveys


NCHS plans to conduct two rounds of data collection using NORC’s AmeriSpeak Panel to inform understanding of the population during the COVID-19 pandemic for production. For the estimation research, we will take the opportunity of fielding this study to continue the developmental research comparing the NHIS with probability and nonprobability panels available in the commercial sector. As such, we will conduct two rounds of the same questions with a supplemental non-probability (opt-in) panel (branded as TrueNorth1). The rounds will be conducted between six and eight weeks apart, and will begin either at the end of April or as soon as OMB approval is received. NCHS plans to obtain 12,000 responses to the first round of the survey and 10,000 to the second; these sample targets take into account the response rates previously observed for these two NORC panels. Of this sample, half of the respondents in each round will come from NORC’s probability-sampled AmeriSpeak panel (the same panel previously approved for use in the previous two rounds of RANDS under CCQDER’s generic clearance OMB # 0920-0222, current expiration: 8/31/2021), and the other half will come from their supplemental non-probability panel, TrueNorth. The AmeriSpeak panelists (but not the TrueNorth panelists) will be re-contacted for both rounds, giving NCHS the ability to do longitudinal analysis and explore how the same respondents’ experiences and perceptions of COVID-related health characteristics change over a short period of time for the production activity. (Because of its opt-in nature, NORC does not have the ability to recontact the full set of sampled TrueNorth panelists; thus a new sample of TrueNorth panelists will be drawn for the second round, limiting the longitudinal element to the AmeriSpeak panelists only.)


NORC’s AmeriSpeak Panel: Half of the sample for each round of RANDS-COVID-19 will be based on the AmeriSpeak Panel. NORC recruits panel members using address-based sampling (ABS) to contact U.S. households at random. During recruitment, respondents take a short demographic survey and are asked if they would be interested in participating in additional surveys as a member of the AmeriSpeak panel. Unlike opt-in panels, AmeriSpeak’s recruitment process starts with a random sample of addresses from an address frame (NORC has developed an in-house address frame based on USPS’ delivery sequence file and updated through field operations and canvassing), and as a result, it is possible to derive the selection probability and hence the sampling weight for each respondent on the panel. There is no time commitment to membership in AmeriSpeak. Rather, households and individuals are encouraged to remain members as long as they are willing and interested. As with any longitudinal design, AmeriSpeak is affected by attrition; NORC makes significant effort to retain panelists for as long as possible.


Most panelists complete AmeriSpeak surveys using NORC’s web interface, though about 10% of the panel has indicated they prefer to complete surveys via telephone calls. As with the previous round of RANDS that was approved (again, under #0920-0222), RANDS-COVID-19’s sample will include both web and phone respondents. This is important given that panelists who prefer phone (either by choice or because they do not have ready access to an internet-capable device) may have different health outcomes, behaviors, and attitudes; therefore, excluding them from the sample would lead to coverage bias beyond what is already expected from using a commercial survey panel (as compared to a direct ABS sample, such as that used by the NHIS).


As with prior rounds of RANDS, a stratified sample of AmeriSpeak panelists will be contacted for the first round (and those same panelists will then be re-contacted for the second round). The sample design will use the panel information that NORC holds about the AmeriSpeak members to create strata based on age, race/ethnicity, and educational attainment.
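
As a simple illustration of this stratified design, the sketch below draws a proportional-allocation stratified sample from a panel file. The column names, allocation rule, and seed are hypothetical placeholders and are not NORC's actual sample-selection code.

```python
# Minimal sketch of a proportional-allocation stratified sample of panelists.
# Column names, the allocation rule, and the seed are hypothetical; NORC's
# actual sample design is more involved.
import pandas as pd

def stratified_panel_sample(panel: pd.DataFrame, n_total: int,
                            strata=("age_group", "race_ethnicity", "education"),
                            seed: int = 20200508) -> pd.DataFrame:
    """Draw roughly n_total panelists, allocated proportionally across strata."""
    strata = list(strata)

    def draw(group: pd.DataFrame) -> pd.DataFrame:
        # Allocate each stratum a share of n_total proportional to its size.
        n = min(len(group), max(1, round(n_total * len(group) / len(panel))))
        return group.sample(n=n, random_state=seed)

    return panel.groupby(strata, group_keys=False).apply(draw)

# Hypothetical usage:
# round1_sample = stratified_panel_sample(amerispeak_panel, n_total=6000)
```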


Estimation Procedures: For the first purpose of generating health-related pandemic data, given the major differences in sample quality between RANDS and NCHS’ household surveys that produce official statistics (such as the NHIS and the National Health and Nutrition Examination Survey (NHANES)), a series of estimation procedures that the NCHS Division of Research and Methodology (DRM) has developed over the past three rounds of RANDS data collection and analysis will be used (to be attached at a later date). DRM will use both traditional survey estimation procedures and model-assisted methods. Informative estimates based on the model-assisted methods applied to the AmeriSpeak panel respondents will be publicly released, and technical notes describing the methods and their limitations will accompany the estimates.

For the model-assisted methods applied for purpose of generating health-related pandemic data, we will develop calibrated survey weights for both the AmeriSpeak and TrueNorth samples using the NHIS for calibration. (A number of NHIS questions are included verbatim on the RANDS-COVID-19 questionnaire specifically for this purpose.)

Generally, the calibrated survey weights calculated by NCHS will start from, or otherwise incorporate, the RANDS sample weights provided by NORC. The variables used by NCHS to adjust the weights will be those included in RANDS for that purpose. The methods and variables used by NORC to construct the AmeriSpeak panel weights and the RANDS sample weights are described below.


More specifically, to produce estimates we will adjust the RANDS sample weights provided by NORC for health-related factors (i.e., the chronic conditions collected on both RANDS and the NHIS; see Attachment M for the “calibration” variables) using raking methods. This adjustment is intended to control for possible health differences between the NHIS and RANDS samples due to different response propensities, coverage, and/or sample variability. Underlying health may affect both participation in RANDS and responses to COVID-19-related questions for some participants. Based on our past and ongoing research, additional calibrations to the weights provided for RANDS can reduce differences between RANDS and NHIS estimates, though the effects depend on the outcomes and calibration variables, as well as the relationships between them. Unlike our usual data releases for NCHS core surveys, it is possible that some refinements to the weighting calibrations will be identified after initial release; updated estimates would then be made available and documented.
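
To make the raking step concrete, the following is a minimal sketch of an iterative proportional fitting routine of the kind described above. The variable names, control totals, and convergence settings are hypothetical illustrations and do not represent NCHS's production calibration code.

```python
# Illustrative sketch of raking (iterative proportional fitting) of the
# NORC-supplied sample weights to external control totals. All field names
# and control totals below are hypothetical.
import pandas as pd

def rake_weights(df, base_weight_col, margins, max_iter=50, tol=1e-6):
    """Adjust base weights so weighted margins match external control totals.

    df              : respondent-level data with categorical calibration variables
    base_weight_col : column holding the NORC-supplied sample weights
    margins         : dict mapping variable name -> {category: control total},
                      e.g., NHIS-based totals for a diagnosed chronic condition
    """
    w = df[base_weight_col].astype(float).copy()
    for _ in range(max_iter):
        max_change = 0.0
        for var, targets in margins.items():
            for category, target in targets.items():
                mask = df[var] == category
                current = w[mask].sum()
                if current > 0:
                    factor = target / current
                    w[mask] *= factor
                    max_change = max(max_change, abs(factor - 1.0))
        if max_change < tol:  # stop once all margins are essentially matched
            break
    return w

# Hypothetical usage: calibrate to NHIS-based totals for two chronic conditions.
# rands["calibrated_wt"] = rake_weights(
#     rands, "norc_weight",
#     margins={"hypertension": {"yes": 81_000_000, "no": 172_000_000},
#              "asthma":       {"yes": 20_000_000, "no": 233_000_000}})
```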


The effectiveness of the model-assisted approach has been documented in the statistical literature2. This strategy has also been explored in our research on previous RANDS recruited panel survey data. To illustrate this research, Attachment L presents abstracts for 1) a draft journal article, currently in the journal review stage, evaluating the need to temporally align RANDS with the NHIS for estimation; 2) a draft journal article at the last stage of the NCHS clearance process that describes both the measurement and estimation research using RANDS 2; and 3) a draft report in the final stages of NCHS clearance that will be released as an NCHS Series 1 Report giving an overview of the estimation methodology research conducted on RANDS 1 and RANDS 2. For the first production estimates, a model based on our experience with prior rounds of RANDS will be used; later estimates may be based on updated models.

For the first purpose of generating health-related pandemic data, both cross-sectional and longitudinal estimates will be released from the second round of RANDS-COVID-19 data from the AmeriSpeak panel. Cross-sectional approaches will be those described above for the first round. For the initial released estimates of change between time periods, existing methods used in established longitudinal surveys will be used. For the second purpose of continuing our survey methods research, the follow-up components for both panels will allow NCHS to further develop methods that combine calibration with longitudinal methods for estimating change, for both the probability panel and the opt-in panel. Based on this research, updated estimates may be provided.
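
As a simple illustration of the cross-round component, the sketch below computes a weighted within-person change for AmeriSpeak panelists who respond to both rounds. The identifier, outcome, and weight columns are hypothetical, and the production longitudinal methods referenced above may differ.

```python
# Illustrative sketch of a simple weighted estimate of change between rounds
# for panelists present in both rounds. Column names are hypothetical.
import pandas as pd

def estimate_change(round1: pd.DataFrame, round2: pd.DataFrame,
                    id_col: str, outcome: str, weight: str) -> float:
    """Weighted mean within-person change for panelists responding to both rounds."""
    merged = round1.merge(round2, on=id_col, suffixes=("_r1", "_r2"))
    diff = merged[f"{outcome}_r2"] - merged[f"{outcome}_r1"]
    w = merged[f"{weight}_r1"]  # e.g., use the round 1 calibrated weights
    return (diff * w).sum() / w.sum()

# Hypothetical usage:
# estimate_change(rands1, rands2, "panelist_id", "delayed_care", "calibrated_wt")
```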


AmeriSpeak sample weights provided by NORC. For the purpose of generating estimates, NCHS weighting approaches will use the RANDS sample weights provided by NORC for the AmeriSpeak panel as primary inputs and adjust them as described above. The RANDS sample weights provided to NCHS by NORC for the AmeriSpeak data are a combination of two processes, the weights used for the AmeriSpeak Panel itself and the weights calculated for RANDS. (NCHS will use the TrueNorth sample weights for research purposes, not for the purpose of generating estimates.)


NORC has provided the following information about the AmeriSpeak Panel weights, which is similar to the information available for other AmeriSpeak surveys (see, for example, the NORC Documentation for the Data Coalition’s COVID-Impact Survey: https://static1.squarespace.com/static/5e8769b34812765cff8111f7/t/5eb0a6aab2f2aa04386c1032/1588635307466/COVID_Impact_Survey_W1_Field+Report_final_Web_Update.pdf):


Panel base weights for all sampled housing units are computed as the inverse of the probability of selection from the NORC National Frame (the sampling frame that is used to sample housing units for AmeriSpeak) or other address-based sample frames (supplemental panel samples were selected from frames developed from the USPS Delivery Sequence Files). The sample design and recruitment protocol for the AmeriSpeak Panel involves subsampling of initial non-respondent housing units for an in-person follow-up. The subsample of housing units that are selected for the nonresponse follow-up (NRFU) have their panel base weights inflated by the inverse of the subsampling rate. The base weights are then adjusted to account for unknown eligibility and nonresponse among eligible housing units (see below for nonresponse adjustment variables). To produce the final household panel weights, the household-level nonresponse adjusted weights are post-stratified to external counts for number of households obtained from the Current Population Survey.


Final household weights are assigned to each eligible adult in the recruited household. These person-level weights are then adjusted to compensate for nonresponding adults within a recruited household.


The household nonresponse adjustment cells are defined by crossing partisan score categories, young adult/minority categories, and a TargetSmart flag for Republican affiliation (obtained from external vendors including TargetSmart and MSG). Additional person-level nonresponse to the panel is adjusted by age and sex.


Finally, to produce the RANDS-specific weights, the person weights are raked to population benchmarks for age, sex, education, race/Hispanic ethnicity, housing tenure, telephone status, and Census Division.
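
The sketch below illustrates, in simplified form, the sequence of weighting steps NORC describes above (base weights, NRFU inflation, nonresponse adjustment within cells, and post-stratification to CPS household counts). The field names, cell definitions, and control totals are hypothetical placeholders rather than NORC's actual implementation.

```python
# Illustrative sketch of the general household weighting pipeline described
# above; all field names and control totals are hypothetical placeholders.
import pandas as pd

def household_panel_weights(hh: pd.DataFrame, cps_hh_totals: dict) -> pd.Series:
    # 1. Base weight: inverse of the housing unit's selection probability.
    w = 1.0 / hh["selection_prob"]

    # 2. Inflate weights of units subsampled for nonresponse follow-up (NRFU).
    w = w.where(~hh["in_nrfu_subsample"], w / hh["nrfu_subsampling_rate"])

    # 3. Nonresponse adjustment within cells: transfer the weight of
    #    nonresponding units to responding units in the same cell.
    cell_totals = w.groupby(hh["nr_cell"]).transform("sum")
    resp_totals = w.where(hh["responded"], 0).groupby(hh["nr_cell"]).transform("sum")
    w = w.where(~hh["responded"], w * cell_totals / resp_totals)
    w = w.where(hh["responded"], 0.0)

    # 4. Post-stratify responding households to external CPS household counts.
    for stratum, total in cps_hh_totals.items():
        mask = (hh["post_stratum"] == stratum) & hh["responded"]
        w.loc[mask] *= total / w.loc[mask].sum()
    return w
```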


Research:


For the second purpose of this ICR, continuing our survey methods research, RANDS-COVID-19 will continue to function as a methodological study on how data about an ongoing public health crisis and a large shift in health and social behaviors can be collected. As RANDS-COVID-19 is part of the RANDS series, the methodological findings from prior rounds of RANDS, on both the estimation and measurement error sides, will be incorporated in this survey to inform the calculation of substantive estimates. Previous rounds of RANDS have been used to evaluate and develop methods for calibrating external data sources, such as data from commercial panels, to the NHIS, and this work will inform the current data collection.


Estimation research


The estimation research activity of this ICR is to begin the next phase of our developmental research, which, in the absence of the COVID-19 pandemic, would have been to compare the probability-based panel (AmeriSpeak, which has been approved for two other rounds of RANDS to date) and an opt-in panel (NORC’s TrueNorth). The addition of data from TrueNorth will allow us to compare our approaches for opt-in panel data to those for the probability-sampled panel data and assess the statistical purposes for which the TrueNorth data are best suited. Evaluations of the strengths and weaknesses of information from non-probability panel-based web survey data for various uses by NCHS are a natural continuation of our current estimation research activities.


For this estimation research activity, the use of TrueNorth is a natural and important extension of our current research. In the ongoing series of RANDS data collections, NCHS continues to assess the quality of probability-recruited panels (the probability-sampled Gallup Panel and the NORC AmeriSpeak Panel) by comparing the estimates derived from these panels with those from established household face-to-face surveys (such as the NHIS). For the estimation research activity, model-based methods similar to those used for the AmeriSpeak panel component, together with the TrueNorth sample weights provided by NORC, will be used for the TrueNorth panel data, and evaluations of the results will be provided in technical reports accessible to the public. Based on the results and conclusions in these research areas, additional production estimates may be provided from both sets of data (see below).


It is worth repeating in this ICR that NCHS’ research with RANDS includes the aims of understanding and assessing the properties of commercial survey panels and their data, developing effective analytic strategies to combine information from multiple data sources, and exploring the opportunities for subgroup analysis that a large supplemental sample may afford. NCHS believes that the analytic and research possibilities of using the TrueNorth supplement outweigh the inherent methodological issues of using opt-in data.


As with AmeriSpeak, TrueNorth has been well developed and maintained by NORC, the contractor. However, TrueNorth is an opt-in panel, and probabilities of selection cannot be assigned to respondents. We understand that the use of TrueNorth constitutes a major expansion and innovation of NCHS’ web survey research. NCHS’ rationales behind the use of TrueNorth for both estimation research and measurement research include:


  1. We have conducted empirical and theoretical research to combine the information from RANDS and NHIS data sources. The next step is to extend this research to include opt-in panel-based web survey data such as TrueNorth. This research is supported by the fielding of the same questionnaire on both the AmeriSpeak and TrueNorth samples and by the inclusion of a set of NHIS questions on the questionnaire for benchmarking and calibration. For example, this research might indicate there are severe limitations of opt-in surveys or, if possible, may show how data from opt-in web surveys can be combined effectively with probability-based web surveys as well as established household surveys.



  2. Other statistical agencies (namely the Bureau of Labor Statistics (BLS), the Census Bureau, and the National Center for Science and Engineering Statistics (NCSES)) have used non-probability, opt-in panels for questionnaire and measurement error evaluation. As NCHS has communicated to OMB before, the Center believes that its goals with question evaluation are best served with a probability panel because they provide a better opportunity to extrapolate patterns of interpretation to a population. However, to NCHS’ knowledge, no head-to-head comparison of the strengths and weaknesses of probability and opt-in panels in specific regards to question evaluation has yet occurred—either within the federal statistical system or outside of it. This expansion of the RANDS-COVID-19 sample to include TrueNorth will allow CCQDER to directly examine the differences these sample types present for question evaluation studies.



Measurement research


For the measurement research activity of this ICR, the past rounds of RANDS have been used to develop and refine how NCHS can use “web probes” (or set cognitive probes in the case of a multi-mode survey like RANDS-COVID-19) alongside experimental design to determine the extent of patterns of interpretation and the relative measurement quality of similar items. These methods will be leveraged in the case of RANDS-COVID-19 to provide information on the interpretation of survey items on other planned information collections, including the NHIS, the Current Population Survey, the Census Bureau’s COVID-19 Household Pulse Survey, the Medicare Current Beneficiary Survey, and BLS’ National Longitudinal Survey.



Cognitive Interviews of RANDS-COVID-19 Questionnaire


NCHS typically conducts cognitive evaluations using cognitive interviewing before a survey is fielded—not only to fulfill the pretesting requirements under OMB’s Statistical Standards and Guidelines, but also as an exercise that allows subject matter experts to plan their analyses. However, given both CDC’s and NCHS’ immediate need for Coronavirus-related data and the current social environment, cognitive interviewing for this project will not take place before the fielding of the RANDS during COVID-19 survey itself. Rather, cognitive interviews will be conducted either when the social environment allows or when CCQDER has the resources and procedures in place to conduct interviews during periods of social distancing. These interviews will serve as a validity test of the RANDS during COVID-19 questionnaires and will provide insight and guidance to NCHS staff and subject matter experts as they analyze the RANDS data.


A primary goal of this cognitive interviewing study is to determine the experiences or phenomena counted by respondents when formulating their answers, thus indicating the actual construct captured by the question. This type of validity study allows for a more accurate interpretation of the resulting survey data.


Like CCQDER’s other cognitive interviewing projects (typically conducted under its generic clearance, #0920-0222, current expiration: 8/31/2021), the cognitive interviews encompassed in this ICR will use a purposive sample of the public and will follow best practices for obtaining a high-quality qualitative sample. While survey research employs a deductive, quantitative methodology and relies on a relatively large population-based probability sample to support statistical inference and representativeness, methods such as cognitive interviewing employ an inductive, qualitative methodology and generally rely upon a relatively small sample. Unlike survey research, the primary objective of the qualitative methods CCQDER employs is not to produce statistical data that can be generalized to an entire population. Rather, their objective is to provide an in-depth exploration of particular concepts, processes, and/or patterns of interpretation. Samples used for qualitative research generally do not achieve full inclusivity of all social and demographic groups. As a general rule, sample definitions are based upon the content of the survey, as well as the purpose and objectives of the particular study.


In this particular case, CCQDER will recruit respondents in a way that attempts to provide a final cognitive interviewing sample that is diverse across age, race/ethnicity, education, and experience with the novel Coronavirus and COVID-19. Recruitment for the cognitive interviews will be carried out through a combination of newspaper advertisements, flyers, special interest groups, and word of mouth. The newspaper advertisements/flyers used to recruit respondents are shown in Attachment I. The 5-minute screener used to determine eligibility of individuals responding to the newspaper advertisements/flyers is shown in Attachment C. It is anticipated that as many as 150 individuals may need to be screened in order to recruit 100 cognitive interviewing respondents.


2. Procedures for the Collection of Information


RANDS-COVID-19 Surveys


Questionnaire:


As noted above in B1, two rounds of RANDS-COVID-19 will be conducted on the previously approved AmeriSpeak web and telephone samples and on the TrueNorth opt-in web sample. The questionnaires for the two rounds will be very similar. The questionnaire for the first round is found in Attachment B. Following the first round and an initial analysis of the results, small changes in question wording, the addition of questions (such as probing questions), or the removal of questions may be made. If any substantive changes are made to the questionnaire between rounds, the revised instrument will be shared with OMB via a separate nonsubstantive change request.


The questionnaire is composed of items that serve five purposes: estimation, secondary Coronavirus-related variables, alignment, calibration, and measurement research.


  • Estimation: NCHS will produce experimental estimates of selected health care access measures from the AmeriSpeak component (the probability-sampled component) of RANDS during COVID-19 for release as tables on the NCHS RANDS web page. Weighted estimates will be calculated using the sample weights calibrated to the NHIS and will be produced for all adults and for population subgroups stratified by age group, sex, and race/Hispanic origin (see the illustrative sketch below). Additional subgroups will be defined by underlying health characteristics (e.g., diagnosed diabetes, hypertension, asthma), demographic characteristics (e.g., educational attainment, income), health behavior (i.e., smoking), and other health care access variables (e.g., health insurance coverage and usual place of care), depending on available sample. The intention of the subgroup calculations is not to infer associations or causation between these factors and the COVID-19 variables, but rather to provide information for possible high-risk population subgroups and other subgroups of interest.

  • Secondary Coronavirus-related variables: A small number of variables, including questions about health insurance loss and Coronavirus prevention behaviors, are included not to produce estimates but to serve as covariates and sources of ancillary information for both the estimation and measurement research goals of RANDS during COVID-19.

  • Alignment: As noted in A2, to evaluate RANDS during COVID-19 and to permit triangulation, we include some questions fielded on other sources, primarily the NHIS but also the CPS and the Census Pulse Survey. Comparisons of estimates from RANDS recruited panel surveys, after statistical adjustment, to those from the NHIS using questions fielded on both sources continue to be an important tool for NCHS to gauge the effectiveness of its statistical methods. We will expand these comparisons to include the TrueNorth component to evaluate statistical approaches applied to these nonprobability data. In addition, comparisons of estimates from the RANDS during COVID-19 AmeriSpeak component (after calibration) to those from the Census Pulse Survey and the CPS will inform inferences from all surveys. One developmental research area for RANDS is evaluation of the alignment of key estimates across subgroups, given possibly different participation and response propensities for subgroups not accounted for in weighting.

  • Calibration: For the purpose of generating data that can help explain health-related experiences of the US population during the pandemic, we will calibrate the sample weights provided with the NORC AmeriSpeak data to the NHIS using a set of questions fielded on both surveys (see B1 for details). Questions identified for calibration are those that adjust for possible differences in underlying health between the samples (e.g. diagnosed asthma) and mode (telephone versus web), but that are not considered to be related to the pandemic (e.g. mental health).

  • Measurement Research: As with previous rounds of RANDS, NCHS plans to use RANDS during COVID-19 for methodological work related to measurement error and question design. This work will largely rely on the use of set cognitive probes and experimental design. As noted in A2, this measurement research will contribute some of the first data to the corpus of information relating to how survey respondents understand Coronavirus-related questions and concepts. This information will be used not only to inform future NCHS surveys (such as the NHIS’ planned changes due to COVID-19), but also to inform the design and analysis of other Federal and non-Federal Coronavirus surveys and questionnaires.

Additionally, some of these variables will also be used as covariates in the estimates. Attachment M details which purpose category each questionnaire item falls into, and whether or not it will be used as a covariate.
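
As referenced in the Estimation bullet above, the following is a minimal sketch of how a weighted prevalence estimate could be computed overall and by subgroup using the calibrated weights. The column names and grouping variables are hypothetical illustrations, not the final table specifications.

```python
# Illustrative sketch: weighted prevalence estimates by subgroup using the
# calibrated weights. Column names and grouping variables are hypothetical.
import pandas as pd

def weighted_prevalence(df, outcome, weight, by=None):
    """Weighted proportion reporting `outcome` == 1, overall or by subgroup."""
    if by is None:
        return (df[outcome] * df[weight]).sum() / df[weight].sum()
    return df.groupby(by).apply(
        lambda g: (g[outcome] * g[weight]).sum() / g[weight].sum())

# Hypothetical usage: delayed care due to the pandemic, by age group and sex.
# weighted_prevalence(rands, "delayed_care", "calibrated_wt", by=["age_group", "sex"])
```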


Data Collection Procedures:


Beyond the sample size (as noted above in B1, n=12,000 for the first round, n=10,000 for the second with half of each round coming from AmeriSpeak and the other half coming from TrueNorth) and the questionnaire (if any changes are indeed made between rounds), the procedures for the collection of information will be identical across both rounds of RANDS during COVID-19.


The survey will be conducted using either NORC’s in-house web survey platform or CATI application, depending on the chosen mode of the respondent. As with previous rounds, the RANDS survey itself will begin with an introduction screen (or introduction text for telephone respondents) similar to what is seen at the beginning of Attachment B, explaining the general purpose of the survey and providing the confidentiality and Paperwork Reduction Act language. As signed consent is not possible for surveys where the population of respondents is anonymous to NCHS, a waiver of signed informed consent has been requested from the NCHS ERB. The introduction page will require the respondent to manually click through to the first page of questions (or agree to continue and not hang up for telephone respondents); this action therefore implies consent.


Following each individual round of RANDS-COVID-19, NORC will process the survey data and prepare data files. The data files will not include the respondents’ names, addresses, or any other direct personally identifiable information (PII), including any ISP (internet service provider) data NORC has about the computer from which the respondent replied to the survey. All metadata tying the respondents to their inclusion in the RANDS-COVID-19 sample will be eliminated from the NORC servers, including the backups, following final delivery. The data files will be transferred to NCHS via either a secure File Transfer Protocol (FTP) web portal or by loading them directly on an encrypted memory stick. Following confirmation that the second-round transfer is complete and successful, NORC will delete the data file from their secured servers and will provide a certificate of destruction certifying that all RANDS-COVID-19-related data and metadata have been removed from their servers and backups.


Respondents will not receive an incentive for participating in RANDS-COVID-19.


Cognitive Interviews of RANDS-COVID-19 Questionnaire


Questionnaire:


The questionnaire used for the cognitive interviews will be the same as that used for the survey itself, and is found in Attachment B.


Data Collection Procedures:


Procedures for the cognitive interviews will follow what has been approved in CCQDER’s generic clearance (0920-0222, current expiration: 8/31/2021). In short, potential respondents will be recruited using advertisements in media or via word of mouth and screened by CCQDER recruiters in order to construct a suitable purposive sample. Interviews will be conducted either face-to-face or via telephone or video conference, depending on the social situation and CCQDER’s policies and abilities. Interviews will be video or audio recorded, and these recordings and the interviewers’ notes will be used in CCQDER’s Q-Notes software to conduct analysis.

Recruitment: As noted above in B1, respondents will be recruited by means of flyers and other advertisements posted in public places, newspaper advertisements, or word of mouth (the advertisement to be used in this effort is available as Attachment I). CCQDER’s experience has shown that advertisements in local newspapers and flyers attract a large pool of potential respondents. These recruitment mechanisms have been productive in the past for obtaining a diverse group of respondents to help us determine potential sources of error in survey questions.


Screening and scheduling procedures: The first contact with potential respondents will occur in response to the flyers or advertisements. Interested persons will leave contact information (name and telephone number) on the CCQDER voice mail system. A CCQDER recruiter or staff person will then call the person back and give a brief description of the nature of the study, the video/audio recording procedures, and the fact that $40 will be offered. First, the recruiter determines through a brief series of questions (Attachment C) whether the volunteer possesses the desired research characteristics (e.g., we ask for gender and age to avoid interviewing people with very similar demographic characteristics). If the person possesses the desired research characteristics and would like to participate, he/she will be scheduled for an interview. Otherwise, the volunteer will be asked whether he/she would be interested in participating in future laboratory interviews. Telephone numbers and the minimal demographic information listed earlier will be obtained for all scheduled volunteers and for those who would like to be contacted in the future. For those callers who are ineligible for the study and do not want to be contacted in the future, only demographic characteristics will be maintained for future analysis of successful recruitment efforts. Attachment J contains a sample CCQDER voice mail script that the recruiters will use.


Interview Methodology: Cognitive interviews for the RANDS-COVID-19 project will be one-on-one between a single interviewer and a respondent and will be an hour long at most. Given the circumstances of the pandemic, interviews will most likely take place over the phone, or through Skype or Zoom. The interview will begin with introduction text read by the CCQDER interviewer explaining the general purpose of the survey, providing the confidentiality and Paperwork Reduction Act language, and informing the respondent that their participation is voluntary and that they may refuse to answer any question (Attachment D). It will furthermore inform the respondent about the need to audio or video record the interview (depending on the mode).


After respondents have been briefed on the purpose of the study and the procedures that CCQDER routinely takes to protect human subjects, respondents will be asked to vocally affirm their consent to being interviewed. Because many interviews will not take place in person, and it will not be possible for respondents to read and sign the usual informed consent document, a waiver of signed informed consent has been requested from NCHS’ ERB. If a respondent does not end the call, consent will be assumed. In the rare instance that consent to record the interview is not granted, the session will not be recorded in audio or video. If the respondent grants consent to record the interview but changes his/her mind while the session is being recorded, the interviewer will ask for verbal consent to retain the interviewing materials and the portion already recorded. The interviewer will get verbal consent from the respondent to do so prior to turning off the recording software. If the respondent does not give consent for CCQDER to retain the recording, it will be labeled for destruction. Upon return to the QDRL, the researcher will give their encrypted flash drive with the recording marked for destruction to the CCQDER technician. The CCQDER technician will delete the file from the encrypted flash drive. Once deleted, the file is no longer available for use. A note will be placed in the hardcopy file and the CCQDER database indicating that the particular recording (identified by the unique identification number assigned to the respondent) has been destroyed.


The interview will begin with the respondent answering questions located on the Respondent Data Collection Sheet (Attachment K), which is estimated to take approximately 5 minutes of the total hour-long interview. Interviews will be conducted using concurrent probing, whereby respondents are presented survey questions and asked to explain how and why they answered as they did. The interviewer will use probes extensively to ascertain the degree of comprehension and the recall processes involved. The interviewer may also ask the respondent to think aloud while answering.


At the conclusion of the interview, the respondent will be given a $40 incentive along with information explaining the terms of consent and contact information for the CCQDER Laboratory Manager, the NCHS ERB Chair, and the NCHS Confidentiality Officer.


3. Methods to Maximize Response Rates and Deal with Nonresponse


RANDS-COVID-19 Surveys


NORC employs several approaches to maintain panel participation and to maximize response for the fielded surveys, such as RANDS, particularly with the recruited AmeriSpeak panel. NORC employs dedicated staff to maintain panelists’ participation in AmeriSpeak and limits the number of surveys any one panelist may be asked to complete per month. This leads to a relatively high participation rate (for instance, the prior RANDS data collection in April of 2019 had a participation rate of 62% among sampled AmeriSpeak panelists, with an overall response rate—from panel selection to survey participation and completion—of 18%). There are two levels of non-response that need to be accounted for when using survey panels, such as those proposed for RANDS-COVID-19: non-response (and coverage bias) during the panel recruitment phase and non-participation in the actual survey.


The first of these two levels of non-response and coverage bias is one of the greatest limitations in using commercially-available panels, though NCHS has implemented some processes to mitigate it. The panel provider NCHS has selected for this survey (and previous rounds of RANDS) does internal work to address the problem. First, NORC uses a well-maintained ABS frame that is constantly updated as NORC conducts its field surveys for panel recruitment. Second, for AmeriSpeak in particular, a large non-response follow-up (NRFU) effort is made during panel recruitment, and NORC estimates that over half of its panelists are brought into AmeriSpeak because of this face-to-face effort (as opposed to the initial mail-back recruitment survey). However, NORC has not published a complete non-response bias analysis that takes into account both the NRFU non-response and any coverage issues their ABS frame may have. Because of this, NCHS has developed a series of model-based estimation procedures that use covariates to model the propensity to respond to web surveys (as detailed above) in an effort to correct for these inherent biases of representation.


As to non-participation in the AmeriSpeak survey (which can also be thought of as the survey’s direct non-response), NCHS will conduct a non-response bias analysis of the RANDS sample. All of NORC’s panel participants have been fully screened, and a substantial amount of background data is available for each (e.g., health and well-being, socio-economic and occupational status, age, gender, race, ethnicity, and geographic characteristics). These data will be attached to the final files delivered by NORC to NCHS, which will allow NCHS to examine whether the unit (or individual item) non-responses to RANDS-COVID-19 are systematic or appear at random. The results of these analyses will permit NCHS to further refine its model-based estimation techniques and create calibrated survey weights. Both the original weights (which NORC supplies on the final files) and the calibrated survey weights account for nonresponse in the survey process; therefore, estimates based on these weights are expected to account for nonresponse error in the survey data.
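
As one simple illustration of such a nonresponse bias analysis, the sketch below fits a response-propensity model using panel background variables to examine whether survey nonresponse appears systematic. The field names and model specification are hypothetical and do not represent NCHS's final analysis plan.

```python
# Illustrative sketch: a response-propensity check using panel background
# variables. Field names and covariates are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

def response_propensity_model(panel: pd.DataFrame, responded_col: str,
                              covariates: list[str]):
    """Logistic regression of survey response on panel background variables."""
    X = sm.add_constant(pd.get_dummies(panel[covariates], drop_first=True, dtype=float))
    model = sm.Logit(panel[responded_col].astype(float), X)
    return model.fit(disp=False)

# Hypothetical usage: covariates with large, significant effects would suggest
# that nonresponse is systematic rather than at random.
# result = response_propensity_model(
#     sampled_panelists, "completed_rands",
#     ["age_group", "sex", "race_ethnicity", "education", "self_rated_health"])
# print(result.summary())
```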


Cognitive Interviews of RANDS-COVID-19 Questionnaire


CCQDER’s experience has shown that advertisements in local newspapers and social media attract a large pool of potential laboratory research respondents. These recruitment mechanisms have been productive in the past for obtaining a diverse group of respondents to help the program determine potential sources of error in survey questions. Also, the offer of $40 has been a proven motivation for volunteers to participate in the study.


After potential cognitive interviewing respondents have been recruited, screened and scheduled, the probability of the respondent failing to show is minimized by making reminder phone calls to these volunteers.


4. Tests of Procedures or Methods to be Undertaken


RANDS-COVID-19 Surveys



NCHS is partnering with the US Census Bureau as it conducts pre-testing for its COVID-19 Household Pulse Survey. NCHS staff (alongside staff from BLS) are working with the Census Bureau to conduct pre-testing via web probing or debriefing using Census’ Affinity Panel. NCHS staff are helping plan this pretest and will assist in analysis; findings from this effort will provide guidance on the questions planned for RANDS-COVID-19.


Cognitive Interviews of RANDS-COVID-19 Questionnaire


This ICR requests authorization to conduct tests of procedures and methodologies typical in cognitive testing research. The purpose of questionnaire evaluation is not to obtain survey data, but rather to obtain information about the processes people use to answer questions as well as to identify any potential problems in the questions. This work has been effective for enhancing the quality of data of CDC, NCHS, and other Federal surveys cognitively tested by the CCQDER over the past 29 years.


5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data


The persons with overall responsibility for the methodological and technical aspects of the described activities are:


Jennifer Parker, Ph.D.

Director, Division of Research and Methodology

National Center for Health Statistics

3311 Toledo Road

Hyattsville, Maryland

(301) 458-4419

[email protected]


Kristen Miller, Ph.D.

Director, Collaborating Center for Questionnaire Design and Evaluation Research

National Center for Health Statistics

3311 Toledo Road

Hyattsville, Maryland

(301) 458-4625

[email protected]


1 Information on NORC’s TrueNorth approach is at http://amerispeak.norc.org/our-capabilities/Pages/TrueNorth.aspx.

2 See, for instance, Lee, S., & Valliant, R. (2009). Estimation for Volunteer Panel Web Surveys Using Propensity Score Adjustment and Calibration Adjustment. Sociological Methods & Research, 37, 319-343. doi: https://doi.org/10.1177/0049124108329643.


