
SUPPORTING STATEMENT

U.S. Department of Commerce

U.S. Census Bureau

Comparing Health Insurance Measurement Error (CHIME)

OMB Control Number 0607-<XXXX>

Part A – Justification


Question 1. Necessity of the Information Collection


Several federal surveys include a module that measures health insurance coverage, including three Census Bureau surveys: the Current Population Survey Annual Social and Economic Supplement (CPS ASEC), the American Community Survey (ACS), and the Survey of Income and Program Participation (SIPP). Other key surveys include the National Health Interview Survey (NHIS), sponsored by the National Center for Health Statistics, and the Medical Expenditure Panel Survey (MEPS), sponsored by the Agency for Healthcare Research and Quality. State agencies as well as private research organizations also conduct studies measuring health insurance. All of these surveys have different origins and methodological constraints (e.g., timing of data collection, reference period, and mode); they serve different purposes; and they have different strengths and weaknesses. They also produce different estimates of coverage (Davern, 2009). For example, the estimate of those uninsured throughout calendar year 2012 was 15.4 percent in the CPS and 11.1 percent in the NHIS. The NHIS also produces an estimate of those uninsured at a single point in time, and in 2012 it was 14.7 percent, which happens to be close to the CPS 2012 calendar-year estimate (SHADAC, 2013). Discrepancies between surveys are less pronounced with the CPS redesign: the estimate of those uninsured throughout calendar year 2013 was 13.4 percent in the CPS and 10.7 percent in the NHIS, and the NHIS point-in-time rate for the first quarter of 2014 was 13.1 percent, close to the CPS redesign February-to-April rate of 13.8 percent (SHADAC, 2014). Reconciling these estimates and confidently choosing one over another has eluded policy makers for years. Potential contributors to the variation in estimates include context (both the content of the overall survey and the placement of health insurance questions within it), sample design, weighting and imputation schemes, mode (e.g., in-person, telephone, mail, internet), interviewer training routines, and the questionnaire itself. Previous research indicates that much of the variation in the estimates is rooted in subtle differences in the questionnaires (Pascale, 2009; Call et al., 2014; Call et al., 2007; Swartz, 1986).


All survey data come with some degree of measurement error: a difference between the "true value" of the construct being measured and the statistic produced by the survey questions. Much of the literature on health coverage measurement is dominated by an implicit assumption that coverage is under-reported, and that higher levels of coverage indicate more accurate estimates. Indeed, under-reporting of Medicaid in surveys is well documented (Call et al., 2012; Pascale, Roemer and Resnick, 2009; Klerman, Ringel and Roth, 2005; Eberly et al., 2008; Blumberg and Cynamon, 1999; Czajka and Lewis, 1999; Lewis, Ellwood and Czajka, 1998). However, several state-level record-check studies have also shown that the vast majority of Medicaid enrollees who fail to report that coverage do report some other type of coverage and are not incorrectly classified as uninsured (Call et al., 2008). Furthermore, there is evidence of Medicaid over-reporting. For example, a CPS-Medicaid record-check study found that among Medicaid enrollees who, according to the records, had coverage at the time of the survey (March) but not at any time in the previous calendar year, 25.8 percent were incorrectly reported as having Medicaid in the past year (Klerman, Davern et al., 2009). Medicaid has received substantial study and attention with regard to reporting accuracy, in part because fairly high-quality records exist and are accessible. Yet even within the Medicaid reporting literature it is not entirely clear how misreporting of Medicaid affects estimates of other plan types, and the ultimate measure of the uninsured, at the national level. Because surveys derive the estimate of the uninsured by taking into account reporting on a range of plan types, misreporting of all plan types needs to be considered collectively when assessing the accuracy of the uninsured estimate. The accuracy of reporting of other plan types has received less rigorous study than Medicaid, and such studies are more difficult in part because the sources of validation are less accessible and more disparate. Hill (2008/2009) represents a rare investigation validating reports of private coverage, and Davern et al. (2008) and Nelson et al. (2000) represent the only record-check studies to date that span both private and public insurance markets. In sum, while the level of uninsured tracks higher in some surveys (e.g., the CPS) than in others (e.g., the SIPP and NHIS), there is no definitive study or data source that indicates what the "true" level of uninsured really is.


A common strategy for assessing the validity of a self-reported survey measure is a "reverse record check" study, in which administrative records are assumed to contain the correct status on a given measure (e.g., health insurance coverage). Contact information from the records is used as the sample frame for a survey that asks about the same information, in this case health insurance. Data from the records are then compared to the answers from the survey to assess reporting accuracy. The proposed study, Comparing Health Insurance Measurement Error (CHIME), will survey a sample of people enrolled in Medica Health Plans (a Minnesota-based health insurance plan) whose coverage type is known from the records to be Medicaid, MinnesotaCare, employer-sponsored insurance, non-group coverage within the marketplace (called MNsure), or non-group coverage outside the marketplace. The sample will be randomly assigned to one of two questionnaire modules on health insurance, the newly redesigned CPS or the ACS, in order to contrast reporting error across questionnaire versions. To minimize respondent burden while mimicking the CPS and ACS context to some extent, typical questions on demographics (e.g., age, race, and education), employment status, and government program participation will precede the health insurance questions.


With regard to health reform, the redesigned CPS already contains questions specific to marketplace coverage, which can be evaluated with the current design. The ACS does not include questions specific to health reform, but there would be advantages to evaluating questions about the marketplace within the CHIME ACS health module. However, because the ACS is a person-level survey, adding questions about the marketplace for each person could contaminate reporting for subsequent household members. To avoid this contamination while still exploiting the opportunity to learn something about marketplace reporting in the ACS question series, marketplace-specific questions will be added only to the health module of the last person in a given household.




Question 2. Needs and Uses


The goal of the study is to assess the measurement error in health coverage estimates that is attributable to the questionnaire, comparing the CPS and ACS health insurance modules and using administrative records as a truth source. Both "absolute" reporting accuracy (the survey report compared to the administrative record data) and "relative" reporting accuracy (absolute accuracy compared across questionnaire treatments) will be evaluated. The analysis will be used to understand the magnitude, direction and patterns of misreporting for three main purposes: (1) to provide Census program staff with empirical data to develop and refine edits and/or to include research notes for data users so they can make their own adjustments for misreporting; (2) to equip the wider research community with information that could serve as a guide for deciding which among the various surveys best suits their needs; and (3) to contribute to the general survey methods research literature on measurement error.
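For illustration only, the sketch below shows one way the absolute and relative measures might be tabulated once survey responses are linked to the administrative records. The field names (treatment, record_type, survey_type) and the simple exact-match rule are hypothetical and do not represent the study's actual matching or analysis specifications.

```python
from collections import defaultdict

def accuracy_by_treatment(linked_cases):
    """Share of survey reports matching the administrative record (absolute
    accuracy), computed within each questionnaire treatment so the two
    treatments can then be compared (relative accuracy)."""
    matches = defaultdict(int)
    totals = defaultdict(int)
    for case in linked_cases:
        totals[case["treatment"]] += 1
        if case["survey_type"] == case["record_type"]:
            matches[case["treatment"]] += 1
    return {t: matches[t] / totals[t] for t in totals}

# Hypothetical linked cases: coverage type from the Medica record vs. the survey report.
linked = [
    {"treatment": "CPS", "record_type": "Medicaid", "survey_type": "Medicaid"},
    {"treatment": "CPS", "record_type": "Medicaid", "survey_type": "direct purchase"},
    {"treatment": "ACS", "record_type": "employer-sponsored", "survey_type": "employer-sponsored"},
    {"treatment": "ACS", "record_type": "MinnesotaCare", "survey_type": "Medicaid"},
]

absolute = accuracy_by_treatment(linked)
print(absolute)                           # absolute accuracy per treatment, e.g. {'CPS': 0.5, 'ACS': 0.5}
print(absolute["CPS"] - absolute["ACS"])  # relative comparison across treatments
```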


Analysis will also inform reporting accuracy of health coverage related to the Affordable Care Act (ACA). Specifically, for coverage that is known to be obtained from the marketplace, we will explore whether respondents report that coverage, the source they cite (direct-purchase, government, etc.), and the accuracy with which they answer a question on subsidized premiums. The specific research questions include:

CPS versus ACS

  1. What is the absolute and relative accuracy of insured status at a point in time?

  2. What is the absolute and relative accuracy of type of coverage?

  3. What is the absolute and relative accuracy of:

    1. whether the coverage was obtained on the marketplace

    2. whether there is a subsidy

    3. cost of the premium?

  4. Among marketplace enrollees (subsidized and unsubsidized), how does the distribution of source of coverage reported (direct purchase, Medicaid, government, etc.) compare across surveys?

Within the CPS

  1. What is the absolute accuracy of months of enrollment (in particular, coverage at the time of the interview versus coverage at any time during the previous calendar year), transitions from one plan type to another and churning on and off the same plan type (to the extent that enrollees stay with Medica as their health plan provider)?

  2. What is the absolute accuracy of marketplace coverage, whether there is a subsidy, and the cost of the premium?

  3. Among marketplace enrollees (subsidized and unsubsidized), what is the distribution of source of coverage reported (direct purchase, Medicaid, government, etc.)?


Question 3. Use of Information Technology


All interviews will be conducted using a Computer-Assisted Telephone Interviewing (CATI) instrument, and data will be transmitted electronically from the Hagerstown, Maryland, telephone facilities to Census Bureau headquarters in Suitland. In general, CATI instruments offer smooth, efficient administration of questionnaires, since the sequencing of questions is handled behind the scenes by the program, not by the interviewer. While the CPS ASEC was converted to Computer-Assisted Interviewing (CAI) in 1994, that conversion essentially took the questions and skip patterns of the paper questionnaire and put them on a computer screen. Automated data collection methods allow for complicated skips, respondent-specific question wording and, when the same information applies to multiple household members, collecting that data only once. Automation is heavily exploited in the health insurance section, reducing tedium, respondent fatigue and burden. For example, the hybrid household-person-level design takes full advantage of the fact that in many households all or most members share the same plan type. Specifically, as soon as one plan type is identified for a specific household member, questions are asked to determine whether other household members share that same plan type. This information is stored and tracked so that when those other household members are asked about, the previously reported plan can simply be verified and then questions about any additional coverage are asked. A full battery of person-level questions is not needed in many cases, which reduces burden significantly. This method also reduces inconsistencies in the data because only one set of questions is asked about the details of a given plan, eliminating the need to reconcile inconsistent answers when the same questions are asked about multiple people.
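A minimal sketch of this household-person hybrid flow appears below. It is illustrative only and assumes hypothetical function names and prompt wording rather than the actual CATI instrument logic; the point is simply that a plan reported once is verified, not re-collected, for other members who share it.

```python
def collect_coverage(household_members, ask):
    """Illustrative sketch of the hybrid household-person flow: a plan is
    recorded in detail once, then only verified for members reported as
    sharing it. `ask` stands in for the interviewer prompt and returns the
    respondent's answer as a string."""
    coverage = {m: [] for m in household_members}
    for member in household_members:
        for plan in list(coverage[member]):
            # Plan was attributed to this person earlier; verify rather than re-ask.
            if ask(f"Earlier you said {member} is covered by {plan}. Is that right? (y/n)") != "y":
                coverage[member].remove(plan)
        # Ask about any coverage not yet recorded for this person.
        plan = ask(f"Does {member} have any (other) coverage? If so, what type?")
        if plan:
            coverage[member].append(plan)
            # Household-level shortcut: attribute the same plan to others who share it.
            others = ask(f"Is anyone else in the household covered by {member}'s {plan}? "
                         "(names, comma-separated, or leave blank)")
            for name in (n.strip() for n in others.split(",")):
                if name in coverage and plan not in coverage[name]:
                    coverage[name].append(plan)
    return coverage

# Example usage with canned answers in place of a live interview:
answers = iter(["employer-sponsored", "Pat", "y", ""])
result = collect_coverage(["Chris", "Pat"], lambda prompt: next(answers))
print(result)  # {'Chris': ['employer-sponsored'], 'Pat': ['employer-sponsored']}
```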


The computerized questionnaire also permits the inclusion of several built-in editing features, including automatic checks for internal consistency and unlikely responses, and verification of answers. These built-in editing features can catch and correct errors during the interview itself, as opposed to relying on post-collection edits.
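The snippet below gives a simplified example of the kind of in-interview check described here, flagging an unlikely value so the interviewer can verify it on the spot; the thresholds and field names are invented for illustration and do not reflect the instrument's actual edit rules.

```python
def premium_check(monthly_premium_dollars, has_subsidy):
    """Flag unlikely responses during the interview so they can be verified
    immediately (values are illustrative, not the real edit rules)."""
    messages = []
    if monthly_premium_dollars < 0:
        messages.append("Premium cannot be negative; please re-enter.")
    elif monthly_premium_dollars > 3000:
        messages.append("Premium seems unusually high; please verify with the respondent.")
    if has_subsidy and monthly_premium_dollars == 0:
        messages.append("Confirm the plan is fully subsidized (reported premium is $0).")
    return messages

print(premium_check(3500, False))  # unlikely value triggers a verification prompt
```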


Question 4. Efforts to Identify Duplication


Several federal, state and private surveys measure health insurance. There have been several comparative studies that contrast survey methods and estimates, but most have been post hoc and thus cannot control for the range of variation in survey design features, such as sampling, weighting and imputation. A split-ballot design offers the only opportunity to isolate and compare differences in estimates attributable solely to the questionnaire in order to evaluate measurement error. There is also a dearth of validation studies on the reporting accuracy of coverage. While there is a well-developed literature on studies that link survey data to Medicaid records to examine the undercount, we know of only one study validating private coverage (Hill, 2008/2009) and two studies to date (Davern et al., 2008; Nelson et al., 2000) that have examined reporting accuracy across both public and private health insurance markets. Furthermore, all of these validation studies precede the introduction of insurance marketplaces, as well as the Census Bureau's redesign of the CPS. The proposed study addresses these gaps in knowledge.


Question 5. Minimizing Burden


Small businesses or other small entities are not asked to report information. We are also embedding the health insurance modules into a very short questionnaire. The CHIME instrument includes content (and in many cases verbatim question wording) from the full CPS and ACS questionnaires, in order to set context, but we limit the estimated duration of the survey to 13 minutes.


Question 6. Consequences of Less Frequent Collection


The production CPS ASEC is carried out from late February through early April each year and collects data about health coverage from January of the previous calendar year up to the interview date. Previous research indicates that recall, timing, and the length of the reference period are factors in reporting accuracy. Thus, to evaluate the effectiveness of the CPS questions on retrospective coverage, it is essential that this study parallel the timing of production CPS ASEC data collection as closely as possible. If the study is not carried out in the spring of 2015, it will have to wait an entire year and be conducted in the spring of 2016. This would delay research findings that could inform interpretation of estimates from CPS ASEC and ACS production data, which could hinder efforts to evaluate the effects of the ACA.


Question 7. Special Circumstances


There are no special circumstances.


Question 8. Consultations Outside the Agency


Since 1999, Census Bureau staff have been collaborating and communicating with individuals outside the bureau who have been closely involved in the technical matters of health insurance measurement. This particular study is a collaboration between the Census Bureau, the U.S. Department of Health and Human Services Office of the Assistant Secretary for Planning and Evaluation, the State Health Access Data Assistance Center, Medica Research Institute and the Robert Wood Johnson Foundation. In the planning phase of the study, the principal investigators convened a Technical Advisory Group to advise on design details. Consultations and contributions from these experts were on an individual basis – not for purposes of forming a group consensus.


Also, on November 17, 2014, a pre-submission notice was published in the Federal Register to inform the public and the research community of the study plans and to invite input and comments. The pre-submission notice can be found here:

https://www.federalregister.gov/articles/2014/11/17/2014-27085/proposed-information-collection-comment-request-comparing-health-insurance-measurement-error-chime

No comments were received.


Question 9. Paying Respondents


This study will not involve any payments to respondents.


Question 10. Assurance of Confidentiality

Respondents will be informed about the study through an advance letter that Medica will send to its enrollees. The introductory screens in the CATI script inform respondents that the survey takes 13 minutes per household to complete and is voluntary. The CATI script also informs respondents that the survey is being conducted under the authority of Title 13, United States Code, Sections 141, 182 and 193; that Title 13, United States Code, Section 9, requires the Census Bureau to keep respondents' information confidential and use it for statistical purposes only; and that the survey has been approved by OMB under project number xxxx-xxxx.


Question 11. Justification for Sensitive Questions


No sensitive questions are asked in this study.


Question 12. Estimate of Hour Burden

The CHIME survey will be conducted only one time, by telephone, with enrollees in Medica Health Plans. A single household respondent (18 years old or older) will be asked to report for the entire household. The interview is estimated to take 13 minutes per household on average. The estimated total burden hours are as follows:

  • Interviewing: 5,000 household cases * 13 minutes/case = 1,083 hours

  • Contact attempts not resulting in completed interviews: 11,667 cases * 10 minutes/case = 1,945 hours

  • Total = 3,028 hours
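The figures above can be reproduced with a short calculation, shown below only as an arithmetic check of the listed case counts and per-case times.

```python
# Arithmetic check of the burden estimates listed above.
interview_hours = 5_000 * 13 / 60        # completed interviews: about 1,083 hours
noninterview_hours = 11_667 * 10 / 60    # non-completed contact attempts: about 1,945 hours
total_hours = interview_hours + noninterview_hours
print(f"{total_hours:,.0f} total burden hours")  # 3,028 total burden hours
```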


Question 13. Estimate of Cost Burden


There are no costs to respondents other than that of their time to respond.


Question 14. Cost to Federal Government


The total cost to the federal government is $336,530, shared by two agencies: $136,530 from the Census Bureau and $200,000 from the U.S. Department of Health and Human Services.


Question 15. Reason for Change in Burden


The increase in burden is attributable to the information collection being submitted as new.


Question 16. Project Schedule


Planning for implementation of this field test began in January 2014. Below is a detailed schedule.


Task | Start Date | End Date | Staff

Instrument Development
Develop and deliver SCIF for tester's menu/working instrument (ofwi) | immediately | 10/10/14 | DSMD, DSD
Write SPIDER specs and flow document | immediately | 10/20/14 | CSM
Author instrument | immediately | 1/9/15 | TMO
Deliver working instrument (ofwi) | n/a | 10/29/14 | TMO
Debug the instrument | immediately | 3/6/15 | CSM, TMO

Users, Systems and Output Test
Deliver user's test SCIF and instrument (ofut) | n/a | 11/17/14 | DSD, TMO
Deliver system's test SCIFs and instruments (ofst and ofs2) | n/a | 12/01/14 | DSD, TMO
Write test scripts and corresponding output | immediately | 1/9/15 | CSM
Conduct integrated users/systems/output test keying | 12/8/14 | 12/10/14 | LCC
Examine feedback from field representatives; make any appropriate changes to instrument | 12/11/14 | 12/29/14 | CSM, SEHSD, CES et al.
Compare input from scripts to output in spreadsheets; correct any bugs in instrument | 12/11/14 | 12/29/14 | CSM, SEHSD, CES et al.
Examine any problems in systems processing; make corrections | 12/11/14 | 12/29/14 | TMO, CSM
Conduct Output Test | 1/19/15 | 1/30/15 | CSM

Instrument Delivery
Deliver final training instruments | n/a | 3/6/15 | TMO
Deliver final production instruments | n/a | 3/6/15 | TMO

Sample
Develop mock sample layout and data transfer protocols in collaboration with Medica | immediately | 10/14/14 | DSMD
Receive mock sample file #1 from Medica; refine record layout as needed | 10/14/14 | 10/24/14 | DSMD
Receive mock sample file #2 from Medica through secure data transfer; evaluate data transfer method to ensure it meets Census standards | 11/13/14 | 12/1/14 | DSMD

Live Sample and SCIFs/Field Period 1
Receive live sample from Medica | n/a | 2/20/15 | DSMD
Attach ControlNumber/CASEID; deliver to DSD | 2/20/15 | 2/27/15 | DSMD
Attach SCIF; deliver to TMO | 2/27/15 | 3/6/15 | DSD
Process through WebCATI; deliver to MCS | 3/6/15 | 3/20/15 | TMO

Live Sample and SCIFs/Field Period 2
Receive live sample from Medica | n/a | 3/13/15 | DSMD
Attach ControlNumber/CASEID; deliver to DSD | 3/13/15 | 3/19/15 | DSMD
Attach SCIF; deliver to TMO | 3/20/15 | 3/27/15 | DSD
Process through WebCATI; deliver to MCS | 3/27/15 | 4/10/15 | TMO

Training
Develop field representative manual | 1/1/15 | 3/6/15 | CSM, LCC
Develop training script | 1/1/15 | 3/6/15 | CSM, LCC
Develop mock interviews for paired practice | 1/1/15 | 3/6/15 | CSM, LCC
Deliver training materials to LCC | n/a | 3/6/15 | CSM
Conduct training for Field Period 1 | 3/20/15 | 3/21/15 | CSM
Conduct training for Field Period 2 | 4/11/15 | 4/11/15 | CSM

Data Collection
Field Period 1 | 3/22/15 | 4/10/15 | LCC
Closeout of Field Period 1 | n/a | 4/10/15 | DSD, TMO
Field Period 2 | 4/12/15 | 4/30/15 | LCC
Closeout of Field Period 2 | n/a | 4/30/15 | DSD, TMO

Data Processing
Develop and test specs for daily reports | immediately | 1/19/15 | CSM, DSD
Pick up daily transaction files; produce and deliver daily progress reports to CSM (Field Period 1) | 3/23/15 | 4/11/15 | DSD
Pick up daily transaction files; produce and deliver daily progress reports to CSM (Field Period 2) | 4/13/15 | 5/1/15 | DSD
Pick up closeout files (Field Period 1) | n/a | 4/11/15 | DSD
Pick up closeout files (Field Period 2) | n/a | 5/1/15 | DSD
Format closeout files into final person-level SAS dataset; deliver to CSM (Field Period 1) | n/a | 4/20/15 | DSD
Format closeout files into final person-level SAS dataset; deliver to CSM (Field Period 2) | n/a | 5/11/15 | DSD

Data Analysis and Final Report
Write program to produce person-month coverage-type flags; conduct analysis and write final report | immediately | 9/1/15 | CSM, SHADAC, HHS, Medica
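As background for the final analysis task above (producing person-month coverage-type flags), the sketch below shows one way such flags might be derived from enrollment spans; the record layout and function name are assumptions for illustration, not the study's actual processing specification.

```python
from collections import defaultdict

def person_month_flags(enrollment_spans, months):
    """Build {(person, month): set of coverage types} from enrollment spans.
    Each span is (person_id, coverage_type, first_month, last_month), with
    months coded 1-12 for the reference calendar year."""
    flags = defaultdict(set)
    for person_id, coverage_type, first, last in enrollment_spans:
        for month in months:
            if first <= month <= last:
                flags[(person_id, month)].add(coverage_type)
    return flags

# Hypothetical spans: one enrollee moves from Medicaid to marketplace coverage mid-year.
spans = [
    ("A01", "Medicaid", 1, 4),
    ("A01", "marketplace non-group", 5, 12),
]
flags = person_month_flags(spans, range(1, 13))
print(sorted(flags[("A01", 4)]))   # ['Medicaid']
print(sorted(flags[("A01", 5)]))   # ['marketplace non-group']
```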


Question 17. Request to Not Display Expiration Date


The expiration date will be contained in the advance letter sent to respondents.


Question 18. Exceptions to the Certification


There are no exceptions to the Certification for Paperwork Reduction Act Submissions.




