
Attachment A

Comparing Health Insurance Measurement Error (CHIME) White Paper


Joanne Pascale, US Census Bureau

Kathleen Call, State Health Access Data Assistance Center

Angela Fertig, Medica Research Institute

Don Oellerich, US Department of Health and Human Services


February 13, 2015



I. PURPOSE AND BACKGROUND


Several federal surveys include a module that measures health insurance coverage, including three Census Bureau surveys – the Current Population Survey Annual Social and Economic Supplement (CPS ASEC), the American Community Survey (ACS) and the Survey of Income and Program Participation (SIPP). Other key government surveys include the National Health Interview Survey (NHIS), sponsored by the National Center for Health Statistics, and the Medical Expenditure Panel Survey (MEPS), sponsored by the Agency for Healthcare Research and Quality. State agencies as well as private research organizations also conduct studies measuring health insurance. All of these surveys have different origins and methodological constraints (e.g., timing of data collection, reference period, and mode), serve different purposes, and have different strengths and weaknesses. They also produce different estimates of coverage (Davern, 2009). For example, in a comparison of major national surveys, estimates of the uninsured throughout calendar year 2012 ranged from 15.4 percent in the CPS to 11.1 percent in the NHIS (SHADAC, 2013). Indeed, reconciling differences in these estimates and confidently choosing one estimate over another has eluded policy makers for years. Potential contributors to the variation in estimates include the context (both the content of the overall survey and the placement of the health insurance questions within the survey), sample design, weighting and imputation schemes, mode (e.g., in-person, telephone, mail, internet), interviewer training routines and the questionnaire. Previous research indicates that much of the variation in the estimates is rooted in subtle differences in the questionnaires (Pascale, 2009; Call et al., 2013; Call et al., 2007; Swartz, 1986).


All survey data come with some degree of measurement error – a difference between the “true value” of the construct being measured and the statistic produced by the survey. Much of the literature on health coverage measurement is dominated by an implicit assumption that coverage is under-reported, and that higher levels of coverage indicate estimates that are more accurate. Indeed, under-reporting of Medicaid in surveys is well documented (Call et al., 2012; Pascale, Roemer and Resnick, 2009; Klerman, Ringel, and Roth, 2005; Eberly et al., 2008; Blumberg and Cynamon, 1999; Czajka and Lewis, 1999; Lewis, Ellwood, and Czajka, 1998). However, several state-level record-check studies have also shown that the vast majority of Medicaid enrollees who fail to report that coverage do report some other type of coverage and are not incorrectly classified as uninsured (Call et al., 2008). To muddy the waters further, there is evidence of Medicaid over-reporting. For example, a CPS-Medicaid record-check study found that among Medicaid enrollees who, according to the records, had coverage at the time of the survey (March) but not at any time in the previous calendar year, 25.8 percent were incorrectly reported as having Medicaid in the past year (Klerman et al., 2009). Medicaid has received substantial study and attention with regard to reporting accuracy, in part due to the existence and accessibility of fairly high-quality records. Yet even within the Medicaid reporting literature it is not entirely clear how misreporting of Medicaid affects estimates of other plan types, and the ultimate measure of the uninsured, at the national level. Because surveys derive the estimate of the uninsured by taking into account reporting on a range of plan types, misreporting of all plan types needs to be considered collectively when assessing the accuracy of the uninsured estimate. The accuracy of reporting of other plan types has received less rigorous study than Medicaid, and such studies are more difficult due in part to less accessible, more disparate sources of validation. Hill (2008/2009) represents a rare investigation validating reports of private coverage, and Davern et al. (2008) and Nelson et al. (2000) represent the only record-check studies of both private and public insurance markets to date. In sum, while the estimated level of the uninsured tracks higher in some surveys (e.g., the CPS) than in others (e.g., the SIPP and NHIS), there is no definitive study or data source that indicates what the “true” level of uninsured really is.


The purpose of this study is to assess measurement error that is ascribable to the questionnaire across health insurance modules using administrative records as a truth source. The ultimate objective is to understand the magnitude, direction and patterns of misreporting for three main purposes: (1) to provide Census program staff with empirical data to develop and refine edits and/or to include research notes for data users so they can make their own adjustments for misreporting; (2) to equip the wider research community with information that could serve as a guide for deciding which among various surveys best suits their needs; and (3) to contribute to the general survey methods research literature on measurement error.


II. METHODS


A common strategy for assessing the validity of a self-report measure is a reverse record check study. In this case, the approach is to compare different surveys’ self-reports against a sample of individuals whose coverage status is known through enrollment records (referred to later as the “seeded sample”). Enrollment records, no doubt, come with their own sources of error. However, given what is known about misreporting of coverage in surveys, enrollment data provided directly by a private health plan that serves multiple markets brings a unique and powerful outside source of validation to the task of assessing relative reporting accuracy across survey instruments. The private health plan “Medica,” which provides health coverage to 1.5 million members in Minnesota, North Dakota, South Dakota, and select counties in Wisconsin, has agreed to provide these enrollment records via its affiliate, the Medica Research Institute (MRI). We restrict our sample to Minnesota residents. With Medica as a partner, the general study plan is to begin with a sample of enrollees whose coverage type and enrollment dates are known from the administrative records and randomly assign the cases to one of several questionnaire design treatments (or panels). Respondents in all panels will be asked to report the health coverage status of all household members, but the question routine for obtaining that information varies across treatments. The survey reports will then be compared to administrative records, rendering a measure of “absolute” reporting accuracy (survey report versus records) and “relative” accuracy (a comparison of absolute accuracy across surveys).


One of the advantages of partnering with Medica is that it offers coverage in all of the major insurance markets: (1) Medicaid, (2) MinnesotaCare (a state-specific program for low-income families), (3) employer-sponsored insurance (ESI), (4) non-group coverage and (5) non-group coverage purchased in the Insurance Marketplace (referred to as MNsure in Minnesota). As such, there is potential for examining misreporting of coverage type that previous studies have not addressed. For example, a sample member drawn from the non-group marketplace stratum could report their coverage as Medicaid (or vice versa), or report both. While the research design is imperfect – we cannot rule out the possibility that the sample member has Medicaid coverage through a different insurance company – the design, at a minimum, allows us to measure under-reporting across a range of coverage types. It also allows us to measure potential over-reporting and misreporting of plan type and, we hope, to gain insight into how misreporting of one coverage type affects measurement error in other coverage types.


Once this general research approach was established, a technical advisory group (TAG) of experts from several federal, state and private agencies was assembled in order to maximize the utility of the study. Participants were:


Project Team:

Kathleen Call, State Health Access Data Assistance Center

Angela Fertig, Medica Research Institute

Elizabeth Lukanen, State Health Access Data Assistance Center

Don Oellerich, US HHS Office of the Assistant Secretary for Planning and Evaluation

Joanne Pascale, US Census Bureau


Members:
Jessica Banthin, Congressional Budget Office

Jeff Bontrager, Colorado Health Institute

Michel Boudreaux, State Health Access Data Assistance Center

Robin Cohen, National Center for Health Statistics

Mike Davern, NORC

Kathy Hempstead, Robert Wood Johnson Foundation

Jenny Kenney, Urban Institute

Sharon Long, Urban Institute

Jonathan Rodean, US Census Bureau (invited; did not attend)

Ben Sommers, Harvard University (by phone)

Jamie Taber, US Census Bureau (invited; did not attend)

Jessica Vistnes, Agency for Healthcare Research and Quality

Mary Francis Zelenak, US Census Bureau

Jeanette Ziegenfuss, HealthPartners


The TAG met in September 2014 and participants were asked for their input on key design decisions – primarily the selection of which insurance markets and questionnaire treatments to include, and the sample size for each. The five insurance markets noted above were identified for consideration due to their relevance to the post-health reform era. The following questionnaire treatments were considered:


CPS ASEC Redesign: this survey is used to produce official estimates of unemployment and poverty, and it serves as the most widely cited source of estimates on health insurance and the uninsured. The questionnaire went through a major redesign to address persistent concerns over measurement error and the redesign was implemented for the first time in production in March 2014.


ACS: questions about health insurance coverage were added to the ACS in 2008, and it has since become an attractive source of point-in-time insurance coverage estimates. The ACS is conducted every year and draws sample from all counties, and because of its large sample size (approximately 3 million addresses every year), it provides estimates at the state level and for sub-state geographies such as cities and counties, and even at the census tract level when multiple years of data are combined (U.S. Census Bureau 2009).


Old CPS ASEC: While the CPS redesign collects point-in-time and monthly coverage indicators, it still collects all the same information as the old CPS. Given the long trend line of coverage available from the old CPS (going back to the early 1980s), knowing how its estimates compare directly with the CPS redesign would enable a clean study of methods effects, which could facilitate harmonization of data collected under the redesigned CPS into the past.


Both the CPS and ACS are vital sources of data, providing annual estimates of coverage at the state level, and empirical evidence on how estimates from the two survey designs compare will help data users decide which data source to use for which purposes. This will be particularly useful now that the CPS redesign produces a point-in-time measure (which the old CPS did not). This validation study will also help data users evaluate the surveys in light of health reform. The CPS redesign includes questions that explicitly measure participation in the new marketplaces and subsidies. The ACS production instrument does not currently contain marketplace-specific questions, but this content is currently being tested outside the CHIME study. CHIME staff are working closely with the ACS team and incorporating the most recent findings available into the CHIME vehicle.


The value of the old CPS was discussed at length, focusing on the potential contribution CHIME results could make toward harmonization of the old and new CPS. Because CHIME is being conducted with a single health insurance provider in a single state, a weighting scheme for making population estimates is not feasible. Furthermore, because the sample is drawn only from pools of people known to be insured, the CHIME study will not produce an uninsured estimate. Ultimately, it was decided that CHIME was not a good vehicle for contributing to harmonization, and that harmonization would require its own study. Given the limited resources, it was decided that more value would come from understanding the measurement error of surveys currently in use, and because the old CPS was being phased out, it was a candidate for being dropped if power calculations indicated that resources were insufficient to field three questionnaire treatments.


With regard to markets, it was a given that marketplace plans would be included. It was also decided to maintain non-group coverage outside the marketplace, as we are interested in learning how accurately people report whether they obtain coverage inside or outside of the marketplace. For public coverage, there was some discussion of combining several of the insurance markets to reduce the number of sample strata and simplify the analysis and power calculations, including combining Medicaid and MinnesotaCare into one stratum. In the end, it was decided that there is value in maintaining separate sample strata for these two public program markets because they are coded separately in the Medica data and because of differences in eligibility and benefit structures (e.g., MinnesotaCare enrollees pay a premium) that may affect reporting accuracy. Regarding ESI, the literature suggests that respondents can report this plan type with a fair degree of accuracy. However, because it is by far the most prevalent plan type, even a small degree of misreporting can contribute greatly to measurement error in other plan types (Davern et al., 2008). For this reason, ESI was included as a stratum, but its case count was restricted relative to the other markets of interest.


The TAG discussed the pros and cons of including the three questionnaire design treatments and the five markets noted above. Several factors were taken into consideration. One was September 2014 enrollment data from MRI at the member and subscriber level for the five sample strata under consideration, as well as estimates of the prevalence of churn, length of enrollment, and the count of Medicaid children without an adult subscriber in the household. The number of Medica enrollees in MNsure/marketplace and MinnesotaCare was somewhat limited in absolute terms because of low market share in the case of the marketplace and high rates of missing phone numbers in the case of MinnesotaCare. A second critical factor was the budget, which would support data collection from 5,000 households. Assuming 2.5 people per household, this would result in data for 12,500 person records. Another general issue raised by TAG members was Medica’s position in Minnesota with regard to generalizability by plan type. Specifically, Medica had one of the more expensive marketplace plans and captured only 5% of the Minnesota market. However, the TAG concluded that the value of Medica’s collaboration in making this project possible outweighs this weakness.


In terms of sample size per stratum and power calculations, the statistic of interest was the prevalence of under-reporting within a market. Where available, we based our estimates of this prevalence on the literature (see Table 1 notes for details). The Census standard for an alpha level is 0.10, and a power of 0.80 was deemed acceptable. After producing power calculations for a number of different hypothetical scenarios, and taking into consideration the TAG discussion, the study design shown in Table 1 was decided on: two questionnaire treatments (the CPS redesign and the ACS) and five insurance markets. For three of the strata (ESI, Medicaid and non-group/non-marketplace), the sample sizes are sufficient to detect a minimum difference in under-reporting between questionnaire treatments of about three percentage points. As noted above, given the limits on the absolute number of enrollees in MNsure/marketplace and MinnesotaCare, 100% of subscribers with phone numbers in these markets were included, but this rendered minimum detectable differences somewhat higher than ideal (6.25 and 5.40 percentage points, respectively).
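
For illustration, the sketch below reproduces the minimum detectable differences reported in Table 1 using a standard two-sample proportion approximation. The alpha level (0.10) and power (0.80) are taken from the text, but the specific formula (a two-sided test with the alternative proportion solved iteratively) is an assumption made for this sketch rather than the study's actual power program; it happens to match the "Ppt Diff" column closely.

    from math import sqrt

    # Minimum detectable difference (MDD) in under-reporting between two
    # questionnaire treatments, assuming a two-sided test at alpha = 0.10
    # and power = 0.80. A sketch only, not the study's power calculations.
    Z_ALPHA = 1.645   # two-sided alpha = 0.10
    Z_BETA = 0.8416   # power = 0.80

    def mdd(p1, n_per_treatment, iterations=100):
        """Solve delta = (z_a + z_b) * sqrt((p1*q1 + p2*q2) / n), with p2 = p1 + delta."""
        delta = 0.0
        for _ in range(iterations):
            p2 = p1 + delta
            delta = (Z_ALPHA + Z_BETA) * sqrt(
                (p1 * (1 - p1) + p2 * (1 - p2)) / n_per_treatment)
        return delta

    # Assumed under-reporting rates and person counts per treatment from Table 1.
    strata = {"ESI": (0.05, 829), "Medicaid": (0.17, 2706),
              "MNsure (marketplace)": (0.11, 383), "MinnesotaCare": (0.17, 676),
              "Non-group/Non-marketplace": (0.11, 1403)}
    for name, (p1, n) in strata.items():
        print(f"{name}: MDD = {100 * mdd(p1, n):.2f} percentage points")

Run as written, this yields approximately 3.0, 2.6, 6.2, 5.4 and 3.1 percentage points, in line with the "Ppt Diff" column in Table 1.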


In addition to the five strata, it was decided to oversample cases that moved between ESI and public programs over a 15-month period, because CHIME provides an excellent opportunity to look at the impact of churn on reporting accuracy in both survey treatments, as well as on monthly coverage reporting in the CPS redesign treatment. We acknowledge the limitations of this select group, noting that any churn observed may represent enrollees loyal to Medica. Additionally, we will not be able to distinguish between lapses in coverage within Medica plans and enrollees who are plan hoppers. All of the churners were drawn from the Medicaid stratum, given its sufficient size and associated minimum detectable difference.


Table 1: Power Analysis Assumptions and Case Counts Per Treatment/Strata



(HH = households; Per-Trt = per questionnaire treatment; Ppt Diff = minimum detectable difference in percentage points)

Stratum                          | Total HH | Total Person | Per-Trt HH | Per-Trt Person | Ppt Diff
ESI (5%*)                        |      663 |        1,658 |        332 |            829 |     3.00
Medicaid (17%*)                  |    2,165 |        5,413 |      1,083 |          2,706 |     2.61
MNsure (marketplace) (11%*)      |      306 |          765 |        153 |            383 |     6.25
MinnesotaCare (17%*)             |      541 |        1,353 |        271 |            676 |     5.40
Non-group/Non-marketplace (11%*) |    1,122 |        2,805 |        561 |          1,403 |     3.11
Private/Public transition        |      204 |          510 |        102 |            255 |      n/a
TOTAL                            |    5,001 |       12,503 |      2,501 |          6,251 |

*Based on administrative records, 100% of cases are enrolled in the coverage shown. The percentage in parentheses for each stratum represents the estimated low end of under-reporting for each coverage type based on what can be gleaned from the literature. ESI assumptions are based on Davern et al., 2008; Medicaid assumptions are based on an average from experimental findings in Call et al., 2012; no data exist for marketplace under-reporting, so we assume it will be lower than Medicaid, as most marketplace enrollees pay a premium, which could help them identify the coverage as coming from the marketplace. We assume MinnesotaCare under-reporting to be similar to Medicaid, and that reporting of non-group coverage outside the marketplace will be similar to non-group coverage within the marketplace. We note that if under-reporting is lower than expected, fewer person records will be needed than outlined in the table; alternatively, if under-reporting is higher than expected, more person records will be needed.


III. ANALYSIS GOALS


The analysis goals are to measure both absolute and relative reporting accuracy, assuming the information from the enrollment records is true. Research questions include the following (a minimal computational sketch of these accuracy measures appears after the list):

  1. CPS Redesign versus ACS

    a. What is the absolute and relative accuracy of the insured at a point in time?

    b. What is the absolute and relative accuracy of type of coverage?

    c. What is the absolute and relative accuracy of marketplace coverage, whether there is a subsidy, and the cost of the premium?

    d. Among marketplace enrollees (subsidized and unsubsidized), how does the distribution of source of coverage reported (direct purchase, Medicaid, government, etc.) compare across surveys?

  2. Within the CPS Redesign:

    a. What is the absolute accuracy of months of enrollment (in particular, coverage at the time of the interview versus coverage at any time during the previous calendar year), transitions from one plan type to another, and churning on and off the same plan type (to the extent that enrollees stay with Medica as their health plan provider)?

    b. What is the absolute accuracy of marketplace coverage, whether there is a subsidy, and the cost of the premium?

    c. Among marketplace enrollees (subsidized and unsubsidized), what is the distribution of source of coverage reported (direct purchase, Medicaid, government, etc.)?
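
The sketch below illustrates, with toy data and assumed variable names, how the “absolute” and “relative” accuracy measures above could be tabulated once survey reports are merged with the Medica enrollment records; it is not the study’s actual analysis code.

    import pandas as pd

    # Toy person-level records: each row is a sampled enrollee, with the coverage
    # type from the enrollment record and the type reported in the survey.
    df = pd.DataFrame({
        "treatment":     ["CPS redesign", "CPS redesign", "ACS", "ACS"],
        "record_type":   ["Medicaid", "Medicaid", "Medicaid", "Medicaid"],
        "reported_type": ["Medicaid", "ESI", "Medicaid", "Uninsured"],
    })

    # Absolute accuracy: share of enrollees whose reported type matches the record.
    df["match"] = df["reported_type"] == df["record_type"]
    absolute = df.groupby("treatment")["match"].mean()

    # Relative accuracy: difference in absolute accuracy across treatments.
    relative = absolute["CPS redesign"] - absolute["ACS"]
    print(absolute)
    print(f"CPS redesign minus ACS: {relative:.2f}")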


IV. SAMPLE DEVELOPMENT


To recruit respondents, Medica will send an advance letter to enrollees explaining the study and what to expect: “You will get a call from the Census staff in a few months and be asked to take part in a brief, voluntary survey. The survey takes about 15 minutes. No action is necessary from you at this time. However, if you, or any other Medica member in your household, prefer not to receive a phone call from a Census staff member, call…and your household will be removed from the contact list.” The phone numbers of those who receive an advance letter and do not opt out will be provided by Medica to the Census Bureau.


To determine the number of letters that would need to be mailed to achieve the ultimate goal of 5,000 completed interviews, Medica informatics staff provided data within strata on duplicate addresses and missing phone numbers, along with estimates of bad address and opt-out rates based on past studies. Furthermore, for ESI coverage, some employers opt their employer group out of research as part of their contract with Medica, and these enrollees were excluded from the mailing. We assumed a 30 percent response rate, informed by the general decline in response rates for telephone surveys and by the 48 percent response rate achieved in a precursor study to CHIME (the random digit dial sample of the 2010 Survey of Health Insurance and Program Participation).


Based on informatics data and our assumptions, we calculated that a total of 20,834 letters would need to be mailed. This assumes a loss of 4,167 cases to bad addresses and opt-outs, resulting in 16,667 sample units to be delivered to Census. Assuming a 30% response rate, the 16,667 sample units would yield 5,000 household completes.
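
The sketch below simply restates that arithmetic; the roughly 20 percent combined loss rate for bad addresses and opt-outs is inferred from the 4,167 and 20,834 figures above, and the 30 percent response rate is the stated assumption.

    # Back out the required mailing from the target number of completed interviews.
    target_completes = 5_000
    response_rate = 0.30
    loss_rate = 4_167 / 20_834                         # bad addresses + opt-outs (~20%)

    sample_units = target_completes / response_rate    # ~16,667 cases delivered to Census
    letters = sample_units / (1 - loss_rate)           # ~20,834 advance letters mailed
    print(round(sample_units), round(letters))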


In terms of file preparation, Medica will compile a dataset of all enrollees who received an advance letter and did not opt out, and send Census a final sample file with a unique anonymized household ID, phone number and stratum. The file will be cut as close to actual data collection as possible, while allowing adequate time to receive the file and fully test it within Census systems prior to data collection. To keep that time span to a minimum, Medica will provide Census with mock sample files in advance of the live sample so that Census staff can test case management systems. Upon completion of data collection, Medica will send Census a second file, with the same anonymized household ID but including all relevant enrollment data fields necessary for analysis (plan type, enrollment dates, etc.).
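
As a minimal sketch of this two-file linkage, assuming illustrative file and column names (hh_id, stratum, plan_type, etc.) rather than the actual layouts, the survey output could be joined to the sample and enrollment files on the anonymized household ID:

    import pandas as pd

    # Illustrative file and column names; the anonymized household ID (hh_id)
    # is the only key shared across the Medica and Census files.
    sample_file = pd.read_csv("medica_sample_file.csv")      # hh_id, phone, stratum
    enrollment = pd.read_csv("medica_enrollment_file.csv")   # hh_id, plan_type, enrollment dates
    survey = pd.read_csv("chime_survey_output.csv")          # hh_id plus reported coverage fields

    analysis_file = (survey
                     .merge(sample_file[["hh_id", "stratum"]], on="hh_id", how="left")
                     .merge(enrollment, on="hh_id", how="left"))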


V. IMPLEMENTATION


  A. PROJECT MANAGEMENT AND COLLABORATION


The study will be run and coordinated by a team of co-principal investigators (Co-PIs): Joanne Pascale at the Census Bureau, Kathleen Call at the State Health Access Data Assistance Center (SHADAC), Don Oellerich at the Office of the Assistant Secretary for Planning and Evaluation (ASPE) within the US Department of Health and Human Services, and Angela Fertig at the Medica Research Institute. Funding will be provided by ASPE, SHADAC, the Robert Wood Johnson Foundation and the Census Bureau. Very generally, staff from the Center for Survey Measurement (CSM) within Census will convene a team of representatives from interested divisions, including:

  • Social, Economic and Household Statistics Division

  • Demographic Surveys Division/CPS Branch

  • American Community Survey Office

  • Center for Economic Studies

  • Center for Administrative Records Research and Applications

  • Technologies Management Office (for both instrument authoring and case management)

  • Telephone Center Coordinating Office

  • Field Division

The Co-PIs and the Census team will manage the project; survey administration will be limited to CATI, and data collection will be carried out by Census Bureau telephone interviewers. Tasks will be as follows:

  1. Co-PIs:

     a. Project management:

        i. develop an operating plan and schedule

        ii. develop cost estimates

        iii. coordinate all operational activities across all divisions

        iv. write OMB package

     b. Write instrument specifications; conduct testing and debugging

     c. Write post-processing specifications

     d. Write interviewer training; conduct training

     e. Conduct data analysis

     f. Write report

  2. Field Division and the Telephone Center Coordination Office (TCCO): data collection

  3. Demographic Statistical Methods Division (DSMD), in collaboration with Co-PIs: sampling

  4. Technology Management Office (TMO): instrument and case management programming and testing

  5. Policy Coordination Office: develop data stewardship agreements and inter-agency agreements with partner organizations.


  B. FIELD PERIOD


Because the reference period and timing of data collection are central to the research goals, it is important to stay fairly close to the time frame in which the CPS is actually fielded (mainly in March). Therefore the window for data collection is spring 2015 (accommodating the production CPS schedule and other constraints). A two-day training will be conducted immediately prior to the start of production interviewing.


  C. THE CATI INSTRUMENT


In March 2010 the Census Bureau conducted the SHIPP study, a small-scale field test (n = ~5,000 households) that was essentially an abbreviated version of the CPS ASEC with an experimental component whereby respondents were randomly selected to receive one of three different versions of the health insurance module – the traditional CPS, the CPS redesign, or the ACS. Hence, for the current study, the SHIPP CATI instrument will be used as a starting point and adapted as needed. Specifically, the “front/back” module, which includes the introductory script, appointments, callback procedures, etc., will be adapted for the new type of sample. The modules on labor force and program participation will be abbreviated somewhat. The ACS health insurance module will be maintained intact but, based on input from the TAG, questions on premiums and marketplace coverage currently being tested in another ACS testing vehicle will be appended to the end of the series of health insurance questions for the last person in the household. For the CPS redesign, the CPS ASEC 2015 health insurance module will be “cut and pasted” into the CHIME instrument. The instrument will be programmed in Blaise and conducted on WebCATI at one or more of the Census Bureau telephone data collection facilities. The interview is expected to take about 12 minutes on average to complete.


  D. DATA COLLECTION


1. Interviewers, Monitors and Supervisors: The SHIPP 2010 instrument took on average about 18 minutes to complete and was conducted over a six-week period (late March through early May) with a total of 21 interviewers, 7 monitors and 3 supervisors. Roughly 5,350 household interviews were conducted. We will work with the call center liaison offices to determine how many interviewers, monitors and supervisors will be needed (and available) to achieve the desired number of completed interviews. All interviewers should have a minimum of three years’ experience conducting standardized CATI interviews.

2. Training: CSM will provide all study-specific training materials and will conduct training (expected to run about 12 hours spread across 2 days) which will include general background on the study and core concepts, demonstrations and paired practice scripted interviews, and a module on gaining cooperation.

3. Questions and Issues in the Field: An interim 3-hour debriefing will be held approximately four days into data collection to address any questions or concerns arising in the field. All monitors and supervisors will attend the training and debriefing to assist interviewers with any questions that come up during data collection, and they will serve as a liaison between interviewers and Census Bureau staff to resolve any issues. Telephone center staff will monitor CATI interviews to ensure that core concepts are adequately understood by interviewers and that standardized interviewing technique is followed.

4. Interviewer Feedback: A post-interviewing debriefing will be held at the conclusion of data collection.

5. Output Specifications: CSM will prepare detailed specifications for the final SAS file layout.


  E. POST-PROCESSING AND FINAL FILE PREPARATION


Census will prepare a final person-level SAS file with all raw data and post-processed recodes.


  F. ANALYSIS


Co-PIs will produce a final report addressing the above analysis goals.

REFERENCES


Blumberg, Stephen J. and M. L. Cynamon. 1999. Misreporting Medicaid enrollment: Results of three studies linking telephone surveys to state administrative records. Proceedings of the Seventh Conference on Health Survey Research Methods, pp. 189–195.


Call, Kathleen Thiede, Blewett LA, Boudreaux M, Turner J. 2013. Monitoring health reform efforts: Which state level data to use? Inquiry, 50(2):93-105.


Call, Kathleen Thiede, Davern ME, Klerman JA, Lynch V. 2012. Comparing errors in Medicaid reporting across surveys: Evidence to date. Health Services Research, Apr;48(2 Pt 1):652-64. doi: 10.1111/j.1475-6773.2012.01446.x


Call, Kathleen Thiede, Gestur Davidson, Michael Davern, and Rebecca Nyman. 2008. Medicaid undercount and bias to estimates of uninsurance: New estimates and existing evidence. Health Services Research, 43(3): 901–914. doi:  10.1111/j.1475-6773.2007.00808.x


Call, Kathleen Thiede, Davern ME, Blewett LA. 2007. Estimates of health insurance coverage: Comparing state surveys to the Current Population Survey. Health Affairs, 26(1): 269-278.


Czajka, John and K. Lewis. 1999. Using universal survey data to analyze children’s health insurance coverage: An assessment of issues. Washington, D.C.: Mathematica Policy Research, Inc.


Davern, Michael E, Call KT, Ziegenfuss J, Davidson G, Beebe TJ, Blewett LA. 2008. Validating health insurance coverage survey estimates: A comparison between self-reported coverage and administrative data records. Public Opinion Quarterly, 72(2):241-259.


Davern, Michael. 2009. “Unstable ground: Comparing income, poverty & health insurance estimates from major national surveys.” Paper presented at the AcademyHealth Annual Research Meeting, June 29, 2009. Chicago.


Eberly, Todd, M. Pohl, and S. Davis. 2008. Undercounting Medicaid enrollment in Maryland: Testing the accuracy of the Current Population Survey. Population Research and Policy Review online April 30. http://www.springerlink.com/content/d56459733gr81vu2/


Klerman, Jacob A., Michael Davern, Kathleen Thiede Call, Victoria Lynch, and Jeanne Ringel. 2009. Understanding the Current Population Survey’s insurance estimates and the Medicaid ‘undercount.’ Health Affairs web exclusive, DOI 10.1377/hlthaff.28.6.w991, pp. 991-1001.


Klerman, Jacob A., J. S. Ringel, and B. Roth. 2005. Under-reporting of Medicaid and welfare in the Current Population Survey. RAND Working Paper WR-169-3. Santa Monica, Calif.: RAND.


Nelson, David E., Betsy L. Thompson, Nancy J. Davenport, and Linda J. Penaloza. 2000. What people really know about their health insurance: A comparison of information obtained from individuals and their insurers. American Journal of Public Health, 90(6): 924-928.


Pascale, Joanne. 2009. "Findings from a pretest of a new approach to measuring health insurance in the Current Population Survey." Paper prepared for the Federal Committee on Statistical Methodology Research Conference, November 2-4, 2009. http://www.census.gov/srd/papers/pdf/rsm2009-07.pdf


Pascale, Joanne, Marc I. Roemer, and Dean M. Resnick. 2009. Medicaid underreporting in the CPS: Results from a record check study. Public Opinion Quarterly. 73: 497-520.


State Health Access Data Assistance Center (SHADAC). 2013. “Comparing federal government surveys that count the uninsured.”


Swartz, Katherine. 1986. Interpreting the estimates from four national surveys of the number of people without health insurance. Journal of Economic and Social Measurement, 14: 233-242.


U.S. Census Bureau. 2009. “Design and methodology: American Community Survey.” Available at: http://www.census.gov/acs/www/Downloads/survey_methodology/acs_design_methodology.pdf


