Health Insurance Exchange Consumer Experience Surveys: Qualified Health Plan Enrollee Experience Survey
Supporting Statement—Part B
Collections of Information Employing Statistical Methods
Centers for Medicare & Medicaid Services
TABLE OF CONTENTS
1. Potential Respondent Universe and Sampling Methods
2. Information Collection Procedures
Exhibit B-1. Data Collection Protocol for the 2018 QHP Enrollee Survey
3. Methods to Maximize Response Rates and Address Non-Response Bias
3.1 Evaluating Non-Response Bias
4. Tests of Procedures
5. Statistical Consultants
Appendix A: Detailed Findings from Nonresponse Bias Analysis of the 2017 QHP Enrollee Survey
This supporting statement provides information in support of the Qualified Health Plan Enrollee Experience Survey (“QHP Enrollee Survey”). CMS developed the QHP Enrollee Survey, which contributes a subset of survey measures to the Quality Rating System (QRS).
CMS designed the QRS to provide comparable and useful information to consumers about the quality of health care services and enrollee experience with QHPs offered through the Health Insurance Exchange (Exchange), also known as the Marketplace.1
1. Potential Respondent Universe and Sampling Methods
As outlined in 45 CFR § 156.1125(b), Qualified Health Plan (QHP) issuers are required to conduct the QHP Enrollee Survey at the level specified by HHS for all QHPs that had more than 500 enrollees in the previous year. Beginning with the 2017 QHP Enrollee Survey, CMS also required that a reporting unit have more than 500 enrollees as of January 1 of the survey year before the survey must be administered for that unit. The goal of this requirement was to reduce the burden on QHP issuers who had more than 500 enrollees during the previous year but saw reduced enrollment in the new year.
CMS has established the reporting unit as the product type (i.e., Exclusive Provider Organization [EPO], Health Maintenance Organization [HMO], Preferred Provider Organization [PPO], Point of Service [POS]) offered by a QHP issuer through the Exchange in a particular state. For example, XYZ issuer’s HMOs offered through the Exchange in Florida would be considered a single reporting unit. Depending on the way a QHP issuer packages its plan offerings, the reporting unit might include anywhere from a single QHP to many QHPs spanning all categories of coverage (i.e., bronze, silver, gold, platinum, catastrophic). QHP issuers will create a sample frame for each product type they offer through the Exchange within a particular state, or reporting unit. Child-only QHPs as well as Standalone Dental Plans (SADPs) are excluded from the QHP Enrollee Survey at this time.
For the 2018 survey, CMS estimates that no more than 300 reporting units will be required to field the QHP Enrollee Survey. This estimate is primarily based on the fact that 262 reporting units are currently administering the 2017 QHP Enrollee Survey. In comparison, there were 298 reporting units in 2015 and 311 reporting units in 2016.
1.2 Sample Frame & Respondent Universe
The sample frame for each reporting unit is generated by the QHP issuer and then provided to the issuer’s CMS-approved survey vendor, who is responsible for drawing a simple random sample. Eligible reporting units for the 2018 QHP Enrollee Survey are those that meet all of the following criteria:
The reporting unit was offered through an Exchange in 2017;
The reporting unit will be offered through an Exchange in 2018; and
The reporting unit had more than 500 enrollees, regardless of age, on July 1, 2017 and January 1, 2018.
Issuers must collect and report survey data for all eligible reporting units to comply with the regulation.
The sample frame for the survey includes all enrollees of each eligible reporting unit who meet the following three criteria:
Eligible sample frame members must be 18 years or older on December 31, 2017, and
Eligible sample frame members must have been enrolled in the eligible QHP from July 1, 2017 through December 31, 2017 with no more than one 31-day break in enrollment during those 6 months, and
Eligible sample frame members must have coverage for primary health care through the eligible QHP.
Issuers are to exclude individuals who discontinue their coverage through the QHP for plan year 2018 and those who are deceased as of January 1, 2018. The sample frame is audited by an NCQA-Licensed HEDIS®2 Compliance Organization (NCQA-Certified HEDIS® Compliance Auditor) to ensure that it conforms to all established specifications. The issuer’s survey vendor draws a random sample from each audited eligible sample frame, and inferences are made to the reporting unit.
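The eligibility rules above can be expressed as a simple filter over the issuer’s sample frame. The following Python sketch is illustrative only; the field names (dob, enrollment_breaks, and so on) are hypothetical placeholders, since the actual frame layout is defined by each issuer and its survey vendor.

```python
from datetime import date

import pandas as pd

# Hypothetical frame layout; actual field names are defined by the issuer.
frame = pd.DataFrame({
    "enrollee_id": [101, 102, 103],
    "dob": [date(1980, 5, 1), date(1990, 2, 2), date(1955, 9, 9)],
    "enrollment_breaks": [0, 1, 1],      # breaks during Jul 1-Dec 31, 2017
    "longest_break_days": [0, 20, 45],   # length of the longest break
    "primary_coverage": [True, True, True],
    "enrolled_for_2018": [True, True, True],
    "deceased": [False, False, False],
})

def eligible(row: pd.Series) -> bool:
    """Apply the 2018 QHP Enrollee Survey sample frame criteria to one enrollee."""
    adult = row["dob"] <= date(1999, 12, 31)  # 18 or older on Dec 31, 2017
    continuous = (row["enrollment_breaks"] <= 1
                  and row["longest_break_days"] <= 31)
    return bool(adult and continuous and row["primary_coverage"]
                and row["enrolled_for_2018"] and not row["deceased"])

sample_frame = frame[frame.apply(eligible, axis=1)]
print(sample_frame["enrollee_id"].tolist())  # [101, 102]; 103 has a 45-day break
```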
For the 2018 QHP Enrollee Survey, CMS proposes to continue using a sample size of 1,300 enrollees per reporting unit and to continue allowing QHP issuers to select a larger sample for reporting units (i.e., oversampling) upon approval from the Project Team. In the 2016 administration of the QHP Enrollee Survey, reporting units that had sufficient enrollment to produce the full sample of 1,300 enrollees received an average of 277 responses. An ongoing challenge for the survey is the poor quality of the contact information provided on the sample frame, which results in higher-than-anticipated noncontact rates. CMS and its contractors have implemented a number of changes to the survey administration procedures to address this problem. Additionally, in its technical assistance to QHP issuers, CMS will emphasize the importance of providing up-to-date contact information for all enrollees in order to avoid future increases in the sample size for the survey.
Oversampling in increments of 5 percent, up to 30 percent, is permitted for the QHP Enrollee Survey to assist issuers and vendors who anticipate achieving a lower number of completes per reporting unit. The primary implication is that, if response rates are low, some measures may not have denominators large enough to permit QRS scoring; if issuers do not obtain scores for particular measures, they will not be able to evaluate how well they are performing on those measures. Issuers who are willing to incur the additional costs of collecting data from a larger sample may do so. Oversampling has precedent among other CMS CAHPS® surveys, such as MA & PDP CAHPS®, which permits oversampling at the contract level upon request to CMS.
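A minimal sketch of the sampling step, assuming a pandas DataFrame like the sample_frame built above: the vendor draws a simple random sample of 1,300 enrollees, optionally inflated by an approved oversampling rate in 5 percent increments up to 30 percent. The function name and seed handling are illustrative, not part of the CMS specification.

```python
import pandas as pd

BASE_SAMPLE_SIZE = 1300  # standard sample size per reporting unit

def draw_sample(frame: pd.DataFrame, oversample_pct: int = 0,
                seed: int = 2018) -> pd.DataFrame:
    """Draw a simple random sample from an audited sample frame,
    optionally oversampling in 5% increments up to 30%."""
    if oversample_pct % 5 != 0 or not 0 <= oversample_pct <= 30:
        raise ValueError("Oversampling must be 0-30 percent in 5 percent steps.")
    target = round(BASE_SAMPLE_SIZE * (1 + oversample_pct / 100))
    n = min(target, len(frame))  # cannot sample more enrollees than exist
    return frame.sample(n=n, random_state=seed)

# A 15 percent oversample targets 1,495 enrollees:
# sample = draw_sample(sample_frame, oversample_pct=15)
```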
2. Information Collection Procedures
The QHP Enrollee Survey will be conducted by HHS-approved survey vendors that meet minimum business requirements. A similar system is currently used for other CMS surveys, including Medicare CAHPS, Hospital CAHPS® (H-CAHPS), Home Health CAHPS® (HH-CAHPS), the CAHPS Survey for Accountable Care Organizations, and the Health Outcomes Survey. Under this model, all issuers that are required to conduct the QHP Enrollee Survey must contract with an HHS-approved survey vendor to collect the data and submit it to CMS on their behalf (45 CFR § 156.1125(a)). CMS is responsible for approving and training vendors, providing technical assistance to vendors, and overseeing vendors to ensure that they are following the data collection protocols.
The 2018 QHP Enrollee Survey will continue to use the same data collection protocol as prior administrations: a mixed-mode methodology that combines web, mail, and telephone surveys (Exhibit B-1). First, all sampled enrollees will receive a pre-notification letter informing them that they have been sampled for the survey and providing them with information about the survey and how the data collected will be used. The pre-notification letter also provides information on completing the survey online, including the website URL and the sample member’s user ID and password, which are unique to each sample member. Three days after the pre-notification letter, sampled enrollees will receive a mail questionnaire, followed 14 days later by a reminder letter that repeats the web survey information. Four weeks after the first questionnaire is mailed, nonrespondents will receive a second questionnaire. Finally, three weeks after the second questionnaire is mailed, survey vendors will initiate telephone follow-up, making no more than six attempts on varying days of the week and at differing times of the day over a minimum of two weeks.
Exhibit B-1. Data Collection Protocol for the 2018 QHP Enrollee Survey

Task | Date
Survey vendors sample enrollees according to sampling protocols. | January 2018 – February 2018
Mail prenotification letter to sampled enrollees. | Day 0
Customer support phone center opens (toll-free phone number required). | Day 1
Mail first questionnaire with survey cover letter to nonrespondents 3 calendar days after the prenotification letter is mailed. | Day 3
Mail reminder letter to nonrespondents 14 calendar days after the first questionnaire is mailed; if the 14th calendar day falls on a weekend, survey vendors mail the reminder letter the preceding Friday. Include the URL and login credentials so that nonrespondents have the option to complete the survey online. | Day 17
Mail second questionnaire with survey cover letter to nonrespondents 4 weeks (28 calendar days) after the first questionnaire is mailed. | Day 31
Initiate telephone follow-up contact with nonrespondents 3 weeks (21 calendar days) after the second questionnaire is mailed. | Days 52–70
End data collection activities. | Day 71
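The day offsets in Exhibit B-1 can be translated into calendar dates for a given fielding start. This sketch encodes the offsets from the exhibit, including the rule that a reminder falling on a weekend moves to the preceding Friday; the function names are ours, not part of the CMS specification.

```python
from datetime import date, timedelta

def reminder_date(first_questionnaire: date) -> date:
    """Reminder mails 14 days after the first questionnaire; if that day
    falls on a weekend, move it back to the preceding Friday."""
    d = first_questionnaire + timedelta(days=14)
    while d.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        d -= timedelta(days=1)
    return d

def fielding_schedule(day0: date) -> dict:
    """Translate the Exhibit B-1 day offsets into calendar dates."""
    first_q = day0 + timedelta(days=3)                 # Day 3
    second_q = first_q + timedelta(days=28)            # Day 31
    return {
        "prenotification_letter": day0,                # Day 0
        "phone_center_opens": day0 + timedelta(days=1),
        "first_questionnaire": first_q,
        "reminder_letter": reminder_date(first_q),     # Day 17 (or earlier)
        "second_questionnaire": second_q,
        "telephone_followup_begins": second_q + timedelta(days=21),  # Day 52
        "data_collection_ends": day0 + timedelta(days=71),
    }

print(fielding_schedule(date(2018, 2, 5)))
```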
All survey estimates generated from the QHP Enrollee Survey are weighted to adjust for the unequal probability of selection across reporting units. A weight is generated for each case by multiplying the inverse of the enrollee’s probability of selection by the inverse of the enrollee’s estimated probability of response.
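As a concrete illustration of this weighting scheme (the method used to estimate response propensities is not specified in this document, so the propensity below is simply a given input):

```python
def survey_weight(frame_size: int, sample_size: int,
                  response_propensity: float) -> float:
    """Base weight = inverse of the selection probability; the nonresponse
    adjustment divides by the estimated probability of response."""
    selection_probability = sample_size / frame_size
    return (1.0 / selection_probability) * (1.0 / response_propensity)

# A reporting unit with 26,000 enrollees and a 1,300-person sample has a
# base weight of 20; an estimated response propensity of 0.25 inflates
# each respondent's weight to 80.
print(survey_weight(26000, 1300, 0.25))  # 80.0
```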
Additionally, case-mix adjustment is used when comparing reporting units to the overall national benchmark scores. Case-mix adjustment accounts for variation in the demographics of a particular reporting unit. Case-mix variables include education, age, self-reported health status, mental health rating, receiving help completing the survey, having a chronic condition, the language in which the survey is completed, and survey mode. Additionally, the four-item composite assessing experience with cost will be reported in the quality improvement reports using the adjusters listed above plus metal level as an additional case-mix adjuster. This composite will not be publicly reported through the Quality Rating System. CMS will continue to monitor the appropriateness of metal level as a case-mix adjuster.
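CAHPS-style case-mix adjustment is commonly implemented as a regression of each survey outcome on the case-mix variables plus reporting-unit indicators. The sketch below is a generic linear-model illustration, not CMS’s production algorithm; all column names are hypothetical placeholders for the adjusters listed above.

```python
import pandas as pd
import statsmodels.formula.api as smf

def case_mix_adjusted_effects(df: pd.DataFrame) -> pd.Series:
    """Regress an outcome (e.g., the 0-10 plan rating) on the case-mix
    adjusters and reporting-unit indicators; the reporting-unit
    coefficients are case-mix-adjusted differences from the reference unit."""
    model = smf.ols(
        "plan_rating ~ C(education) + age + C(health_status)"
        " + C(mental_health) + proxy_help + chronic_condition"
        " + C(language) + C(mode) + C(reporting_unit)",
        data=df,
    ).fit()
    return model.params.filter(like="reporting_unit")
```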
3. Methods to Maximize Response Rates and Address Non-Response Bias
Every effort will be made to maximize the response rate while retaining the voluntary nature of the effort. The mixed-mode methodology of the QHP Enrollee Survey provides sampled individuals with multiple methods for completing the survey across a 71-day fielding period. This methodology was developed using best practices within the survey research field, including the Tailored Design Method.3
The 2014 psychometric test of the QHP Enrollee Survey achieved a 37.2 percent response rate, the 2015 beta test achieved a response rate of 30.9 percent, the 2016 survey achieved a response rate of 30.0 percent, and the 2017 survey achieved a response rate of 26.7 percent. Response rates for the QHP Enrollee Survey are calculated using the Response Rate 3 (RR3) formula established by the American Association for Public Opinion Research (AAPOR). A response is counted as a completed interview if the respondent completes at least 50 percent of the questions that are applicable to all respondents, excluding the questions included in the “About You” section of the survey.
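For reference, AAPOR RR3 counts complete interviews against all eligible cases plus the estimated-eligible share of cases whose eligibility is unknown. A minimal implementation, with the 50 percent completion rule above determining what counts as a complete; the example figures are illustrative only, not QHP Enrollee Survey results:

```python
def aapor_rr3(I: int, P: int, R: int, NC: int, O: int,
              UH: int, UO: int, e: float) -> float:
    """AAPOR Response Rate 3.

    I  = complete interviews (here, >= 50% of applicable items answered)
    P  = partial interviews           R  = refusals and break-offs
    NC = non-contacts                 O  = other eligible nonresponse
    UH = unknown, household/occupancy UO = unknown, other
    e  = estimated proportion of unknown-eligibility cases that are eligible
    """
    return I / ((I + P) + (R + NC + O) + e * (UH + UO))

print(round(aapor_rr3(I=300, P=20, R=150, NC=730, O=0, UH=100, UO=0, e=0.9), 3))
# 0.233
```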
3.1 Evaluating Non-Response Bias
Given that the response rate was below 80 percent, CMS conducted a nonresponse bias analysis to determine whether nonrespondents systematically differ from respondents based on their experience with their health plan. In the nonresponse bias analysis of the 2017 QHP Enrollee Survey, CMS found that older enrollees (those over the age of 55) continued to be significantly more likely to respond than younger enrollees. This pattern was observed in the 2014 psychometric test, the 2015 beta test, and the 2016 administration. More detailed findings from the 2017 nonresponse bias evaluation are included in Appendix A of this document. CMS intends to summarize this information in a consumer-friendly way once we begin national public reporting.
Thus far, the response rates discussed have been at the unit level, where sampled individuals either completed or did not complete the survey. There is also item-level nonresponse, where a respondent answers some, but not all, of the questions they are eligible to answer. Although highly unlikely, if the item response rate is less than 70 percent for any survey question, CMS will conduct an item nonresponse analysis similar to the unit nonresponse analysis discussed above, as required by Guideline 3.2.10 of the Office of Management and Budget’s Standards and Guidelines for Statistical Surveys.
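Item response rates are straightforward to compute once legitimately skipped items are excluded. A short pandas sketch, assuming a respondent-by-item DataFrame in which NaN marks an item the respondent was eligible for but did not answer:

```python
import pandas as pd

def item_response_rates(responses: pd.DataFrame) -> pd.Series:
    """Share of eligible respondents answering each item (column)."""
    return responses.notna().mean()

def low_response_items(responses: pd.DataFrame,
                       threshold: float = 0.70) -> pd.Series:
    """Items below the 70 percent threshold in OMB Guideline 3.2.10,
    which would trigger an item nonresponse analysis."""
    rates = item_response_rates(responses)
    return rates[rates < threshold]
```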
All reporting websites under CMS’ control will provide Exchange consumers with the overall response rate and the minimum and maximum response rates obtained by reporting units nationwide. This information will also include a statement of findings from the nonresponse bias analysis and CMS’ assessment of the potential implications of those findings for use of the response rates by consumers in choosing a QHP. CMS is assessing information obtained regarding the display of QRS ratings during the pilot and will consider the timeframes for public reporting of quality rating information.
4. Tests of Procedures
The QHP Enrollee Survey uses the CAHPS® Health Plan 5.0 Survey, which was developed and is maintained by the Agency for Healthcare Research & Quality (AHRQ), as its core. Additional items included in the survey are drawn from other CAHPS® surveys and supplemental item sets. These items have undergone extensive testing and practical use by AHRQ, CMS, NCQA, and other users since they were first developed nearly 20 years ago. In 2014, CMS conducted a psychometric test to evaluate the psychometric properties of the QHP Enrollee Survey in the Exchange population. This test resulted in numerous questions being dropped from, or revised in, the questionnaire for the 2015 beta test. More specifically, in 2014, CMS conducted the psychometric test of the QHP Enrollee Survey with 30 sampling units, defined by state, issuer, and product type (i.e., HMO, PPO, POS, or EPO). The sampling unit definition matches the reporting unit definition CMS developed for the Quality Rating System (QRS). Because the psychometric test took place in the second half of 2014, the results were not available in time to influence the content of the questionnaire for the 2015 beta test, which had to be finalized for distribution to the beta test survey vendors by November 2014. As a result, CMS, in consultation with OMB, reduced the beta test questionnaire to include only the survey items that were part of the CAHPS® Health Plan 5.0 core and the items that were needed for the QRS. Thus, the beta test questionnaire contained 31 fewer items than the psychometric test questionnaire.
The 2015 beta test was mainly intended to test the survey vendor system but also provided additional information about the questionnaire items and data collection procedures. As a result of the psychometric and beta tests, CMS identified changes to the questionnaire items, the data collection procedures, and the sampling specifications, for which it obtained approval for the 2016 implementation of the survey. When the results of the psychometric test became available in early 2015, CMS determined that nine of the questions that had been removed for the beta test were vital for understanding enrollee experiences with their QHPs. These questions were approved to be returned to the questionnaire for the 2016 national implementation. The restored items address enrollees’ experiences with out-of-pocket costs for covered services, health insurance literacy, and health insurance coverage during the previous year, and whether respondents would recommend their QHPs to their friends and family. For the 2017 implementation and beyond, CMS added six disability status items to address the requirements of the section 4302 data collection standards of the ACA and removed the question assessing health insurance coverage during the previous year. The disability status items were tested in the 2014 psychometric test and have been extensively tested by the US Census Bureau for administration in the American Community Survey.
In consultation with the project’s Technical Expert Panel (TEP), CMS has continued to review the survey to assess the impact of questionnaire length and the possibility of reducing response burden. As part of our analysis, we plan to examine 2015 and 2016 data on partial completes to better understand where in the survey respondents broke off and to determine whether there is a particularly problematic question or a point at which respondent fatigue sets in. We can also model the propensity for breakoffs to investigate whether those who broke off have unique characteristics compared to those who completed the survey, and we can stratify the analysis by survey mode to see whether results differ for telephone or web respondents. Breakoffs are not a common occurrence: less than 1 percent of the total sample in the 2016 QHP Enrollee Survey was partially completed. CMS previously analyzed the 2014 psychometric test data, which came from the longest version of the survey at 107 questions, and did not find evidence of any particularly problematic questions or time points in the survey that produced a large number of breakoffs.
Based on analysis of the 2016 QHP Enrollee Survey data, CMS is proposing to eliminate two survey questions on after-hours care because these questions are no longer used in the QRS Access to Care measure and have consistently had low screen-in rates. CMS is also proposing to eliminate one survey question about the likelihood of respondents recommending their health plan to friends and family. While this question was included as a result of public comment, analysis of the 2016 QHP Enrollee Survey showed that it was very highly correlated with the enrollee rating of their health plan. Finally, CMS is proposing to eliminate all five survey questions related to the QRS Aspirin Use and Discussion measure, given that this measure has been retired from the QRS because of revised guidelines from the US Preventive Services Task Force (USPSTF).
CMS will continue to examine methods for reducing the current QHP Enrollee Survey questionnaire length and will provide opportunities for public comment before these changes are implemented.
5. Statistical Consultants
This sampling and statistical plan was prepared and reviewed by staff of CMS and by the American Institutes for Research (AIR). The primary statistical design was provided by Chris Evensen, MS, of AIR at (919) 918-2310; Lee Hargraves, PhD, of AIR at (781) 373-7031; and HarmoniJoie Noel, PhD, of AIR at (202) 403-5779.
Appendix A: Detailed Findings from Nonresponse Bias Analysis of the 2017 QHP Enrollee Survey
Unit nonresponse, typically referred to simply as nonresponse, occurs when a sampled individual fails to complete the survey. Nonresponse alone is not necessarily problematic, but if the topic being measured in the survey is directly related to the reason for nonresponse, it can result in biased survey estimates. For example, if individuals who had poor experiences with their QHPs were less likely to complete the QHP Enrollee Survey, then the survey results may overestimate the percentage of QHP enrollees who had a positive experience. The mean unit-level response rate across all reporting units that participated in the 2017 QHP Enrollee Survey was 25.9 percent, with a minimum of 7.7 percent and a maximum of 38.6 percent. This variation is consistent with the variation in response rates across reporting units that CMS saw in the 2015 QHP Enrollee Survey.
Given the potential detrimental impact that nonresponse bias can have on survey estimates, the Office of Management and Budget (OMB) requires that all federal surveys that achieve a response rate below 80 percent perform a nonresponse bias analysis (Office of Management and Budget, 2006). This nonresponse bias analysis utilized information that QHP issuers included on the sample frame about all individuals, such as age, sex, and Census division.
First, the research team examined cross-tabulated frequencies of the frame variables between respondents and nonrespondents to determine whether significant differences existed. The results of this analysis are shown in Exhibit B2.
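The mechanics of that comparison look like the following sketch, which cross-tabulates a frame variable against respondent status and applies a chi-square test of independence. Column names are hypothetical, and a production version would apply the survey weights when computing the percentages shown in Exhibit B2.

```python
import pandas as pd
from scipy.stats import chi2_contingency

def compare_respondents(frame: pd.DataFrame, var: str):
    """Compare the distribution of a frame variable (e.g., 'age_group')
    between respondents and nonrespondents."""
    table = pd.crosstab(frame[var], frame["responded"])  # responded: 0/1
    chi2, p_value, dof, _ = chi2_contingency(table)
    # Column percentages: distribution of var within each response status.
    percents = table.div(table.sum(axis=0), axis=1).mul(100).round(1)
    return percents, chi2, p_value

# percents, chi2, p = compare_respondents(frame_df, "age_group")
```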
Exhibit B2. Comparison of Selected Demographics between Sample Frame and Survey Respondents

Demographic | Weighted Percent of Sample Frame | Weighted Percent of Survey Respondents | Difference
Sex | | |
Male | 45.4 | 41.5 | -3.9
Female | 54.6 | 58.5 | +3.9
Age | | |
18-24 | 9.1 | 4.3 | -4.8
25-34 | 18.3 | 10.6 | -7.7
35-44 | 16.8 | 12.3 | -4.5
45-54 | 23.0 | 22.6 | -0.4
55-64 | 31.7 | 48.9 | +17.2
65 or older | 1.1 | 1.4 | +0.3
Metal Level | | |
Catastrophic | 2.3 | 2.0 | -0.3
Bronze | 6.8 | 6.4 | -0.4
Silver | 71.0 | 73.2 | +2.2
Gold | 19.1 | 17.9 | -1.2
Platinum | 0.6 | 0.3 | -0.3
Missing | 0.2 | 0.2 | -
Census Division | | |
East North Central (IL, IN, MI, OH, WI) | 10.0 | 11.2 | +1.2
East South Central (AL, KY, MS, TN) | 3.8 | 4.1 | +0.3
Middle Atlantic (NJ, NY, PA) | 7.8 | 8.0 | +0.2
Mountain (AZ, CO, ID, MT, NV, NM, UT, WY) | 5.7 | 6.4 | +0.7
New England (CT, ME, MA, NH, RI, VT) | 6.3 | 7.4 | +1.1
Pacific (AK, CA, HI, OR, WA) | 19.7 | 19.0 | -0.7
South Atlantic (DE, DC, FL, GA, MD, NC, SC, VA, WV) | 27.0 | 25.8 | -1.2
West North Central (IA, KS, MN, MO, NE, ND, SD) | 3.9 | 4.8 | +0.9
West South Central (AR, LA, OK, TX) | 15.7 | 13.2 | -2.5
In the 2017 QHP Enrollee Survey, the variable with the largest differences was enrollee age. Such differences are not uncommon in surveys. As in previous administrations, younger enrollees (those under the age of 45) were less likely to respond, which resulted in overrepresentation of enrollees between the ages of 55 and 64, who were more likely to respond to the survey.
Given that the bivariate analysis indicated significant differences in response by consumer characteristics, AIR used multivariable logistic regression to determine which consumer characteristics were associated with returning the survey and to estimate the direction and size of the effect of those characteristics. The model estimated the propensity to respond (the dependent variable) based on these characteristics: a value of 1 for the dependent variable indicated that the sampled consumer was a respondent, and a value of 0 indicated that the sampled consumer was a nonrespondent.
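A model of this form can be fit with standard tools. The sketch below uses statsmodels with treatment coding matched to the reference groups in Exhibit B3 and exponentiates the coefficients to obtain odds ratios and their 95 percent confidence intervals; the variable names are placeholders for the frame fields, not AIR’s actual code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def response_propensity_odds_ratios(df: pd.DataFrame) -> pd.DataFrame:
    """Fit a logistic regression of response status (0/1) on frame
    variables and return odds ratios with 95% confidence intervals."""
    model = smf.logit(
        "responded ~ C(sex, Treatment('Female'))"
        " + C(age_group, Treatment('45-54'))"
        " + C(metal_level, Treatment('Silver'))"
        " + C(census_division, Treatment('West South Central'))"
        " + nonworking_phone + invalid_address",
        data=df,
    ).fit(disp=False)
    odds_ratios = np.exp(model.params)   # OR point estimates
    conf_int = np.exp(model.conf_int())  # 95% CI bounds on the OR scale
    return pd.concat([odds_ratios.rename("OR"), conf_int], axis=1)
```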
Exhibit B3 shows the odds ratio (OR) for each variable in the response propensity model, along with the 95 percent confidence interval associated with each OR estimate. Estimates above 1.0 indicate that the variable was associated with an increased propensity to respond in comparison to the reference group, while estimates below 1.0 indicate that the variable was associated with a lower propensity to respond in comparison to the reference group. For example, male enrollees were slightly less likely to return the survey compared to female enrollees (OR = 0.802). This analysis also shows that older enrollees were more likely to respond. In most surveys, older people and women are more likely to respond.
Exhibit B3. Odds Ratios from Variables Included in Logistic Regression Modeling Survey Response

Demographic | Point Estimate | Lower 95% CI | Upper 95% CI
Sex (Ref: Female) | | |
Male* | 0.802 | 0.799 | 0.806
Age (Ref: 45-54) | | |
18-24* | 0.471 | 0.443 | 0.501
25-34* | 0.555 | 0.523 | 0.590
35-44* | 0.709 | 0.667 | 0.753
55-64* | 1.715 | 1.616 | 1.821
65 or older* | 1.346 | 1.249 | 1.452
Metal Level (Ref: Silver) | | |
Catastrophic | 0.890 | 0.747 | 1.060
Bronze | 0.888 | 0.746 | 1.057
Gold* | 0.840 | 0.706 | 0.999
Platinum* | 0.796 | 0.658 | 0.964
Missing | 0.894 | 0.727 | 1.098
Census Division (Ref: West South Central [AR, LA, OK, TX]) | | |
East North Central (IL, IN, MI, OH, WI)* | 1.121 | 1.049 | 1.199
East South Central (AL, KY, MS, TN)* | 1.634 | 1.526 | 1.750
Middle Atlantic (NJ, NY, PA) | 1.034 | 0.967 | 1.105
Mountain (AZ, CO, ID, MT, NV, NM, UT, WY)* | 1.287 | 1.203 | 1.376
New England (CT, ME, MA, NH, RI, VT)* | 1.214 | 1.136 | 1.298
Pacific (AK, CA, HI, OR, WA) | 1.015 | 0.953 | 1.082
South Atlantic (DE, DC, FL, GA, MD, NC, SC, VA, WV) | 1.047 | 0.982 | 1.115
West North Central (IA, KS, MN, MO, NE, ND, SD)* | 1.453 | 1.354 | 1.560
Nonworking Telephone Number (Ref: Working Telephone Number) | | |
Nonworking or Unavailable Telephone Number* | 0.269 | 0.267 | 0.271
Invalid Address (Ref: Valid Address) | | |
Invalid Address* | 0.310 | 0.304 | 0.316

* = Statistically significant at p < .05
As shown in Exhibit B3, even after controlling for a variety of demographic characteristics, many of these variables remained significantly associated with an individual’s propensity to respond to the survey.
1 Health Insurance Exchange℠ and Exchange℠ are service marks of the U.S. Department of Health & Human Services.
2 Healthcare Effectiveness Data and Information Set (HEDIS®) is a registered trademark of the National Committee for Quality Assurance (NCQA).
3 Dillman, D., Smyth, J., & Christian, L. (2014). Internet, phone, mail, and mixed-mode surveys: The tailored design method (4th ed.). Wiley.