
Supporting Statement – Part B

Nationwide Consumer Assessment of Healthcare Providers and Systems (CAHPS) Survey for Adults in Medicaid

CMS-10493, OCN 0938-New

B.1 Respondent universe and sample


Respondents to the Medicaid adult CAHPS survey will be individuals ages 18 and older as of the measurement period who do not reside in an institutional setting and were continuously enrolled in Medicaid for the six months prior to the measurement period. The survey will be conducted among a random sample of beneficiaries in all 50 states and the District of Columbia (the states). The sampling strategy shall identify four strata within each state consisting of adults in the following groups:


  • disabled individuals identified based on state Medicaid administrative files;


  • non-disabled individuals who receive care through a managed care delivery system;


  • non-disabled individuals who receive care through a fee-for-service (FFS) or primary care case management delivery system; and


  • individuals dually eligible for Medicare and Medicaid who are not enrolled in a Medicare-Medicaid Coordination Office-sponsored demonstration.


A decision will be made about conducting CAHPS in Puerto Rico, the U.S. Virgin Islands, and other U.S. territories based on the estimated cost and resources available to this project. An addendum to this request will be made if CMCS wishes to expand the survey population to the territories.


The final sample design will be based on information obtained from a central finder file derived from the federal Medicaid Statistical Information System (MSIS), from a state’s Medicaid Management Information System (MMIS, from which MSIS data are generally extracted on a quarterly schedule) or other database, or from a combination of these sources.


A query of CMS’s MSIS State Summary Data Mart1 was used to determine the expected number of enrollees in the MSIS database for each of the four strata, by state. The most recent year for which the Data Mart contains data for all states is FY 2009; therefore, the results of this query were adjusted to obtain FY 2014 projections. CMCS estimates approximately 50 percent growth in Medicaid enrollment from 2009 to 2014.2 Attachment B provides projected 2014 Medicaid-eligible counts for individuals ages 19 and older for each of the subgroups of interest by state.3,4

Planning information related to survey eligibility resolution (e.g., enrollment in Medicaid at the time of the survey and valid addresses and phone numbers) and to response rates was obtained from the National Committee for Quality Assurance (NCQA) HEDIS 2013 Specifications for Survey Measures, Volume 3 (HEDIS Volume 3). This document provides technical specifications for HEDIS survey measures and standardized CAHPS surveys, including adult Medicaid surveys, and supports the following planning assumptions (a worked sample-size sketch follows the list):


  • Sample size calculation examples in HEDIS Volume 3 assume that 5 percent of sampled records will be for individuals who have disenrolled from Medicaid, and 20 percent of the records will have invalid addresses and phone numbers.

    • Thus, we expect that 75 percent (100 – 5 – 20 percent) of beneficiaries with records in the sampling frame will be eligible for the survey.

  • The target response rate for the Medicaid product line is 45 percent.

    • The documentation states that this response rate is based on an analysis of HEDIS survey results from prior years.

  • HEDIS Volume 3, Table A7-3 provides estimates of Medicaid beneficiary response rates for CAHPS 5.0H Adult survey questions.

    • These response rates range from a low of 16.3 percent to a high of 90.6 percent.
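Taken together, these assumptions determine how far a stratum’s sample must be inflated to yield a target number of completed surveys. The short Python sketch below illustrates the arithmetic; the function name is ours, and the 75 percent eligibility and 45 percent response rates are the HEDIS planning figures cited above.

import math

def records_needed(target_completes, eligibility_rate=0.75, response_rate=0.45):
    # Inflate the target number of completed surveys for expected losses to
    # disenrollment/bad contact information (eligibility) and to nonresponse.
    return math.ceil(target_completes / (eligibility_rate * response_rate))

# Example: about 7,408 frame records per stratum would be needed to expect
# 2,500 completed surveys under the HEDIS planning assumptions above.
print(records_needed(2500))   # 7408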

B.2 Information collection procedures


The administration of the survey will be conducted through NORC’s subcontractor, Thoroughbred Research Group, an NCQA-certified Medicaid CAHPS Survey vendor. Cover letters and surveys will be mailed to all sampled members. Cover letters will be personalized with each sampled member’s name, address, and salutation and will be signed by the CMS Privacy Officer. Approximately one week after mailing the initial survey, a reminder card will be mailed to the sample population. Five weeks after mailing the initial questionnaire, a second survey and cover letter will be sent to all non-respondents. A second reminder postcard will be mailed to all non-respondents approximately one week after the second survey packet. Three weeks after the second questionnaire is mailed, a telephone non-response follow-up will be initiated with members who have not yet completed a mail survey (and who have not refused to participate). Data collection will remain open for a total of 10 weeks.
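For planning purposes, the contact protocol above can be summarized as a simple schedule of events keyed to weeks after the initial mailing. The Python sketch below is illustrative only; the timing is taken from the paragraph above, and the structure is an assumption, not a specification from the survey vendor.

# Approximate data collection schedule, in weeks after the initial mailing,
# based on the protocol described in section B.2 (structure is illustrative).
CONTACT_SCHEDULE = [
    (0,  "Initial cover letter and questionnaire mailed to all sampled members"),
    (1,  "First reminder postcard mailed"),
    (5,  "Second questionnaire and cover letter mailed to nonrespondents"),
    (6,  "Second reminder postcard mailed to nonrespondents"),
    (8,  "Telephone follow-up begins for members without a completed mail survey"),
    (10, "Data collection closes"),
]

for week, step in CONTACT_SCHEDULE:
    print(f"Week {week:>2}: {step}")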

B.2.1 Statistical Methodology for Stratification and Sample Selection


The four subgroups described in section B.1 within each state (including D.C.) will be the main explicit stratifiers in the sample selection. Other implicit stratifiers (i.e., variables used to sort the sampling frame within explicit strata), such as age group, gender, and geographic region (e.g., zip code or county), will be used in the sample selection scheme to make sure that the subgroup samples are spread across a broad range of eligible Medicaid beneficiaries. Use of explicit stratifiers allows for sample allocation to the categories within a stratifier to ensure sufficient sample size for analytic purposes. Use of implicit stratifiers ensures the sample is representative of the population relative to key characteristics but may not provide sufficient sample sizes to allow detailed analysis for those stratifiers.


In order to carry out the sample design so that the sample is spread across the implicit stratification variables, we will use systematic sampling from a sorted file. That is, within each stratum of each state, records will be sorted by all the implicit stratifiers. Every kth record in the file will then be selected, where k is defined as the total number of records in an explicit stratum divided by the stratum sample size. A random starting point between 1 and k will be chosen to initiate the selection process.
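A minimal Python sketch of this selection scheme is shown below. The record structure and field names are hypothetical; the logic, sorting by the implicit stratifiers and then taking every kth record from a random start, follows the description above (a fractional interval is used so the sample size is met exactly).

import math
import random

def systematic_sample(frame, n, sort_keys):
    # frame     : list of dicts, one per record in an explicit stratum
    # n         : stratum sample size
    # sort_keys : implicit stratifiers (hypothetical field names such as
    #             "age_group", "gender", "zip_code") used to sort the frame
    ordered = sorted(frame, key=lambda rec: tuple(rec[k] for k in sort_keys))
    k = len(ordered) / n                  # sampling interval (may be fractional)
    start = random.random() * k           # random start within the first interval
    return [ordered[math.floor(start + i * k)] for i in range(n)]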

B.2.2 Estimation Procedure


For producing estimates, each responding member will have a sampling weight. Weights will be calculated following a standard process and will be applied separately to each respondent record within state-subgroup strata. The general process will have the following steps (a simplified sketch follows the list):


1) utilize probabilities of selections to derive base weights;

2) adjust for non-resolution of sample unit eligibility;

3) adjust for survey nonresponse;

4) calibrate survey weights so weighted sample counts match target population counts; and

5) employ weight trimming to reduce overly influential weights.
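A simplified sketch of steps 1 through 5 for a single state-by-subgroup stratum appears below. It treats the whole stratum as one weighting class, so the calibration step absorbs the earlier adjustments; in practice the adjustments will be made within weighting classes, so individual weights differ and every step matters. All names and the trimming rule are illustrative assumptions.

def stratum_weight(N, dispositions, population_control, trim_cap=4.0):
    # N                  : frame count for the stratum
    # dispositions       : final status of each sampled record, e.g.
    #                      "unresolved", "ineligible", "nonrespondent", "respondent"
    # population_control : population count used to calibrate respondent weights
    # trim_cap           : cap expressed as a multiple of the mean final weight
    n = len(dispositions)
    n_resolved = sum(d != "unresolved" for d in dispositions)
    n_eligible = sum(d in ("nonrespondent", "respondent") for d in dispositions)
    n_resp = sum(d == "respondent" for d in dispositions)

    w = N / n                                 # 1) base weight from selection probability
    w *= n / n_resolved                       # 2) eligibility non-resolution adjustment
    w *= n_eligible / n_resp                  # 3) nonresponse adjustment
    w *= population_control / (w * n_resp)    # 4) calibration to the control total
    mean_w = population_control / n_resp
    return min(w, trim_cap * mean_w)          # 5) trim overly influential weights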


In addition, variance estimates will be derived using a standard approach such as Taylor Series or jackknife, so as to provide measures of uncertainty in the survey estimates resulting from the sample design and sample size.
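For illustration, the sketch below computes a delete-one-group (random groups) jackknife variance for a weighted proportion. The grouping of respondents into replicate groups, the number of groups, and the variable names are assumptions, not the final variance estimation plan.

import random

def jackknife_variance(values, weights, n_groups=30, seed=1):
    # values  : 0/1 indicators for the survey response of interest
    # weights : final survey weights for the respondents
    def wprop(idx):
        num = sum(values[i] * weights[i] for i in idx)
        den = sum(weights[i] for i in idx)
        return num / den

    rng = random.Random(seed)
    groups = [rng.randrange(n_groups) for _ in values]

    full = wprop(range(len(values)))
    reps = [wprop([i for i in range(len(values)) if groups[i] != g])
            for g in range(n_groups)]

    # Delete-one-group jackknife: V = (G - 1)/G * sum((theta_g - theta)^2)
    return (n_groups - 1) / n_groups * sum((r - full) ** 2 for r in reps)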

B.2.3 Degree of Accuracy Needed for the Purpose Described in the Justification


Responses to CAHPS questions are categorical in nature. Therefore, estimates of interest are percentages (proportions), and accuracy can be judged by the size of a percentage estimate’s margin of error, i.e., the half-width of a 95 percent confidence interval. Alternatively, the power to detect differences in percentages between two categories of interest (both within and across states) can be used as an accuracy measure.


Given CMS’s goal to attain national and state-by-state estimates across different financing and delivery models (e.g., managed care and fee-for-service) and population groups (e.g., beneficiaries with physical and/or mental disabilities, dually eligible beneficiaries, and all other beneficiaries), accuracy of estimates relative to the four subgroup strata within each state is of interest. Therefore, we determine a sample size at the subgroup level within each state to meet accuracy goals. Table B.2.3.1 shows the sample size needed to obtain a 5 percent margin of error at the 95 percent confidence level for a range of proportion estimates from a stratum with a large number of beneficiaries (e.g., the Non-Dual Non-Disabled PCCM or FFS stratum in California is projected to have 4,675,213 beneficiaries in FY2014).


To be able to estimate a percentage within +/- 5 percentage points at the 95% confidence level, for instance, we would need a sample of 384 beneficiaries if we assume the population characteristic is 50 percent (the worst case). For smaller stratum sizes, divide the Table B.2.3.1 sample size (n0) by the finite population correction factor 1 + (n0 - 1)/N, where N is the stratum size. For example, if the stratum size is 2,460 (the size of the Non-Dual Non-Disabled HMO stratum in Iowa), a sample of approximately 332 beneficiaries (384 divided by 1 + 383/2,460) will provide a +/- 5 percent margin of error at the 95% confidence level.



Table B.2.3.1: Sample Sizes by Expected Percentage Estimate
(5% Margin of Error at the 95% Confidence Level, Large Stratum Size)

  Expected Percentage Estimate     Sample Size
  10% or 90%                       138
  20% or 80%                       246
  30% or 70%                       323
  40% or 60%                       369
  50%                              384
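The values in Table B.2.3.1 and the finite population correction described above follow from the standard formulas n0 = z^2 p(1 - p)/e^2 and n = n0/(1 + (n0 - 1)/N). The Python sketch below assumes those textbook formulas (with rounding to the nearest integer) and reproduces the table; it is offered as a check, not as the exact method used to produce the table.

Z = 1.96   # two-sided 95% critical value

def n_large(p, moe=0.05):
    # Sample size for a proportion p with margin of error moe, large stratum.
    return round(Z**2 * p * (1 - p) / moe**2)

def n_fpc(p, N, moe=0.05):
    # Sample size after the finite population correction for a stratum of size N.
    n0 = Z**2 * p * (1 - p) / moe**2
    return round(n0 / (1 + (n0 - 1) / N))

# Reproduces Table B.2.3.1: [138, 246, 323, 369, 384]
print([n_large(p) for p in (0.1, 0.2, 0.3, 0.4, 0.5)])

# Worst case (p = 0.5) in a stratum of 2,460 beneficiaries (the Iowa example)
print(n_fpc(0.5, 2460))   # about 332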



Margin of error gives an indication of the precision of an estimate. If one is interested in comparing two percentage estimates, for example between two strata in a state or between strata of the same type across states, then a common stratum sample size can be chosen to obtain a desired statistical power. Table B.2.3.2 shows the power to detect a difference of 10 percentage points between percentages from two independent samples across a range of sample sizes. The power to detect a difference of 10 percentage points between two percentage measures from independent samples, each of size 400, is approximately 80 percent when one of the percentages is 50 percent, the worst case for a percentage measure.


Table B.2.3.2: Power* to Detect Differences of 10 Percentage Points Between Two Equally Sized Samples

                         Sample Size
  Sample 1 Percentage    350     400     450
  10                      95      97      98
  20                      84      89      92
  30                      77      82      87
  40                      73      79      84
  50                      73      79      84
  60                      77      82      87
  70                      84      89      92
  80                      95      97      98
  90                     100     100     100

* Continuity correction used to calculate the power.
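The power values in Table B.2.3.2 can be approximated with the usual normal approximation for comparing two independent proportions, applying a continuity correction as the table footnote indicates. The Python sketch below uses that textbook formula; it closely reproduces the table (for example, about 0.79 for 50% versus 60% at n = 400) but is illustrative rather than the exact calculation behind the table.

from math import sqrt, erf

def norm_cdf(x):
    return 0.5 * (1 + erf(x / sqrt(2)))

def power_two_proportions(p1, p2, n, z_alpha=1.96):
    # Approximate power of a two-sided test comparing two independent
    # proportions with equal sample sizes n, using a continuity correction.
    p_bar = (p1 + p2) / 2
    diff = abs(p1 - p2) - 1 / n                         # continuity correction
    se_null = sqrt(2 * p_bar * (1 - p_bar) / n)         # SE under the null
    se_alt = sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / n)  # SE under the alternative
    return norm_cdf((diff - z_alpha * se_null) / se_alt)

# Approximates the n = 400 column of Table B.2.3.2.
for p1 in (0.1, 0.2, 0.3, 0.4, 0.5):
    print(int(p1 * 100), round(100 * power_two_proportions(p1, p1 + 0.10, 400)))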


Based on the above analysis, good precision and power will be achieved if approximately 400 complete and valid surveys are obtained within each stratum. However, complete and valid surveys will not necessarily have a response to every question on the CAHPS survey. As noted above, the lowest question response rate for a CAHPS survey is 16.3 percent. Therefore, if 2,500 complete and valid surveys are obtained in a stratum, then we would expect to have at least 408 (16.3% of 2,500) surveys with a response to any given question. For most states, this implies that our goal is to obtain 10,000 complete surveys (2,500 within each of the 4 subgroup strata). For low response rate questions, the number of responses should be sufficiently high (400 or more records) for percentage estimates to have a margin of error of no more than 5 percent, and the power to detect a difference of 10 percentage points between two subgroups will be at least 80 percent. For higher response rate questions, estimates will be more precise (smaller margin of error), and tests of differences will have higher power.

B.2.4 Unusual Problems Requiring Specialized Sampling Procedures


There are no unusual problems requiring specialized sampling procedures.

B.3 Methods to Maximize Response Rates


The Medicaid CAHPS data collection will use a number of techniques, including a mixed-mode data collection protocol, to maximize response rates. The protocol aims to maximize participation in the mail version of the survey by sending up to two questionnaire packets and two reminder postcards to each sampled participant. For those who do not respond to the second mailing of the questionnaire, telephone follow-up will be used. The telephone survey will be available in English and Spanish to meet the needs of most enrollees.

One way to ensure a high response rate is to ensure that the survey material reaches the intended respondent. Thoroughbred uses specially developed national change-of-address database software to update addresses and append missing information (e.g., zip code) to decrease undeliverable and return-to-sender mail. They are able to obtain more matches and updates than other companies because: 1) they run real-time updates against the most current USPS database available, whereas most companies use a 30-day update CD; and 2) they go further back than most companies, looking for moves as far back as 48 months.


Because the look and feel of the written survey instrument can drive the response rate, the surveys are printed in professional-looking, seam-stitched booklets. Thoroughbred will use high-quality paper to ensure that ink does not bleed through the pages and will mail surveys in attention-getting #10 business double-window envelopes. The use of double-window envelopes allows the CMS logo to show through.

B.4 Tests of Procedures or Methods


Pilot tests are an essential element of quality control, as they provide an opportunity to thoroughly test sampling and data collection procedures prior to full-scale deployment of a survey. As such, we have executed a pilot test of sampling procedures (described below) and intend to implement a pilot test of data collection procedures prior to the main data collection in order to evaluate data collection procedures and survey content.

Sampling Pilot

CMS invited all states receiving an Adult Quality Measures grant and states participating in the T-MSIS pilot to participate in the CAHPS sampling pilot. Nine states either volunteered to participate or requested additional information about the expectations of the pilot. After preliminary discussions with these states, we recruited five states to participate in the pilot study: Alabama, Oregon, Rhode Island, Tennessee, and West Virginia. States were given a choice of three sampling options:

  1. MSIS Option (Option 1): NORC pulls a sample of eligible beneficiaries from approved state MSIS data. The sample is sent back to the state, so contact information (e.g., name, address, phone number) can be appended. The state then sends the contact information back to NORC.

  2. Data Extract Option (Option 2): The state sends NORC a file of beneficiaries eligible during the six-month period of interest (currently defined as July 1, 2013 through December 31, 2013), and a long-term care claims file for claims during calendar year 2013. NORC uses this file to select a sample of eligible beneficiaries.

  3. State Selects Sample Option (Option 3): The state constructs the sample frame and selects a sample of eligible beneficiaries using NORC’s detailed specifications. The state sends the selected sample to NORC.

Each state chose at least one sampling option to pursue for the pilot study. Two states (Alabama and Tennessee) chose two options because of their interest in gaining experience selecting a sample under Option 3. At least two states participated under each option, which provided a range for the level of effort required under each sampling option (see Attachment 1 for sampling pilot results).

Data Collection Field Test

The data collection field test will be conducted in states drawn from the five states that participated in the sampling pilot and will enable the evaluation of various assumptions used to determine the target sample sizes within the four sampling subgroups of interest. The percentage of disenrolled beneficiaries, the percentage of invalid address and phone records, and the response rates will also be evaluated during the field test. If warranted, target sample sizes and data collection procedures will be adjusted based on the field test results.

The CAHPS Medicaid Survey 5.0H will be the base survey instrument for the field test and main data collection. It was developed by a CAHPS consortium led by AHRQ and tested with a Medicaid population using methods similar to those used in development of the commercial CAHPS survey instrument.

Additionally, we will supplement the CAHPS Medicaid Survey 5.0H with 15 questions to help capture the care and access experiences of Medicaid enrollees in FFS/PCCM and managed care delivery systems. Counted collectively as one of the 15 supplemental questions is a suite of six questions that encompasses the new HHS health disparities data collection standard for disability status under ACA section 4302; we have also replaced two existing questions on race and ethnicity with the new HHS data collection standards for these elements. The supplemental questions consist of previously validated measures derived from national surveys such as the Medical Expenditure Panel Survey (MEPS), the National Health Interview Survey (NHIS), and the National Health and Aging Trends Survey. In a limited number of cases, supplemental questions have been slightly modified to be more applicable to the adult Medicaid population and their experiences accessing care. In such cases, we have done everything possible to maintain the integrity of the original question to preserve question validity and reliability. We will test the survey instrument in the states participating in the field test and modify the content or order of questions as appropriate based on field test findings.

B.5 Statistical and questionnaire design consultants


Throughout the planning of this survey, CMCS has had ongoing input on statistical and methodological issues from ASPE. Expertise from ASPE will continue to be available to CMCS during the implementation phases of the survey. With the recently awarded contract to the NORC/Thoroughbred team, CMCS now has the support of NORC’s senior statisticians and other staff with considerable expertise in survey research.


Statistical consultants for the analysis of the Medicaid CAHPS survey data will be identified through a new solicitation, which will be awarded sometime after the survey is in the field. Ongoing statistical consultation for the survey design, sampling, and weighting of the data will be provided by:


Edward Mulrow, PhD, PStat®

Senior Statistician

NORC at the University of Chicago

(301) 634-9441




3 The target population is adults ages 18 and older as of December 31 of the measurement year. The Data Mart, however, groups 18-year-olds with 15-, 16-, and 17-year-olds. Thus, we could not use this tool to isolate the 18-year-olds and group them with other adults. In general, this will not affect the sample size calculation because including the 18-year-olds increases the population size. As it stands, the population within each of the strata is projected to have more than the required sample size, even without the 18-year-olds. The exception is states that do not have any non-disabled adult managed care enrollees; including 18-year-olds would not change this in those states. Furthermore, most 18-year-olds are covered in children’s eligibility groups; the group of newly eligible adults authorized by the Affordable Care Act is for individuals ages 19 to 65.

4 The dual eligible subgroup includes full-benefit dual eligible beneficiaries.


