Implementation of the Medicare Prescription Drug Plan (PDP) and Medicare Advantage (MA) Plan Disenrollment Reasons Survey (CMS-10316)

OMB: 0938-1113

Supporting Statement, Part B: Implementation of the Medicare Prescription Drug Plan (PDP) and Medicare Advantage (MA) Plan Disenrollment Reasons Survey
Contract Number: GS-10F-0275P
Task Order Number: HHSM-500-2014-00389G
March 22, 2016
Prepared for CMS
Beth Simon and Kim DeMichele, Project Officers
RAND Corporation
1776 Main Street
P.O. Box 2138
Santa Monica, CA 90407-2138

TABLE OF CONTENTS

Collection of Information Employing Statistical Methods
1. Respondent Universe and Sample
2. Information Collection Procedures
3. Methods to Maximize Response Rates
4. Tests of Procedures or Methods
5. Statistical and Questionnaire Design Consultants

Collection of Information Employing Statistical Methods
1. Respondent Universe and Sample
Objectives and Basic Approach of the Sampling Plan
The PDP and MA plan disenrollment reasons survey sample is designed to be
representative of the population of beneficiaries who disenroll voluntarily from their PDP
or MA contracts during the survey year. Because the reasons and experiences reported by
disenrollees may vary over the course of the year, the sample should be representative of
the distribution of disenrollment across the months of each contract, anticipating that
weighting will be used to correct for differential response rates across months. Each
month's sample allocation will be calculated as soon as possible after the month's
disenrollment counts become available, so that fielding can proceed with a minimum of
recall bias and loss of salience. Consequently, each month's allocation must be calculated
before disenrollment counts from later months are known.
In short, the sample design strategy is as follows: in each succeeding month, the
new disenrollment data are incorporated into the projections, and the month's sample
allocation for each contract is calculated as the share of the contract's unexpended
sample corresponding to the fraction of disenrollment projected for the remainder of the
year that actually occurred during the current month. In December, we assess whether the
unused sample for each contract is adequate to sample proportionally to December
disenrollment as a fraction of the year, and supplement the sample for those contracts
without enough sample left to meet this target.
Preliminary Analyses of 2011-2013 Data
We analyzed monthly disenrollment data from 2011-2013 to identify patterns in
monthly disenrollment that would inform the approach used to allocate the total sample
across months. The table below shows the percentage of total disenrollees in each year,
by month, for that period.


Percentage of Total Disenrollees in Each Year by Month

Month                      2011        2012        2013
1                          8.09        4.10        4.59
2                          3.12        3.34        2.86
3                          2.85        2.88        2.45
4                          1.86        2.45        3.76
5                          2.49        2.72        2.96
6                          3.03        2.68        2.70
7                          2.12        2.75        2.88
8                          2.48        2.62        2.65
9                          2.39        2.30        2.44
10                         2.42        3.01        3.30
11                         2.53        2.35        2.55
12                         66.61       68.79       66.86
Total disenrollees (N)     2,717,724   3,005,840   3,573,519

About half the disenrollments are in PDPs, and the distributions of PDP and MA
disenrollments differ only moderately, with a slightly higher percentage from PDPs in
December than other months. The distribution across months of the year is similar but not
identical in 2011, 2012 and 2013. In each year, about two thirds of disenrollments occur
in December, traditionally a heavy month for disenrollment (towards the end of the Open
Enrollment Period). When monthly proportions are weighted equally by contract
(corresponding to approximately equal sample sizes per contract and excluding the
“small” contracts), the mean December proportion drops slightly, reflecting the greater
concentration of disenrollments from large-disenrollment (especially PDP) contracts in
December. We found much greater variation at the level of individual contracts. For
example, the mean percentage of disenrollment occurring in December was 58.0 percent
in 2011 and 60.8 percent in 2012, but the 10th percentiles were 23.9 percent and 25.8
percent, and the 90th percentiles were 82.2 percent and 87.1 percent, respectively. The
ratio of the 90th to the 10th percentile by month ranges from 3.5 to 8.1. This suggests
that a sampling plan that assigns fixed sample proportions to all contracts in each month
is likely to be far from optimal.
To understand associations of disenrollment across months, we calculated
correlations of log(N_cm + 1), where N_cm is the number of disenrollments in contract c
in month m, across contracts between all pairs of months (restricted to 486 contracts with
at least 150 disenrollments in each of the years). Correlations between months other than
December in the same year were uniformly high, between 0.90 and 0.97. Even
correlations between these months in different years were high, between 0.84 and 0.94.
However, correlations between December of either year and any non-December month
were much lower: between 0.62 and 0.82. This suggests that the determinants of
disenrollment differ between December, the month accounting for the largest share of
disenrollment for the year and in particular for the Open Enrollment Period, and the other
months, which reflect disenrollment allowed for a more select subgroup of beneficiaries.
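A minimal sketch of this computation (our illustration, not the project's production code; the input layout, with one row per contract and one column per month of a given year, and the use of pandas are assumptions):

    import numpy as np
    import pandas as pd

    def month_correlations(counts: pd.DataFrame, min_yearly: int = 150) -> pd.DataFrame:
        """Pearson correlations of log(N_cm + 1) between all pairs of months.

        counts: one row per contract, one column per month, holding
        disenrollment counts for a single year.
        """
        # Restrict to contracts with enough disenrollment for stable estimates
        # (the text restricts to contracts with at least 150 per year).
        eligible = counts[counts.sum(axis=1) >= min_yearly]
        # Log-transform with +1 to accommodate zero counts.
        log_counts = np.log(eligible + 1)
        # corr() correlates each pair of month columns across contracts.
        return log_counts.corr()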
In further analyses, we explored a variety of loglinear models to predict a month's
disenrollment from that observed in earlier months. This analysis confirmed the minimal
predictive value of December data for the following months. Predictions from most
models closely approximated a simple proportional projection from the total for the
previous months of the cycle. While some gain in accuracy could be obtained with more
complicated models, these gains affected only a limited number of contracts, and the
patterns might not replicate in other years.
Target Sample Sizes
Following the findings of the pilot survey, target sample totals were established at
150 per MA (“H” and “R”) contract and 300 per PDP (“S”) contract to achieve reliable
estimates at the contract level. Some contracts do not attain these targets over the course
of the year; "small" contracts projected to fall into this category are sampled at 100
percent, except those projected to fall below 40 cases per year. For these excluded
contracts, a floor sampling rate was instead established to achieve national (but not
contract-level) representativeness. Also for national representativeness, a floor sampling
rate was established for the included contracts, which increases the target sample size for
some "big" contracts. (The terms "small" and "big" here are defined with respect to
disenrollment, not enrollment.)
Analyses of 2013 data led to some modifications of the sample design.
• MA-PD sample sizes: We increased the MA-PD target sample size per contract
from 150 to 233 (assuming a 43 percent response rate, 233 mailed surveys would
generate 100 responses per contract; see the sketch following this list), which
allows us to generate composite measures at the contract level with reliabilities of
0.70 or higher for most of our composites (all but "left because of benefits and
coverage") for 359 out of a total of 518 MA-PD contracts (69 percent).
o Small contracts: There were 159 small contracts that would have fewer
than 233 disenrollments and for which we would not expect to be able to
generate reliable estimates. For these small contracts, we sent 40 surveys
(to any MA-PD contract with at least 40 disenrollments) to achieve
approximately 16 responses per measure, for contracts to use in internal
quality improvement activities (the results would likely not be reliable on
most items for most small contracts). We estimated there would be 70
contracts with fewer than 40 disenrollments; we would not draw sample
for these contracts, as we would be unable to obtain a floor of 16
responses from them and in many cases might generate only one or two
responses. (Since we cannot report any statistics based on fewer than 11
respondents, we set our floor of 16 expected responses to allow for
random variation in unit response rates as well as item nonresponse.)
• PDP sample sizes: We increased the PDP per-contract sample size from 300 to
465, which will allow us to generate composite measures at the contract level with
reliabilities of 0.70 or higher for the "left due to financial reasons" and "ease of
getting Rx drugs" composites, while coming close to 0.70 reliability for the "left
because of benefits and coverage" composite, for 50 out of 57 PDP contracts. We
would be unlikely to achieve 0.70 reliability for the other PDP composites ("left
due to problems getting information about Rx drugs" and "ease of getting needed
info") or for the overall rating of the PDP, due to limited between-contract
variation in performance on these measures.
o Small contracts: There are 7 PDP contracts with fewer than 465
disenrollees; for these contracts, we sampled 40 disenrollees from each of
the 7 contracts to achieve approximately 16 responses per measure,
providing these small contracts with results for internal quality
improvement even though the results are not statistically reliable.
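As a check on the sample-size arithmetic used in the bullets above (inflating a desired response count by an anticipated response rate), a minimal sketch; the function name is ours:

    import math

    def mailed_sample_target(desired_responses: int, response_rate: float) -> int:
        """Mailed surveys needed so the expected number of completed
        surveys reaches the desired count at the anticipated response rate."""
        return math.ceil(desired_responses / response_rate)

    # 100 desired responses at a 43 percent response rate -> 233 mailed
    # surveys, matching the MA-PD per-contract target above.
    assert mailed_sample_target(100, 0.43) == 233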
Big and Small Contract Sampling Rate

The basic sampling design calls for roughly equal sample sizes per contract
(excluding those with very small numbers of disenrollees), which implies that the
sampling rate in a very large contract is lower than in a smaller one. Such
disproportionate sampling implies inefficiency in estimates of national measures, such as
national breakdowns by race/ethnicity, contract type, etc.; that is, estimates are less
precise than they would be for a sample drawn at the same rate from every contract. This
inefficiency can be controlled by adding sample to bring the lowest sampling rates up to a
larger floor.
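The inefficiency described here can be quantified with the Kish approximation to effective sample size; the document does not name the formula it uses, so the following is a sketch under that assumption, with illustrative numbers:

    import numpy as np

    def effective_sample_size(weights: np.ndarray) -> float:
        """Kish approximation: n_eff = (sum w)^2 / sum(w^2).
        Equal weights give n_eff == n; unequal weights shrink n_eff."""
        return float(weights.sum() ** 2 / (weights ** 2).sum())

    # Illustration: 90,000 respondents, but respondents from very large
    # contracts (sampled at a low rate) carry much larger weights.
    w = np.concatenate([np.full(81_000, 1.0), np.full(9_000, 30.0)])
    print(round(effective_sample_size(w)))  # about 15,000, far below 90,000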
Using disenrollment data from 2012, we simulated various floor sampling rates
for big contracts. With no supplementation of sample size for big contracts beyond the
standard 150 in MA-PD and 300 in PDP, the effective sample size for national estimates
was less than 2,000, meaning that the precision of estimates would be about equivalent to
that of a simple random sample of 2,000 cases, despite a total sample size of over 90,000.
This was due to the concentration of a large fraction of disenrollments in a handful of
contracts with modest total sample sizes. Adding a supplementary 6,000 cases to the
biggest contracts increased the effective sample size to about 8,000, roughly a fourfold
gain. We set a target of 6,000 additional cases over the course of the year in big contracts
and found that this target was projected to be met with a floor sampling rate of 0.50
percent in PDP contracts and 1.34 percent in MA-PD contracts.
A number of contracts were projected to have total disenrollments below the
target sample sizes of 150 or 300 in 2012. There were 106 such contracts, with projected
total disenrollments of 6,917. We sampled these at a rate of four percent for national
representativeness and to obtain a total sample for the group of about the magnitude of a
single PDP sample (n = 300).
Monthly Sample Targets
Once January data become available, we begin to make monthly projections of
disenrollment for the remainder of the year, using the strong associations among
disenrollment counts in months other than December. In calculating the sample for a
month (henceforth the "target month") for a particular contract, suppose A is the total
observed disenrollment in the contract in the current year before the target month, B is
that for the target month, and C is the (as yet unknown) total for months after the target
month through November. Let A*, B*, C* be the corresponding totals across all contracts
in the previous year (here, 2013). Then, under the approximation that the proportions by
month are constant from one year to the next, we estimate C by

Ĉ = C* × (A + B) / (A* + B*).

If the remaining sample to be allocated is n, then the share m1 allocated to the target
month is proportional to the disenrollment in the target month as a share of total
projected disenrollment from the target month through the end of the year,

m1 = n × B / (B + Ĉ).

An alternative target is based on allocation of the entire target sample t proportionally
across months,

m2 = t × B / (A + B + Ĉ),

which yields a larger sample size if too large a sample was drawn in previous months,
leaving too small a residual available for the current month. This might occur, for
example, if a contract had only modest disenrollment in the first part of the year and then
a sudden jump in disenrollment later, when most of the sample had been drawn already.
Finally, we take the maximum of m1 and c·m2 as the sample draw for the contract and
month, where c = 0.8 is a constant representing our tolerance for less-than-proportional
sampling.
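A minimal sketch of this allocation rule under the reconstruction above (our illustration; variable names mirror the text, and the function itself is hypothetical):

    def monthly_allocation(A, B, A_star, B_star, C_star,
                           n_remaining, t_total, c=0.8):
        """A, B: the contract's observed disenrollment before and during the
        target month; A_star, B_star, C_star: corresponding previous-year
        totals across all contracts; n_remaining: sample still unallocated
        for the contract; t_total: the contract's full-year target."""
        # Project post-target-month disenrollment through November, assuming
        # monthly proportions are stable from one year to the next.
        C_hat = C_star * (A + B) / (A_star + B_star)
        # m1: share of the remaining sample, proportional to the target
        # month's share of projected disenrollment from this month onward.
        m1 = n_remaining * B / (B + C_hat)
        # m2: proportional allocation of the entire year's target sample.
        m2 = t_total * B / (A + B + C_hat)
        # Take the larger of m1 and c*m2, where c = 0.8 is the tolerance
        # for less-than-proportional sampling.
        return max(m1, c * m2)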
Design for the remainder of the year proceeds sequentially, month by month,
except for December, which is calculated separately. The December sample is allocated
roughly in proportion to the share that December disenrollment constitutes of each
contract's total disenrollment. Additional sample is allocated to December for those
contracts whose remaining sample allotment falls disproportionately short of that target.
Sample Adjustment for Anticipated Response Rate
Response rates to the disenrollee survey historically have varied substantially
across contracts, leaving some contracts with achieved samples below targets. Response
rates for December disenrollees also are substantially higher than rates for other months.
To reduce the resulting variation in responses per contract and in the month weights, after
target sample sizes are calculated for a month we adjust them for differential response
rates by contract. This is possible because these rates are highly correlated (r > 0.85)
from one year to the next. We multiply each sample target by RM/R, where RM is the
mean of contract response rates and R is a particular contract's projected response rate
from a regression prediction (with a minimum of 20 percent); we assume R = RM for
new contracts. This helps to equalize the number of responses, and hence reliability,
across contracts.
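A minimal sketch of this adjustment (our illustration; the regression that produces the projected response rate is outside the scope of the sketch):

    def adjust_target_for_response_rate(target: float, r_projected: float,
                                        r_mean: float, r_floor: float = 0.20) -> float:
        """Multiply a contract's sample target by RM / R, where R is the
        contract's projected response rate (floored at 20 percent) and
        RM is the mean of contract response rates."""
        r = max(r_projected, r_floor)
        return target * r_mean / r

    # A contract projected to respond at 25 percent, against a 40 percent
    # mean, has its target inflated by 40/25 = 1.6.
    print(adjust_target_for_response_rate(233, 0.25, 0.40))  # 372.8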
2. Information Collection Procedures
Sample members from each of twelve disenrollment periods (one disenrollment
period per month) receive:

• A pre-notification letter (attachments 1-3) indicating that the member will receive
a survey (letters beginning in September 2014 included a Spanish request notice);

• A mailed survey with a cover letter (attachment 4) and a postage-paid return
envelope; and

• A second mailed survey for nonrespondents, with a follow-up cover letter
(attachment 5) and a postage-paid return envelope.

Puerto Rican members receive Spanish translations of all letters, surveys, and
pre-alert postcards.
We mail the first-wave survey approximately seven days after sending the
pre-notification letter, then mail the second survey four to five weeks later to any
nonrespondents. The average survey fielding time for each disenrollment period is
approximately two months.
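A minimal sketch of this fielding timeline (our illustration; exact mailing dates are set operationally, and the four-week interval below is one point in the stated four-to-five-week window):

    from datetime import date, timedelta

    def fielding_schedule(prenotification: date) -> dict:
        """Approximate mailing dates for one disenrollment period."""
        first_survey = prenotification + timedelta(days=7)   # ~7 days later
        second_survey = first_survey + timedelta(weeks=4)    # 4-5 weeks later
        return {"pre-notification letter": prenotification,
                "first-wave survey": first_survey,
                "second survey (nonrespondents)": second_survey}

    print(fielding_schedule(date(2016, 4, 1)))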
3. Methods to Maximize Response Rates
We anticipate an annual response rate of 40 percent, based on prior disenrollment
surveys and recent experience with surveys of Medicare beneficiaries. We will employ
two mail survey waves to minimize nonresponse, and will consider other methods (such
as sending written surveys in both English and Spanish, and telephone follow-up of
nonrespondents) if necessary and if project budgets allow.
4. Tests of Procedures or Methods
No tests of procedures or methods will be undertaken as part of this data collection.
5. Statistical and Questionnaire Design Consultants
The survey, sampling approach, and data collection procedures were designed by
the RAND Corporation under the leadership of –
Cheryl Damberg, Ph.D.

Senior Policy Researcher
RAND Corporation
1776 Main Street
PO Box 2138
Santa Monica, CA 90407-2138
310-393-0411
Data will be collected by the survey vendor CSS Research under the direction of –
Jeff Burkeen
CSS Research
1625 K Street NW, 8th Floor
Washington, DC 20006
202-454-3005


