MCPSS 2006
ANALYSIS OF POTENTIAL NONRESPONSE BIAS AND WEIGHT ADJUSTMENTS TO REDUCE IT

August 4, 2006

Prepared for:
Centers for Medicare & Medicaid Services
7500 Security Boulevard
Baltimore, MD 21244

Prepared by:
Westat
Rockville, MD 20850
Authors & Acknowledgements
Authors of the report are:
Huseyin Goksel and Vasudha Narayanan, from Westat
Acknowledgements:
The authors would like to thank the following individuals for their help and support in preparing this report:
Gladys Valentin and David Clark, Centers for Medicare & Medicaid Services.
Table of Contents

Summary of Sample Design and Data Collection Methods
Response Rates
Nonresponse Adjustments
Adjusting the Weights for Cases with Unknown Eligibility
Adjusting the Weights for Nonresponding Eligible Providers

List of Tables

Table 1: 2006 National Implementation MCPSS Summary Sample Disposition
Table 2: Unweighted and Weighted Response Rates by Contractor Type
ANALYSIS OF POTENTIAL NONRESPONSE BIAS AND WEIGHT ADJUSTMENTS TO REDUCE IT
Nonresponse in surveys creates a potential for bias in the survey estimates. If response propensity is independent of the substantive variables (e.g., the satisfaction scores in this survey), then no bias arises in the estimates. To reduce any potential bias, the sampling weights are usually adjusted for nonresponse, and the weighted estimates produced with these adjusted weights are expected to have much reduced bias, if any.
There are several methods for adjusting the sampling weights for nonresponse. We used a response propensity method. The objective was to use the known characteristics of both respondents and nonrespondents to identify subgroups of the provider population within which response propensity is independent of satisfaction. Thus, we attempted to form subgroups that were homogeneous with respect to response propensity using statistical modeling software [1]. After the subgroups were identified, the weights were adjusted for nonresponse within these subgroups.
We first provide a summary of the goals of the survey, its target population, and sample design. Then, we discuss the achieved response rates. In the final section, we describe the nonresponse adjustments applied to the sampling weights.
Summary of Sample Design and Data Collection Methods
The goal of the 2006 national implementation was to collect quantifiable data on provider satisfaction with the performance of Medicare Fee-for-Service (FFS) Contractors. The target population for the 2006 national implementation consisted of all Medicare providers served by 42 different Medicare FFS Contractors. Some of the contractors provided more than one type of service, so the total number of contractors counted by contractor service type was 53. These Contractors consisted of 26 Fiscal Intermediaries (FIs), 19 Carriers, four Regional Home Health Intermediaries (RHHIs), and four Durable Medical Equipment Contractors (DMERCs).
The goal of the sample design was to obtain valid and reliable estimates at the contractor level and to support statistical tests of differences in mean satisfaction scores between contractors. The targeted sample size was 400 completes for each contractor. The contractor sample sizes were allocated proportionately across provider types. A small number of provider type strata within contractors had to be oversampled to attain a minimum of 30 completes in each stratum.
The provider records were further stratified implicitly by sorting the records by additional provider characteristics. A sample of providers was drawn with equal probability and systematically within each provider type stratum within contractors.
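As an illustration of this selection step, a minimal Python sketch of equal-probability systematic sampling within one stratum might look like the following; the record list and sample size are hypothetical and this is not the actual MCPSS selection program.

```python
# A minimal sketch of equal-probability systematic sampling within a stratum,
# assuming a hypothetical list of provider records already sorted by the
# implicit stratification variables.
import random

def systematic_sample(records, n):
    """Select n records with equal probability using a random start
    and a fixed skip interval."""
    N = len(records)
    interval = N / n                      # sampling interval (may be fractional)
    start = random.uniform(0, interval)   # random start within the first interval
    picks = [min(int(start + k * interval), N - 1) for k in range(n)]
    return [records[i] for i in picks]

# Example: draw 12 providers from a sorted stratum of 300.
stratum = [f"provider_{i:04d}" for i in range(300)]
sample = systematic_sample(stratum, 12)
```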
Although Web was the primary mode of data collection, the 2006 national implementation was a multimode study. Initially, each sampled provider received a survey notification packet in the mail which provided information about the MCPSS and instructions on how to access and complete the online survey instrument. Providers also had the option to request a paper copy of the survey instrument any time during the study, and could mail or fax back their completed survey instruments. Westat followed up by telephone with providers who did not complete the Web survey or paper copy.
Regardless of the mode of data collection, all versions of the survey instrument contained the same 76 questions, presented in the same order. The survey instrument covered seven key areas of the interface between the providers and their Medicare FFS Contractors: provider inquiries, provider communication, claims processing, appeals, provider enrollment, medical review, and provider audit and reimbursement. Not all service areas were relevant for every Medicare FFS Contractor; the survey instruments were therefore designed to ask only about the services the Medicare FFS Contractor rendered to its providers.
Response Rates
The 2006 national implementation achieved a final survey response rate of 64.8 percent. Table 1 shows the sample disposition and the unweighted response rate, and Table 2 compares unweighted and weighted response rates by contractor type.
Table 1: 2006 National Implementation MCPSS Summary Sample Disposition

Disposition                  | Count
Total Sample                 | 28,835
Completed Surveys            | 16,121
Partially Completed Surveys  | 792
Refusals                     | 628
Ineligibles                  | 1,054
Bundles                      | 2,548
Other Nonresponse            | 5,095
Unknown Eligibility          | 2,597
Response Rate                | 64.8%
Table 2: Unweighted and Weighted Response Rates by Contractor Type

Contractor Type | Unweighted (%) | Weighted (%)
FI              | 64.1           | 66.2
Carrier         | 61.4           | 61.6
RHHI            | 76.8           | 76.2
DMERC           | 72.9           | 72.7
Overall         | 64.8           | 62.9
The unweighted response rate was calculated using the following formula:

$$\text{Response Rate} = \frac{\text{Completes}}{\text{Completes} + \text{Partial Completes} + \text{Refusals} + \text{Other Nonresponse} + \text{Unknown Eligibility} \times \text{Eligibility Rate}}$$

where the Eligibility Rate was calculated as:

$$\text{Eligibility Rate} = \frac{\text{Completes} + \text{Partial Completes} + \text{Refusals} + \text{Other Nonresponse}}{\text{Completes} + \text{Partial Completes} + \text{Refusals} + \text{Other Nonresponse} + \text{Ineligibles} + \text{Bundles}}$$
The disposition categories listed in the above formulae were defined as follows:
Completed surveys are cases where the respondent provided a survey response to at least one item in Section C “Claims Processing” and at least one item in any other survey section.
Partially completed surveys are cases where the respondent did not provide a survey response to any item in Section C “Claims Processing” but did provide a response to items in one or more other sections.
Refusals are cases where a respondent declined to participate in the 2006 national implementation study and was unwilling to provide a response to any survey item.
Other Nonresponse cases are those where we located correct contact information but were not able to establish contact with the provider (e.g., ring-no-answers, answering machines, busy signals).
Ineligibles are cases where:
A respondent did not fit the eligibility criteria (e.g., has not had a Medicare claim in the past 6 months); or
A respondent is out of scope of the study (e.g., the facility has closed or its contract terminated).
Bundles are cases where a respondent is affiliated with multiple facilities; if the respondent completes a single survey to represent multiple facilities, all other facilities are linked to that completed survey.
Unknown Eligibility are cases where:
We had a telephone contact number but were unable to communicate with the respondent due to language issues; or
We did not have any correct contact information (i.e., neither a phone number nor a mailing address) available to contact the respondent. These cases are also known as nonlocatables. About 98.9 percent of the cases with unknown eligibility were nonlocatable.
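As a check, the published rates can be reproduced from the Table 1 disposition counts using the two formulas above; the short Python sketch below applies them directly.

```python
# Reproduce the unweighted response rate from the Table 1 disposition counts.
completes, partials, refusals = 16_121, 792, 628
other_nonresponse, unknown_eligibility = 5_095, 2_597
ineligibles, bundles = 1_054, 2_548

known_eligible = completes + partials + refusals + other_nonresponse
eligibility_rate = known_eligible / (known_eligible + ineligibles + bundles)

response_rate = completes / (known_eligible + unknown_eligibility * eligibility_rate)

print(f"Eligibility rate: {eligibility_rate:.1%}")   # ~86.3%
print(f"Response rate:    {response_rate:.1%}")      # 64.8%
```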
The weighted response rate takes into account the effect of differential sampling rates. It also adjusts for multiple provider facilities that were associated with some of the satisfaction score reporting units in the survey.
Nonresponse Adjustments
The sampling weights were adjusted to reduce any potential bias caused by not obtaining a completed survey instrument from all sampled providers. A separate adjustment factor was computed within each nonresponse adjustment cell, and a separate set of nonresponse adjustment cells was formed within each Contractor.
If response propensity is independent of satisfaction and other substantive variables within the nonresponse adjustment cells, then the nonresponse-adjusted weights yield unbiased estimates. There are several alternative methods of forming nonresponse adjustment cells to achieve this result. We used the Chi-Square Automatic Interaction Detector (CHAID) software (SPSS, 1993) [2] to guide us in forming the cells. CHAID partitions the data into subsets that are homogeneous with respect to response propensity. To accomplish this, it first merges values of each individual predictor that are statistically homogeneous with respect to response propensity, keeping heterogeneous values separate. It then selects the most significant predictor (the one with the smallest p-value) as the best predictor of response propensity, forming the first branch of the decision tree. It continues applying the same process within the subgroups (nodes) defined by the "best" predictor chosen in the preceding step. This process continues until no significant predictor is found or a specified minimum node size (about 20 cases) is reached. The procedure is stepwise and creates a hierarchical, tree-like structure.
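The actual cells were formed with the SPSS CHAID procedure. Purely as an illustration of the splitting logic (and omitting CHAID's category-merging step), a simplified sketch in Python might look like the following, where the DataFrame, column names, and thresholds are assumptions rather than the MCPSS production code.

```python
# Illustrative sketch of CHAID-style cell formation (not the SPSS CHAID run
# used for MCPSS): at each node, pick the categorical predictor whose
# association with the response indicator is most significant, split on it,
# and recurse until no predictor is significant or nodes become too small.
import pandas as pd
from scipy.stats import chi2_contingency

MIN_NODE_SIZE = 20      # minimum cases per node, as in the report
ALPHA = 0.05            # significance threshold for splitting

def best_predictor(df, predictors, outcome="responded"):
    """Return (predictor, p-value) for the most significant predictor."""
    best = (None, 1.0)
    for p in predictors:
        table = pd.crosstab(df[p], df[outcome])
        if table.shape[0] < 2 or table.shape[1] < 2:
            continue
        _, pval, _, _ = chi2_contingency(table)
        if pval < best[1]:
            best = (p, pval)
    return best

def form_cells(df, predictors, path=()):
    """Recursively partition df; return a list of (path, row index) cells."""
    pred, pval = best_predictor(df, predictors)
    if pred is None or pval >= ALPHA:
        return [(path, df.index)]
    cells = []
    for value, node in df.groupby(pred):
        if len(node) < MIN_NODE_SIZE:
            cells.append((path + ((pred, value),), node.index))
        else:
            remaining = [p for p in predictors if p != pred]
            cells.extend(form_cells(node, remaining, path + ((pred, value),)))
    return cells

# Hypothetical usage: 'sample' has one row per sampled provider with the
# predictor columns plus a 0/1 'responded' indicator.
# cells = form_cells(sample, ["provider_type", "ownership", "claims_size"])
```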
We developed two separate models (and thus a separate set of adjustment cells) to predict (1) propensity of determining eligibility among all sample cases, and (2) propensity of response among the eligible providers. The cases with undetermined eligibility included mostly nonlocatables. We believe that the provider characteristics influencing locatability and response after the provider is identified as eligible can be quite different.
All sampled providers were classified into four major survey disposition categories based on the outcome of the survey: (1) respondent (completed instruments); (2) eligible nonrespondent (refusals, other nonresponse, and partial completes); (3) ineligible (ineligibles and bundles); and (4) unknown eligibility (mostly nonlocatables).
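To make the grouping concrete, a minimal sketch of the rollup from final survey dispositions into the four modeling categories might look like the following; the disposition labels are hypothetical.

```python
# Illustrative rollup of final survey dispositions into the four categories
# used for the propensity models; the disposition labels are hypothetical.
CATEGORY_BY_DISPOSITION = {
    "complete":          "respondent",
    "partial_complete":  "eligible nonrespondent",
    "refusal":           "eligible nonrespondent",
    "other_nonresponse": "eligible nonrespondent",
    "ineligible":        "ineligible",
    "bundle":            "ineligible",
    "language_barrier":  "unknown eligibility",
    "nonlocatable":      "unknown eligibility",
}

def model_category(disposition: str) -> str:
    """Map a final survey disposition to its modeling category."""
    return CATEGORY_BY_DISPOSITION[disposition]
```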
Variables employed in forming nonresponse adjustment cells had to be known for both responding and non-responding providers. The variables listed below were used as potential predictors in modeling response propensity with CHAID:
For Financial Intermediaries (FIs):
Provider type sampling stratum
Ownership type (voluntary, proprietary, government, unknown)
Number of beds categories (hospitals and skilled nursing facilities only)
Number of claims, size categories
Provider type derived from provider ID
FI service area (some contractors included several different service areas)
For Carriers:
Provider type sampling stratum
Major specialty of the provider
Number of claims, size categories
Carrier service area (Carrier numbers)
For RHHIs:
Ownership Type (voluntary, proprietary, government, unknown)
Number of claims, size categories
For DME Suppliers:
Provider specialty
Number of claims, size categories
After creating the two sets of adjustment cells, we carried out separate weight adjustments to compensate for providers with unknown eligibility and for nonresponding eligible providers. The weight adjustment factor for undetermined eligibility, within each adjustment cell, was computed as the ratio of the weighted (by the base weight) total number of sampled providers to the weighted number of providers whose eligibility could be determined. This adjustment assumes that the rate of eligibility among the cases with unknown eligibility is the same as among the cases with known eligibility within each adjustment cell. The nonresponse adjustment factor was computed as the ratio of the weighted (after adjusting for undetermined eligibility) number of eligible (responding plus eligible nonresponding) providers to the weighted number of responding providers within each nonresponse adjustment cell.
Although nonresponse adjustment can reduce bias, it may at the same time increase the variance of the estimates. Small adjustment cells and/or low response rates (i.e., large nonresponse adjustment factors) can inflate the variance and give rise to unstable estimates. To prevent an undue increase in variance, and thereby an adverse effect on the mean square error of the estimates, we attempted to keep each cell above a minimum size and to avoid large adjustment factors. Next, we discuss each weight adjustment in detail and present its formula.
Adjusting the Weights for Cases with Unknown Eligibility
First, the weights were adjusted to compensate for cases with unknown eligibility. The adjustment factor for adjustment class c, $\gamma_c$, was computed as:

$$\gamma_c = \frac{\sum_{i \in S_{1c} \cup S_{2c} \cup S_{3c} \cup S_{4c}} W_{ci}}{\sum_{i \in S_{1c} \cup S_{2c} \cup S_{3c}} W_{ci}}$$
where,
$S_{1c}$ is the set of responding cases (completes) in adjustment class c,
$S_{2c}$ is the set of eligible nonresponding cases in adjustment class c,
$S_{3c}$ is the set of ineligible cases in adjustment class c,
$S_{4c}$ is the set of sampled cases with undetermined eligibility in adjustment class c, and
$W_{ci}$ is the base weight for provider record i in adjustment class c.
Then, the weight adjusted for unknown eligibility cases for sampled record i in adjustment class c, $W^{(1)}_{ci}$, was computed as:

$$W^{(1)}_{ci} = \gamma_c \, W_{ci}, \quad i \in S_{1c} \cup S_{2c} \cup S_{3c},$$

with $W^{(1)}_{ci} = 0$ for cases with undetermined eligibility ($i \in S_{4c}$).
Adjusting the Weights for Nonresponding Eligible Providers
After forming the nonresponse adjustment cells, the weights were adjusted to compensate for the eligible nonresponding providers. The nonresponse adjustment factor for cell $\alpha$, $\delta_\alpha$, was computed as:

$$\delta_\alpha = \frac{\sum_{i \in S_{1\alpha} \cup S_{2\alpha}} W^{(1)}_{\alpha i}}{\sum_{i \in S_{1\alpha}} W^{(1)}_{\alpha i}}$$
where,
$S_{1\alpha}$ is the set of responding providers (completes) in adjustment class $\alpha$,
$S_{2\alpha}$ is the set of eligible nonresponding providers in adjustment class $\alpha$, and
$W^{(1)}_{\alpha i}$ is the weight adjusted for unknown eligibility cases for provider i in adjustment class $\alpha$.
Then, we computed the final weight as the product of the weight adjusted for unknown eligibility and the nonresponse adjustment factor. The final sample weight for provider i in nonresponse adjustment class $\alpha$, $W^{F}_{\alpha i}$, was computed as follows:

$$W^{F}_{\alpha i} = \delta_\alpha \, W^{(1)}_{\alpha i}, \quad i \in S_{1\alpha},$$

with $W^{F}_{\alpha i} = 0$ for eligible nonresponding providers ($i \in S_{2\alpha}$).
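For concreteness, a minimal pandas sketch of the two adjustment stages, assuming a frame with one row per sampled provider carrying its base weight, its adjustment-cell labels, and its disposition category, might look like the following; the column names are illustrative, not those of the MCPSS weighting files.

```python
# Minimal sketch of the two-stage weight adjustment described above,
# assuming a pandas DataFrame with illustrative columns:
#   base_weight, elig_cell, nr_cell, category
# where category is one of: "respondent", "eligible nonrespondent",
# "ineligible", "unknown eligibility".
import pandas as pd

def adjust_weights(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()

    # Stage 1: unknown-eligibility adjustment within each eligibility cell.
    known = df["category"] != "unknown eligibility"
    gamma = (df.groupby("elig_cell")["base_weight"].sum()
             / df[known].groupby("elig_cell")["base_weight"].sum())
    df["w1"] = 0.0
    df.loc[known, "w1"] = (df.loc[known, "base_weight"]
                           * df.loc[known, "elig_cell"].map(gamma))

    # Stage 2: nonresponse adjustment within each nonresponse cell,
    # spreading the weight of eligible nonrespondents over respondents.
    eligible = df["category"].isin(["respondent", "eligible nonrespondent"])
    resp = df["category"] == "respondent"
    delta = (df[eligible].groupby("nr_cell")["w1"].sum()
             / df[resp].groupby("nr_cell")["w1"].sum())
    df["final_weight"] = 0.0
    df.loc[resp, "final_weight"] = (df.loc[resp, "w1"]
                                    * df.loc[resp, "nr_cell"].map(delta))
    return df
```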
[1] Göksel, H., Judkins, D.R., and Mosher, W.D. (1992). Nonresponse adjustments for a telephone follow-up to a national in-person survey. Journal of Official Statistics, Statistics Sweden, 8(2).
[2] SPSS (1993). SPSS for Windows: CHAID, Release 6.0, User's Guide. Jay Magidson/SPSS Inc.