Medicare Part C and Part D Data Validation (42 CFR 422.516(g) and 423.514(j)) CMS-10305, OMB 0938-1115
Employing Statistical Methods for Information Collections
All Part C and Part D organizations that report Part C and/or Part D data to CMS per the Part C/Part D Reporting Requirements, regardless of enrollment size, are required to undergo an annual data validation review. The only organization types that the data validation requirement does not apply to are Program of All-Inclusive Care for the Elderly (PACE) organizations and 1833 Cost Plans. Because 100 percent of applicable sponsoring organizations will undergo the data validation process, sampling is not relevant for respondent selection.
Within the data validation process, data validation contractors (DVCs) are encouraged to collect the entire data set (the census) relied on by sponsoring organizations to meet Medicare Part C and D reporting requirements. If the census method proves impractical due to an unusual time burden placed on sponsoring organizations during data extraction, each sponsoring organization is required to perform a sampling task in collaboration with its DVC. In such cases, each sponsoring organization draws an initial sample of at least 150 or 205 administrative records, depending on the Medicare Part C or Part D reporting section. Sample sizes may be larger and are determined by the DVC using standard statistical methodologies. All relevant records associated with these samples are then selected for review (for example, all claims for a random sample of 205 members), as sketched below. In cases where the population is smaller than the required sample size, records for the entire population are provided for evaluation.
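As an illustration only, the record-selection rule described above might be sketched as follows; the function, variable names, and example member IDs are hypothetical, and the 150 and 205 figures are the minimums cited in the text (which minimum applies depends on the reporting section being validated).

```python
import random

# Minimum initial sample sizes cited in the text; the applicable minimum
# depends on the Part C or Part D reporting section being validated.
MINIMUM_SAMPLE_SIZES = (150, 205)

def draw_initial_sample(member_ids, minimum_sample_size, seed=None):
    """Draw a simple random sample of members for a reporting section.

    If the population is smaller than the required sample size, the entire
    population is returned, mirroring the rule described in the text.
    """
    rng = random.Random(seed)
    if len(member_ids) <= minimum_sample_size:
        return list(member_ids)  # census of the small population
    return rng.sample(member_ids, minimum_sample_size)

# Hypothetical example: sample 205 members, then all of their associated
# records (e.g., every claim for each sampled member) would be reviewed.
members = [f"M{i:06d}" for i in range(10_000)]
sampled = draw_initial_sample(members, 205, seed=42)
```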
The data collection procedures are described below, including:

- Statistical methodology for stratification and sample selection
- Estimation procedure
- Degree of accuracy needed for the purpose described in the justification
- Unusual problems requiring specialized sampling procedures
- Any use of periodic (less frequent than annual) data collection cycles to reduce burden
For data warehouse database files requiring sampling, simple random samples are used in the data validation review. The underlying standard is a quantifiable error rate in key fields, which is assumed to have a binomial distribution.¹ The sample sizes are designed to detect error rates of 5% or more, assuming an underlying error rate of 15% or more, with a one-tailed Type I error rate (α) = .05, except for samples based on eligibility; in those cases, because more confidence is needed, α is set at .025. A standard normal approximation to the binomial distribution is used to establish critical values, and a finite population correction factor is included in the sample size calculations. The formula below is solved for n to obtain the sample size:
$$|\Delta| = Z \sqrt{\frac{pq}{n-1} \cdot \frac{N-n}{N}},$$
where |Δ| is the desired precision (5%), N is the number in the population, p is the assumed proportion (.15), q is 1 − p, Z is the appropriate critical value from the normal curve (either 1.645 or 1.96, depending on the Type I error rate), and n is the sample size.
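A minimal sketch of solving the formula above for n, assuming the parameters stated in the text (|Δ| = 0.05, p = 0.15, Z = 1.645 or 1.96); the function name and defaults are illustrative, and the minimum sample sizes cited earlier (150 or 205) may reflect rounding or margins beyond this closed-form solution.

```python
import math

def required_sample_size(N, precision=0.05, p=0.15, z=1.645):
    """Solve |Delta| = Z * sqrt(pq/(n-1) * (N-n)/N) for n.

    N         -- population size (records in the data file)
    precision -- desired precision |Delta| (5% in the text)
    p         -- assumed underlying error rate (15% in the text)
    z         -- normal critical value: 1.645 for alpha = .05 (one-tailed),
                 1.96 for alpha = .025 (samples based on eligibility)
    """
    q = 1.0 - p
    d2 = precision ** 2
    # Rearranging Delta^2 * N * (n - 1) = Z^2 * p * q * (N - n) gives:
    n = N * (z * z * p * q + d2) / (d2 * N + z * z * p * q)
    # Round up, and never exceed the population (census of small files).
    return min(math.ceil(n), N)

# Example: a 50,000-record file at the two critical values in the text.
print(required_sample_size(50_000, z=1.645))  # alpha = .05
print(required_sample_size(50_000, z=1.96))   # alpha = .025
```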
Since extraction of the full census or use of the sampling process is required for all sponsoring organizations and their applicable reporting sections, survey-related issues such as non-response bias are not applicable.
CMS conducted pilot tests of the full methodology, including the sampling of supporting documents, with one Medicare Part C sponsoring organization and one Part D sponsoring organization prior to the first data validation cycle in 2010.
¹ The binomial distribution measures the statistical behavior of percentages.

Nelvis Njei, PharmD
Centers for Medicare & Medicaid Services
Medicare Drug Benefit and Part C and D Data Group
Division of Clinical and Operational Performance
410-786-9937