CMS memo 11-10-08

Medicare Demonstration Ambulatory Care Quality Measure Performance Assessment Tool ("PAT")

OMB: 0938-0941

RESPONSES TO OMB’s QUESTIONS ON ICR 0938-0941:

Medicare Demonstration Ambulatory Care Quality Measure Performance Assessment Tool ("PAT") – CMS-10136.



  1. Can CMS provide a list of the performance measures and how they are defined? For example, on the diabetes screen shot, does “eye exam” refer to the fact that the primary care physician provided a referral to the patient for an ophthalmic exam, that the patient actually got an eye exam at the ophthalmologist, that the primary care physician performed a basic exam of an undilated pupil, or something else? 


RESPONSE:

The attached document lists each of the measures. In addition, detailed measure specifications are available on the MCMP Demonstration web site:

http://www.cms.hhs.gov/DemoProjectsEvalRpts/MD/itemdetail.asp?filterType=dual,%20keyword&filterValue=Care%20Management%20Performance&filterByDID=0&sortByDID=3&sortOrder=ascending&itemID=CMS1198950&intNumPerPage=10


Please note that these measures are updated annually based on changes made by the measure owners (NCQA, the AMA, or CMS). For example, new diagnostic code inclusions or exclusions may be needed, or some of the standards may change.



  2. Are the results being risk-adjusted? If so, what is the risk-adjustment methodology? If not, why not?


RESPONSE:

The measures are not risk adjusted. In many cases these are process measures (e.g., “Did the patient get a mammogram or flu shot?”), for which risk adjustment is not appropriate in the way it would be for an outcome measure. Even for some of the outcome measures (e.g., blood pressure control for diabetics), there is no generally accepted, uniform method of risk adjustment.


However, the data collection and scoring systems used in the demonstrations recognize that there may be situations where some of the measures are clinically inappropriate for some patients. For example, depending on the measure owner, some measure specifications allow a doctor to exclude a patient from the denominator of a measure if there are medical or other patient- or system-specific reasons why the measure (e.g., a medication) would be inappropriate. In addition, our scoring systems for the EHR and MCMP Demonstrations, where smaller numbers of patients in any category may cause more variability in the results, do not require 100% scores on measures in order to achieve the full incentive payment.
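To illustrate the denominator-exclusion mechanism described above, the following is a minimal sketch in Python (the function, data, and exclusion flag are hypothetical and are not part of any demonstration's measure specifications or scoring software):

```python
# Minimal sketch (hypothetical): how a denominator exclusion affects a
# reported process-measure rate. Not part of the demonstration software.

def measure_rate(patients):
    """patients: list of dicts with 'met' (numerator criterion satisfied)
    and 'excluded' (valid medical/patient/system-specific exclusion) flags."""
    denominator = [p for p in patients if not p["excluded"]]
    numerator = [p for p in denominator if p["met"]]
    return len(numerator) / len(denominator) if denominator else None

panel = [
    {"met": True,  "excluded": False},
    {"met": False, "excluded": True},   # e.g., medication contraindicated
    {"met": False, "excluded": False},
    {"met": True,  "excluded": False},
]
print(measure_rate(panel))  # 2/3 ≈ 0.667 rather than 2/4 = 0.5 without the exclusion
```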



  3. How does CMS keep track of attrition? Does CMS evaluate the reasons for the attrition? For example, why did the practices drop out of the MCMP demonstration project (page 8 of the supporting statement)?


RESPONSE:

Yes, CMS tracks which practices drop out of demonstrations. Part of the evaluation conducted by the independent evaluator will be to examine reasons for attrition.



  4. Please provide more detail in Part B of the supporting statement. For example, are the measures reported by practice or by individual physician within the practice? What determines the number of patients a physician needs to report on? What determines whether the physician needs to report on all eligible patients vs. a sample of patients? What is the specific sampling methodology and how does it work in concert with the maximum number of cases?


RESPONSE:

The measures are reported at the practice level and not at the individual physician level. As part of the preparation for the quality measure data collection process, our contractors (RTI) use claims data to determine which beneficiaries assigned to each practice are eligible for reporting, based on the unique specifications for each measure. For the PGP demonstration, where there may be many hundreds or thousands of patients eligible for reporting on a given measure, the statisticians at RTI have determined that reporting on 411 patients constitutes a statistically reliable sample size for reporting purposes. For the MCMP and EHR demonstrations, where the potential total pool of eligible patients is much smaller due to the smaller number of physicians in each practice, their statisticians have calculated that reporting on 218 patients constitutes a statistically reliable sample size. Practices with fewer than this number of patients eligible for a specific measure report on all eligible patients.

RTI randomly selects the patients to be reported upon for any given practice from among all those that are eligible for each measure and provides each practice with an ordered list of patients to report on. To allow for exceptions where, for example, a practice cannot confirm that a patient has a particular condition or is unable to find the patient chart, RTI provides a 50% “oversample.” Thus, a practice that is unable to locate the chart for patient #57 on a diabetes measure (perhaps because the patient has moved and the chart has been sent to the new provider) can skip patient #57 and continue reporting with patients #58 through #219 in order to meet the 218-patient sample size. We require that the practice follow the order of the patients to avoid potential “cherry picking” of which patients are reported on and to ensure that the patients reported on truly represent a random sample.
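The sampling and oversample procedure described above can be illustrated with a short sketch (illustrative only; the function names, seed, and exclusion rule below are assumptions made for the example and are not part of RTI's actual software):

```python
import random

def build_reporting_list(eligible_ids, target=218, oversample=0.5, seed=None):
    """Return an ordered reporting list: the target sample plus a 50% oversample.

    If fewer patients are eligible than the target, all of them are listed.
    """
    rng = random.Random(seed)
    ordered = eligible_ids[:]
    rng.shuffle(ordered)
    n = min(len(ordered), int(target * (1 + oversample)))
    return ordered[:n]

def report_in_order(reporting_list, can_report, target=218):
    """Walk the ordered list, skipping patients that qualify for an exception
    (e.g., chart unavailable), until the target sample size is reached."""
    reported = []
    for patient_id in reporting_list:
        if len(reported) == target:
            break
        if can_report(patient_id):
            reported.append(patient_id)
    return reported

# Example: 1,000 eligible beneficiaries; every 40th chart is unavailable.
eligible = list(range(1, 1001))
ordered_list = build_reporting_list(eligible, target=218, seed=42)
sample = report_in_order(ordered_list, lambda pid: pid % 40 != 0, target=218)
print(len(ordered_list), len(sample))  # 327 patients listed, up to 218 reported
```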



  5. Please provide a breakdown of the offices and physician practices that have signed up to take part in each of the demonstration projects. How were they selected and how representative are they, in terms of the patients they care for, size, geographic location, specialty/general practice, etc.?


RESPONSE:

The 10 large multi-specialty group practices participating in the PGP demonstration were selected via a competitive process in 2002-2003.


In order to participate in the MCMP demonstration, practices had to be participating in the QIO Doctor’s Office Quality – Information Technology (DOQ-IT) program in the four states that piloted that program (MA, AR, CA, and UT). In addition, only small- to medium-sized primary care practices (approximately 10 or fewer doctors) were eligible to apply. Initial enrollment in the demonstration was 699 practices. The following numbers of practices and physicians were enrolled in the demonstration when it became operational on July 1, 2007:




Of these, 34% were solo practices, 43% had 2-4 physicians, 20% had 5-9 physicians, and 3% had 10 or more physicians in the practice.


We are currently in the process of recruiting practices from the first 4 Phase I sites in the EHR Demonstration: Maryland and the District of Columbia; southwest Pennsylvania (11 counties in the Pittsburgh area); Louisiana; and South Dakota (including selected border counties in Iowa, North Dakota, and Minnesota). We hope to recruit up to 200 small- to medium-sized primary care practices (<= 20 providers) in each area, with half randomly assigned to the treatment group and half assigned to the control group. We will not have information on actual enrollment until early 2009.



  6. On page 2 of the supporting statement, it says physicians will receive incentive payments in part based on the degree of HIT functionality used to coordinate care for the EHR demonstration project. However, since the existing PAT instrument was developed prior to the EHR demonstration project, how will the PAT enable CMS to determine the extent to which physicians are using HIT to coordinate care and the functionalities their EHR systems have?


RESPONSE:

The PAT will not be used to determine the degree of HIT functionality used under the EHR Demonstration. CMS is developing an Office Systems Survey (OSS) for this purpose. This survey will be submitted for review by OMB under the PRA process when it is finalized. Preliminary documentation about the OSS has previously been supplied to OMB at its request.



  7. On page 2 of the supporting statement, CMS refers to 3 existing demonstrations – PGP, MCMP, and “one new physician P4P demonstration.” Is the third demonstration program referring to the EHR demonstration program?


RESPONSE:

Yes.



  8. How will the data in this ICR be used to help “evaluate the effectiveness of these payment models and provide insight into the most appropriate way for the agency to collect clinical information”? (page 3 of the supporting statement). Are the physicians and practices taking part in these demonstration programs able to provide CMS feedback on the PAT or on the demonstration more generally? Is CMS validating the clinical data received through the PAT?


RESPONSE:

Each of these demonstrations will have an independent evaluation conducted to determine the impact of the various payment models on improving the quality of care, as measured by these clinical quality measures, and on the cost to the Medicare program, as measured by claims data. In addition, the evaluation will look at various implementation issues, including how practices are able to report the clinical quality measures and any challenges they face. The evaluations will include data from surveys and site visits as well as quality measures and claims data. In addition, as part of the annual data collection process, our contractors provide technical assistance to the practices and collect feedback that is then used to improve and enhance training, as appropriate. During the data collection process, we also hold regular “open door” conference calls with demonstration participants to get feedback and respond to questions.



  9. Please send us a copy of the waiver OMB provided CMS for practices participating in the demonstrations and PQRI program (page 4 of the supporting statement). OMB appreciates the effort CMS has made to minimize the burden on physicians, and it is probably worth including documentation of this waiver as part of this ICR package.


RESPONSE:

The relevant documents are attached under separate cover.



  10. Were any comments received on the 60 day or 30 day FR notices?


RESPONSE:

No comments were received.



  11. What have been the results of the P4P projects so far? What percentage of the physicians and practices are attaining the results needed to earn the incentive payment?


RESPONSE:

The PGP demonstration recently made payments for incentives earned during the second year of the demonstration. At the end of the second performance year, all 10 of the participating physician groups continued to improve the quality of care for chronically ill patients by achieving benchmark or target performance on at least 25 out of 27 quality measures for patients with diabetes, coronary artery disease, and congestive heart failure. Five of the physician groups achieved benchmark quality performance on all 27 quality measures. The groups demonstrated improved quality of care delivered to Medicare beneficiaries on the chronic conditions measured by increasing their quality scores an average of 9 percentage points across the diabetes mellitus measures, 11 percentage points across the heart failure measures, and 5 percentage points across the coronary artery disease measures. As a result, all physician groups received at least 96 percent of their PQRI incentive payments, with five groups earning 100 percent of their incentive payments. The 10 physician groups earned PQRI incentive payments totaling $2.9 million. In addition, four physician groups received $13.8 million for generating shareable savings under the demonstration’s financial methodology, of which approximately 40% was tied to their performance on the above quality measures.

For the MCMP demonstration, the first demonstration payments for baseline reporting (“pay for reporting”) were issued last spring. In total, we paid out approximately $1.5 million with an average payment per practice of $2,505; 88% of practices reporting received the maximum incentive for which they were eligible.


The EHR Demonstration is not yet operational, so no payments have been made.



  12. The burden on page 7 appears to be off. If the PGP demonstration has 10 participants and each response takes 79 hours, doesn’t this come to 790 hours rather than 10 hours? Shouldn’t the total burden for 2011 be 790 + 650 + 400 = 1840?


RESPONSE:

The table on page 7 reflects hours per respondent. There is no column for total hours by demonstration, although that figure is part of the calculation for the estimated total cost in each year. Thus, for PGP there are 10 respondents, each of whom we estimate requires 79 hours to report the data, for a total of 790 hours as you note. At $55/hour, those 790 hours will cost $43,450 per year. The other rows are calculated similarly (10 x 79 x $55 = $43,450; 650 x 24 x $55 = $858,000; etc.). The formula you suggest would be incorrect because it adds the total hours for PGP to the respondent counts for the other demonstrations. While the total hours burden (a column not shown) for PGP would be 790 hours (10 x 79 hours/respondent), the total burden for 2011 for the MCMP demonstration would be 15,600 hours (650 x 24 hours/respondent), and the total for the EHR demonstration would be 9,600 hours (400 x 24 hours/respondent). Multiplying each of these numbers by $55/hour yields the total financial burden shown.
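For reference, the arithmetic behind those figures can be reproduced with a short script (illustrative only; the respondent counts, hours per respondent, and $55/hour rate are those cited above):

```python
# Illustrative recalculation of the page 7 burden figures, using the
# respondent counts and hours per respondent cited in the response above.
HOURLY_RATE = 55  # dollars per hour

demos = {
    "PGP":  {"respondents": 10,  "hours_per_respondent": 79},
    "MCMP": {"respondents": 650, "hours_per_respondent": 24},
    "EHR":  {"respondents": 400, "hours_per_respondent": 24},
}

for name, d in demos.items():
    total_hours = d["respondents"] * d["hours_per_respondent"]
    total_cost = total_hours * HOURLY_RATE
    print(f"{name}: {total_hours:,} hours, ${total_cost:,}")
# PGP: 790 hours, $43,450
# MCMP: 15,600 hours, $858,000
# EHR: 9,600 hours, $528,000
```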

Attachment 1

Table 1: Clinical Quality Measures




Diabetes:
  DM-1 HbA1c Management
  DM-2 HbA1c Control
  DM-3 Blood Pressure Management
  DM-4 Lipid Measurement
  DM-5 LDL Cholesterol Level
  DM-6 Urine Protein Testing
  DM-7 Eye Exam
  DM-8 Foot Exam

Heart Failure:
  HF-1 Left Ventricular Function Assessment
  HF-2 Left Ventricular Ejection Fraction Testing
  HF-3 Weight Measurement
  HF-5 Patient Education
  HF-6 Beta Blocker Therapy
  HF-7 ACE Inhibitor/ARB Therapy
  HF-8 Warfarin Therapy for Patients with AF

Coronary Artery Disease:
  CAD-1 Antiplatelet Therapy
  CAD-2 Drug Therapy for Lowering LDL Cholesterol
  CAD-3 Beta Blocker Therapy – Prior MI
  CAD-5 Lipid Profile
  CAD-6 LDL Cholesterol Level
  CAD-7 ACE Inhibitor/ARB Therapy

Preventive Care (measured on population with specified chronic diseases):
  PC-1 Blood Pressure Measurement
  PC-5 Breast Cancer Screening
  PC-6 Colorectal Cancer Screening
  PC-7 Influenza Vaccination
  PC-8 Pneumonia Vaccination


Diabetes Mellitus

  1. HbA1c Management: Testing (DM-1): The percentage of patients 18-75 years of age with diabetes (type 1 or type 2) who had Hemoglobin A1c (HBA1c) testing

  2. HbA1c Management: Poor Control (DM-2): The percentage of patients 18-75 years of age with diabetes (type 1 or type 2) who had HbA1c in poor control (>9.0%)

  3. Blood Pressure Management (DM-3): The percentage of patients 18-75 years of age with diabetes (type 1 or type 2) who had a BP < 140/80 mmHg

  4. Lipid Management Testing: (DM-4): The percentage of patients 18-75 years of age with diabetes (type 1 or type 2) who had LDL-C screening performed

  5. Lipid Management: Control <100 mg/dl (DM-5): The percentage of patients 18-75 years of age with diabetes (type 1 or type 2) who had an LDL-C level <100 mg/dl

  6. Urine Protein Screening (DM-6): The percentage of patients 18-75 years of age with diabetes (type 1 or type 2) who had medical attention for nephropathy

  7. Eye Examination (DM-7): The percentage of patients 18-75 years of age with diabetes (type 1 or type 2) who had an eye exam (retinal) performed

  8. Foot Examination (DM-8): The percentage of patients 18-75 years of age with diabetes (type 1 or type 2) who had a foot exam (visual inspection, sensory exam with monofilament, and pulse exam)

Congestive Heart Failure

  1. LVF Assessment (HF-1): Percentage of patients with quantitative or qualitative results for LVF assessment

  2. Left Ventricular Function (LVF) Testing (HF-2): Percentage of patients with LVF testing during the current year for patients hospitalized with a principal diagnosis of HF during the current year

  3. Weight Measurement (HF-3): Percentage of HF patient visits with weight measurement recorded

  4. Patient Education (HF-5): Percentage of patients who were provided with patient education on disease management and health behavior changes during one or more visit(s)

  5. Beta-Blocker Therapy (HF-6): Percentage of patients who were prescribed beta-blocker therapy

  6. ACE Inhibitor/ARB Therapy (HF-7): Percentage of patients who were prescribed ACE inhibitor or ARB therapy

  7. Warfarin Therapy for Patients with Atrial Fibrillation (HF-8): Percentage of patients with paroxysmal or chronic atrial fibrillation who were prescribed warfarin therapy

Coronary Artery Disease

  1. Antiplatelet Therapy (CAD-1): Percentage of patients who were prescribed antiplatelet therapy

  2. Drug Therapy for Lowering LDL Cholesterol (CAD-2): Percentage of patients who were prescribed a lipid-lowering therapy (based on current ACC/AHA guidelines)

  3. Beta-Blocker Therapy - Prior Myocardial Infarction (MI) (CAD-3): Percentage of patients with prior MI at any time who were prescribed beta-blocker therapy

  4. Lipid Profile (CAD-5): Percentage of patients who received at least one lipid profile (or all component tests)

  5. LDL Cholesterol Level (CAD-6): Percentage of patients with most recent LDL cholesterol <100 mg/dl

  6. ACE Inhibitor or ARB Therapy (CAD-7): Percentage of patients who also have diabetes and/or LVSD who were prescribed ACE inhibitor or ARB therapy

Preventive Care

  1. Blood Pressure Measurement (PC-1): Percentage of patient visits with blood pressure (BP) measurement recorded

  2. Breast Cancer Screening (PC-5): The percentage of women 40-69 years of age who had a mammogram to screen for breast cancer

  3. Colorectal Cancer Screening (PC-6): Percentage of patients screened for colorectal cancer during the one-year measurement period

  4. Influenza Immunization (PC-7): Percentage of patients who received an influenza immunization during the one-year measurement period

  5. Pneumonia Vaccination (PC-8): The percentage of patients 65 years and older who ever received a pneumococcal vaccination





