Mathematica Policy Research, Inc. (MPR) is conducting an independent evaluation of the Medicare Care Management Performance (MCMP) Demonstration on behalf of the Centers for Medicare & Medicaid Services (CMS). The demonstration, which began operations on July 1, 2007, will run for three years, ending June 2010.
Section 649 of the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) requires the Secretary of the U.S. Department of Health and Human Services to establish a pay-for-performance demonstration program with physicians to meet the needs of eligible beneficiaries through the adoption and use of health information technology (HIT) and evidence-based outcome measures. The goals of the three-year demonstration are to improve quality of care to eligible fee-for-service Medicare beneficiaries and encourage the implementation and use of HIT. The specific objectives are to promote continuity of care, help stabilize medical conditions, prevent or minimize acute exacerbations of chronic conditions, and reduce adverse health outcomes. CMS is responsible for designing and operating the MCMP demonstration.
Under the demonstration, physician practices that meet or exceed performance standards established by CMS on clinical process and outcome measures will receive a bonus payment for managing the care of eligible Medicare beneficiaries. Practices that submit their performance data to CMS electronically using a certified electronic medical record (EMR) system will also be eligible for an increase in the incentive payment. The bonuses will be in addition to the normal fee-for-service Medicare payments practices receive for services delivered. In a predemonstration (baseline) year, the demonstration will operate as a pay-for-reporting initiative to help physicians become familiar with the process of reporting quality measures. The demonstration builds on models used in the private sector, most notably Bridges to Excellence™ (Bodenheimer et al. 2005; de Brantes 2005; Iglehart 2005).
The MCMP demonstration will target practices serving at least 50 traditional fee-for-service Medicare beneficiaries with selected chronic conditions for whom they provide primary care. Under this demonstration, physicians practicing primary care1 in solo or small- to medium-sized group practices (practices with 10 or fewer physicians, although there may be exceptions) will be eligible to earn incentive payments for (1) reporting quality measures for congestive heart failure (CHF), coronary artery disease (CAD), diabetes, and the provision of preventive health services during a baseline (predemonstration) period; (2) achieving specified standards on clinical performance measures during the three-year demonstration period; and (3) submitting clinical quality measures to CMS electronically using an EMR system that meets industry standards specified by the Certification Commission for Healthcare Information Technology (CCHIT).
The legislation authorizes up to four demonstration sites, which must include both urban and rural areas.2 The states of Arkansas, California, Massachusetts, and Utah were chosen as the four sites. The Quality Improvement Organizations (QIOs) in these four states recruited demonstration practices, drawing on relationships built through CMS’s Doctor’s Office Quality—Information Technology (DOQ-IT) project. Demonstration practices represent many organizational structures, and each serves at least 50 Medicare beneficiaries. Recruitment of demonstration practices began in January 2007.
Demonstration practices were defined by one or more tax identification numbers (TINs). Physicians were linked to each practice using individual Medicare provider identification numbers (PINs). Medicare beneficiaries who live in a demonstration state, who are treated for the targeted conditions by primary care providers (or by medical subspecialists likely to provide primary care), and who have both Part A and Part B coverage under traditional fee-for-service Medicare were linked to these practices.3 Demonstration practices are submitting performance data to CMS on up to 26 clinical measures covering treatment for CHF, CAD, and diabetes, as well as the provision of specific preventive and screening services, for all assigned beneficiaries with a chronic condition.4 Through several contractors, CMS is collecting data on all the clinical measures for the baseline period and all three years of the demonstration.
The demonstration practices will be eligible to receive up to three incentive payments. First, demonstration practices will receive an incentive of $20 per beneficiary per category (up to $1,000 per physician to a maximum of $5,000 per practice) for reporting baseline clinical quality measures. The payment will not be contingent on the practice’s score on any of these measures. Second, for each of the three demonstration years, based on the clinical measures data that the practices report, CMS will calculate a composite score for each chronic condition (as well as the preventive measures) and compare it against performance thresholds. Physicians will be eligible for payments of up to $70 per beneficiary for meeting standards related to a specific chronic condition. Beneficiaries who have more than one condition will be counted in each of the relevant groups. For preventive services, physicians will be eligible for a payment of up to $25 per beneficiary with any chronic condition. Physicians will be eligible to earn up to $10,000 per year for performance on all clinical measures. The maximum annual payment to any single practice will be $50,000, regardless of the number of physicians in the practice. Third, practices with a CCHIT certified EMR system that can extract and submit performance data to CMS electronically will be eligible to increase the incentive payment by up to 25 percent, or $2,500 per physician (up to $12,500 per practice) per year during the demonstration period for electronic submission. Thus, practices could receive up to $192,500 over the three years of the demonstration (including the baseline period).
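As an arithmetic check on the maximum practice-level payment cited above, the caps combine as shown in the following minimal sketch; the variable names are ours, not CMS terminology, and actual payments depend on performance.

```python
# Practice-level payment caps taken from the text above (illustrative check only).
baseline_reporting_cap = 5_000    # one-time pay-for-reporting cap per practice
annual_performance_cap = 50_000   # annual cap on clinical performance payments per practice
annual_emr_bonus_cap = 12_500     # annual cap on the 25% electronic-submission bonus per practice
demonstration_years = 3

max_total_per_practice = (baseline_reporting_cap
                          + demonstration_years * (annual_performance_cap + annual_emr_bonus_cap))
print(max_total_per_practice)  # 192500, i.e., the $192,500 maximum cited above
```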
Finally, Congress also mandated an independent evaluation of the MCMP demonstration. The evaluation must include an assessment of the impacts of pay-for-performance on improving quality of care, care coordination, and continuity of care; reducing Medicare expenditures; and improving health outcomes. The legislation specified that a final evaluation report must be submitted to Congress within 12 months of the demonstration’s conclusion. CMS, with funding from the Agency for Healthcare Research and Quality (AHRQ), has contracted with MPR to conduct this evaluation.
The main goal of the evaluation is to provide CMS and AHRQ with valid estimates of the incremental effect, or impact, of providing performance-based financial incentives on the quality of care, continuity of care, use of Medicare-covered services, and Medicare costs of the chronically ill Medicare beneficiaries served by demonstration practices. It will also examine impacts on physician practices’ use of health information technology, and physician and patient satisfaction. To provide this information, the evaluation must generate rigorous quantitative estimates of the intervention’s impacts.
The impact analysis for the evaluation will use a matched comparison group (quasi-experimental) design. Comparison practices were chosen from practices that participated in the DOQ-IT project in selected non-demonstration states. Each demonstration state was matched to non-demonstration states based on criteria that included demographics, the extent of electronic health record adoption and pay-for-performance activity in the state, and other key characteristics (such as the ratio of specialists to general practice/family medicine physicians). The comparison states were Nebraska and Texas (for Arkansas); Arizona, Oregon, and Washington (for California); New York and Connecticut (for Massachusetts); and Idaho and Colorado (for Utah).
Among demonstration and comparison practices, we predicted whether practices participated in the demonstration using a propensity score model that included variables related to practice size, whether the practice was in a medically underserved area, the practice’s experience with HIT, the average number of hospital visits per beneficiary in the practice, the number of evaluation and management visits per beneficiary in the practice, and the number of beneficiaries with the chronic conditions specified by the demonstration. We then developed “matching” weights, to be used in the impact analyses, assigning the most weight to comparison practices that were the closest matches for MCMP practices (according to their propensity scores) and the least weight to comparison practices that were not close matches for demonstration practices. Finally, we assessed the validity of the matches by estimating a regression (weighted by the matching weights) that tested whether there were significant differences between MCMP and comparison practices in the changes in key outcome measures (hospitalizations and Medicare expenditures) during the two years prior to the demonstration (2005-2006). These regressions indicated that the differences between the MCMP and comparison practices were small (less than three percent of the mean) and not statistically significant at the .05 level.
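To make the weighting step concrete, the following is a minimal sketch, not MPR’s actual code, of how propensity scores and kernel-style matching weights of this general kind might be computed; the data frame, column names, and bandwidth are assumptions for illustration only.

```python
# Sketch of propensity-score estimation and matching weights, assuming a pandas
# DataFrame `practices` with a 0/1 indicator `in_demo` and practice-level covariates.
import numpy as np
import pandas as pd
import statsmodels.api as sm

COVARIATES = ["practice_size", "underserved_area", "hit_experience",
              "hosp_visits_per_bene", "em_visits_per_bene", "chronic_benes"]

def matching_weights(practices: pd.DataFrame, bandwidth: float = 0.05) -> pd.Series:
    """Give comparison practices more weight the closer their propensity score is to
    that of some demonstration practice; demonstration practices keep weight 1."""
    X = sm.add_constant(practices[COVARIATES])
    fit = sm.Logit(practices["in_demo"], X).fit(disp=0)
    pscore = pd.Series(fit.predict(X), index=practices.index)

    demo_scores = pscore[practices["in_demo"] == 1].to_numpy()
    comp_idx = practices.index[practices["in_demo"] == 0]

    # Distance from each comparison practice to its nearest demonstration practice.
    dist = np.abs(pscore[comp_idx].to_numpy()[:, None] - demo_scores[None, :]).min(axis=1)

    weights = pd.Series(1.0, index=practices.index)
    weights.loc[comp_idx] = np.exp(-(dist / bandwidth) ** 2)  # kernel-style down-weighting
    return weights
```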
The impact analysis will use a difference-in-differences approach to estimate impacts. With this approach, changes in quality measures and other outcomes of practices in the demonstration states and comparison states will be compared before and after the start of the demonstration. The unit of analysis will be the practice, which also is the unit of intervention. As noted above, the matching weight will be applied in the impact analysis.
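In general notation, a weighted difference-in-differences model of this kind could take a form like the following; this is a sketch of the approach, not MPR’s exact specification.

```latex
y_{pt} = \alpha + \beta\,\mathrm{Demo}_p + \gamma\,\mathrm{Post}_t
       + \delta\,(\mathrm{Demo}_p \times \mathrm{Post}_t) + \varepsilon_{pt}
```

Here y_{pt} is an outcome for practice p in period t, Demo_p indicates a demonstration practice, Post_t indicates the period after the demonstration began, the regression is weighted by the matching weights described above, and the coefficient on the interaction term (delta) is the difference-in-differences impact estimate.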
Because the four demonstration sites (that is, each demonstration state and its matched comparison states) are likely to differ substantially, the evaluation will estimate impacts separately for each site. Site-level differences may include physician practice regulations, practice styles, practice settings, adoption of electronic health records, and pay-for-performance penetration. We will also report summary impacts across states to provide an assessment of the overall effectiveness of the demonstration. Finally, because overall impact estimates may mask important differences within groups, when sample sizes permit, we will estimate impacts for subgroups defined by practice features (such as practice affiliation or patient mix) and beneficiary characteristics (such as having a particular chronic condition).
It is important to highlight that the analysis will not report practice-level impact estimates. Although practices are the unit of intervention, the impact analysis will estimate the overall impact of the intervention by site, or across sites, rather than by practice. This parallels the standard practice, when the individual is the unit of intervention, of reporting impacts for the full sample rather than for specific individuals who received the intervention and their control or matched comparison group members.
Data for the impact analysis will be collected from four sources: (1) the Office Systems Survey, (2) Medicare claims data, (3) a beneficiary survey, and (4) a physician survey. (This request for OMB clearance relates only to the beneficiary and physician surveys.) Together, these data sources will allow us to capture the demonstration’s 26 quality measures (Table A.1) as well as a wide array of other outcome measures (Table A.2).
The Office Systems Survey (OSS) was designed by CMS to collect information from DOQ-IT practices on their use of health information technology. The OSS administered in the summer of 2007 will be used to construct baseline measures of practices’ use of electronic tools to improve quality. Analogous follow-up measures will be constructed from the OSS administered in 2009.
Medicare claims and eligibility data will be used to identify beneficiaries with chronic conditions, construct service use and expenditure measures, and construct seven of the demonstration’s 26 quality measures. These data will be available for the baseline period and for each of the three years of demonstration operations.
The survey of eligible Medicare beneficiaries will measure well-being (using such indicators as health status, burden of illness, and quality of life); access to care; adherence to self-care management principles; continuity of care; and satisfaction with care (Table A.3). It will also collect data on six of the demonstration’s quality measures that are not available from the claims data. (It will not ask questions about the other 13 quality measures because they are too technical for beneficiaries to know or remember accurately.) It will be administered once, 19 months after the start of the demonstration.
The survey of physicians will measure barriers to transforming the practices’ clinical encounters with beneficiaries and other office procedures, barriers to the adoption of HIT, experience implementing this type of technology, satisfaction with HIT, and experience with other pay-for-performance programs (in the demonstration states only) (Table A.4). It will be administered 25 months after the start of the demonstration.
TABLE A.1
DATA AVAILABILITY OF QUALITY MEASURES RELATED TO FINANCIAL INCENTIVES
Measure | Medical Record | Medicare Claims | Beneficiary Survey | Data Available

Percentage of patients with coronary artery disease who:
  Were prescribed antiplatelet therapy | X | | | No
  Were prescribed a lipid-lowering therapy | X | | | No
  Were prescribed beta-blocker therapy, among those with prior myocardial infarction | X | | | No
  Received at least one lipid profile | X | X | | Yes
  Had most recent LDL cholesterol < 130 mg/dl | X | | | No
  Were prescribed ACE inhibitor therapy, among those who also have diabetes and/or LVSD | X | | | No

Percentage of patients with diabetes having:
  One or more blood tests for hemoglobin A1c | X | X | | Yes
  Most recent A1c level > 9 percent | X | | | No
  At least one test for microalbumin (or had medical attention for existing nephropathy or microalbuminuria or albuminuria) | X | X | | Yes
  Dilated retinal exam | X | X | | Yes
  At least one foot exam | X | | X | Yes
  Last blood pressure measurement below 140/90 mm Hg (among those receiving a test) | X | | | No
  Most recent LDL cholesterol < 130 mg/dl | X | | | No
  Had at least one LDL cholesterol test | X | X | | Yes

Percentage of patients with congestive heart failure who:
  Had left ventricular function results recorded | X | | | No
  Left ventricular ejection tested (among those hospitalized with heart failure) | X | X | | Yes
  Had weight measurement recorded | X | | X | Yes
  Had patient education class on disease management and health behavior change during one or more visits within a six-month period | X | | | No
  Were prescribed beta-blocker therapy, among those who also have LVSD | X | | | No
  Were prescribed ACE inhibitor therapy, among those who also have LVSD | X | | | No
  Were prescribed warfarin therapy, among those with paroxysmal or chronic atrial fibrillation | X | | | No

Percentage of those with specified chronic diseases who:
  Had blood pressure measurement during last office visit | X | | X | Yes
  Had breast cancer screening during current or previous year, among those under age 69 | X | X | | Yes
  Had colorectal cancer screening during recommended period | X | | X | Yes
  Had influenza vaccination during September through February of year prior to measurement year, among those over age 50 | X | | X | Yes
  Had pneumonia vaccination, among those with a chronic condition over age 65 | X | | X | Yes
ACE = Angiotensin Converting Enzyme Inhibitor; LVSD = left ventricular systolic dysfunction.
TABLE A.2
OVERVIEW OF TYPES OF OUTCOME MEASURES AND DATA SOURCES FOR IMPACT ANALYSIS

Measure | Data Source

Primary Outcome Measures
  Quality measures
    Outcomes directly related to financial incentives | Medicare Claims Data and Beneficiary Survey
    Process measures related to care quality | Medicare Claims Data and Beneficiary Survey
    Health outcomes | Medicare Claims Data and Beneficiary Survey
  Medicare service use and costs | Medicare Claims Data
  Use of HIT in office procedures | Physician Survey and Office Systems Survey

Secondary Outcome Measures
  Coordination and continuity of care | Beneficiary Survey, Physician Survey, Medicare Claims Data
  Physician satisfaction | Physician Survey
  Patient satisfaction | Beneficiary Survey
HIT = health information technology.
TABLE A.3
MEASURES COLLECTED ON THE BENEFICIARY SURVEY
Health Status
  Self-rated health status
  List of diagnosed chronic conditions
  Self-rated knowledge of chronic conditions
  Self-rated knowledge of risk factors or symptoms of worsening conditions

Access to Care
  Regular source of medical care
  Frequency of physician or clinic visits in past year
  Frequency of emergency room visits in past year

Health Care Processes
  Measures taken, exams given, and education provided during last visit to health care professional
  Discussion of exercise, smoking, drinking, diet with health care professional in past year
  Colon cancer screening in past five years
  Flu vaccination in past two years
  Frequency of self-examination of feet and self-weighing during past year

Satisfaction with Care
  Level of satisfaction with several dimensions of the care received from the health care professional (for example, the amount of time spent with the doctor during a visit)
  Level of satisfaction with the ability to get appointments and reminders for appointments
  Level of satisfaction with communication among physicians regarding patient’s medical care
  Overall quality of health care and services

Background Information
  Level of education
  Primary language spoken
  Marital status
  Living arrangements
  Household size
  Home ownership status
  Employment status
  Household income
Finally, although data on each of the demonstration’s quality measures will be extracted from medical records, these data will not be used in the impact analyses because they will be available only for treatment group practices. Thus, we will be able to conduct only descriptive and trend analyses for the 13 quality measures that are available solely from medical records.
TABLE A.4
MEASURES COLLECTED ON THE PHYSICIAN SURVEY

Use of Electronic Medical Records (EMRs)
  Availability of EMR system
  Use of EMR system to perform functions (for example, documenting office visits, e-prescribing, managing polypharmacy, or issuing patient reminders)
  Level of satisfaction with EMR system training
  Level of satisfaction with ability of EMR system to meet practice needs

Barriers to Adoption and Use of EMRs
  Start-up and maintenance costs
  Time to acquire or set up the system
  Staff computer skills, skepticism, and reluctance to change
  Patient privacy concerns
  Time and ability to incorporate legacy records into the new system
  Interoperability

Caring for Medicare Patients with Chronic Illnesses
  Issuing routine care reminders, electronically or manually
  Change in number of office visits, telephone conversations, and email exchanges with Medicare patients
  Number of encounters with polypharmacy, unnecessary or duplicate tests, or lack of timely information from other providers or after hospitalization
  Level of satisfaction with overall quality of care, coordination of care, and physician and patient knowledge of recommended preventive care
  Frequency of producing reports on patients
  Frequency of availability of patient care-related information during office visits

Experiences with the MCMP Demonstration (Physicians in Demonstration Practices Only)
  Success in targeting important medical conditions, using appropriate quality measures, and promoting EMR adoption and use
  Effect of demonstration on processes of care
  Recommendation of MCMP to colleagues
  Experience with other pay-for-performance programs

Demographic and Socioeconomic Characteristics
  Number of Medicare beneficiaries with chronic conditions seen in an average week
  Use of languages other than English in practice
  Years in medical practice
  Whether board certified
  Age
  Race/ethnicity
The MCMP Demonstration is authorized by Section 649 of the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA). The legislation requires the Secretary of the U.S. Department of Health and Human Services to establish a pay-for-performance demonstration program with physicians to meet the needs of eligible beneficiaries through the adoption and use of health information technology (HIT) and evidence-based outcome measures. (See Appendix A for a copy of the legislation.) The MMA authorized up to four demonstration sites to include urban and rural areas; CMS chose Arkansas, California, Massachusetts, and Utah. An independent evaluation of the MCMP demonstration is required. The evaluation must include an assessment of the impact of pay-for-performance on improving quality of care, care coordination, and continuity of care, thereby reducing Medicare expenditures and improving health outcomes. To measure these outcomes, the impact evaluation requires a survey of eligible Medicare beneficiaries and a survey of physicians participating in the demonstration.
Information for the evaluation of the MCMP demonstration will be collected and analyzed by MPR, under Contract Number 500-00-0033, Task Order 05, with CMS, titled “Evaluation of Medicare Care Management Performance Demonstration”.
Findings from the impact analysis will be included in the Report to Congress (due within 12 months of the conclusion of the demonstration) and other internal reports to CMS.
Data collection for the beneficiary survey will begin in January 2009, approximately 19 months from the start of the demonstration. Beneficiary survey data collection will rely on a self-administered mail questionnaire and will be supplemented with computer-assisted telephone interviewing (CATI). Data collection for the physician survey will also use both mail and CATI; however, CATI will be the primary data collection method. Questionnaire content for each survey will be the same across modes. Respondent signatures are not required for either of the two surveys.
MPR will enter mail survey data using Viking data entry software on a SUN Ultra Enterprise 2 workstation. A data entry program specific to the survey instrument will be developed and thoroughly tested before use. The program will contain study-specific logic and range and consistency checks to produce high quality data.
Quality control and data entry of completed questionnaires will continue throughout the field period (expected to run for 12 months for the beneficiary survey and 11 months for the physician survey). The data entry program will contain edit specifications and will flag errors electronically. Calls to collect critical missing information and resolve inconsistencies will be made as needed. All errors will be reviewed and resolved during data cleaning, and all entries will be 100 percent verified.
This information collection does not duplicate any other effort, and the information cannot be obtained from any other source.
Solo, small, and mid-size practices (that is, practices with 10 or fewer physicians, although there may be exceptions) will be targeted for the physician survey. Participating in the survey will impose minimal burden on physicians; the survey is designed to be completed in 10 minutes or less.
Both data collection efforts are one-time-only collections and are necessary for conducting a credible evaluation. Not conducting the surveys would limit CMS’s understanding of the impact of the MCMP demonstration and would impair CMS’s ability to provide a fully informed Report to Congress, as required.
There are no special circumstances related to the proposed data collection for the MCMP evaluation.
The notice required by 5 CFR 1320.8(d) will be submitted by CMS for publication in the Federal Register.
Outside consultation for the design of the study and surveys was received from the following experts.
Sheldon Retchin (M.D., M.P.H., University of North Carolina), Professor of Internal Medicine and Chief Executive Officer of Virginia Commonwealth University (VCU) Health System. Dr. Retchin provided advice regarding the development of the physician survey instrument. He also assisted with the analysis of quality of care measures. Dr. Retchin is a national expert in health policy and health care delivery and has extensive experience with the implementation and study of the effectiveness of electronic medical records in office practice settings. The VCU Health System, where Dr. Retchin is CEO, recently installed a $57 million clinical information system that includes computerized physician order entry (CPOE). The VCU Health System has had mandatory CPOE at its hospitals for more than 20 years.
Robert H. Miller (Ph.D., Economics, University of California, San Francisco [UCSF]) is Professor of Health Economics in Residence, Institute for Health & Aging at UCSF. Dr. Miller provided advice on the physician survey. His research focuses on the economics of information technology (IT) and organizational change in ambulatory care settings. He has conducted studies about the costs, benefits, and use of electronic medical records; the economic feasibility of community-wide electronic clinical data exchange; and the capabilities of e-health systems: their implementation, use, and current/potential effects on quality and efficiency.
Several surveys that were used in other demonstrations sponsored by CMS were referenced in the development of the beneficiary and physician survey instruments for MCMP to identify questions that were previously used successfully with similar populations. These included (1) the Medicare Coordinated Care Physician Survey Questionnaire; (2) the Senior Dimensions Second Generation Social Health Maintenance Organization Survey; and (3) the Medicare Disease Management Program Evaluation Patient Questionnaire. The two current survey instruments were pretested with nine or fewer respondents.
No payments or gifts are planned for respondents of either the beneficiary or physician surveys.
Confidentiality for this project is being assured in accordance with 42 U.S.C. 1306, 20 CFR 401 and 402, 5 U.S.C. 552a (Privacy Act of 1974), and OMB Circular No. A-130.
MPR will take several steps to assure respondents that the information they provide will be treated as confidential and used for research purposes only. Sample members will be told that the answers they provide will be kept confidential and will not be released, except as required by law. They will also be told that their information will be used only as part of this evaluation. Survey respondents will be told that they will not be identified individually (that is, by name) in any reports or in any communications to CMS. The assurances and limits of confidentiality will be made clear in advance material mailed to respondents and will be restated at the beginning of each telephone interview. Respondents will also be told that their participation in the survey is voluntary, though important, and that they have the option to refuse to answer questions in the survey. Staff assigned to work on the project sign confidentiality pledges as a term of employment. The confidentiality pledge requires staff to maintain the confidentiality of all information collected.
Questionnaires completed by mail will not contain names or other personally identifying information. Instead, each questionnaire will contain a unique barcode that can be linked to the respondent only for research purposes.
The beneficiary survey includes questions about health status, medical diagnoses, and medical visits that may be considered sensitive. Obtaining information about these potentially sensitive topics is central to the evaluation. Many of the questions were taken without modification from other surveys of similar populations, such as the Medicare Coordinated Care Physician Survey Questionnaire and the Medicare Disease Management Program Evaluation Patient Questionnaire. In those surveys, there was no indication that respondents were reluctant to report on their health status, diagnoses, and health visits, or on other aspects of their health and their experiences with health care providers. The questions in the physician survey concern the use of electronic medical records, practices when caring for chronically ill Medicare beneficiaries, experiences with the demonstration, and some general background items. These questions are not considered sensitive.
Table B.1 presents estimates of respondent burden for the beneficiary and physician surveys. It shows the expected number of respondents to each survey, hours per response, and the annualized hour and cost burden.
Hour estimates for the beneficiary survey are based on pretests completed with eight Medicare beneficiaries. In those pretests, completion times ranged from 10 to 14 minutes and averaged 11 minutes. This average was rounded up to the next quarter hour (15 minutes). The cost per beneficiary response was computed using an estimated average hourly wage rate5 of $20.47, as follows: $20.47 × 0.25 hours = $5.12 per response. For the 1,200 total beneficiary hours expected (column 4, line 1, Table B.1), the estimated total annual cost burden for the beneficiary survey is $6,144.
Eight physicians also completed pretests, and those pretests form the basis for the hour estimates provided. For the physician survey, pretest completion times ranged from 4 to 18 minutes and averaged 8 minutes overall. The cost per physician response was computed using an estimated annual salary of $160,000 for primary care physicians and 2,080 annual work hours, as follows: ($160,000 / 2,080) × 0.17 hours = $13.08 per response. For the 272 total hours expected to complete the survey (column 4, line 2, Table B.1), the estimated total annual cost burden for the physician survey is $3,558.
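As a quick check of the per-response costs and annual hour burdens cited above and in Table B.1, the figures combine as follows (an illustrative sketch only; the rates and counts are taken from the text).

```python
# Per-response cost and annual hour burden, using the rates and counts stated in the text.
BENE_HOURLY_RATE = 20.47             # 2006 BLS average hourly wage, inflated two percent per year
PHYS_HOURLY_RATE = 160_000 / 2_080   # estimated primary care salary over annual work hours

bene_cost_per_response = BENE_HOURLY_RATE * 0.25   # ~ $5.12
phys_cost_per_response = PHYS_HOURLY_RATE * 0.17   # ~ $13.08

bene_annual_hours = 4_800 * 0.25                   # 1,200 hours
phys_annual_hours = 1_600 * 0.17                   # 272 hours
```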
There are no direct costs to respondents other than their time to participate in the study.
TABLE B.1
RESPONSE BURDEN FOR THE BENEFICIARY AND PHYSICIAN SURVEYS
Survey | Number of Respondents (1) | Frequency of Response (2) | Hours Per Response (3) | Annual Hour Burden (4) | Cost Per Response (5) | Annual Cost Burden (6)
Beneficiary survey | 4,800 | 1 | 0.25 | 1,200 | $5.12 | $6,144
Physician survey | 1,600 | 1 | 0.17 | 272 | $13.08 | $3,558
Total | 6,400 | 1 | NA | 1,472 | NA | $9,702
The total current value for this contract is $2,299,876 over a period of seven years. The estimated annualized cost to the government for conducting the surveys of beneficiaries and physicians is $282,961 (over a period of three years). This estimate is based on the contractor’s costs for conducting and tabulating mail survey results, including labor; conducting computer-assisted telephone interviewing for both surveys; other direct costs for computer, telephone, postage, reproduction, fax, printing, and survey facilities; and indirect costs for fringe benefits, general and administrative costs, and fees.
This is a new data collection; therefore, there are no changes to burden.
The demonstration evaluation will produce several reports, including a cost neutrality monitoring report and interim and final evaluation reports that synthesize findings across states and analytic components. The evaluation reports will be adapted to develop a Report to Congress. Table B.2 summarizes the delivery schedule for these reports. A summary of each report follows.
OMB has requested that MPR monitor cost neutrality over the first 18 months of the demonstration. This analysis will require comparing our regression estimates of the demonstration’s effects on Medicare savings to the incentive payments made to demonstration practices. Assuming the data for this analysis are available by month 21 (that is, 21 months after the demonstration begins), MPR plans to deliver a draft of this report to CMS in month 24 (that is, June 2009).
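In other words, the monitoring test amounts to comparing two quantities; the following is a simplified statement of that comparison, not a formal specification.

```latex
\text{cost neutrality holds if}\quad \widehat{S} \;\ge\; P,
\qquad \widehat{S} = \text{regression estimate of Medicare savings attributable to the demonstration},
\quad P = \text{incentive payments made to demonstration practices}
```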
TABLE B.2
DELIVERY SCHEDULE FOR EVALUATION REPORTS

Report | Project Month (a) | Calendar Month
Design report | n.a. | May 2007
Implementation report | 18 | December 2008
Cost neutrality monitoring report | 24 | June 2009
Second interim evaluation report | 28 | October 2009
Report to Congress | 40 | October 2010
Final evaluation report | 51 | September 2011

(a) Refers to the number of months after the start of the demonstration (July 1, 2007).
n.a. = not applicable.
One of the most important components of the evaluation will be the synthesis of the findings from the implementation and impact analyses to determine whether the pay-for-performance incentives improved quality of care for fee-for-service Medicare beneficiaries with chronic illnesses and influenced the adoption and use of HIT and, therefore, whether pay-for-performance should be implemented on a larger scale.
MPR will prepare three interim evaluation reports (drafts due 18, 28, and 40 months after the start of the demonstration, respectively) and a final evaluation report (draft due 51 months after the start of the demonstration), all of which will synthesize those findings available at different times during the demonstration.
The draft of the first interim evaluation report was completed in December 2008 (18 months after the start of the demonstration) and is currently under review by CMS. It provides an overview of implementation and demonstration activities to date in each state, a comparison of baseline characteristics of demonstration and comparison practices (including their use of HIT), and summary statistics on the number of demonstration practices that submitted baseline data. It relies on data from the Office Systems Survey, baseline claims data, and baseline quality measurement data from the demonstration practices.
The second interim evaluation report, due in October 2009 (28 months after the start of the demonstration), will focus on impact estimates for the first year of program operations. Although MPR will compare impacts on use of Medicare-covered services and costs across practices and states, MPR will not attempt to draw inferences from them at this stage of the evaluation. In addition, MPR will summarize findings from telephone discussions with highly successful practices and with those that withdrew, if any, in year 2 of demonstration operations.
The third interim evaluation report, due in October 2010 (40 months after the start of the demonstration), will focus on impact estimates for the second year of program operations. MPR will also include findings on the impacts of pay-for-performance on physician-beneficiary interactions (that is, access to care, care coordination, and satisfaction with care) from the beneficiary survey. Finally, MPR will summarize findings from telephone discussions with highly successful and unsuccessful practices (including those that withdrew, if any), in year 3 of demonstration operations. Up to nine of these discussions will be conducted during the second and third years of the demonstration.
The final evaluation report, due in September 2011 (51 months after the start of the demonstration), will provide final impact estimates from claims data using data from the third, and final, year of demonstration operations. In addition, MPR will present impact estimates from the physician survey on processes associated with the adoption of HIT to improve quality of care. The report will also incorporate our synthesis analysis, including data from the last wave of the Office Systems Survey.
MPR will produce one Report to Congress based on the independent evaluation. The draft report is due in October 2010, approximately three months after the end of demonstration operations. This report will analyze implementation experiences and findings of the MCMP demonstration across the four states.
The OMB expiration date will be displayed on all survey materials sent to sample members, including the advance letter and questionnaire.
Both data collection efforts will conform to all provisions of the Paperwork Reduction Act.
1 The following physician specialties will be eligible to participate in the MCMP demonstration if they provide primary care: general practice, allergy/immunology, cardiology, family practice, gastroenterology, internal medicine, pulmonary disease, geriatric medicine, osteopathic medicine, nephrology, infectious disease, endocrinology, multispecialty clinic or group practice, hematology, hematology/oncology, preventive medicine, rheumatology, and medical oncology.
2 In addition, the statute requires that one site be “in a state with a medical school with a Department of Geriatrics that manages rural outreach sites and is capable of managing patients with multiple chronic conditions, one of which is dementia.”
3 Beneficiaries for whom Medicare is not the primary source of insurance coverage or whose care a hospice program manages will be excluded from the demonstration.
4 In addition to three primary target chronic conditions—congestive heart failure, coronary artery disease, and diabetes mellitus—the other eligible conditions are Alzheimer’s disease or other mental, psychiatric, or neurological disorders; any heart condition (such as arteriosclerosis, myocardial infarction, or angina pectoris/stroke); any cancer; arthritis and osteoporosis; kidney disease; and lung disease. These conditions will be identified through ICD-9-CM diagnosis codes available in Medicare claims data (Wilkin et al. 2007).
5 The rate is the 2006 national average hourly wage of $19.29, as published by the Bureau of Labor Statistics, inflated at a two percent annual rate.