NORC Final QIO Report

Program Evaluation of the Ninth Scope of Work Quality Improvement Organization Program (CMS-10294)

OMB: 0938-1104
TOWARD AN EVALUATION OF THE QUALITY
IMPROVEMENT ORGANIZATION PROGRAM:
BEYOND THE 8th SCOPE OF WORK

FINAL REPORT

Janet P. Sutton, PhD
Lauren Silver, BA
Lucia Hammer, MBA
Alycia Infante, MPA

Presented to:
The Office of the Assistant Secretary for Planning and Evaluation
U.S. Department of Health and Human Services
200 Independence Avenue, S.W.
Washington, D.C. 20201

Submitted by:
NORC at the University of Chicago
Health Policy and Evaluation
7500 Old Georgetown Road, Suite 620
Bethesda, Maryland 20814

January 29, 2007

Acknowledgements
Although the authors take sole responsibility for the contents of this report, we would like to thank
Laurie Feinberg, our ASPE Project Officer, and Susan Polniaszek, who provided thoughtful input
and guidance throughout the course of the project. We would also like to thank our subcontractor,
Robert Friedland, at Georgetown University, for his many contributions in designing the interview
guides, conducting interviews, and providing insight on evaluation designs. Likewise, we very much
appreciate the helpful advice and guidance given to us by our study consultant, Shoshanna Sofaer.
We gratefully acknowledge the contributions of the members of our Technical Expert Panel who as
a group and individually reviewed the evaluation design options and provided us with substantive
advice.
Our appreciation goes out to the staff and key informants at each of the nine QIOs that we visited
who graciously gave of their time to answer our many questions about their work and provided us
with a more substantive understanding of the QIO program. In addition, we are very grateful to the
staff at CMS and the Institute of Medicine who met with us and provided us with materials to better
understand QIO program operations.
Finally, we are very much indebted to our current NORC colleagues Claudia Schur, Caitlin
Oppenheimer, Alene Kennedy, Jessica Bushar, Jyoti Gupta and our former NORC colleagues Holly
Stockdale, Alana Ketchel, Jennifer Benz, and Jenissa Haidari, without whose hard work and tireless
assistance in various stages of this project, including developing the study protocol, designing and
populating the QIO Inventory, conducting the site visits, and reviewing reports and other products,
this project could not have been completed.

TABLE OF CONTENTS

EXECUTIVE SUMMARY

1.0 INTRODUCTION
1.1 Growing Interest in Evaluation of the QIO Program
1.2 Study Objectives
1.3 Organization of Report

2.0 THE QIO PROGRAM
2.1 History and Background
2.2 The QIO Program Today

3.0 REVIEW OF THE LITERATURE
3.1 Methodology
3.2 Evaluations of the Effectiveness of the QIO Program
3.3 Qualitative Reviews of the Evolution and Impact of the QIO
3.4 The Impact of Individual QIO Quality Improvement Initiatives
3.5 Summary

4.0 DEVELOPMENT OF QIO INVENTORY
4.1 Sources of Data Information to Conduct the Environmental Scan
4.2 Database of QIO Activities
4.3 Key Findings
4.4 Challenges and Limitations

5.0 SITE VISIT RESULTS
5.1 Site Visit Methods
5.2 Results: Organization and Governance of QIO
5.3 Results: Technical Assistance Offered to Providers
5.4 Results: Case Review, Beneficiary Protection and Program Integrity
5.5 Results: QIO Support Center Functions
5.6 Results: CMS Program Management and Evaluation Issues

6.0 EVALUATION DESIGNS & CONSIDERATIONS
6.1 TEP Contributions to the Evaluation Design Process
6.2 Assumptions, Scope and Organization of Evaluation Designs
6.3 Designs for Evaluating the Core Program
6.4 Designs for Evaluating the Special Studies Program
6.5 Designs for Evaluating Technical Assistance Approaches
6.6 Designs for Supporting Poor-Performing and Less Motivated Providers
6.7 Designs for Evaluating CMS Performance Targets
6.8 Other Areas for Study

7.0 RECOMMENDED FIRST STEPS
7.1 Inventory CMS Data Systems
7.2 Address Limitations in Access to Provider Identifying Data
7.3 Maintain Transparency in Designing and Conducting Evaluation

REFERENCES
APPENDIX A: 8TH SOW QIOs
APPENDIX B: SCREENSHOT OF QIO INVENTORY
APPENDIX C: SELECTED STUDIES FOR THE 8th SOW
APPENDIX D: TECHNICAL EXPERT PANEL BIOGRAPHIES
APPENDIX E: SOURCES OF DATA FOR USE IN EVALUATION

EXECUTIVE SUMMARY

I. Background and Study Objectives

The Centers for Medicare and Medicaid Services (CMS), the Federal agency that administers the
Medicare program, contracts with a national network of 53 Quality Improvement Organizations
(QIOs)—one in each state, the District of Columbia, Puerto Rico, and the Virgin Islands. QIOs
seek to 1) improve the quality of care that Medicare beneficiaries receive by collaborating with
providers to help them meet evidence-based standards of care, 2) protect beneficiaries by
responding to and investigating claims and evidence of substandard care, and 3) protect the
Medicare Trust Funds by reviewing claims patterns and suspicious cases for the inappropriate use of
services or incorrect billing codes. Over the course of a 3-year contract with CMS, QIOs engage
providers in quality improvement projects and offer technical assistance across four major health
care settings – hospitals, home health agencies, nursing homes, and physician offices. For the
current 3-year contract period CMS has dedicated $1.265 billion to the program.
Recent press coverage and inquiries made by Congress have raised questions regarding the QIO
program’s effectiveness and whether substantial reforms should be made to the program. As part of
the Medicare Prescription Drug, Improvement, and Modernization Act (MMA) of 2003, the
Congress requested that the Institute of Medicine (IOM) conduct an evaluation of the QIO
program. The IOM released their report “Medicare’s Quality Improvement Organizations:
Maximizing Potential” in March 2006. Among the IOM’s conclusions was the following:
“Given the lack of consistent and conclusive evidence in scientific literature and
the lack of strong findings from the committee’s analyses, it is not possible to
determine definitively the extent of the impact of the QIOs and the national
QIO infrastructure on the quality of health care received by beneficiaries. Many
confounding factors make it difficult to attribute the results obtained thus far [to
QIOs].” (IOM, 2006)
I.A Study Objectives

In 2005 ASPE contracted with NORC at the University of Chicago (NORC) to develop several
options for evaluating the effectiveness of the QIO program. NORC’s objectives for this study
were three-fold:
1) Conduct an environmental scan to identify and create an inventory of QIO-specific technical assistance activities, interventions, and strategies used to meet performance targets identified in the 7th and 8th SOW and enter this data into a database of QIO activities;

2) Conduct site visits to QIOs to gather more detailed information about their day-to-day operations and quality improvement strategies;

3) Identify alternative designs for evaluating the QIO program or studies to enhance our understanding of selected components of the program, to be vetted by members of a Technical Expert Panel (TEP).
I.B History and Structure of the QIO Program

The origins of the QIO program date back more than thirty years, beginning in 1971 with the
creation of Experimental Medical Care Review Organizations (EMCROs), in 1972 with the creation
of Professional Standards Review Organizations (PSROs), and then in 1982 with the creation of the
Utilization and Quality Control Peer Review Organization (PRO) Program. These earlier programs
focused on utilization review, cost-containment, and adherence to local practice patterns by
“inspecting and detecting” to identify egregious cases in delivery of care and, if necessary,
sanctioning providers for substandard care. As a result, providers perceived them more as
adversarial and regulatory in nature than as potential partners in quality improvement.
In response to a review by the Institute of Medicine (1990), which concluded that a
collaborative approach to quality improvement would be more effective in improving providers’
performance, the Health Care Financing Administration (HCFA) (now CMS) launched the Health
Care Quality Improvement Initiative (HCQII) in 1992 to analyze patterns of care and identify areas
for improvement. Under the HCQII, PROs were encouraged to collaborate with hospitals as
partners in developing and implementing hospital quality improvement initiatives instead of focusing
on identifying individual “bad apples” within the provider community. These changes implemented
by HCFA represented a dramatic shift in vision for the QIO program. Subsequently, in 2001,
Congress officially renamed the PRO program the “Quality Improvement Organization Program.”
To date, eight rounds of contracting have occurred since the shift to a 3-year contract cycle took place in 1984, bringing the program in 2005 to the 8th Scope of Work (SOW). Under the SOW, QIOs are required to engage in four major sets of tasks. Tasks 1 through 3 are referred to in this report as the “core contract” since all QIOs are required to perform these activities. Task 4 refers to “non-core” activities. These are “Special Studies,” which selected QIOs may be contracted to perform.
Under Task 1 of the 8th SOW core contract, QIOs are responsible for providing technical assistance
to providers across four major health care settings – nursing homes, home health agencies, hospitals,
and physician offices – in order to improve providers’ performance across multiple clinical
outcomes and processes of care measures. Furthermore, CMS requires that QIOs divide their
technical assistance activities between two different groups of providers. First, QIOs must offer
technical assistance to all providers in a state who request assistance on issues of quality
improvement as identified in the SOW. The second group of providers includes an “identified
participant group,” or an IPG. Providers in an IPG are selected by QIOs and, subsequently,
volunteer to receive intensive and ongoing technical assistance and participate in a number of
projects to meet specified performance improvement targets. Thus, Task 1 is comprised of QIOs’
activities with IPG and non-IPG providers. Under Task 3, QIOs review beneficiary complaints for
quality of care concerns and, as part of the Hospital Payment Monitoring Program (HPMP), they
also review the accuracy of DRG codes, medical necessity, and the appropriateness of care to
address issues of inappropriate utilization or billing patterns.
Task 4 of the SOW is comprised of the Special Studies Program. The Special Studies Program
includes two different types of special studies—Quality Improvement Organization Support Centers
(QIOSCs) and all other special studies. CMS awards QIOs funds to conduct special studies in
addition to their core contract activities. Special studies are designed to gather information for
identifying best practices; examining or testing performance measures, tools or technical assistance
approaches; and, in general, addressing issues of specific interest or relevance to CMS and the QIO
program. Quality Improvement Organization Support Centers are QIOs who receive funds to offer
technical assistance or support to other QIOs by providing them with the tools, training,
information on best practices, and other resources that they need to work effectively with providers
to meet quality improvement objectives. As of the 8th SOW, a total of 15 QIOSC contracts have
been awarded.
I.C Review of the Literature on QIO Program Effectiveness

For years, researchers have attempted to evaluate the effectiveness of the QIO program using both
qualitative and quantitative analytical techniques and with national-, organizational-, and health care
setting-level data, but, for the most part, these studies have proven inconclusive. Even the most
recent studies are plagued by the same methodological obstacles that earlier studies failed to
overcome – questionable data, selection bias, spurious attribution due to numerous confounding
factors (e.g. secular trends, differences in provider motivation, non-QIO quality improvement
initiatives), lack of generalizability, and the inability to isolate and define experimental and control
groups.
The body of literature on the QIO program brings to policymakers’ attention the importance of
quality improvement in Medicare and, in part, suggests that QIOs play a role in promoting quality of
care. However, the evidence is inconclusive as to how much, if any, of the demonstrated quality
improvement can be attributed to the QIO program overall. This conclusion stems from two
major observations in the literature:
• The review of this literature did not yield a conclusive answer as to whether or not the QIO program or specific QIO-led interventions resulted in higher quality, lower quality or no change in any given provider setting. While several QIO interventions or collaboratives suggest that QIO-directed quality improvement activities have been effective at improving selected process and outcome measures, the statistical significance of the findings varied. As an editorial in a 2005 issue of JAMA pointed out, among 33 recent studies of the QIO program, 16 yielded “ambiguous results,” eight reported no or negative effects, and nine reported positive effects.

• Most studies evaluating the effectiveness of the QIO program are fraught with
methodological limitations—such as selection bias, confounding, and attribution— that are
inherent in the study designs. Such problems are threats to the internal and external validity
of the studies and may bias study findings. In the future, new and methodologically rigorous
studies will be necessary to offer more meaningful conclusions about the effectiveness of the
QIO program.


II. Major Findings from QIO Inventory, Site Visits and TEP Meeting

II.A Development of QIO Inventory

In order to obtain an inventory of QIO activities for the 7th and 8th SOWs, NORC conducted a
comprehensive environmental scan. As part of this scan we gathered a standardized set of
descriptive information about each of the 53 QIOs; data consisted of basic identifying information
such as address and the name of the Chief Executive Officer. Other data consisted of information
on the organizational structure, profit status, board membership and composition. To the extent
available, we gathered activity-level information on each of the QIOs and information related to the
organization’s day-to-day operations and activities, such as ongoing quality improvement projects
and initiatives; related publications; trainings, workshops, and other services offered to providers;
collaborations with other organizations; and beneficiary outreach activities. Information gathered
from the environmental scan was used to populate a database or inventory of QIO activities, and to
develop QIO-specific site visit interview protocols. Finally, data from the scan assisted staff in the
development of evaluation designs.
For the overwhelming majority of tasks, large gaps exist in the data. The scope of findings reflected
the paucity of activity- or intervention-specific information available in public resources, particularly for
activities related to the 7th SOW. In several cases, no substantive information on any specific
project could be found for a given QIO and subtask. The quality and depth of information did,
nonetheless, vary greatly from QIO to QIO. Even for a single QIO, the information available often
varied from setting to setting. Efforts to locate details on projects that were identified by name often proved futile, and while most QIOs stated that they currently participate or have previously participated in national or local quality improvement initiatives, specific details as to the QIOs’ scope or role in the initiative were generally unavailable.
II.B Site Visits to QIOs

To gain on-the-ground insight into individual QIOs’ daily operations, NORC conducted site visits
to nine QIO contractors, representing 12 states and the District of Columbia. In consultation with
ASPE and CMS staff, site visit QIOs were chosen on the basis of the size of the state they served,
location, whether they held single or multiple QIO contracts or QIOSC contracts, and profit status.
QIO staff were queried about organizational structure and governance, their strategies for
completing tasks under and beyond the core contract (such as special study and/or QIOSC
activities), and their experiences with CMS management of the program, including the contracting
and evaluation process. A brief overview of the site visit results is presented below.
Identified participant group selection: Most QIOs report “cherry-picking” in order to meet
CMS’s performance targets; that is, QIOs choose providers as identified participants who are most
likely to garner QIOs a passing score on CMS’s evaluation. Moreover, QIOs indicated that they
tend to avoid working with both poor performers and high performers – the former because they
may lack the resources or the motivation to meet the SOW’s quality improvement benchmarks and
the latter due to a possible “ceiling effect” that may limit the degree of potential performance
improvement.


Technical assistance offered to providers: QIO perceptions of which forms of technical
assistance are most effective differed—some preferred collaborative models or group training, while
others preferred a consultative approach incorporating one-on-one assistance. QIOs reported that
the technical assistance strategies they employ depend, in part, on budgetary constraints, geographic
distribution of providers, the presence of field offices, and the type of provider and subtask.
Additionally, QIOs reported that increasing micromanagement on the part of CMS and CMS’ data
lags have restricted both their ability to innovate in order to better respond to the unique needs of
the communities they serve and to conduct real-time tracking of the impact of specific interventions.
Case review and beneficiary protection: All QIOs reported that they receive relatively few beneficiary complaints and, furthermore, they indicated that most complaints received were not true quality of care issues; rather, complaints tended to deal with service problems, such as long wait times, “rude staff,” and other communication problems. Despite this, all QIOs disagreed with
IOM’s recommendation that case review activities be removed from QIOs’ responsibilities.
II.C Proceedings from Technical Expert Panel Meeting

NORC identified and recruited eight experts to respond to and offer feedback and guidance on the
draft evaluation design options. The TEP was convened to ensure that the evaluation designs
NORC proposed were as rigorous and appropriate as feasible considering the scope of the project,
the availability (or lack thereof) of data, and the constraints facing the government and an eventual
evaluator of the QIO program. The TEP provided several major recommendations, including:

• Evaluations of the QIO program should be prospective. That is, all necessary data collection vehicles should be in place at the start of the 9th SOW in order to support ongoing evaluation activities throughout the SOW period of performance. Moreover, a prospective evaluation may enable the use of more rigorous methodological techniques, such as randomized case control designs.

• Options for evaluating the program, as a whole, are limited due to a number of methodological barriers; thus, multiple smaller-scale studies may be more feasible, such as well-designed case control studies or randomized control trials to examine the effectiveness of different technical assistance interventions. These types of studies could potentially minimize attribution issues and yield results that are more actionable.

• Several members suggested that instead of the historic snapshot approach to the QIO program evaluation, a shift in paradigm to continuous quality improvement would be more informative and may better enable organizations to shift courses to make necessary programmatic changes.

III. Evaluation Designs and Considerations

This section describes general approaches for evaluating both the core QIO program and
supplementary components of the program, including special studies and QIOSC contracts,
and non-evaluative studies that could be used to gather information or develop tools to
enhance future evaluations of the QIO program as well as to gain a more refined
understanding of the program’s role in quality improvement. The proposed evaluation
options build on prior evaluations that have been conducted, but uses econometric and
statistical approaches to addresses several of the methodological limitations affecting these
studies. We also build upon findings from our QIO inventory and site visits to QIOs. A
major resource in shaping our recommendations was the 2006 report “Medicare’s Quality
Improvement Organization Program: Maximizing Potential,” issued by the IOM Committee
on Redesigning Health Insurance, Performance Measures, Payment, and Performance
Improvement Programs. Finally, the evaluation options described were informed and shaped
by the input of an eight-member Technical Expert Panel (TEP).
III.A Designs for Evaluating the Core QIO Program
We begin this discussion by describing a design option that is based on a national, provider-level
analysis which incorporates a case-control panel design to assess differences in IPG and non-IPG
providers’ performance. Limitations to this approach are described in the body of the report.
Long-term evaluation goal and approach: In situations where a randomized control trial cannot
be used, a two-stage econometric model may be used to estimate program effects. Thus, we propose
using econometric modeling to examine differences in IPG and non-IPG provider performance on
clinical quality and process of care measures. It is hypothesized that for each health care setting
under Task 1, performance on quality measures (e.g., restraint use in nursing homes, on-time
prophylactic antibiotic administration in hospitals, etc.) is related directly to provider engagement
with the QIO. A naive test of this hypothesis, however, is complicated by selection bias; that is,
there may be inherent differences between providers who were selected (or volunteered) to
participate in an IPG and providers who were not selected (or did not volunteer) to participate. Due
to non-random selection, and the likelihood that IPG providers are selected to participate because
they are the most likely to improve (or they volunteer to participate because they are the most
motivated to improve), estimates of a QIO’s impact on performance likely will be biased.
A two-stage econometric modeling approach can be used to account for factors that may influence a
provider’s likelihood of working with a QIO, thereby helping to address the two methodological
barriers that have hindered previous QIO program evaluations – selection bias and confounding, or
attribution. The first equation models the selection mechanism by estimating the probability that a
provider of a particular type (e.g., nursing home, home health agency, etc.) participates or is selected
to participate in a QIO’s IPG. The second equation addresses selection bias by estimating provider
performance as a function of the likelihood of selection into an IPG as well as other variables that
include provider, environmental, and QIO characteristics.
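
To make the structure of this two-stage approach concrete, the equations can be sketched as follows. This is a minimal illustration of one standard formulation (a Heckman-type selection correction with a probit first stage); the functional forms, symbols, and the use of an inverse Mills ratio correction term are illustrative assumptions rather than a prescribed specification.

\begin{align*}
\text{Stage 1 (selection):} \quad & \Pr(\mathit{IPG}_{it} = 1) = \Phi\!\left(\gamma_0 + \gamma_1' P_{it} + \gamma_2' E_{it} + \gamma_3' Q_{it}\right) \\
\text{Stage 2 (performance):} \quad & Y_{it} = \beta_0 + \beta_1 \mathit{IPG}_{it} + \beta_2' P_{it} + \beta_3' E_{it} + \beta_4' Q_{it} + \beta_5 T_t + \theta \hat{\lambda}_{it} + \varepsilon_{it}
\end{align*}

Here Y_it is provider i's performance on a subtask quality measure in period t; IPG_it indicates IPG participation; P, E, and Q are vectors of provider, environmental, and QIO characteristics; T_t is the time period; and \hat{\lambda}_{it} is the inverse Mills ratio estimated from Stage 1, which carries the selection correction into the performance equation.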
Primary and secondary data collection activities: Primary and secondary data collection will be
required to model the dependent and independent variables that comprise the relationships
described above. The major dependent variables are provider participation in an IPG and provider
performance on subtask quality measures.
• Provider participation in an IPG. Due to regulations that limit access to data on which providers are IPG members, evaluators must currently work directly through individual QIOs or QIOSCs to gather de-identified data on IPG providers, or through CMS to obtain access to the PARTner System, which also stores this type of data electronically.
• Provider performance on subtask quality measures. These data are collected as a standard part
of the QIO program and should continue to be available through CMS or the QIOSCs. In fact,
for many subtasks, the performance measures by which QIO performance is evaluated are the
same measures reported publicly in the hospital, home health, and nursing home COMPARE
databases or obtained from the Nursing Home Minimum Data Set (MDS) or the Home Health
Outcome and Assessment Information Set (OASIS).

• The major independent variables used in this model are provider, environmental, and QIO
characteristics. Year or time period also is included in this model because, as suggested by a
member of the TEP, an effort should be made to examine continuous improvements in quality.
As such, it is recommended that performance be measured on at least an annual basis.

• Provider characteristics. The probability of selection (the first equation in the model) or
participation in an IPG could be driven by a number of provider-level characteristics. CMS
administrative databases (Providers of Services file, the Medicare Cost Reports, the Standard
Analytical Files, and the Provider Enrollment, Chain and Ownership System data) may be used
to extract information on provider profit status, membership in a system, rural/urban location,
and staffing. Private sector databases, such as the American Hospital Association Annual
Survey, may supplement information that is not available in CMS administrative databases.
Information on providers’ level of motivation and willingness to work with QIOs on quality
improvement issues, the extent to which the provider has the internal infrastructure to support
quality improvement efforts, and utilization of non-QIO quality improvement resources is not
readily available and must be obtained through primary data collection. A potential primary data
collection instrument is the CMS “Survey of Provider Satisfaction with Quality Improvement
Organizations.”

• Environmental characteristics. Certain environmental characteristics may impact providers’
willingness to work with QIOs, such as whether providers are required by managed care
organizations to participate in selected quality improvement initiatives or the level of market
competitiveness. Resources to characterize environmental features that may drive participation
in an IPG and other quality improvement activities are available from public and private sources,
such as the Bureau of Health Professions’ Area Resource File, the Medicare Denominator File
(for use in estimating managed care penetration in the elderly population), and the Kaiser Family
Foundation State Health Facts database.

• QIO characteristics. It is probable that quality improvement is driven not solely by whether a provider is an IPG member, but also by the types, intensity, and frequency of technical assistance that QIOs offer to providers. The concepts of “technical assistance” and “intensity” are difficult to define and measure, but should be considered key determinants of providers’ performance improvement; however, it should be emphasized that the relationship between
intensity of assistance and performance may be non-linear. The PARTner system and the
Provider Satisfaction Survey are possible sources of information on the nature of the technical
assistance offered by QIOs to providers. Furthermore, measures or scales could be created
using detailed descriptions about the methods QIOs use to provide technical assistance, the types
of information they convey, and the number of times that technical assistance is provided.
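
To illustrate how such measures might be operationalized, the following sketch scores hypothetical technical-assistance contact records on substance and frequency. The event categories, weights, and record layout are invented for illustration; actual categories would need to be derived from PARTner data or from the interview work described below. A similar weighted-sum approach could also underpin the QIOSC “engagement scale” discussed under the Special Studies designs (Section III.C).

# A minimal sketch of a technical-assistance "intensity" score.
# Event types, weights, and the record layout are hypothetical;
# real categories would come from PARTner data or QIO interviews.
from collections import defaultdict

EVENT_WEIGHTS = {
    "mailing": 1,         # distributing tools or best-practice materials
    "group_training": 2,  # workshops or collaborative sessions
    "site_visit": 4,      # one-on-one, consultative assistance
}

def intensity_scores(contact_records):
    """Sum weighted technical-assistance events per provider.

    contact_records: iterable of (provider_id, event_type) tuples.
    Returns a dict mapping provider_id to a total intensity score.
    """
    scores = defaultdict(int)
    for provider_id, event_type in contact_records:
        scores[provider_id] += EVENT_WEIGHTS.get(event_type, 0)
    return dict(scores)

# Example: two providers with different mixes of assistance.
records = [
    ("NH-001", "mailing"),
    ("NH-001", "group_training"),
    ("NH-002", "site_visit"),
    ("NH-002", "site_visit"),
]
print(intensity_scores(records))  # {'NH-001': 3, 'NH-002': 8}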

III.B Supplementary Short-Term Studies
The difficulty of adequately modeling the IPG selection process and of defining and measuring key QIO- and provider-specific variables, such as interaction with the QIO, the intensity of technical support and provider “motivation,” limits the ability to conduct a rigorous evaluation of the QIO program. To
restructure the program without considering its impact could be costly and, without baseline
information on performance, it would be impossible to determine the cost-effectiveness of
restructuring. Therefore, we acknowledge the shortcomings of this evaluation option, but believe
that many of these limitations could be addressed over time, through investments in short- and mid-term studies and additional data collection.
• Short-term study on IPG selection processes: There is a dearth of information on the mechanisms that drive inclusion (from the perspective of a QIO) or participation (from the perspective of the provider) in an IPG. A more complete understanding of this relationship is necessary to fully specify the models described above and to accurately control for selection bias in estimating differences in quality improvement for IPG and non-IPG providers. Among the options for better understanding the selection process: (1) interviews could be conducted with QIO staff and providers to understand the criteria that QIOs use to identify IPG candidates and why certain providers opt in or out of the opportunity to participate in an IPG; (2) exploratory secondary data analyses could be conducted to assess how IPG and non-IPG providers differ on basic structural and organizational measures (a brief sketch follows this list); and (3) the Provider Satisfaction Survey could be modified to collect information on provider-level characteristics that may drive IPG participation, such as motivation and infrastructure availability.

• Short-term study on types and intensity of QIO interventions: Scant data exist on the
range of technical assistance offered by QIOs and little has been done to characterize the
intensity and frequency of QIO interactions with providers. In the short-term, investments in
developing measures or scales by which to categorize QIO technical assistance, both in terms of
substance and intensity, will further our ability to evaluate the QIO program. Two options for
gathering information to develop such a scale include: (1) semi-structured interviews with QIOs
and providers to catalog the types of technical assistance strategies and interventions that are
employed across all QIOs, and to ascertain whether certain provider or environmental factors
influence the decision to use certain types of assistance over others; and (2) the CMS Provider
Survey could be modified to gather detailed information on the nature and intensity of specific
QIO interventions.
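
As a concrete illustration of option (2) in the first bullet above, the sketch below compares IPG and non-IPG providers on a few structural measures. The file name and column names are hypothetical placeholders; real inputs would be drawn from CMS administrative files such as the Providers of Services file or the Medicare Cost Reports.

# A minimal sketch of an exploratory IPG vs. non-IPG comparison.
# The file and column names are hypothetical placeholders.
import pandas as pd

# One row per provider, with an is_ipg flag (0/1) and structural
# measures such as for_profit, system_member, rural, and beds.
providers = pd.read_csv("providers.csv")

measures = ["for_profit", "system_member", "rural", "beds"]

# Group means show how IPG and non-IPG providers differ on basic
# structural and organizational characteristics.
print(providers.groupby("is_ipg")[measures].mean())

# Group sizes provide context for interpreting the means.
print(providers["is_ipg"].value_counts())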
III.C Designs for Evaluating the Special Studies Program

During the 7th SOW, CMS spending on the Special Studies Program amounted to more than $130
million, of which approximately $67 million was allocated to QIOSC contracts, which are
considered a separate type of special study. Despite the amount dedicated to the Special Studies
Program, little is known about how the results of special studies or the assistance provided by
QIOSCs support QIO functions or advance the quality of care for Medicare beneficiaries.
• Special studies: In the short term, an inventory of key pieces of information on special studies could be developed to support long-term evaluation activities. Through interviews with and a
survey of QIOs, and using CMS administrative data, information could be collected on the
status of special studies in the 9th SOW, including special study results, dissemination methods,
and target audiences. Building on the information gathered for the inventory, the case study
approach may be employed to compare special studies that have been deemed to produce a
“good return on investment” to those deemed to produce a “poor return on investment.”
Interviews with CMS staff and surveys of QIOs and providers also may provide useful
information that speaks to the value that special studies add to the QIO program.
• QIOSCs: Similar to the data collection methods used for evaluating special studies, an environmental scan and site visits/interviews with QIOs could be used to gather information on the types and levels of engagement between QIOs and QIOSCs (both topic-specific and cross-cutting). As part of site visits, semi-structured interviews with QIOSC staff could be conducted
to gather more detailed information on the nature of the QIO-QIOSC relationship and how
QIOSCs attempt to support QIOs. Finally, it may be desirable to invest resources in developing
a QIO “engagement scale”, which – by combining information on the substance or nature of
technical assistance obtained from QIOSCs with information on the intensity of assistance
received – could estimate the level of support QIOSCs provide to specific QIOs. Having
developed this scale, collection of data to estimate QIOSC-QIO “scores” could be obtained on
an on-going basis by requiring QIOSCs or QIOs to systematically compile and submit data on
these interactions to CMS.
III.D Designs for Evaluating Technical Assistance Approaches

Little is known about 1) which approaches for “delivering” technical assistance and 2) which types
of content are most effective in driving quality improvement in particular
settings and with particular types of providers. In the short term, semi-structured interviews with
QIOs and IPG providers should be conducted to better understand the methods used by QIOs to
deliver assistance, the substantive information that is conveyed, and the factors that drive the
selection of different methods of assistance. Assuming that issues of confidentiality are addressed,
“shadowing” QIO staff as they conduct site visits, seminars, or other training activities could
provide an in-depth view that may be unavailable from interviews alone.
CMS’ special study mechanism offers the opportunity to engage QIOs in the study of the
effectiveness of technical assistance using more robust, randomized case control, cross-over designs.
At minimum, such an approach would examine three models of technical assistance – consultative,
collaborative, and provider pay-for-performance – with randomization occurring at either the IPG
or QIO level. It should be noted that investments in analyzing alternative approaches are best spent
on subtasks for which there is large variation in performance as opposed to those with little
variation.
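
To illustrate how randomization might proceed under such a special study, the sketch below assigns participating units (QIOs or IPGs) to the three technical-assistance arms with a two-period cross-over schedule. The unit identifiers, seed, and two-period structure are illustrative assumptions, not a prescribed protocol.

# A minimal sketch of randomizing units (QIOs or IPGs) to three
# technical-assistance arms in a two-period cross-over design.
# Unit names, the seed, and the period structure are illustrative.
import random

ARMS = ["consultative", "collaborative", "pay-for-performance"]

def assign_crossover(units, seed=2007):
    """Assign each unit an ordered pair of distinct arms, one per
    study period, cycling through arm orders to keep them balanced."""
    orders = [(a, b) for a in ARMS for b in ARMS if a != b]  # 6 orders
    rng = random.Random(seed)
    shuffled = list(units)
    rng.shuffle(shuffled)
    return {unit: orders[i % len(orders)] for i, unit in enumerate(shuffled)}

qios = ["QIO-A", "QIO-B", "QIO-C", "QIO-D", "QIO-E", "QIO-F"]
for unit, (first, second) in sorted(assign_crossover(qios).items()):
    print(f"{unit}: period 1 = {first}, period 2 = {second}")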
III.E Designs for Extending Support to Poor-Performing and Less Motivated Providers
Project staff and the technical expert panel emphasized the impact that CMS policies governing the
QIO program may have on the program’s effectiveness. Of specific interest was the question: Does
the QIO program target the appropriate provider population and, if not, should CMS re-focus requirements to
encourage QIOs to work with providers who may benefit the most from technical assistance, such as poor performers or
providers who lack motivation to engage in quality improvement activities and/or work with QIOs? Through the
special study mechanism CMS could empower QIOs to develop alternative approaches for selecting
and motivating providers, as well as to explore creative solutions to work with providers to achieve
selected performance objectives.
• Extending support to poor-performing providers: During the 8th SOW, QIOs were required
to offer technical assistance to a maximum of 3 nursing homes that were determined by the
State Survey Agency to be “persistently poor” performing homes. In the short term, a case
study approach of QIO experiences working with these nursing homes (and, in turn, the nursing
homes’ experiences working with QIOs) could be implemented to gather information relevant
for evaluating whether CMS should re-focus requirements to encourage QIOs to work with
poor performers.

• Extending support to less motivated providers: If, as suggested by some members of the
TEP, provider motivation is endogenous, it could potentially be influenced by QIOs. As a
special study at the outset of the 9th SOW, QIOs could be given the latitude to explore various
strategies, including those involving financial and non-financial incentives, to ascertain which
ones are most effective in motivating providers to work with QIOs to achieve selected quality
improvement objectives. After having identified subsets of providers in selected task areas (e.g.,
nursing home, home health), randomized case-control studies may be conducted to determine
whether selected approaches are more or less effective.
III.F Designs for Evaluating CMS Performance Targets

It is unclear how CMS identifies its quality improvement benchmarks. During site visits, many QIOs
reported that they could not meet CMS performance targets because they were “unrealistic” – in
large part because there is no known scientific evidence to suggest current targets could be achieved
within the time frame used to evaluate performance and, in some cases, because QIOs believed that
particular characteristics of their beneficiary or provider population made these targets less feasible
or appropriate. Overall, CMS’ approach for setting performance measures and targets must become
more transparent if QIOs are to understand more fully the goals they are expected to achieve. To
this end:
• Interviews with CMS staff could be conducted to determine the process by which performance
targets are set;

• Relevant literature could be reviewed to document ranges of performance improvement that
have been achieved by specific types of providers in given time frames;

• In cases where evidence is unavailable to support CMS’ benchmarks, tasks with the greatest
variation could be identified for more in-depth investigation, such as through case studies of
high- and low-performing QIOs to determine which characteristics are associated with variation
in performance; and

• A consensus panel should be convened to review evidence from the literature and from QIO
experiences to assist CMS in establishing more realistic performance measures and targets.

IV. Options for Future Evaluation

CMS has made significant investments in the QIO program. Therefore, we recommend an ongoing or continuous process for evaluating the program to best ensure that funds are spent in the most cost-effective manner. Ideally, the data collection tools and processes used to evaluate a
program and the program itself are developed concurrently. Otherwise, the information necessary
to adequately conduct the evaluation may not be available at the time the evaluation occurs.
Evaluation of the 8th SOW will require the use of retrospective approaches and, therefore, may
suffer from the same methodological shortcomings as previous studies. Moving towards the 9th
SOW and beyond, prospective, rigorous approaches may be feasible if the data and systems
necessary to conduct these evaluations are in place. Therefore, we propose the following three major
options:
(1) Assess CMS Data Systems & Develop Systems for On-going Evaluation of the
QIO Program: To facilitate future evaluations, a thorough review of CMS’s QIO data
systems could first be conducted, followed by the development, validation, and
incorporation of appropriate data collection tools into the QIO program prior to the start
of the SOW – particularly with an eye toward minimizing data lags.

(2) Address Limitations in Access to Provider Identifying Data: In conducting this project, access to data was limited due to regulations which prohibit the release of data with provider identifiers; this includes information on whether a provider is a member of an IPG. In an effort to foster and facilitate evaluation of the QIO program, consideration must be given to whether or not such stringent provider confidentiality requirements continue to be needed.

(3) Maintain Transparency in Designing and Conducting Evaluation. The success of
an evaluation will, to a great extent, depend on the ability of the evaluator to gain the
cooperation of and work effectively with CMS, the QIOs, and providers, all of whom
may be asked to contribute information on their operations, collect or submit data, and
participate in specific evaluation projects. For these reasons, we highly recommend that
the evaluator maintain transparency in designing and conducting the evaluation.


1.0 INTRODUCTION

Medicare, the nation’s public health insurance program for the aged and the disabled, insures
approximately 43 million beneficiaries, making it the largest payer of health care services in the U.S.
In 2005, Medicare expenditures totaled $336 billion. This figure is projected to double as early as
2012, and expenditures in the future are expected to grow more rapidly than workers’ earnings and
the economy overall (Board of Trustees of the Federal Hospital Insurance and Federal
Supplementary Medical Insurance Trust Funds 2006). The sheer size of Medicare – the resources
dedicated to the program, the growing number of beneficiaries, and the program’s potential to
impact health care delivery nationwide – demands that a system be in place to ensure that the
program is cost-effective and provides the highest quality care possible. To that end, the
Centers for Medicare and Medicaid Services (CMS), the Federal agency that administers the
Medicare program, contracts with a national network of Quality Improvement Organizations
(QIOs). These organizations seek to:
• Improve the quality of care that Medicare beneficiaries receive by collaborating
with providers to help them meet professionally recognized, evidence-based
standards and guidelines of care

• Protect beneficiaries’ rights, respond to their complaints, and investigate claims
and evidence of substandard care

• Protect the Medicare Trust Funds by reviewing claims patterns and suspicious
cases for the inappropriate use of services or incorrect billing codes

A total of 53 QIOs, including one in each state, territory, and the District of Columbia, carry out
this multi-pronged mission over the course of a 3-year contract with CMS, referred to as the
Statement of Work (SOW). During this time, they engage providers in quality improvement projects
and offer technical assistance across four major health care settings – hospitals, home health
agencies, nursing homes, and physician offices. In addition, QIOs handle beneficiary complaints
related to quality issues, conduct case reviews and monitor hospital payments. By 2008, CMS will
have dedicated almost $3.5 billion to the QIO program over the course of the past three SOWs
alone, amounting to approximately $300 to $400 million per year. Despite its size and expense, there
has been no systematic, quantitative, and independent national evaluation of the effectiveness of the
QIO program to date.

1.1 Growing Interest in Evaluation of the QIO Program
Recent press coverage and Congressional inquiries have raised questions regarding the QIO
program’s effectiveness and whether substantial reforms should be made to the program. A July
2005 Washington Post article (Gaul 2005a) described a 75-year-old husband’s four-year legal struggle
with Medicare after submitting a complaint to his state’s QIO regarding his wife’s death, which he
alleged resulted from doctors misdiagnosing her colon cancer. The article, and others published
thereafter (Gaul 2005b, Gaul 2006), raised concerns over financial improprieties and the potential
conflict of interest that is created when organizations are expected to conduct medical case review
while at the same time collaborating with providers on quality improvement projects. Indeed, the
article reported that over the course of two decades, total QIO sanctions against physicians and
hospitals for substandard care dropped from an average of 31 to one per year. Furthermore, this
series of Washington Post articles raised concerns over the general lack of transparency in the
beneficiary complaint process, the lack of public accountability and consumer representation, and
the lack of competition in the QIO bidding process.
Less than two months following the Washington Post’s coverage, Senator Charles E. Grassley (R-Iowa), Chairman of the Senate Finance Committee, ordered a Congressional investigation into
individual QIOs and the program as a whole. In a letter to Medicare officials, he expressed concern
over QIOs’ misplaced priorities, and requested documentation on QIOs’ finances, Medicare
contractor audits and evaluations, and program polices for preventing conflicts of interest (Gaul
2005c). Although this was the impetus behind the Office of the Inspector General and the
Government Accountability Office (GAO) inquiries into the program, as part of the Medicare
Prescription Drug, Improvement, and Modernization Act (MMA) of 2003, the Congress requested
that the Institute of Medicine (IOM) conduct an evaluation of the QIO program.
The IOM released their report “Medicare’s Quality Improvement Organizations: Maximizing
Potential” in March 2006. One of the IOM’s conclusions was that:
“Given the lack of consistent and conclusive evidence in scientific literature and
the lack of strong findings from the committee’s analyses, it is not possible to
determine definitively the extent of the impact of the QIOs and the national
QIO infrastructure on the quality of health care received by beneficiaries. Many
confounding factors make it difficult to attribute the results obtained thus far [to
QIOs].” (IOM 2006)
In fact, researchers have attempted to evaluate the effectiveness of the QIO program for many
years. (Refer to Section 3.0 for a detailed review of the literature). These attempts, which have used
both qualitative and quantitative analytical techniques and which have been undertaken at the
national, organizational, and health care setting level, have largely proven inconclusive. Even the
most recent studies are plagued by the same methodological obstacles that previous studies failed to
overcome – questionable data, selection bias, spurious attribution due to numerous confounding
factors (e.g. secular trends, differences in provider motivation, non-QIO quality improvement
initiatives), lack of generalizability, and the inability to isolate and define experimental and control
groups.

1.2 Study Objectives

Considerable resources have been dedicated to the operation of the QIO program. For the current
3-year contract period, the 8th SOW, CMS has dedicated $1.265 billion to the program, and as
recently as April of 2006, the American Health Quality Association (AHQA), the national trade
organization for QIOs, called on CMS for increased funding in future SOWs (Schulke 2006).
Despite the large investment in the program and the expansion of QIOs’ responsibilities, there is
limited information on whether QIO activities are actually improving the quality of care.


ASPE contracted with NORC at the University of Chicago (NORC) to develop several options for
evaluating the effectiveness of the QIO program. As requested by ASPE, NORC’s objectives for
this study were three-fold. First, NORC was to conduct an environmental scan to identify QIO-specific technical assistance activities, interventions, or strategies used to meet performance targets
identified in the 7th and 8th SOW.i Information collected from the environmental scan was to be
entered into a database or inventory of QIO activities. Second, site visits to QIOs were to be
conducted to gather more detailed information about the day-to-day operations of these Quality
Improvement Organizations and the strategies used to advance quality in each health care setting.
Finally, NORC was to identify alternative designs or studies for evaluating the QIO program and to
enhance our understanding of selected components of the program; these designs were to be vetted
by members of a Technical Expert Panel (TEP). ASPE indicated that evaluation and study
approaches could utilize both quantitative and qualitative techniques, but that both short- and long-term studies should be designed. In designing these evaluation approaches, NORC attempted to
address the shortcomings of previous evaluations which rendered their findings questionable.

1.3 Organization of Report

This report is organized into seven sections. Following this introduction, Section 2.0 of the report
provides a brief background of the QIO program, including a discussion of the historical roots of
the program, required quality improvement efforts, and QIO performance expectations. A
comprehensive review of the health services, policy and clinical literature pertaining to the
effectiveness of the QIO program is presented in Section 3.0 of this report. Sections 4.0 and 5.0,
respectively, present findings from the environmental scan and QIO site visits. Alternative
evaluation designs are presented in Section 6.0. Among the studies described in this section are data
collection activities and “non-evaluative” designs that are intended either to inform future
evaluations of the program or to improve understanding of program operations. Finally, Section 7.0 includes
suggestions to facilitate the on-going evaluation of the QIO program.

i The 7th SOW included the 3-year contract period that began in 2002 and ended in 2005. Contracts for the 8th SOW began in 2005 and end in 2008.


2.0 THE QIO PROGRAM

2.1 History and Background

The origins of the QIO program date back more than three decades to 1971, with the creation of
Experimental Medical Care Review Organizations (EMCROs). Under the EMCRO program,
voluntary groups of grant-funded physicians reviewed individual Medicare cases to reduce
unnecessary provision of services in both the inpatient and ambulatory settings. In 1972,
amendments to Title XI of the Social Security Act established Professional Standards Review
Organizations (PSROs). Like EMCROs, PSROs focused on utilization review and compared
questionable cases with local patterns and standards of care. Trained clinicians performed utilization
reviews to ensure that Medicare, which at the time employed a cost-based reimbursement approach,
compensated providers only for care that was medically necessary. Although these organizations
conducted Medical Care Evaluation Studies to address quality concerns as well, studies from the
1970s and 1980s found that PSROs had no significant impact on either quality or cost control.
A decade later, as part of the 1982 Tax Equity and Fiscal Responsibility Act and under the Peer
Review Improvement Act, Congress replaced the PSRO program with the Utilization and Quality
Control Peer Review Organization (PRO) Program. Using case reviews, PROs had the authority to
deny Medicare payments to hospitals and physicians if there was substantial evidence of unnecessary
or substandard care.
These earlier programs – EMCROs, PSROs, and PROs – focused on utilization review, cost-containment, and adherence to local practice patterns. They employed an “inspect and detect”
method to discover egregious cases and, if necessary, sanction providers for substandard care. As a
result, providers often perceived them as adversaries or regulatory agencies consumed with detecting
and punishing mistakes. A review by the Institute of Medicine (1990), however, concluded
that a collaborative approach to quality improvement would be more effective in improving
providers’ performance than an adversarial approach.
In response, the Health Care Financing Administration (now CMS) launched the Health Care
Quality Improvement Initiative (HCQII) in 1992 to analyze patterns of care and identify areas for
improvement. Subsequently, PROs were to examine practice patterns at the institutional, regional,
and national level, rather than focusing on individual physicians or hospitals that were suspected of
poor performance. Further, PROs were to collaborate with hospitals as partners in developing and
implementing hospital quality improvement initiatives (Jencks and Wilensky, 1992). These changes,
which were designed to transform the PRO from regulator to partner and to encourage
collaboration over discipline, represented a dramatic shift in vision. Symbolic of this change, in
2001 Congress officially renamed the PRO program the “Quality Improvement Organization
Program.” Activities under this new vision of quality assurance included assisting providers in
redesigning workflow and care processes and fostering partnerships for quality improvement.
Between the time of the PRO and the QIO program the sites of care also expanded. The PROs’
original jurisdiction was, for the most part, hospital and physician care. Nursing homes, home
health agencies, and Medicare Advantage (formerly Medicare + Choice) were added to their purview
over the years, so that QIOs now touch every major component of the Medicare program. Many
QIOs also conduct business outside of the QIO program, pursuing quality improvement activities
for other federal and state health care programs as well as for private organizations.

2.2 The QIO Program Today

Beginning in 1984 with the PRO program and continuing today under the QIO program, contracts
with Quality Improvement Organizations are issued for a 3-year period of performance. Eight
rounds of contracting have occurred since the shift to a 3-year contract cycle, hence bringing us in
2005 to the 8th SOW. In the 8th SOW, 41 organizations hold 53 separate performance-based
contracts with CMS; there is one contract for each state, territory, and the District of Columbia.
Most of these organizations serve as the QIO for only one state; however, a few organizations
function as multi-state QIOs. (Appendix A provides a list of all QIOs funded under the 8th SOW.)
Unlike their predecessors, QIOs today are staffed with employees who are versed, and often
certified, in a diverse range of quality improvement techniques and programs. Since their focus has
shifted from utilization review to quality improvement, they are required to establish relationships
with providers, medical professional associations, and numerous other quality improvement
stakeholders.
Much as it did with its predecessors, CMS requires that QIOs be physician-sponsored or physician-access organizations. They must be able to demonstrate that they either are owned by or represent
at least 20 percent of the licensed doctors of medicine and osteopathy who practice medicine or
surgery in the State. Alternatively, QIOs must demonstrate that they have arrangements with
doctors of medicine or osteopathy, including licensed providers from every specialty, who are in
active practice and available to conduct case review for the QIOs (CMS 2006a).
The contract consists in large part of four sets of tasks, or activities, that QIOs are expected to
perform. Tasks 1 through 3 are referred to here as the “core contract” since all QIOs are expected
to perform these activities. Task 4 refers to “non-core” activities. These are “Special Studies,”
which selected QIOs may be contracted to perform. Sections 2.2.1 and 2.2.2 provide general
information concerning the activities that QIOs are expected to perform under the 8th SOW
contract. Detailed information may be obtained from the SOW, which may be downloaded from
the CMS website at http://www.cms.hhs.gov/QualityImprovementOrgs/downloads/8thSOW.pdf.
2.2.1 QIO Activities under the Core Contract

Under the 8th SOW (CMS 2006b) core contract, QIOs are responsible for “assisting providers in
developing the capacity for and achieving excellence” in the provision of care to Medicare
beneficiaries across health care settings (Task 1); and “protecting beneficiaries and the Medicare
program” (Task 3). Under the 7th SOW, Task 2 of the contract required QIOs to engage in
beneficiary education and communications activities; the 8th SOW does not specify any Task 2
activities.
Task 1 - Assisting providers in developing the capacity for and achieving excellence: The
majority of a QIO’s time and resources is devoted to performing Task 1 activities, which include
the provision of technical assistance to nursing homes, home health agencies, hospitals, and
physician offices. As defined by the IOM, technical assistance is:

“The process by which Quality Improvement Organizations work with providers,
…to improve patient outcomes. This includes root-cause analysis, assistance with the
implementation of interventions and systems changes, facilitating knowledge transfer,
assisting with data collection, and coordinating efforts with other stakeholders.”
(IOM 2006).
CMS requires that QIOs divide their technical assistance activities between two different groups of
providers. First, QIOs are contractually obligated to offer technical assistance to all providers in a
state that request assistance on issues related to the quality improvement areas identified in the
SOW. Although the amount of assistance that must be made available to these providers is not
specifically prescribed, the SOW identifies targeted levels of performance improvement for
providers across the state.
The second group of providers is termed “identified participant groups” or IPGs. Providers
volunteer and are selected by QIOs to participate in an IPG. These providers receive intensive
technical assistance and participate in a number of quality improvement projects. It should be noted
that depending on the health care setting and CMS’ contractual requirements, QIOs may be
expected to work with more than one IPG. The number of providers that comprise an IPG is
determined under the terms of CMS’ SOW and is largely a function of the number of providers in a
state. Importantly, even though providers are included in and agree to participate in an IPG, they are
not contractually bound to work with QIOs.
Task 1 is subdivided into four subtask areas (designated by the letters “a” through “d”), each
focused on performance activities in a particular provider setting. These are identified below:
Subtasks      Health Care Settings
Task 1a       Nursing Home
Task 1b       Home Health
Task 1c1      Hospital
Task 1c2      Critical Access & Rural Hospitals
Task 1d1      Physician Practices
Task 1d2      Underserved Populations
Task 1d3      Part D Prescription Drug Benefits

Task 1a - Quality Improvement Activities Directed at Nursing Homes: In addition to all
nursing homes in the state, QIOs work with two groups of identified participants on improving
clinical performance measures and processes of care, setting improvement targets, and analyzing
resident and staff satisfaction. IPG1 works on various activities related to the reduction of pressure
ulcer rates, use of physical restraints, management of depressive symptoms and pain, as well as
collection of data on resident and staff experiences, which includes data to monitor turnover among
Certified Nursing Assistants (CNAs). IPG2 comprises a small number of nursing homes
(between 1 and 3 facilities) that have been identified by the state survey agency as “low performers.”
These providers work with the QIO to reduce rates of physical restraint use and pressure ulcers, as
well as to collect and monitor data on resident and staff experiences.

Table 2.1 summarizes the quality improvement areas that QIOs are expected to target in providing
technical assistance to nursing homes, both across the state and for IPGs.
Task 1b – Quality Improvement Activities Directed at Home Health Agencies: In the area
of home health, QIOs are expected to organize two IPGs, a Clinical Performance IPG and a
Systems Improvement and Organizational Change (SIOC) IPG. In addition to reduction in rates of
acute care hospitalization, providers in the Clinical Performance IPG are expected to work on the
continuous improvement of measures in the Outcome and Assessment Information Set (OASIS).
These include improvement on activities of daily living, such as transferring, ambulation, and
medication management. The focus of the SIOC IPG is on activities related to telehealth and
organizational quality culture change.
Other areas that serve as the focus of Task 1b, both statewide and for IPGs, are shown in Table 2.1.
Task 1c1 and 1c2 – Quality Improvement Activities Directed at Hospitals: In the hospital
setting, QIOs work with three groups of identified participants – the Appropriate Care Measure
(ACM) IPG, the Surgical Care Improvement Project (SCIP) IPG, and the Systems Improvement
and Organizational Culture Change (SIOC) IPG. Additionally, QIOs are expected to offer technical
assistance to rural and critical access hospitals.
Hospitals participating in the Appropriate Care Measure IPG work with the QIO on improvement
of process measures related to care rendered to patients hospitalized with an acute myocardial
infarction (AMI), heart failure (HF) or pneumonia. Those in the SCIP IPG receive assistance from
the QIO to adopt standards for the prevention of surgical site infections, cardiovascular
complications, venous thromboembolism, ventilator-associated pneumonia, and the use of fistulas for
hemodialysis. Finally, QIOs work with providers in the SIOC IPG to further the use of health
information technology, including Computerized Physician Order Entry and barcoding.
QIOs work with rural and Critical Access Hospitals largely at the statewide level to, among other
goals, increase the level of reporting of the Hospital Quality Alliance Measure Set and achieve
improvement in one clinical performance measure selected by the provider.
Task 1d1 through 1d3 – Quality Improvement Activities Directed at Physician Practices: QIOs
work with physician practices to enhance quality of care through numerous avenues. In terms of
clinical performance, QIOs provide technical assistance to promote the reliable delivery of
preventive services and better ensure the effective management of patients with chronic conditions,
especially those with diabetes and heart disease. QIOs are further expected to promote the
implementation and use of electronic health records.
To promote cultural competency in physician practices, QIOs are to encourage providers to
complete selected components of the Office of Minority Health’s Culturally and Linguistically
Appropriate Survey tool.
Finally, pursuant to enactment of the MMA, QIOs are expected to propose a study related to the
following areas: (1) improvement of prescribing using Part D data; (2) improvement of patient self-management through medication therapy management services; (3) improvement of disease-specific
therapy using integrated Part A, B and D data; or (4) another project approved by CMS.

Task 3 - Protecting beneficiaries and the Medicare program: Beneficiary protection activities
subsumed under Task 3 include the review of beneficiary complaints for quality of care concerns.
Other types of reviews under Task 3 include the following:
• violations of the Emergency Medical Treatment and Active Labor Act (EMTALA);
• use of assistants at cataract surgery;
• hospital-issued notices of non-coverage;
• notices of discharge and Medicare appeal rights;
• requests by hospitals for higher weighted DRG adjustments; and
• managed care organizations’ notices of termination of skilled nursing facility, home
  health agency, or comprehensive outpatient rehabilitation facility benefits.

The Hospital Payment Monitoring Program (HPMP), in which QIOs conduct reviews to assess the
accuracy of DRG codes, medical necessity, and the appropriateness of care, is further classified as a
Task 3 activity. In the 8th SOW, each QIO is to conduct a Special Study (which must be pre-approved by CMS) to address issues of inappropriate utilization or billing patterns.
2.2.2 QIO Activities Conducted Outside the Core Contract

Task 4 of the SOW comprises the Special Studies Program. The Special Studies Program
includes two different types of special studies—Quality Improvement Organization Support Centers
(QIOSCs) and all other special studies. The Special Studies Program is designed to gather
information for identifying best practices; examining or testing performance measures, tools or
technical assistance approaches; and, in general, addressing issues of specific interest or relevance to
CMS and the QIO program. For the most part, Special Studies are awarded competitively through a
“call for proposals” process. However, in some cases, CMS may choose to fund unsolicited projects
if a QIO submits a proposal of particular interest to the QIO program or CMS.
Quality Improvement Organization Support Centers are QIOs that receive funds to offer technical
assistance or support to other QIOs by providing them with the tools, training, information on best
practices, and other resources that they need to work effectively with providers to meet quality
improvement objectives. As of the 8th SOW, a total of 15 QIOSC contracts have been awarded.
There are two types of QIOSCs: (1) topic-specific and (2) cross-cutting. Topic-specific QIOSCs
offer the support that is necessary for QIOs to meet setting-specific or task-related objectives.
Examples of topic-specific QIOSCs include the Nursing Home, Home Health Care, or Hospital
Interventions QIOSCs. Cross-cutting QIOSCs support QIOs by providing technical expertise on
issues that transcend or cut across specific tasks. For instance, the MedQIC (Medicare Quality
Improvement Community) QIOSC maintains a website where providers and QIOs can access tools
and resources for quality improvement. Likewise, the Performance Improvement QIOSC offers
guidance and training on processes related to quality improvement, and the Data Reports QIOSC
maintains systems for reporting of data.

Table 2.1 QIO Activities by Setting, IPG and Statewide Providers, 8th SOW

1a-Nursing Home

IPG Activities (IPG 1 and IPG 2)
Core Activities:
• Decrease pressure ulcers among high-risk residents
• Decrease the use of physical restraints
• Improve the management of depression
• Improve the management of pain
Organizational Culture Activities:
• Assess staff and resident satisfaction using surveys
• Collect and monitor data on certified nursing assistant turnover

Statewide Activities
Improve Care Processes: QIOs may opt to work with a subset of nursing home providers to
document specific processes of care for 50% of new admissions:
• Skin inspection and pressure ulcer risk assessment
• Screening and treatment for depression
• Evaluation of physical restraint requirements or alternatives
• Pain assessment and treatment
Organizational Culture Activities: QIOs set statewide targets and assist nursing homes in setting
their own annual targets related to:
• Reducing the use of physical restraints
• Reducing pressure ulcers in high-risk patients

1b-Home Health

IPG Activities (IPG 1 and IPG 2)
Clinical Performance IPG Core Activities:
• Decrease acute care hospitalization
• Improve transferring
• Improve ambulation/locomotion
• Improve the management of oral medications
• Improve pain interfering with activity
• Improve status of surgical wounds
• Improve dyspnea
• Improve urinary incontinence
• Improve bathing
• Discharge to community
Systems Improvement and Organizational Change IPG Core Activities:
• Implementation and/or utilization of telehealth to reduce acute care hospitalization
• Conduct an organizational quality culture change survey that focuses on organizational
  practices, teamwork, communication, leadership, quality improvement, and patient centeredness
• Implement a plan of action based on the survey

Statewide Activities
QIOs must work with home health agencies statewide to:
• Incorporate influenza and pneumococcal immunizations in patient assessments
• Set targets for acute care hospitalization and other publicly reported OASIS measures

1c-Hospitals

IPG Activities (IPG 1, IPG 2, and IPG 3)
Appropriate Care Measure IPG Core Activities:
Use of appropriate care measures in the following clinical areas:
• Acute myocardial infarction
• Heart failure
• Pneumonia
Surgical Care Improvement Project (SCIP) IPG Core Activities:
QIOs help hospitals standardize processes for the following conditions:
• Surgical site infections
• Venous thromboembolism
• Ventilator-associated pneumonia
• Cardiovascular complications
• Fistula use in hemodialysis (vascular access)
Systems Improvement and Organizational Change IPG Core Activities:
• QIOs work with senior hospital leadership and the Board of Directors to engage them in using
  CPOE, bar-coding, or telehealth systems
• Help hospital leadership develop the business case for the use of these tools
• Educate hospitals in the use of these tools
• Help implement an interventions toolkit

Statewide Activities
QIOs work with hospitals statewide to:
• Improve clinical performance measurement and reporting
• Assess satisfaction and knowledge/perception
• Collaborate with all critical access hospitals to report Hospital Quality Alliance measures

1d-Physician Offices

IPG Activities
QIOs work with identified participants to:
• Focus on more reliable delivery of preventive services and effective management of patients
  with chronic conditions, especially diabetes and heart disease
• Report and improve on quality measures
Doctor’s Office Quality-Information Technology Program:
• Improve clinical performance measures through the implementation and use of electronic
  clinical information, i.e., electronic health records

Statewide Activities
QIOs statewide work to:
• Support collaborative quality improvement activities involving Medicare Advantage organizations
• Collaborate with End-Stage Renal Disease (ESRD) Networks to improve rates of fistula use and
  influenza and pneumococcal vaccinations
• Support the Physician Voluntary Reporting Program
• Improve clinical performance measure results for Medicare-underserved racial/ethnic populations
Physician Practice/Pharmacy Part D:
• Conduct a study to improve the safe delivery of prescription drugs

2.2.3 CMS Evaluation of QIO Performance

QIO contracts with CMS are considered “performance based” and, with few exceptions, the specific
technical assistance strategies that QIOs employ are not prescribed in the SOW. In all settings,
QIOs are evaluated by CMS according to the extent that they are able to meet or exceed identified
performance improvement targets. Understandably, given the focused assistance that these
providers are expected to receive, QIOs’ performance expectations are higher for IPG providers.
During the 8th SOW, baseline performance was scheduled to be measured during the first quarter of
2006 and remeasured in the fourth quarter of 2007, a period of less than two years.
CMS has developed a complex formula to evaluate each QIO’s performance on individual
subtasks. Each subtask is assigned a target performance level (e.g., a reduction of 35 percent or
more in the pressure ulcer failure rate) and an associated scoring weight.ii Based on whether
performance levels are achieved and on the sum of the scoring weights, each QIO receives one of
four categorical “scores” for each subtask: (1) excellent pass, (2) full pass, (3) conditional pass, and
(4) not pass. The determination of whether or not a contract is renewed is based on the number of
subtasks for which a QIO receives a score of “not pass.”
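
To make the scoring mechanics concrete, the sketch below shows how a weighted, categorical scheme of this general kind can be implemented. It is an illustration only: the cutoffs, weights, and example figures are hypothetical assumptions, not the values CMS actually uses, which are defined in the SOW.

    # Illustrative sketch only: the thresholds, weights, and category cutoffs
    # below are hypothetical assumptions, not the actual CMS scoring formula.

    def relative_reduction(baseline_failure, remeasured_failure):
        """Relative reduction in a failure rate, e.g., 0.35 = a 35% reduction."""
        return (baseline_failure - remeasured_failure) / baseline_failure

    def score_subtask(results, weights, targets):
        """Sum the weights of the targets a QIO met on one subtask, then map
        the weighted share to one of the four categorical scores."""
        met = sum(weights[m] for m in results if results[m] >= targets[m])
        share = met / sum(weights.values())
        if share >= 0.9:        # hypothetical cutoff
            return "excellent pass"
        if share >= 0.7:        # hypothetical cutoff
            return "full pass"
        if share >= 0.5:        # hypothetical cutoff
            return "conditional pass"
        return "not pass"

    # Example: a pressure ulcer measure with a 35 percent reduction target.
    results = {"pressure_ulcer": relative_reduction(0.20, 0.12)}  # a 40% reduction
    weights = {"pressure_ulcer": 1.0}
    targets = {"pressure_ulcer": 0.35}
    print(score_subtask(results, weights, targets))               # excellent pass

Under a scheme of this kind, contract renewal then hinges on the count of subtasks scored “not pass,” as described above.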

ii For certain subtasks, QIOs may receive “extra credit” for conducting additional activities or meeting selected goals.

3.0 REVIEW OF THE LITERATURE

This section reviews and evaluates the health services research, policy, and clinical literature on
Medicare’s Quality Improvement Organizations to provide a better understanding of the
effectiveness of the QIO program and the implications for quality of care. The review of literature
begins with a discussion of our approach for identifying literature and follows with a review of the
summative evaluations of the QIO program that have been conducted since 1999. Included in this
section of the report are summaries of recent qualitative and descriptive literature about the QIO
program and its quality of care interventions. Studies conducted by individual QIOs are also
reviewed in an effort to gather information on the impact of the QIO interventions on quality of
care in a variety of health care settings. This section concludes by reviewing the body of literature in
order to identify major trends and conclusions that can be used to inform policymakers, clinicians,
quality improvement professionals, and researchers about the effectiveness of the QIO program.

3.1 Methodology

NORC examined the published and unpublished literature on the QIO program, and reviewed
documents that analyzed, commented on, or evaluated the QIO program from 1999 to the present.
Given that the Medicare QIO program’s mission changed after 1999, studies that were conducted or
published prior to this year are not reviewed in this report. This criterion was set because the 6th
Scope of Work (SOW), which ran from 1999 through 2002, marked not only a change in title
(from Peer Review Organization to Quality Improvement Organization), but also a change in
focus. After 1999, QIOs were directed to initiate quality improvement partnerships and conduct
interventions and activities focused on systemic and process-related changes. The 7th SOW (2002-2005) expanded the program’s focus to include quality improvement in specific health care settings such as
nursing homes, home health agencies, managed care plans, and physician offices. Thus, by
reviewing the literature since 1999, this report analyzes the impact of the QIO program since the 6th
SOW, and in effect, draws conclusions about its effectiveness from the most relevant information.
Research studies, randomized controlled trials, quasi-experiments, cohort studies, cross-sectional
evaluations, qualitative analyses, and other reports were identified and summarized from peer-reviewed clinical and policy journals. Literature was identified through a number of methods,
including a systematic search of published materials using Medline, HSRProj, CINAHL and other
major health services research databases as well as the search tools of websites for major health
policy journals such as Health Affairs, The Milbank Quarterly, and the Journal of the American Medical
Association. We also requested relevant materials from the Institute of Medicine Subcommittee on
QIOs’ Evaluation. In addition to peer reviewed literature, we conducted a comprehensive search of
popular media and materials relevant to the topic of program evaluation published on the Internet
or otherwise publicly available through use of search engines such as Google.com. Searches were
conducted on Google Scholar to obtain access to the “gray” literature (e.g., reports and news sources
not catalogued in electronic peer-reviewed literature databases but available online). As expected,
many of the documents were found as a result of “snowballing,” in which the bibliographies or
citations from a source identified through traditional searches are examined to identify additional
sources.

3.2 Evaluations of the Effectiveness of the QIO Program

Overall, there have been few attempts to conclusively characterize the role of the QIO program and
its effectiveness with regard to improving quality of care for Medicare beneficiaries. This section
provides an extensive review of the few existing evaluations of the Medicare QIO program and
elucidates the limitations of the research. The goal is to provide a comprehensive understanding of
the research to date and the generalizability of study findings to the overall QIO program. This
section presents research evaluations that involve or include data from more than one QIO. As the
literature will show, there is not enough conclusive evidence to determine whether the QIO program,
as a whole, is effective in improving the quality of care for beneficiaries.
While the literature does not present a consensus about the effectiveness of the QIO program, the
studies are important because they provide unique approaches to evaluating the QIO program. This
section provides a thorough review of these QIO studies, beginning with three large-scale research
studies performed at the national level and then turning to studies conducted in specific health care
settings such as hospitals, nursing homes and long-term care settings, physician offices, and home
health settings.
3.2.1 Evaluations of the QIO Program at the National Level

Jencks et al. (2003) was the first national-level study to suggest improvement across multiple quality
indicators for Medicare beneficiaries receiving care in both the inpatient and outpatient setting.
Through an evaluation of quality indicators developed by the QIO program, Jencks and colleagues
concluded that quality of care for Medicare’s fee-for-service (FFS) plan beneficiaries did improve
between 1998-1999 and 2000-2001. Jencks et al. examined national and state-level changes in
performance on 22 quality indicators for Medicare beneficiaries in the 50 states, the District of
Columbia, and Puerto Rico. The authors collected CMS quality data on care delivered to Medicare
beneficiaries in 2000-2001 and compared it to baseline data collected in 1998-1999. The authors
found that 81 percent of the observations showed absolute increases in performance. Absolute
improvement was measured by the change in performance from baseline to follow-up for each of
the 1,144 pairs of data (52 jurisdictions x 22 quality indicators).
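
In symbols (a restatement of the study’s own counting, not an additional analysis), let $p^{b}_{s,i}$ and $p^{f}_{s,i}$ denote the baseline and follow-up performance rates for jurisdiction $s$ on indicator $i$. The unit of analysis is the pair-level change

    \Delta_{s,i} = p^{f}_{s,i} - p^{b}_{s,i}, \qquad s = 1, \dots, 52, \quad i = 1, \dots, 22,

giving $52 \times 22 = 1{,}144$ pairs, of which 81 percent showed $\Delta_{s,i} > 0$, that is, an absolute increase in performance.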
Using summary statistics for each quality indicator, Jencks et al. determined that the median absolute
improvement for the country overall was 3.9 percent. Additionally, the study reported the
proportion of beneficiaries receiving appropriate care by measuring the median indicator in the
median state. The results indicated an improvement from 69.5 percent in 1998-1999 to 73.4 percent
in 2000-20001. In a 2003 editorial in JAMA, Hsia cited these results and described the Jencks et al.
study as “valid, robust, understandable, and correct.” Hsia touted the results of the Jencks et al. study
as evidence that the QIO program is being led effectively by the CMS Quality Improvement Group.
However, while the study found substantial improvements in care for Medicare fee-for-service
beneficiaries consistent with QIO activities, the data could not attribute these improvements directly
to the QIO’s quality improvement efforts. The uncertainty stems from two methodological
limitations which were duly noted by the authors. First, the study lacks a control group and, second,
the study does not account for the national trend towards quality improvement during the study
period. Jencks et al. (2003) could only suggest that there is a growing trend towards quality
improvement for Medicare beneficiaries. The IOM (2006) report also recognizes the Jencks et al.
study, reporting the problem inherent in using cross-sectional data to demonstrate that quality
improvement was the direct result of QIO interventions.
In an earlier study published in 2000, Jencks et al. conducted a national study of cross-sectional
observational data of all Medicare fee-for-service (FFS) populations diagnosed with heart failure,
stroke, pneumonia, breast cancer and diabetes and measured changes in their process of care over
1997-1999. (Though data collection began in 1997, the third data sample was gathered between
October 1998 and March 1999, thus making the study eligible for review in this report.iii) The
authors measured these changes by assessing beneficiaries’ receipt of 24 process-of-care measures.
The study also provided a state-level analysis of quality activities. The authors found a wide
variation of performance on quality measures across states, with some states performing consistently
well and others poorly. Interestingly, several less populous states and those in the northeast ranked
high in quality performance. The more populous states and states clustered in the southeast region
of the country tended to rank lower. After reviewing the findings, the authors acknowledge that it is
difficult to attribute any of the changes in quality indicators directly to QIO activities. However,
Jencks et al. (2000) noted that there is strong evidence that QIOs can contribute to significant
improvement in care if they provide effective technical assistance to providers, facilitate providers’
delivery of care to beneficiaries, and serve as conveners for partnerships among local stakeholders.
The Jencks et al. studies provide an important discussion of the limitations associated with using
national-level data in evaluations of the QIO program. Studies that use only national-level or state-level data cannot be used to infer QIO performance in specific health care settings. National
aggregate data may not lead to significant conclusions about the impact of QIO interventions and
activities in particular health care settings such as physician offices, hospitals, and post-acute and
long-term care settings. For example, the Jencks studies were not able to capture the interactions
between QIOs and hospitals or the relationship between QIOs and long-term care facilities.
Furthermore, as policymakers and practitioners develop more comprehensive evaluations of the
QIO program, Jencks et al. (2000) indicates that time should also be spent examining the extent to
which current quality measures represent and accurately reflect the quality of care for Medicare
beneficiaries. The authors note that the generalizability of the conclusions to the QIO program’s
effectiveness hinges both on the validity of the measures used to evaluate quality of care and the
accuracy of study data.
Most recently, Rollow et al. (2006) conducted an observational study to evaluate the impact of the
Medicare QIO Program in four clinical settings: nursing homes, home health agencies, physician
offices, and hospitals. This comprehensive effort looked across all 53 QIO contracts during the 7th
SOW and focused on overall performance for 41 quality measures in these four clinical settings
between 2002 and 2005. Rollow and colleagues assessed (1) whether quality improved in the QIO
7th SOW and (2) whether QIOs facilitated or contributed to the quality improvement. Rollow et al.
found that clinical quality improved for Medicare beneficiaries on 34 of the 41 quality measures
examined.
For the purposes of the study, Rollow et al. examined quality improvement in clinical measures for
identified participant group (IPG) providers and non-IPG providers. During the 7th SOW, QIOs
were required to offer and provide technical assistance to health care providers, and recruit a subset
of providers, known as IPGs, for more focused interventions. IPGs comprise a subset of
providers from the state or jurisdiction’s nursing homes, home health agencies, and physician offices
that volunteer to receive direct QIO assistance with clinical measures. QIOs were asked by CMS to
recruit a certain percentage of IPGs from each clinical setting (except for hospitals). However,
QIOs had autonomy over the selection process. Assistance to IPGs varied from written and
electronic correspondence from QIOs to on-site visits and educational interventions.

iii It is possible that the results of the study reflect performance during the 6th or 7th SOW.

The study examined IPGs across clinical settings and utilized performance data for 41 quality
measures (5 nursing home quality measures, 11 home health quality measures, 21 hospital measures,
and 4 physician office measures). By comparing performance for IPGs, which received direct
technical assistance from QIOs, and non-IPGs, which did not, the authors were able to study the
impact of QIO technical assistance and, in effect, speak to the overall impact of the QIO program.
Rollow et al. compared quality improvement from baseline to remeasurement for a sample of non-IPG providers and IPG providers for nursing homes, home health agencies, and physician offices.
The authors utilized Medicare claims data from 2002 to 2005, though the exact year for baseline and
remeasurement varied. The authors presented an overview of the characteristics of participating
nursing homes, home health agencies, and physician offices, including descriptive statistics. The authors
noted the observed differences in provider characteristics between IPG and non-IPG groups for
each care setting except for hospitals. Because IPG data were not available for hospitals, it was
impossible to determine the relative intensity of QIO assistance for the hospital setting. While the
authors could not employ a comparison group, they tracked trends in hospital performance by
drawing random samples of 125 inpatient records for Medicare patients with specific health
conditions in each state.
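
At its core, this design contrasts the mean baseline-to-remeasurement change for IPG providers with the same change for non-IPG providers. The minimal Python sketch below illustrates that contrast; the facility-level rates are hypothetical, and the study’s actual measures and statistical methods are more involved than this.

    from statistics import mean

    def mean_improvement(baseline, remeasure):
        """Mean change from baseline to remeasurement across providers."""
        return mean(r - b for b, r in zip(baseline, remeasure))

    # Hypothetical facility-level rates on one quality measure (higher is better).
    ipg_base, ipg_follow = [0.62, 0.58, 0.70], [0.71, 0.66, 0.78]
    non_base, non_follow = [0.63, 0.60, 0.69], [0.66, 0.62, 0.71]

    # A difference-in-differences style contrast: how much more did IPG
    # providers improve than non-IPG providers?
    extra = mean_improvement(ipg_base, ipg_follow) - mean_improvement(non_base, non_follow)
    print(round(extra, 3))  # 0.06 with these illustrative numbers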
For the five nursing home quality measures, Rollow et al. examined data from baseline to follow up,
and determined that IPG nursing homes experienced greater improvement than non-IPG nursing
homes for all five measures; the greatest improvements were in chronic care pain, short-stay pain,
and restraint use. Data for home health agencies demonstrated improvement in mean facility rates
for 10 out of 11 measures. For all but one measure (acute care hospitalizations), improvement was
greater for IPGs than for non-IPGs. Quality improvement was also apparent in the physician
office setting, where IPG offices experienced improvement on all four clinical measures.
Interestingly, non-IPG offices demonstrated improvement on two of the four measures but
poorer performance on the mammography and diabetic retinal eye examination
measures. For the hospital setting, for which no comparison group was utilized, 19 of 21 measures
demonstrated improvement between baseline and remeasurement.
Several weaknesses in study methodology were noted by the authors. First, the hospital setting
lacked a comparison group because the necessary data were not available. Second, the authors used different years
for baseline and remeasurement for each clinical setting, making comparisons difficult across time.
The authors explained that using identical baseline and remeasurement periods for each health care
setting was impossible given limitations in the data sets and “contractual reasons.” Third, because the
nursing home and home health agency data were self-reported, study
results may be biased upward in terms of quality improvement. Finally, the authors noted that the non-IPG groups
also received assistance from QIOs, making it difficult to disentangle the relative differences in
assistance between the two groups. The authors noted that the observed relative improvements in
quality may be smaller than they would have been had non-IPGs received no assistance from QIOs.
Rollow and colleagues (2006) commented on the challenges associated with measuring the impact of
the QIO program from a body of literature fraught with methodological limitations: “The extent to
which [nationwide quality improvements] are attributable to the efforts of health plans, accreditors,
or QIOs is unclear, given the absence of comparison groups.” In the future, the authors plan to
look into the potential for randomized selection of IPGs or matching IPG providers and non-IPG
providers to enhance the accuracy of the study results.
3.2.2 Evaluations of the QIO Program in Hospital Settings

In addition to macro-level evaluations of the QIO program, several researchers have used data from
multiple QIOs to examine the program’s effectiveness with respect to quality improvement in
specific health care settings. The most recent studies have focused on quality improvement in
hospitals. Since 1999, two evaluations of the QIO program have focused on quality improvement in
hospital settings: Snyder and Anderson (2005a) and Bradley et al. (2005). The more controversial of
the two was published by Snyder and Anderson of the Johns Hopkins School of Public Health in
2005.
Snyder and Anderson (2005a) concluded that improvement in quality of hospital care could not be
definitively attributed to QIOs and that additional efforts to assess the QIOs’ effectiveness may be
needed. The retrospective study explored quality of hospital care for Medicare beneficiaries in
hospitals that voluntarily participate with QIOs compared to non-participating hospitals. The
objective of the study was two-fold: (1) to explore characteristics of hospitals that voluntarily
participate with the QIOs and (2) to determine whether the quality of hospital care for Medicare
beneficiaries is higher in hospitals that voluntarily participate with Medicare’s QIOs compared to
nonparticipating hospitals. The researchers characterized a hospital as “participating” if, as a result
of working with the QIO, the hospital collected quality data by itself or with the help of its QIO, or
the hospital implemented systems changes such as chart reminders or critical pathways. A “nonparticipating” hospital did not perform either of these activities. Snyder and Anderson reviewed
40,000 medical records from Medicare beneficiaries in five states (Maryland, Nevada, New York,
Utah, and Washington) and the District of Columbia to examine quality of hospital care. The
researchers evaluated the data to determine how participating and nonparticipating hospitals were
performing with respect to 15 quality indicators in five clinical areas. Data were abstracted in 1998
at baseline and reviewed in 2001-2002 during follow-up.
Snyder and Anderson reached several conclusions. First, the data showed that there were
statistically significant differences in the characteristics of hospitals that participated with QIOs
versus those that did not. For example, across the five clinical areas studied, participating hospitals
tended to be smaller and not-for-profit. Snyder and Anderson also found that between 56 percent
and 69 percent of hospitals could be classified as “participating hospitals.” However, in terms of
quality improvement, Snyder and Anderson found that almost all of the differences between the
group of participating hospitals and the group of nonparticipating hospitals were too small to be
statistically significant. Of the 15 quality indicators tested, only one indicator – “patient screened for
or given pneumococcal vaccine” – suggested that participating hospitals had a statistically significant
greater improvement than nonparticipating hospitals (p=.005). As a result, the researchers
concluded that their study did not definitively determine that QIOs improve quality of hospital care
for Medicare beneficiaries, given that hospital quality of care is improving regardless of QIO
interventions.

Several critiques of Snyder and Anderson’s work have been published. In a 2005 JAMA Letter to the
Editor, Jencks critiqued the study’s methodology, specifically citing that Snyder and Anderson’s
results could not be generalized to the entire QIO program. One concern cited was that the
researchers had not appropriately characterized and defined an intervention and nonintervention
hospital. Jencks asserted that the study’s nonintervention site was biased because QIOs had some
impact on all of the hospitals during the study period. In the same issue of JAMA, another Letter to
the Editor by Bratzler (2005) raised the point of selection bias: nonparticipating hospitals were likely
those for which QIOs intentionally decided to limit interventions because these hospitals already
had active quality improvement initiatives. If this were, in fact, the case, then the study would not
have a real nonintervention group, making the study susceptible to a Type II error – a false-negative
finding.
Other criticisms were raised by The American Health Quality Association (AHQA), the national
non-profit association that represents the QIOs. AHQA disputes the results of the Snyder and
Anderson study because the authors evaluated outdated study data. AHQA’s prime concern is that
the study’s results are not truly representative of the QIO program as it exists today (Kulkarni 2005).
Other concerns were related to the observation period and the sample size. Both Jencks (2005) and
AHQA expressed concern that an 18-month study period was too short to see real results. While
QIOs sign three-year contracts to perform quality improvement activities, the authors did not
examine data from a full contract period, affecting the study’s results. Jencks (2005) and Sugarman
(2005) also noted that the study lacked statistical power; given the small sample size, it was unrealistic
to evaluate statistical significance of individual quality measures at the state level. Sugarman stated
that “serious methodological flaws in the [Snyder and Anderson] study render the finding nearly
meaningless.” Sugarman further noted that policymakers and researchers would be hard-pressed to
generalize the findings of Snyder and Anderson’s 2005 study to the larger QIO program and its
impact on quality of hospital care for Medicare beneficiaries.
Snyder and Anderson (2005b) replied to the concerns noted by Jencks, Bratzler, and others in a
JAMA Letter to the Editor. The authors addressed Bratzler’s concern about selection bias with respect
to hospital participation, noting that “while hospital participation is difficult to define, subject to
misclassification, and vulnerable to spillover effects, the study findings are supported by several
factors.” One factor cited by Snyder and Anderson was that sensitively analyses were conducted as
part of the study. The analyses varied the definition of hospital participation, but study results
remained unchanged. The authors also recognized the concerns raised by Jencks regarding the
length of the study period and the timing of follow-up. Finally, the authors noted that “future
research evaluating the QIO program, conducted by independent investigators and using concurrent
control groups is needed.”
While less quantitatively rigorous than the study by Snyder and Anderson, Bradley et al. (2005)
conducted a descriptive study that evaluated qualitative data about the potential effectiveness of QIO
interventions in hospital settings. The authors conducted a cross-sectional study of randomly
selected acute care hospitals in order to explore the effectiveness of the QIO program with regard to
improving the quality of hospital care for Medicare beneficiaries. The study randomly selected 105
acute care hospitals from the Centers for Medicare and Medicaid Services’ (CMS) Online Survey
Certification and Reporting System (OSCAR) in 2001. The authors interviewed the Director of
Quality Management at the 105 hospitals between January and July 2002. The survey instrument
assessed various levels of interaction between the hospital quality department and the QIO,
including the “prevalence and helpfulness of QIO interventions” that related to quality
improvement and acute myocardial infarction (AMI) in the last year, and the “perceived impact of
the QIOs on quality of care” (Bradley et al. 2005). The latter variable was measured by surveying the
Director of Quality Management at each hospital about the QIO interventions over the last three
years in order to gauge whether the intervention contributed to quality improvement of AMI care
and to what extent quality of AMI care may have been different in the absence of the intervention.
Respondent and hospital characteristics were also reported.
The study found that 73 percent of quality management directors indicated the amount of contact
with their QIO to be appropriate, with 27 percent preferring more contact with their QIO with
regard to improving AMI care; none preferred less contact. Respondents from for-profit hospitals,
in comparison to non-profit or government-owned hospitals, typically had more contact with their
QIOs (p=.04). Over 75 percent of the respondents reported that their QIO had provided them
with educational materials and data and 70 percent indicated that the QIOs conducted educational
programs. The authors measured which QIO interventions were reported to be “very helpful” to
hospitals. Over 70 percent of respondents reported the following QIO activities as very helpful:
development of quality indicators, participation in quality improvement teams, and provision of
benchmark data. Most importantly, the study assessed whether directors reported that QIOs
changed the quality of AMI care. Thirty-two of the 99 respondents (32 percent)
indicated that their hospital’s quality performance with respect to AMI care would have been
different without the QIO intervention. Nearly 40 percent of directors reported that the QIO had
neither positively nor negatively affected the quality of AMI care. Of the subset who indicated that
their quality of care would have been different, 72 percent of respondents thought that quality of
AMI care would have been worse in the absence of the QIO interventions. Bradley et al. surmise
that directors found the QIO interventions to have positively influenced the quality of AMI hospital
care and recommend several ways that QIOs can improve their effectiveness with respect to
AMI care. These include improving the timeliness of data circulation between QIOs and the
hospitals; appealing to physicians directly rather than quality management staff; and reaching out to
senior management staff directly.
One limitation of the Bradley et al. study is that the authors did not assess the QIOs’ perceptions of
their own role; doing so could potentially have cast the study findings in a different light. The study
also has limited statistical power given its small sample size. Despite these limitations, the study is
an important contribution to the QIO literature. Bradley et al. was the only study in the literature
focused specifically on the perspectives of directors of quality management, although these
individuals are arguably the integral link between hospitals and QIOs.
3.2.3 Evaluations of the QIO Program in Post-Acute and Long-Term Care Settings

One program-wide evaluation in nursing home or home health settings was identified in the
literature since 1999. The study was a demonstration project for the Centers for Medicare and
Medicaid Services (CMS) conducted in 9 states between 1999 and 2002 in order to assess whether
QIOs could play a role in promoting standing order programs (SOPs) in long-term care facilities
(Shefer et al. 2004; Shefer et al. 2005). According to the Government Accountability Office (2002), an
SOP is used to increase preventive care services such as vaccinations in nursing homes and other
long-term care facilities. The SOP enables a nonphysician such as a nurse or pharmacist to provide
a vaccination without a physician’s order. The study evaluated the impact of QIO-directed
intervention projects in eight states; another five states served as controls. Specifically, the QIOs in
the intervention states worked with long-term care facilities, providing relevant education materials
and conducting site visits. Data were collected in June 1999 about the vaccination programs in the
long-term care facilities and again in March 2001 at follow up. The pre-post evaluation revealed that
a larger percentage of facilities in intervention states adopted influenza and pneumococcal SOPs
than facilities in non-intervention states, suggesting that QIOs may have played a critical role in
promoting the systems change. While these results are informative, it is not clear whether the
adoption of SOPs led to higher quality of care at the long-term care facilities. Without further
research, it is difficult to conclude that the QIO interventions in this demonstration project
actually resulted in higher quality of care for Medicare beneficiaries in long-term care facilities.
Furthermore, more studies would be necessary to determine whether QIO interventions have had a
positive effect on quality of care for Medicare beneficiaries in long-term care settings.
3.2.4 Evaluations of the QIO Program in Primary Care Settings

No summative studies of the QIO program with a focus on primary care were identified in the
literature since 1999.

3.3 Qualitative Reviews of the Evolution and Impact of the QIO Program

Several articles reviewed and commented on the evolution of the QIO program and its history of
quality improvement. While they offer some insight into the effectiveness of the QIO program,
they are structured as descriptive pieces rather than formal evaluations. However, the complexity of
the QIO program warrants many different types of evaluations and studies, varying in scope and
methodology. The descriptive literature on the QIO program was included in this report to provide
the audience with a better understanding of the qualitative research and analysis available since 1999.
Future evaluations of the effectiveness of the QIO program should be informed by descriptive
reports and rigorous quantitative studies.
The most important review of the QIO program to date was conducted by the Institute of
Medicine (IOM 2006). The IOM developed an assessment of the QIO program not only to
evaluate its impact on the quality of health care for Medicare beneficiaries, but also to examine the
extent to which other organizations could perform the quality improvement functions currently in
the realm of QIOs. The report cited several challenges inherent in assessing the impact of QIOs on
improving the health care of Medicare beneficiaries. One of these challenges is a result of the
changing requirements of QIO activities in each scope of work, making it difficult to assess changes
in the impact of a similar set of activities over time. The report also asserts that the current evidence
available is inadequate to determine the extent to which the QIO program has contributed to
improvements in health. At the conclusion of the report, the IOM offers several recommendations,
including one to make more data available about the impact of the QIO program. The IOM (2006)
posited that CMS should develop four types of evaluations to assess the QIO program, three of
which would be internal evaluations to assess QIO performance against CMS-determined goals and
priorities. One of these evaluations would evaluate the program as a whole. The second would
evaluate individual QIO performance against the core contract. The third would evaluate selected
quality improvement interventions. The fourth evaluation would be external, conducted by an
independent party, and would evaluate the overall contributions of the program.
The IOM suggested that these evaluations include qualitative aspects to reflect the nuanced nature
of the QIO’s role in quality improvement relative to that of other actors. They recommended that
evaluations should look at a variety of provider settings and locations and should assist with
resource allocation by analyzing the cost-effectiveness of interventions. The authors noted that a
program evaluation should incorporate an assessment of CMS management and oversight over the
program as a part of judging the program’s overall effectiveness. The IOM offered a number of
methodologies for the evaluation design (randomized controlled trials, time series and crossover
analyses, studies with nonequivalent control groups, case control studies and qualitative analyses).
Between 2004 and 2006, the American Health Quality Foundation (AHQF) analyzed the role of the
QIO program in health information exchange initiatives and activities. AHQF and the e-Health
Initiative (EHI) worked together to convene an expert work group to discuss the role of the QIO
program with respect to information exchange given the backdrop of new health information
technology activities and opportunities at the local, state, and national level. The expert work group
met via teleconference three times, for two hours each. This work group led to the
development of a survey in which QIOs were asked to document their activities with respect to health
information technology.
In addition to the survey, follow-up interviews were conducted with QIO representatives. QIO
representatives from 26 states responded to the AHQF survey. The results were published in
March 2006. The authors found that QIOs can and do play an integral role in fostering health
information exchange. Such activities include working with stakeholders in the community,
consensus-building with respect to health information exchange, creating governance structures for
health information exchange, and supporting physicians in clinical processes.
Sprague (2002) examines the role of QIOs in improving quality of medical care delivered to both
FFS and managed care Medicare beneficiaries. Sprague’s methods included qualitative research and
several telephone interviews with QIO researchers and hospital group executives. This paper traces
the evolution of PSROs to PROs and then QIOs. The paper then provides an overview of the
structure of the QIO program and a thorough description of the seventh contract cycle. The last
section of the paper raises several policy issues surrounding the QIO program. Sprague concludes
that while QIOs have the potential to foster culture change fundamental to overall quality
improvement, it is yet unclear whether their partnership efforts are sufficient to drive that change.
In another paper, Bhatia et al. (2000) described the evolution of the QIO program over time, noting
its successes and future direction with respect to each Scope of Work. Bhatia and colleagues
reviewed various papers written during the 6th SOW to provide a picture of the contract and tasks.
The article addressed the challenge faced in evaluating the impacts of the PRO program, specifically
that while PROs themselves reported improvement in two-thirds of their projects during the fourth
and fifth contract periods, CMS was “not able to demonstrate any overall improvement or impact
on quality.” The authors also elaborate on future directions of the program, including potential
improvements with regard to the structure and emphasis on partnerships.
The next section explores the program’s effectiveness by reviewing studies about individual QIO
initiatives in different health care settings.

3.4 The Impact of Individual QIO Quality Improvement Initiatives

In its 2006 report, the IOM suggested that collecting data from studies of individual QIO quality
improvement interventions can provide important knowledge about which QIOs are excelling and
which types of quality improvement activities produce the best results. Heeding this suggestion, the
following section reviews a subset of the available literature on individual QIO interventions and
activities with respect to priority clinical conditions listed in the 7th and 8th SOW. It is crucial to
acknowledge that these studies are QIO-specific, and as a result, the findings do not depict the
effectiveness of the broader QIO program. A review of these studies did not produce a
strong conclusion regarding the collective impact of the QIO program’s quality improvement
initiatives. Some interventions have been more effective than others with regard to improving care.
These studies do suggest, however, that QIO interventions can catalyze improvements in process
measures and, to a lesser degree, outcome measures in care settings.
The selected studies represent a diverse group of QIO-led interventions since 1999. The literature is
separated by health care setting to reflect the priorities of the 7th and 8th SOWs and illuminate areas
for further research.
3.4.1 Evaluations of QIO Initiatives in Hospital Settings

In the 7th SOW, QIOs focused on hospital projects related to acute myocardial infarction,
heart failure, prevention of post-surgical infections, and pneumonia. Hannah et al. (2005) evaluated a
statewide partnership of the West Virginia Medical Institute (the West Virginia QIO) and health
organizations. The research focused on improving vaccination rates in hospitalized Medicare
beneficiaries. The authors concluded that, as a result of an intervention which consisted of training
meetings, assistance, and other educational materials, the rate of assessment for immunizations at
patient discharge increased statewide. Though the increases were impressive over the two-year study
period (1999-2001), the study cannot conclusively attribute the increases in the rate of assessment
for vaccinations to the partnership’s intervention. The authors do suggest that hospitals can play a
substantial role in improving quality.
The 8th SOW has also expanded its focus to include process-oriented and system-wide quality
improvement. This priority is reflected by another hospital-focused study (Schade et al., 2004),
which used quality data collected by the West Virginia Medical Institute to assess whether audit and
feedback systems improved quality of care for Medicare beneficiaries in all 44 acute care hospitals in
West Virginia. In this study, hospitals were offered quality reports from 1998 to 2001 that outlined
their performance on 15 quality indicators during this period. An analysis of the data found that 14
of the 15 quality indicators studied showed statistically significant improvements when tested after
the intervention (p<.05). The study suggests that quality improved as a result of the feedback to
hospitals. However, the study did not actively engage the QIO in any aspect other than data
collection.
3.4.2 Evaluations of QIO Initiatives in Post-Acute and Long-Term Care Settings

Quality improvement in long-term and post-acute care settings is also a large component of the 7th
and 8th SOWs. This is a reflection of the 7th SOW’s expansion to include new requirements that
mandate QIOs to conduct quality improvement projects in a variety of different health care settings
such as home health agencies and nursing homes. Research also continues to suggest that quality
improvement for Medicare beneficiaries in long-term care settings is necessary. For example, the
Government Accountability Office (GAO) reported in 2005 that there are still challenges to
improving nursing home quality and safety and, as a result, quality improvement is an important
priority. However, since 1999, few studies have been published about the impact of QIO
interventions on the quality of care in nursing homes. The studies cited in this section were all
published after 2002. Due to the paucity of literature, it is difficult to conclude whether QIO
interventions have been instrumental in fostering quality improvement in long-term care settings.
However, the literature does suggest that quality improvement activities carried out by QIOs may be
able to produce quality of care improvements in nursing homes and home health agencies.
Furthermore, the literature to date, however limited its explanatory power regarding the larger QIO
program, will certainly inform future quality improvement activities in nursing home and home
health settings.
Turning to QIO interventions in nursing home settings, one notable quasi-experimental study on pain management and quality improvement found that collaborative quality improvement efforts reduced rates of pain among Medicare beneficiaries admitted to Rhode Island nursing homes. The 2004 study was conducted by Baier et al. of Brown University in conjunction with Quality Partners of Rhode Island, the state's Quality Improvement Organization. Baier et al. developed a five-pronged intervention for Rhode Island Medicare- or Medicaid-certified nursing homes in order to evaluate its impacts on processes of care and outcomes. Between August 2000 and December 2001, Baier et al. conducted the quality improvement intervention in 17 participating facilities; it was composed of pain management education, audit and feedback, a systematic approach of Plan-Do-Study-Act (PDSA) cycles, mentoring for participating nursing homes, and collaboration between nursing homes. Using a pre-post study design, Baier et al. measured quality improvement within facilities, and also aggregated across facilities, at baseline and follow-up by examining medical chart data of residents with a pain diagnosis. The study found that, at the aggregate level, three measures of nonpharmacological processes of care showed statistically significant improvements between baseline and follow-up (p<.001): appropriate pain assessment, use of a pain intensity scale, and nonpharmacological treatment.
While the Baier et al. study suggests that QIO-led quality improvement initiatives may enhance care
in nursing homes, limitations in the study’s methodology may preclude its generalizability to other
QIO interventions in nursing home settings. As the authors note, because the study includes only residents with pain, those residents were less likely to have been receiving a pharmacological intervention before the study's intervention began. This characteristic of the sample may bias the data, such that improvement in the outcome measure (pain reduction) could be observed even where none exists. Future research is necessary to confirm these findings.
In addition to the Baier et al. (2004) study, Abel et al. (2005) examined the impact of a quality improvement initiative in nursing homes. The Texas-based project analyzed the implementation of a pressure ulcer prevention project conducted by the Texas QIO, the Texas Medical Foundation (TMF). The goal of the project was to improve pressure ulcer prevention in twenty Texas nursing homes by assigning quality improvement teams to participating facilities. Through an analysis of medical record data on quality indicators between November 2000 and August 2002, the authors found that the system changes were statistically associated with quality improvements: nursing homes showed statistically significant improvements on 8 of 12 performance measures. Most interestingly, Abel et al. found that the facilities that experienced the greatest improvements in quality also had lower pressure ulcer incidence rates relative to nursing homes with the least quality improvement. These results do suggest that QIO-nursing home collaboratives related to systems change may lead to improvements in care.
Cortes (2004) also evaluated the impact of quality improvement program initiatives in Texas nursing
homes. Cortes concluded that quality improvement interventions conducted in 2002 and 2003 by
TMF (Texas’ QIO) and the Texas Department of Human Services (DHS) reduced the prevalence of
restraints in Texas nursing facilities. While this is an informative conclusion from a quality
improvement standpoint, the main goal of the paper was to disentangle the data in order to estimate
how much quality improvement could be attributed to the interventions conducted by TMF versus
those conducted by DHS. Cortes was specifically interested in the “attributable fraction” of the
improvement, namely the amount of improvement that could be attributed to TMF’s activities as
opposed to those of DHS. During the quality improvement interventions, TMF provided resources
such as provider education and technical assistance in its nursing facility intervention, while DHS
conducted unannounced visits at the nursing facilities and restraint reduction training sessions. The
quality improvement interventions by DHS and TMF differed in other structural ways as well.
Based on aggregate restraint data for Texas nursing facilities, Cortes (2004) determined that 90 percent of the restraint reduction was attributable to DHS's efforts, with only 10 percent of the improvement attributable to TMF. Cortes (2004) was not able to explain why DHS had a greater impact on restraint reduction than the Texas QIO due to several study limitations, one of which was that the study sample was not randomly selected. This factor could have introduced self-selection bias.
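In general terms, an attributable fraction of this kind can be expressed as the share of the total observed improvement accounted for by each intervention. The following stylized decomposition is offered only as an illustration and is not necessarily the estimator Cortes used:

\[ AF_{TMF} = \frac{\Delta_{TMF}}{\Delta_{TMF} + \Delta_{DHS}}, \qquad AF_{DHS} = 1 - AF_{TMF} \]

where \( \Delta_{TMF} \) and \( \Delta_{DHS} \) denote the portions of the reduction in restraint prevalence associated with each organization's activities. Under Cortes's estimates, \( AF_{DHS} = 0.90 \) and \( AF_{TMF} = 0.10 \).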
Cortes (2004) makes two important points. First, federal and state quality improvement
interventions are not redundant. Second, given that quality improvement interventions are
occurring concomitantly, it is crucial for QIOs to collaborate with state agencies by sharing
information and data necessary to assess the effectiveness of quality improvement activities.
In addition to studies that examine quality improvement in nursing homes, two recent QIO studies
focused on Outcome-Based Quality Improvement (OBQI) in home health agencies. In 1999, home
health agencies were required to begin collecting Outcome and Assessment Information Set
(OASIS) data to track patient outcomes. The OASIS data were used to develop the OBQI program so
that each home health agency could identify and adopt continuous quality improvement activities
(CMS 2006c). With no other mechanism in place to educate home health agencies about their new
OBQI responsibilities, CMS identified QIOs to provide technical assistance, education, and training.
CMS piloted the OBQI process in five states under the auspices of the QIOs in Maryland, Michigan,
New York, Rhode Island, and Virginia. Maryland’s QIO, the Delmarva Foundation, was the lead,
while the four other QIOs were selected to implement quality improvement efforts in home health
agencies in their respective states. The goal of the pilot projects was to support quality
improvement activities in home health agencies and also to determine whether QIOs were the
appropriate institutions to facilitate the OBQI process on a larger scale. In total, 877 representatives of 425 home health agencies participated across the five states. Two particular
analyses from the pilots in Maryland and Michigan were identified for this literature review.
In 2002, Chisholm and Murdock published a paper about Maryland’s experience as a participant in
the OBQI pilot project. The Delmarva Foundation for Medical Care, the QIO for the state of Maryland, conducted focus groups with 39 participating home health agencies in order to foster conversations about OASIS data collection and methodology. A training session was held in 2001, in which Delmarva worked with home health agencies to educate agency representatives about OBQI and develop appropriate plans of action. Home health agency representatives were then
expected to return to their respective agencies and implement what they learned at the training
sessions.
While the Chisholm and Murdock analysis does not provide a statistical interpretation of performance
improvement, the paper documents the successes and barriers reported by the participating
Maryland home health agencies and the Delmarva project coordinator during the pilot. For
example, some agencies found it difficult to reach consensus when selecting outcome measures for
the pilot. Another challenge noted was that some agencies attempted to change too many care
behaviors at the same time. Agencies also reported that resources were too limited to involve
clinical staff members in the process-of-care investigation. In light of these challenges, the
Delmarva project coordinator’s role was to provide assistance and guidance to the participating
agencies. The collaboration between the coordinator and the agencies was important, but Chisholm and Murdock note that final decisions about process were always made by the agencies during this pilot. The
authors find that “QIOs are uniquely situated to work within the states to meet agencies’ unique
circumstances and needs.”
Similarly, the Michigan Peer Review Organization (MPRO) was selected as a pilot site to implement
the OBQI process. A paper by Allen et al. (2004) indicated that MPRO worked with 69 home health
agencies in Michigan during the pilot project. In order to foster continuous quality improvement in
the home health agencies, MPRO worked with agency participants between January and February
2001 to train them about OBQI. Participants received training about quality improvement and
quality assurance, OASIS data, the OBQI process, and team building. MPRO also offered technical
assistance to home health agencies throughout the entire project.
The paper by Allen et al. presented aggregated outcome performance for the participating Michigan home health agencies. The paper suggested that participating agencies experienced performance improvement for a variety of outcomes. In aggregate, the agencies' improvement was statistically significant relative to baseline performance (p<.03). Allen et al. concluded that the OBQI
process contributed to these improvements. Additionally, the authors note that the results “reflect
positively” on the QIO training materials and education sessions.
More information about the impact of QIO activities with respect to outcomes-based quality
improvement is necessary.
3.4.3 Evaluations of QIO Initiatives in Primary Care Settings

A review of the literature on QIO quality improvement activities in primary care settings was also
conducted. Many studies have been published on QIO quality of care initiatives in primary care
settings. For this report, nine studies conducted after 1999 were identified: four focus on diabetes
quality improvement; three relate to QIO interventions that promote better quality of care for
underserved populations; one addresses hypertension; and one is a qualitative study on physicians in
small practices.


The first of the diabetes-related studies was conducted by researchers at the Georgia Medical Care
Foundation (Georgia’s QIO). McClellan et al. (2003) conducted a group-randomized evaluation of a
quality improvement intervention focused on diabetes mellitus. The study applied a quality
improvement intervention in conjunction with CMS’s Health Care Quality Improvement Program
(HCQIP). McClellan and colleagues analyzed the impacts of the quality improvement intervention
on quality indicators for 22,971 Medicare patients diagnosed with diabetes in Georgia between 1996
and 1999. Patients were assigned to primary care physicians, and the physicians were then randomized into intervention and comparison groups containing roughly equal numbers of physicians. Over a period of six months, physicians in the intervention counties received packages of clinical practice guidelines, diabetes care materials, and other education materials through the mail. At follow-up, McClellan et al. found a statistically significant greater increase in HbA1c testing in the intervention counties than in the comparison counties (p=.02), suggesting an increase in the quality of diabetes care. One limitation of the study is that it is difficult to isolate the impact of the quality intervention; it is unclear whether the increase in testing occurred because physicians followed the guidelines sent to them through the mail, or for other reasons. The randomized design does, however, reduce the potential for selection bias.
The second study was a presentation of findings from the North Carolina QIO, Medical Review of
North Carolina (Massing et al. 2003). The Medical Review of North Carolina project examined data
on North Carolina residents enrolled in the Medicare program, specifically assessing the prevalence
of diabetes in the population between July 1997 and August 1999.iv The authors presented a picture
of diabetes prevalence in North Carolina through a discussion of patient characteristics and quality
indicators. The study used odds ratios from regression analyses to compare women and men,
African Americans and Caucasians, and people in various age brackets. The authors indicate that, of
the population of 83,913 North Carolina residents with diabetes who met study inclusion criteria,
women were more likely to receive an eye exam than men, and African Americans were less likely
than Caucasians to receive appropriate diabetes care. The authors did not report confidence limits
and tests of significance because the study results described the population rather than a sample.
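As a brief illustration of the kind of unadjusted odds-ratio comparison the authors describe, the Python sketch below computes an odds ratio for receipt of an eye exam among women versus men; the counts are invented for illustration and are not Massing et al.'s data (their published estimates also come from regression analyses that adjust for other factors).

# Hypothetical illustration of an unadjusted odds ratio for receipt of an eye exam.
# These counts are invented, not from Massing et al. (2003).
women_exam, women_no_exam = 30_500, 15_200
men_exam, men_no_exam = 22_100, 16_100

odds_women = women_exam / women_no_exam  # odds of an exam among women
odds_men = men_exam / men_no_exam        # odds of an exam among men
odds_ratio = odds_women / odds_men

print(f"odds ratio (women vs. men): {odds_ratio:.2f}")  # > 1: women more likely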
Another study that focused on diabetes quality improvement in the primary care setting was
conducted by Gould et al. (2002). This paper was co-written by researchers from the University of Connecticut School of Medicine and Qualidigm, the Connecticut QIO. The study examined the
effect of a quality improvement curriculum on the quality of primary care delivered by 77 medical
students. Medical students worked in small groups to complete quality improvement projects
focused on diabetes mellitus at community practice sites. Qualidigm designed a diabetes quality
improvement "project-in-a-box" for the students, which included clinical protocols and other materials. Students implemented the quality improvement project on a random sample
of patients with diabetes mellitus at 24 community practice sites. Quality indicators were examined at baseline and again six months after the intervention.
Students also completed feedback surveys. The study found that medical students can “successfully
initiate” community quality improvement projects, yielding better care for patients. Gould et al.
concluded that medical students may be an underutilized resource in quality improvement. A major
limitation of this study is that control groups were not used for students and community practice
sites. Additionally, the results may not be generalizable to other medical programs or community
quality improvement projects.
iv Note that since the data collection started in 1997 and ended in 1999, it is difficult to identify whether this study is representative of CMS's 6th or 7th SOW.
One final diabetes-focused QIO study developed out of a collaborative between the Baylor Health
Care System (BHCS) and the Texas QIO, TMF (Ballard et al. 2002). This study randomly assigned
22 primary care practices to one of three diabetes interventions to assess whether claims-based
measures of care process and outcome changed from baseline in 2000. In total, data were collected on 2,158 diabetic patients. During each of the three interventions, TMF examined claims data on HbA1c testing, eye examinations, and annual lipid profiles. The authors indicated that they would
be testing their hypotheses regarding changes from baseline data during the follow-up periods of
their study at 6, 12, and 18 months. It does not appear that their follow-up results have been
published yet. The authors noted the importance of working with their QIO. Namely, the Texas
QIO was able to obtain the aggregate-level Medicare claims data that made this study possible.
Furthermore, the authors recommended that health plans and health care providers partner with
their QIOs in future initiatives. The authors noted, however, that analyzing quality with Medicare
claims data is often problematic given the lack of detail on patient visits as well as other issues
related to patient confidentiality and time lags.
In addition to the diabetes-related studies, the Connecticut QIO worked with 17 primary care physicians to enhance quality of care for elderly patients with hypertension. Meehan et al. (2004) studied Medicare patients and assessed medical record data in order to determine whether provider feedback and practice-enhancing materials would produce improvements in care. The baseline group of patients was assessed in 1997, with follow-up data collected in 1999; thus, as with the Massing et al. study, it is difficult to map this study onto a single SOW. The authors did not find statistically significant improvements in hypertension care.
Holmboe et al. (2005) conducted a qualitative study on physicians in small practices and their
perceptions of quality improvement, barriers to quality improvement in practices, and the role of
QIOs. Qualidigm, the Connecticut QIO, played a key role in the Connecticut Primary Care Project, serving as the director of the study. Between 2000 and 2003, Qualidigm interviewed 25 physicians
and assessed the qualitative data. The resulting report included physicians’ perceptions and feedback
on quality improvement and quality interventions in office settings. While the role of QIOs was not
discussed at length, the authors do note that “the most important aspect was the involvement of the
QIO in helping the offices improve quality by offering credible physician-specific data, educational
materials for patients and physicians, prefabricated tools such as checklists, and advice when
requested.” The authors also concluded that physicians will need more hands-on assistance in order
to accomplish quality improvement. In order to ensure that physicians are receiving an interactive
education about quality improvement techniques, Holmboe et al. recommend that QIOs become
more directly involved in physician offices through QIO-physician practice projects. Holmboe et al.
conclude that “policymakers, researchers, foundations, and insurers should partner with QIOs,
practitioners, medical societies, and others to conduct more studies in [the primary care] setting.”
Still within the primary care setting, three papers examined interventions that targeted reductions in health disparities and better care for underserved populations. Levy et al. (2003) published a
report on a project conducted by the New Mexico Medical Review Association (NMMRA), New
Mexico’s QIO, which aimed at increasing rates of immunization for Hispanic and non-Hispanic
groups. The New Mexico Medical Review Association’s intervention collected data from three
culturally distinct groups of Hispanics in New Mexico. The data were used to design subsequent
interventions geared towards increasing immunization rates. In the first phase, NMMRA examined
the immunization rates and disparities between the Hispanic beneficiary population and non-Hispanic beneficiaries in New Mexico. NMMRA used the Behavioral Risk Factor Surveillance
System (BRFSS) and Medicare claims data despite some limitations associated with these data
sources (e.g. BRFSS is a telephone-based survey which could create selection bias since seniors in
rural areas may not have access to telephones). The second phase consisted of a series of interviews
with seniors and community leaders about their perspectives regarding pneumococcal and influenza
immunizations. The authors were interested in learning whether differences varied according to
geography, cultural trends, and other factors. In total, 816 Medicare beneficiaries and 110
community leaders from the three counties participated in in-person qualitative interviews. NMMRA used stratified sampling not only to enhance the representativeness of the sample, but also to better
understand the impact of the environment on health-related attitudes and behaviors. Different
survey instruments were developed for individual Medicare beneficiaries, groups of beneficiaries,
and community leaders. The survey asked about health knowledge, attitudes, behaviors, and
experiences as well as demographic information.
Levy and colleagues draw several conclusions. First, it is important to integrate cultural competency
into quality improvement initiatives. Second, successful study follow-up is contingent upon earning
the trust and respect of community members. Finally, qualitative interviews with promotoras
(community health advisors) provided a unique opportunity to understand the disparities in the
community. The final phases of the project will evaluate the impact of the interventions and assess
their effectiveness. Final results are pending.
In another study, Sobel et al. (2003) reviewed the efforts of Quality Insights of Delaware
in developing an outreach program for African-American women. The intervention was called
“Mature African Americans for Mammography Coalition” and its goal was to increase awareness
about the importance of mammography in the senior African American community. Delaware’s
QIO, Quality Insights, provided education tools and resources to the coalition and also developed an
innovative scorecard tool to track the number of women in the community who committed to an
appointment for a mammogram. The scorecard reached an estimated 3,000 women with 350
committing to getting their first mammogram. The authors concluded that the intervention had an
impact; the rate of disparity was reduced in the targeted county by 3.2 percent in 1999 with little
change in the other two counties in the state, according to CMS administrative data. The paper did
not provide a rigorous quantitative analysis.
Finally, a report by Michalowski et al. (2003) examined the efforts of Wisconsin's QIO, MetaStar,
with respect to reducing health care disparities for African-American Medicare beneficiaries in the
state. MetaStar developed a plan to identify and study the disparity that exists in lipid panel testing
rates. After reviewing the literature, the authors worked with Community Health Concepts of
Milwaukee to conduct focus groups with African American Medicare beneficiaries with diabetes.
Additionally, in-depth interviews were conducted with nine physicians. Findings from the qualitative research were then used to develop a hypothesis: targeted cultural competency training for
primary care physicians and education to African-American beneficiaries with diabetes will decrease
the disparity in lipid panel testing rates. Interventions such as provider-focused education and
interactive cultural competency seminars were held in 2001. Several patient-centered interventions
were designed: a “Diabetes Sundays” program in 30 predominately African-American churches in
southeastern Wisconsin to raise awareness and provide education about diabetes; a one-day seminar
geared towards parish nurses; and an education program called “Food for Mind, Body and Soul”
which provided participating churches with training in lipid education and food preparation. The evaluation of the project is a pre-post design that analyzes Medicare claims data in the intervention
area. While the results are still forthcoming, the authors provided several lessons learned during the
project, including the importance of addressing health care disparity issues through more than one
approach.
Although the findings in this section suggest that QIOs may play an important role in improving
quality of care in various care settings, it would be inappropriate to generalize the results of one
study to the effectiveness of the larger QIO program, particularly given the limitations in study
design as well as the fact that each study is largely an evaluation of one QIO’s impact on care in one
state.

3.5 Summary

The Institute of Medicine's 2006 report, "Medicare's Quality Improvement Organization Program: Maximizing Potential," along with an array of randomized controlled trials, quasi-experiments, cohort studies, cross-sectional evaluations, and qualitative analyses, has examined QIO interventions in order to better understand whether the program is effective at improving quality of care for beneficiaries.
While this body of literature brings to policymakers’ attention the importance of quality
improvement in Medicare and also suggests that QIOs play a role in promoting quality of care, as a
whole, the evidence is inconclusive as to what portion, if any, of improvements in care can be solely
attributed to the QIO program. This conclusion stems from four observations:
(1) While a diverse literature describes the imperfect state of health care quality for Medicare
beneficiaries, the body of literature that examines the effectiveness of the QIO program in
addressing these quality concerns is arguably sparse.
(2) Few national evaluations or large-scale studies have been conducted on the QIO program.
Since little of the literature since 1999 has focused on the summative effectiveness of the
QIO program, it is not possible to definitively attribute Medicare quality improvement to the
QIO program.
(3) Most of the literature to date is focused on specific QIO interventions and/or collaboratives.
The review of this literature did not yield a conclusive answer as to whether or not specific
QIO-led interventions resulted in higher quality, lower quality or no change in any given
provider setting. While several QIO interventions or collaboratives did suggest that QIO-directed quality improvement activities were effective at improving selected process and
outcome measures, these findings ranged in levels of statistical significance.
(4) Many of the studies that evaluate the effectiveness of the QIO program are fraught with
methodological limitations inherent in the study designs. Such problems are threats to the
internal and external validity of the studies and may bias study findings. Perhaps evaluations
of previous Medicare quality efforts have not found clear evidence of their effectiveness as a
result of methodological errors in study design. In the future, new and methodologically
rigorous studies will be necessary to offer more meaningful conclusions about the
effectiveness of the QIO program.
Although the quality of care received by Medicare beneficiaries may be improving, the literature as a whole does not conclusively indicate that quality improvement is attributable to the QIO program.
As the section above demonstrates, past studies reveal much about the difficulties and inadequacies
of the methods used to evaluate the QIO program. They do not, however, provide convincing
evidence of the value of the QIO program. Future studies, which address the many methodological
limitations described in this section, will be needed to determine the extent to which QIOs achieve
their goal of helping providers improve the quality of care delivered to beneficiaries. The literature
evolving out of the Medicare QIO program’s 7th and 8th SOWs provides government officials,
policymakers, clinicians, and researchers with an understanding that more rigorous evaluations will
need to be developed to assess the effectiveness of the QIO program and its impact on quality of
care for Medicare beneficiaries.

4.0 DEVELOPMENT OF QIO INVENTORY

In order to obtain an inventory of QIO activities for the 7th and 8th SOWs, NORC conducted a
comprehensive environmental scan. As part of this scan we gathered a standardized set of
descriptive information about each of the 53 QIOs; data consisted of basic identifying information
such as address and the name of the Chief Executive Officer. Other data consisted of information
on the organizational structure, profit status, board membership and composition. To the extent
available, we gathered activity-level information on each of the QIOs, and any information related to
the organization’s day-to-day operations and activities. Information gathered from the
environmental scan was used to populate a database of QIO activities. Where available, information
from the environmental scan was used to develop QIO-specific site visit interview protocols to
address issues of relevance to the organization. Finally, data from the scan assisted staff in the
development of evaluation designs.

4.1 Sources of Data and Information Used to Conduct the Environmental Scan

Relying on individual QIO websites, QIO Support Center (QIOSC) websites, web searches and
materials provided by CMS and the IOM, we collected information on tasks QIOs perform as a part
of their core contract; any products, tools, publications, training sessions/workshops, and other
educational or technical assistance-related materials they produced; collaborations with other QIOs and other organizations; special studies; and, if possible, their methods for collecting
and managing data. Furthermore, we documented existing sources of data on QIO activities along
with the name of the system or party which either housed or controlled access to these data. Data
may have been contained in progress reports to CMS or housed either at QIOSCs or internally at
individual QIOs.
4.1.1 Access to Publicly Available Data on QIO Interventions and Activities

Although there is a small volume of published and unpublished literature available on the
characteristics, activities, and impact of QIOs on quality improvement, there is no single source of
information that systematically describes the unique characteristics of individual QIOs or their
activities, interventions, and approaches for rendering technical assistance to providers. Efforts to
gather information by searching publicly-accessible sources, such as QIO websites, Lexis-Nexis and
Google Scholar, proved to be time-consuming and of limited value. For instance, NORC staff
systematically reviewed the website for each QIO, but generally found only brief overviews of the
organization, summaries of the 7th and 8th SOW tasks, resources and toolkits (or links to these
resources and toolkits) for providers and, in some cases, success stories. Although many QIOs
published and posted annual reports on their websites, frequently these were several years out of
date. Information on how QIOs engage providers, or on their strategies for rendering assistance to help providers improve performance, is often unavailable. Similarly, despite the availability of tools
and resources for quality improvement, detailed descriptions of QIOs’ processes for motivating,
training/educating and supporting providers are limited.

4.1.2 Resources Provided by the Institute of Medicine

NORC requested and obtained from the IOM copies of documents that were used in developing
its 2006 QIO evaluation report. Among the information available from the IOM were materials
used by CMS staff to brief the IOM on the QIO program, lists and reports of Special Studies
conducted by QIOs, and support contractors’ reports. Many of the documents available from the
IOM served as useful sources of background information on the QIO program, and the IOM report
(2006) was an invaluable source of information throughout our study. These documents, however,
contained limited information on QIO-specific technical assistance activities, strategies, or
interventions. Nevertheless, to the extent that information used to develop the IOM evaluation
report could be linked to specific QIOs, it was entered into the inventory.
4.1.3 Access to CMS Resources on the QIO Program

Perhaps the most valuable source of information on QIOs is CMS. NORC staff met with staff
from the Office of Clinical Standards and Quality (OCSQ) on several occasions to gather
background information on the QIO program and to attempt to access any QIO-specific data that
would provide insight into individual QIOs' approaches to quality improvement. Additionally, and in
keeping with the requirements of our ASPE contract, NORC met with CMS staff in order to obtain
copies of invoices that could be used to understand the budgetary process. Other pieces of
information of particular interest for this study were project work plans (which we believed would
provide specific details on the activities used to meet SOW requirements), reports from special
studies, and annual reports.
CMS provided NORC staff with limited access to QIONet. QIONet is an intranet website that is
used in part by QIOs to submit project deliverables for approval as well as for CMS to communicate
to QIOs and to approve deliverables. Information that NORC gathered from QIONet included
dashboard reports, Standard Data Processing System (SDPS) memos,v and templates used in the
Financial Information and Vouchering System (FIVS).vi Upon request, NORC further received
counts of the number of IPG providers for all tasks and QIOs, as well as FIVS data for the QIOs
that participated in site visits and proposals for non-competitive Hospital Payment Monitoring
Program (HPMP) Special Studies.vii Although information obtained from the QIONet system proved somewhat useful in designing evaluation alternatives, as it identifies performance measures that were collected during the 7th SOW and describes how each QIO's performance was evaluated, it was of limited value in understanding how or whether QIOs effect change across the different settings
of care.
v SDPS memos are communications between CMS and QIOs; they are used for multiple purposes such as issuing announcements and making clarifications on ambiguous contractual issues.
vi The FIVS is an electronic system used for invoicing and for monitoring contract expenditures.
vii Task 3b of the 8th SOW requires QIOs to submit a proposal to conduct a special study related to inappropriate billing, coding or utilization.
The Program Activity Reporting Tool (PARTner) system appears to be the main repository of information for the QIO program; it is used by QIOs to report on deliverables, including task-specific project work plans. It is also a mechanism by which CMS monitors deliverables and approves plans. CMS restricts access to QIO data; in particular, data that contain any references to provider or beneficiary identifiers are deemed confidential under the QIO contract and federal regulations (42 CFR 480).viii Although NORC was not provided access to data in the PARTner
system, staff was provided with a copy of the PARTner system code book or User’s Guide, which
suggested that some detailed information on QIO performance improvement strategies may have
been available for the 7th SOW.
viii Personal communication with Captain Arnold C. Farley, Health Insurance Specialist, Department of Health and Human Services, Centers for Medicare & Medicaid Services, Office of Clinical Standards and Quality, Improvement Group, October 5, 2006.
Discussions with CMS staff as well as experts outside of CMS suggested that the lack of QIO-specific information related to technical assistance is due not only to regulatory requirements, but also to the fact that CMS contracts with QIOs are performance-based. As such, greater emphasis is placed on collecting performance data (results are typically reported in the form of dashboard reports) than on gathering information on specific approaches for meeting contractual requirements.

4.2 Database of QIO Activities

During and following the environmental scan of QIO activities and site visits, NORC developed a
Microsoft Access® database to store and organize the information that was collected as part of the
multiple data collection efforts described in Section 4.1. NORC entered a standardized set of data
items about the QIOs and their activities, organized by six major types of technical assistance and
other activities: (1) trainings/workshops, (2) publications, (3) projects, (4) collaborations, (5)
services, and (6) beneficiary outreach. Descriptions of each of these activities can be found in Table
4.1.
The database permits basic queries to facilitate analysis and evaluation design. It contains two types
of information: QIO structural and descriptive characteristics as well as activity or subtask-level
information relevant to each QIO. The organization of information in this database is intended to
facilitate subsequent work to develop QIO evaluation designs. It may also be used as a stand-alone
database to produce reports detailing activities conducted by individual QIOs for each task and
subtask in the 7th and 8th SOWs, compile lists of special studies by QIOs or by costs, or produce any
number of other customized reports using the data elements that were collected during the
environmental scan and site visits.
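To make these query capabilities concrete, the sketch below uses Python with SQLite as a stand-in for the Access database. The table and field names are simplified and hypothetical; the actual inventory's structure is described in Section 4.2.2 and in the screenshots in Appendix B.

# Simplified, hypothetical stand-in for the QIO inventory schema; the real
# Access database contains many more fields (see Section 4.2.2).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE qio_general (
    qio_id        INTEGER PRIMARY KEY,
    name          TEXT,
    state         TEXT,
    profit_status TEXT
);
CREATE TABLE activity (
    activity_id INTEGER PRIMARY KEY,
    qio_id      INTEGER REFERENCES qio_general(qio_id),
    sow         TEXT,   -- '7th' or '8th'
    subtask     TEXT,   -- e.g. a Task 1 subtask
    category    TEXT,   -- one of the six labels in Table 4.1
    description TEXT
);
""")
conn.execute("INSERT INTO qio_general VALUES (1, 'Example QIO', 'XX', 'non-profit')")
conn.execute("""INSERT INTO activity VALUES
    (1, 1, '8th', '1c', 'Training/Workshop',
     'Web-Ex series on pressure ulcer prevention')""")

# Example report: count of activities per QIO and category under the 8th SOW.
rows = conn.execute("""
    SELECT q.name, a.category, COUNT(*) AS n
    FROM activity a JOIN qio_general q ON q.qio_id = a.qio_id
    WHERE a.sow = '8th'
    GROUP BY q.name, a.category
""").fetchall()
for name, category, n in rows:
    print(name, category, n)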
4.2.1 Database Development

The Access inventory catalogues publicly available information on QIO characteristics and activities
conducted to meet SOW requirements; it allows the user to conduct queries based on multiple
parameters. The database contains two types of information: (1) descriptive and corporate figures
on each QIO, and (2) a representative view of specific technical assistance interventions or activities
undertaken by each QIO under each subtask for the 7th and 8th SOWs. Information on specific
activities from the 7th SOW was much more limited than for the 8th SOW. In several cases, some
activity description fields are unpopulated because insufficient information is available to create a
meaningful entry. Similarly, when a QIO merely stated participation in a mandatory CMS national initiative – e.g., the "Home Health Quality Initiative" – with no detailed information on specific interventions, no entry was made regarding its participation.
While data entry forms for subtasks under “7th SOW Task 2 – Information and Communications”
were created, these activities were almost exclusively tied to interventions under the Task 1 subtasks.
To provide as complete a picture of the interventions under each subtask as possible,
communication activities were included with the intervention description and not repeated under a separate Task 2 entry.
4.2.2 Database Content
As previously mentioned, the information to populate the database was gathered from public
websites of each individual QIO, other QI public websites, documentation provided by the IOM,
selected documentation provided by CMS, literature search engine results (from engines such as PubMed or LexisNexis) obtained using each QIO's name as a keyword, and information provided
directly by the QIOs in the case of the nine selected for site visits.
The “QIO General Table” in the database contains identifying information for each QIO: QIO ID,
State, QIO Name, Address, Telephone Number, HHS Region, Director Name, Director Email,
Contact Name, Contact Email, Contact Number, URL, for-profit status, legal status, year first contracted with CMS, site visit participation, standard board structure, whether involved in non-QIO activities, whether involved in special studies, whether the QIO performs a QIOSC function, the total
CMS budget, the total non-CMS budget, and the number of participating providers in the QIO.
Technical assistance activities and interventions, or any QIO work conducted to meet requirements
of the 7th or 8th SOW, were categorized and entered in the database. Individual activities were
classified as shown in Table 4.1. Screenshots of the database screens are shown in Appendix B.

Table 4.1 Categories Used in Compiling QIO Activities/Interventions
Project – On-going initiative, unique to the QIO or under the umbrella of a national initiative, e.g. "Home Health Quality Initiative".
Publication – Any published product, e.g. white papers, articles, brochures printed and distributed, written reports on intervention results.
Service – Activities consisting primarily of resources gathered for access by providers, e.g. links to CMS and others, downloads of materials from other sources.
Training/Workshop – Any activity designed to educate the target audience on a chosen topic, e.g. Web-Ex sessions, conferences, an individualized program at a specific provider location.
Collaborations* – An activity or intervention resulting from direct work/cooperation with other institutions or organizations, e.g. joint sponsorship of a conference, creation of statewide guidelines or policies.
Beneficiary Outreach – Activities targeting beneficiaries directly, e.g. mailing of brochures to beneficiaries' homes, sponsored radio ads, setting up booths at local health fairs.

*Many QIOs referred to group learning and information sharing events as collaboratives. These were catalogued under the label "Project."

4.3 Key Findings

The following describes the most salient findings, organized according to the categories listed in
Table 4.1. Higher-level insights, which transcend or cut across multiple categories, are also
presented.
4.3.1 Projects

QIOs have engaged in a range of projects in the 7th and 8th SOWs. Most QIOs noted their
participation in CMS quality initiatives (e.g. the Hospital Quality Improvement program or Nursing
Home Quality Initiative) in their individual states. Websites and press releases generally articulated
and promoted CMS’ goals for each setting, including awareness campaigns, reduction of rates of
negative events, or improvement in rates of testing or prevention. However, there were very few
details on the technical assistance that was offered or specific interventions QIOs implemented
under the general initiatives. Furthermore, there was little evidence of innovation in methods for
provider education, culture change, toolkits, etc. The two approaches for rendering technical
assistance that were most commonly identified were 1) one-on-one technical assistance, for example
with a needs assessment and subsequent design of a quality improvement plan; and 2) facilitation of
“collaboratives” or opportunities for group learning and sharing of best practices. The following are
a few examples of the QIO projects listed in the database:
■ The Alabama Quality Assurance Foundation's Alabama Pressure Ulcer Initiative was instituted in eighty-five Alabama nursing homes. The major project activities included:
• Distributing new assignment sheets with a skin audit tool to nursing assistants,
• Placing pink bunny cards in rooms of high-risk patients to remind staff of their risk,
• Implementing the Braden scale and PUSH tool in some facilities,
• Using red turn schedules for high-risk residents, and
• Installing signs in the showers to remind staff to routinely examine patients' skin.

■ Louisiana Health Care Review, Inc. was faced with addressing the need for increased
support and improved care as a result of Hurricane Katrina. To this end, LHCR contacted
home health agency partners who were displaced and assisted them in regrouping. For those
agencies that were not directly impacted, LHCR provided support in continuing Outcomes
Based Quality Improvement (OBQI) processes.
■ The New Mexico Medical Review Association's NM Remaking American Medicine (RAM) project was part of a national effort that used television broadcasting to disseminate information and provide resources on areas related to healthcare improvement. Programming included the following television segments:
• "Extension for Community Healthcare Outcomes" discussed using telemedicine to expand education in rural areas around particular diseases such as Hepatitis C, diabetes, hypertension, and obesity.
• The Senior Mentor Program of the University of New Mexico Center on Aging and School of Medicine's Geriatric Program pairs senior citizens with medical students to improve communication skills and inform future doctors about the health-related issues of the senior population.
• The St. Joseph Community Health and Diabetes Education Project uses gardening and cooking programs to educate the Hispanic population on diabetes.

4.3.2 Publications
A few QIOs appear to have invested considerable resources in the publication and dissemination of
information relating to selected technical assistance initiatives and programs. In most cases, QIO
publications were limited to one or two subtasks; this could reflect requirements under either the 7th or 8th SOW.ix Most activities labeled as "Publication" in the database refer to the
writing/designing, printing and distribution of items such as posters, brochures or booklets on a
range of diseases, health topics, and QIO activities: diabetes, hypertension, vaccination and other patient safety topics, DOQ-IT activities, etc. QIO publication highlights include:
■ Sample cluster reports are produced by the Arkansas Foundation for Medical Care and distributed to acute care inpatient facilities and critical access hospitals. These reports supply data that enable a hospital to compare its performance with that of others in the same cluster.
■ The Acute Care Hospitalization (ACH) Event Tree published by the Delmarva Foundation
depicts the pathways that may lead to a hospitalization of a home health patient to enable
providers to understand and identify the many different causes or contributors to an acute
care hospitalization.
■ The "Reducing ACH Audio Recording – An Introduction for Clinicians" is a 10-minute
presentation for clinicians published by Quality Insights of Pennsylvania that provides
general information about the relationship between QIOs and the home health agency's
quality improvement teams. The recording addresses the history of the acute care
hospitalization outcome as the national priority outcome, and the need for a multidisciplinary approach to reduce hospitalizations, as well as motivation for staff to reduce
avoidable acute care hospitalizations.
Evidence of duplication of efforts by two or more QIOs in the area of materials development and
distribution was noted. In some cases it was clear that duplication of efforts did not occur, but
rather, QIOs offered providers access to the same sets of materials that were developed and
promoted by the task-specific QIOSC. In other cases, it appeared as if resources were developed
independently by different QIOs. For instance, several QIOs developed very similar materials
dealing with the assessment of skin integrity in nursing homes, brochures on a specific topic, or "how to" tips for using the Compare databases. It was difficult to determine why this duplication occurred. One possibility is that the QIOSC had not yet developed the materials when they were needed by the QIO. Other possibilities are that a QIO simply prefers to customize a particular tool to better meet the needs of its provider population, or that the QIO is unaware that the material
already exists.
ix As an example, under the 8th SOW QIOs are required to publish a report – in the organization's newsletter, another professional newsletter, a peer-reviewed publication, or another publication – that describes the project and findings associated with the Task 3b (Hospital Payment Monitoring Program) non-competitive Special Study.
Some materials offered for mailing or download on the QIO websites did not indicate authorship;
these were not labeled as “Publication” but rather “Service”. About half of the QIOs produced
newsletters for one or more settings. These ranged in length from single-page faxable documents to
multi-page publications. The mode of dissemination also appeared to vary. Most QIOs appeared to
offer newsletters and publications in easy-to-use, downloadable PDF formats. Others appeared to
mail publications. The frequency of publication also varied, but the majority seemed to be published
on a monthly or quarterly basis. In some cases, the frequency of publication changed during the
SOW. Several QIOs maintained an archive of publication back issues which covered multiple
years.
4.3.3 Service

Items categorized as "Services" were universally web pages providing links to resources and downloads accessible on other organizations' websites. Indeed, the majority of QIO websites maintain links to the CMS website; to resources available from QIOSC websites; and to resources developed by various national public and private agencies or organizations with a focus on quality improvement, such as the Agency for Healthcare Research and Quality, the Institute for Healthcare Improvement, the CDC, and the QIO lobbying group, AHQA. In addition, links on QIO websites direct users to condition-specific tools for managing and caring for pain, depression, pressure ulcers, adult pneumonia, and heart failure.
4.3.4 Training/Workshops
Information gathered from the environmental scan and entered into the QIO database suggests
QIOs participated in a range of training- and workshop-related activities. At least thirteen QIOs
organized – either independently or in collaboration with other organizations – public and/or
private large-scale conferences focused on topics such as pay-for-performance strategies, health
information technology, patient safety, culture change in nursing homes, medication reconciliation,
and cultural competence in healthcare settings. Most of the training opportunities provided by
QIOs appeared to be small-scale learning/training sessions for IPG participants in the form of a
conference call series, teleconferences, and Web-Ex events. These events covered such topics as
best practices, electronic health records, strategies to assist physicians and nursing homes to improve
quality, as well as a host of disease specific information on depression, pressure ulcers, heart failure,
and diabetes. Some QIOs have extensive libraries of past webcasts and recordings of conference
calls available for the public to download from the QIO website. QIOs also reported hosting
workshops for care providers on reducing acute care hospitalization, disease management, and
understanding Medicare Part D.
There was some evidence of cross-QIO knowledge sharing, where staff from one QIO conducted a
webcast promoted by a second QIO. No information regarding the number of participants or the
success of these educational opportunities was noted.


4.3.5 Collaborations
Some QIOs reported joining efforts with a variety of public and private organizations in their states
to further opportunities for outreach, education and technical assistance. For the most part,
information on these collaborations was rather sparse. Interestingly, most of the descriptive
information related to these collaborations was associated with physician office activities. Table 4.2
provides examples of some of the collaborations that were identified from the environmental scan
and which currently reside in the QIO Activity Database.

Table 4.2 Collaborations Identified in QIO Database
QIO/State: CIMRO – NE
Collaborator: Nebraska Hospice and Palliative Care Association
Title and Description: Nebraska Hospice and Palliative Care Partnership – 36+ members; focus on improving care and conditions for chronically ill patients or those at the end of life.

QIO/State: FMQAI – FL
Collaborator: All state stakeholders
Title and Description: Florida Health Information Network (FHIN) – An integrated information system to connect stakeholders and provide access to medical information.

QIO/State: Stratis – MN
Collaborator: Minnesota Department of Health
Title and Description: Diabetes Plan CENTRAL – Designed a web tool as an interactive hub for the diabetes community; part of Diabetes Plan 2010.

QIO/State: Delmarva – MD & DC
Collaborator: Maryland Hospital Association
Title and Description: Maryland Patient Safety Center: Intensive Care Unit Safety Culture Collaborative – Designed action-based programs to promote the culture of safety and to prevent infection and adverse drug events.

QIO/State: MPRO – MI
Collaborator: American College of Cardiology Michigan Chapter, BC/BS of Michigan
Title and Description: Cardiovascular Discharge Documentation Initiative – Creates and promotes use of integrated discharge documents to ensure that key care elements are reported systematically.

QIO/State: IFMC – IA
Collaborator: Iowa Caregivers Association
Title and Description: Better Jobs-Better Care Grant – Developed a leadership training program for CNAs.

QIO/State: QPRI – RI
Collaborator: CMS, Pioneer Network
Title and Description: The 2005 St. Louis Accord National Conference – Conference to create strategies to bring resident-centered care or culture change to nursing homes.

Included in the database are the names of the QIO and outside organizations involved in the
collaboration, along with a brief description of the project’s purpose or goal. Typical examples of
national initiatives sponsored or created by outside organizations, but which incorporated extensive
QIO participation, included the End Stage Renal Disease Network’s “Fistula First” campaign, the
Institute for Healthcare Improvement’s “100K Lives Campaign” and a variety of American Hospital
Association programs such as the “Acute Myocardial Infarction” initiative.
4.3.6 Beneficiary Outreach
Although the tasks related to beneficiary outreach (Task 2 in the 7th SOW) were eliminated from the
8th SOW, several QIOs reported continued activity in directly educating and raising awareness
among the beneficiary population. Beneficiary outreach activities in the 8th SOW consisted primarily
of distribution of informational brochures or bilingual education materials, both paper and electronic, on various topics such as the CMS Nursing Home Compare tools and other nursing
home selection resources, controlling diabetes, and screening for breast cancer. Outreach and
dissemination of information strategies differed among QIOs.
■ The Alabama Quality Assurance Foundation participated in a community health fair and also
arranged a regional town hall meeting to provide Medicare beneficiaries, their families, and
support groups with essential information on diabetes.
■ The Carolinas Center for Medical Excellence (CCME) broadcast television commercials providing information about adult immunization and breast cancer, targeted to African Americans.
■ Health Care Excel partnered with the local public broadcasting station to air a four-part
television series, “Remaking American Medicine”, on health care and quality improvement
efforts.

4.4 Challenges and Limitations

Data for selected tasks, such as the Special Study Projects that QIOs have been funded to conduct,
appear to be relatively complete. (Refer to Appendix C for examples of Special Studies funded
under the 8th SOW.) Nonetheless, for the overwhelming majority of tasks, large gaps exist in the
data. The scope of findings reflected the paucity of activity- or intervention-specific information
available in public resources, particularly for activities related to the 7th SOW. In several cases, no
substantive information on any specific project could be found for a given QIO and subtask. The
quality and depth of information varied greatly from QIO to QIO. Even for a single
QIO, the information available often varied from setting to setting. Efforts to locate details on
projects that were identified by name often proved futile; and while most QIOs stated that they currently participate or have previously participated in national or local quality improvement initiatives, specific details as to the QIO's scope or role in the initiative were generally unavailable.
Because of the heterogeneity of the information obtained, populating the database often required
research staff to make subjective judgments as to the work QIOs were engaged in during the 7th SOW and are currently engaged in under the 8th SOW. As such, entries in the database are illustrative of trends among QIOs, but they are not a comprehensive catalogue of QIO activities and interventions. Caution should
therefore be used in conducting analyses or interpreting results of analyses conducted with
information from this database.


5.0 SITE VISIT RESULTS
NORC conducted site visits to nine QIO contractors, representing 12 states and the District of
Columbia. Site visits were an invaluable opportunity to gain "on-the-ground" insight into individual QIOs' daily operations, their approaches for rendering technical assistance to providers, their experiences working with providers and CMS, and their perceptions as to the current approach to QIO
evaluation.
This section describes NORC's approach for selecting and recruiting QIOs, developing the site visit
protocol and conducting site visits, as well as key site visit results.

5.1 Site Visit Methods

Using information collected as part of our environmental scan as well as discussions with ASPE and
CMS staff, we purposively identified nine QIOs that differed in the following areas:
• Size of state that they served;
• Location;
• Single or multiple QIO contracts;
• Profit status; and
• Presence of QIOSC contracts.

Once sites were selected, CMS distributed an SDPS memo informing each site that it had been selected to participate in NORC's study and that it would be contacted directly by NORC staff concerning its participation.
NORC e-mailed the Chief Executive Officer (CEO) or Executive Director at each of the QIOs an invitational letter describing the purpose of the project, its goals, and the areas that would be discussed during each visit. Soon after, a senior member of the project team
followed up by telephone with the CEOs and other members of the organization’s leadership to
respond to questions or concerns, and to secure the QIO’s participation.
After an agreement to participate was secured, NORC worked with the CEO, or his or her designee,
to identify and arrange interviews with other key staff. Depending on the structure of the QIO,
these individuals generally included the Medical Director, Quality Officers, Task Leaders, Chief
Operating Officers, Information Officers, Directors of Beneficiary Services and other members of
the management team or frontline staff. In a few sites QIO-selected members of an IPG were
interviewed. Because we did not want to limit interviews to those providers that were identified by
the QIO, in two sites we identified and interviewed (without informing the QIO ahead of time)
providers that had worked with the QIO.
Protocol Development: Prior to each site visit, NORC gathered key information on each QIO, as
identified from the environmental scan. This included background materials on QIO leadership and
the Governing Board, organizational characteristics, descriptions of special studies or QIOSCs,
involvement in collaboratives and, if relevant, media coverage.


A semi-structured, multi-module protocol, customized to address the range of activities that each QIO engages in and the types of respondents (e.g., frontline vs. managerial), was developed to guide interviews. In general, we queried respondents from each QIO about the structure of the organization and its governance, their strategies for completing tasks under the core contract, any quality improvement activities that they engage in beyond the core contract, special study funds that they received in the 7th or 8th SOW, their involvement in or use of QIOSC resources, and their perceptions concerning their interaction with CMS and the QIO contracting and evaluation process. Table 5.1 summarizes the range of issues discussed during the course of a typical site visit.

Table 5.1 Standard Protocol Questions by Module

Senior Leadership Module
• History of the organization as a QIO
• Staff composition and expertise
• Selection, composition, expertise, and role of the Governing Board
• General approach to quality improvement
• Strategies for adapting between SOWs, i.e., staff changes, budgeting, sustaining relationships with providers, etc.
• Perceived competitors in the quality improvement field and impact on the QIO’s work
• Frequency and nature of interaction/communication with CMS
• Assessment of CMS’ evaluation criteria, the extent it measures QIO performance, and how to improve the measures

Task 1: Technical Assistance Modules
• Overview of major activities performed under each task
• Key collaborators, e.g., professional and quality associations, consumer groups, etc.
• Staff training
• Processes/strategies for IPG selection and recruitment, including barriers faced
• Types of technical assistance provided, i.e., consultative vs. collaborative; others
• Interaction with non-IPGs vs. IPGs
• Strategies to regularly monitor and assess the QIO’s own performance and to improve
• Perception of task-specific performance evaluation criteria and selection of quality measures
• Use and perceived value of task-specific QIOSCs

Task 3: Beneficiary Protection Process Module
• Overview of beneficiary protection activities—from the submission of a complaint to termination and the process for appeals
• Frequency and nature of beneficiary complaints
• Selection/training of contracted physician case reviewers
• Approach to working with providers to implement quality improvement plans
• Reaction to the IOM’s recommendation to contract beneficiary protection activities out to organizations other than QIOs
• Approach to the Hospital Payment Monitoring Program contract requirements

Task 4: QIOSC and Special Studies Module
• QIOSC contracts held in the previous or the current SOW
• Interaction with and perceived value of QIOSCs
• Types of products and services provided by QIOSCs
• Involvement in Special Studies and exposure to/knowledge of other QIOs’ studies
• Dissemination of QIOSC and Special Study materials and/or findings
• Perception of whether current measures of performance capture the QIO’s true impact
• Perception of contract requirements—too high or too low?
• Suggestions for improving CMS performance evaluation criteria

The sets of modules served as an informal checklist that allowed the interviewer(s) to follow the flow of the conversation and explore different avenues of questioning as new issues arose, all the while ensuring that critical topic areas were addressed. Site visits lasted one or two days, usually for a total of eight hours of direct interaction with QIO staff and leadership. In all but one case, at least four project staff members participated in each site visit—two interviewers and two note-takers. At a few sites, we also spoke with IPG providers to obtain information on their perspectives and experiences working with their state’s QIO.
The importance of conducting the site visits cannot be overemphasized. The direct, in-person
interaction with individual QIO staff and leadership provided unique insights into the size, scope,
and mix of QIOs’ technical assistance and quality improvement strategies and related activities,
many of which are appropriate targets for evaluation within and across QIOs. Equally important,
the site visits clarified our understanding of what it will take to document the true nature of QIO
technical assistance, thereby helping us to ground potential evaluation designs in the realities of
QIOs’ daily operations.

5.2 Results: Organization and Governance of QIOs

The extent to which the QIO's Board of Directors influences the strategic direction of the QIO and
its quality improvement approach appears to be limited and varied, although most QIOs indicated
that their Board monitors and offers feedback on Dashboard Reports, which are used to track
performance on SOW subtasks. Variation in board influence may be driven by characteristics such
as the proportion of revenue generated by the QIO contract relative to other lines of business and
the length of term on the board. (Preliminarily, it appears that the less revenue and the longer the
term on the board, the less the Board’s influence.) Clearly, this is an area that would benefit from more in-depth analysis, which we consider when designing our evaluation options.
The Boards of Directors of the QIOs that we visited continue to be dominated by
physicians. With one or two exceptions, consumer representation on QIO boards is still relatively
limited, as is representation from the nursing home and home health industry. In a couple of
instances, the consumer representatives consist of retired physicians who happen to be Medicare
beneficiaries. In other cases, consumer representatives were selected from organizations such as the AARP so that these individuals could serve as “emissaries” between the two organizations, educating the CEO and others about how best to communicate with the Medicare population and providing consumers with information about resources available to seniors.
All the QIOs that NORC visited held contracts with other public (usually Medicaid) or private
organizations to conduct activities related to those required under the SOW (e.g., utilization review,
case management), yet the extent to which they depended on the QIO contract as a dominant
source of revenue varied across organizations. In a couple of states the QIO contract comprised a
moderate proportion of the organization’s budget whereas in other states it appeared that the
organization would cease to exist if it were not for the QIO contract. Consequently, some
organizations were very concerned about potential instability in the program and changes that could
result from the high level of recent attention given to the QIO program by members of the Senate,
the OIG, the GAO, and ASPE. As such, a couple of organizations indicated they were considering
expanding their non-QIO lines of business to ensure their continued viability in the event that they
lost their contract or experienced substantial decreases in funding. At least two QIOs indicated that
the CMS “conflict of interest” rules were unclear and, as a result, have been slow to expand their

portfolio of work. One CEO indicated that the organization had recently terminated about 40 employees who performed review activities, to ensure that it was not in violation of CMS’ conflict of interest rules.
On a related note, all but one QIO indicated that funding from CMS was inadequate to accomplish tasks under the 8th SOW. (Although this one organization indicated that funding was at an appropriate level, its staff were concerned that continuing modifications to the SOW could lead to financial difficulties.) One organization indicated that it had already informed CMS that funding was insufficient and that it would be unable to meet 8th SOW requirements for the three-year contract period. A couple of organizations indicated that they had historically experienced financial losses under the QIO contract and that it was necessary to maintain other lines of business to remain financially viable. Likewise, organizations noted that one important advantage of a QIO contract – and a reason to maintain the contract despite financial losses – was that it gave them the experience and credibility that they needed to successfully compete for other public and private contracts.
Several of the organizations that NORC visited held QIO contracts in more than one state. Yet
organizations differed in their opinion of whether QIO administrative or quality improvement
functions could (or should) be centralized across these states. In fact, multi-state QIOs differed in
their opinion about whether field offices were necessary to effectively offer technical assistance to
providers in individual states. One multi-state QIO chose to centralize both administrative and
quality improvement functions in one office to maximize efficiency. They also perceived that
interventions could work equally well across the states and regions they covered. Another multi-state QIO chose to centralize administrative functions (e.g., human resources), but not quality
improvement activities; this QIO maintained a separate central office in each state as well as field
offices to facilitate contact with all required providers. When asked why a decentralized approach was used, certain QIOs indicated that the state was too large and that the characteristics of providers (e.g., solo vs. group practice) and beneficiaries (e.g., rural vs. urban) were significantly different. However, these QIOs did acknowledge that staff were required to be "out in the field" for several days to make contact with the required number of providers.
When asked whether any functions could or should be centralized, many QIOs indicated that while
it may be more logical to consolidate case review and beneficiary protection activities, differences in
state regulations (e.g., licensure laws) may make it difficult to centralize these functions. Further,
QIOs were adamant about the importance of retaining case review within the purview of their
activities. One reason is that they had already established a long-term, trusting
relationship with the providers in their community and, as a result, providers would be more likely
to respond to the QIO’s improvement efforts. Further, the intimate knowledge of the providers'
practices that the QIOs gained over the years put them in a better position than an outside review
agency to design a quality improvement plan tailored to providers’ individual needs.

5.3 Results: Technical Assistance Offered to Providers

QIOs are engaging in a range of technical assistance activities that seem to vary in intensity, but do
not differ much by type. Generally, approaches for offering technical assistance include
teleconferences, one- or two-day workshops, conferences, regional meetings, web-based seminars or
training, participation in collaboratives, and one-on-one consultations (which may occur in person,
by telephone or e-mail). QIOs were not uniform in their perception of which strategy worked best
in providing technical assistance; some QIOs thought that collaborative models or group training
approaches were more effective because—by creating "communities of practice"— QIOs could
learn from one another which implementation strategies would work best in their environment.
On the other hand, due to significant differences among physicians and physician groups, and the extra push that certain institutions may need to make changes, other QIOs believed that significant improvement could only be attained using a consultative model. This was particularly true for Task 1d1 – the implementation of electronic health records – where providers vary tremendously in their capacity and willingness to adopt HIT into their daily practices. Indeed, most QIOs used multiple approaches, and most agreed that the type of technical assistance that “works best” varies by type of provider. In general, QIOs appeared to favor the collaborative approach when working with hospitals over the other settings.
It was also clear that most QIOs combined different strategies and attempted to incorporate one-on-one consultation, particularly with those providers that were struggling to meet quality
improvement goals. The extent to which one-on-one consultation, collaborative approaches, or
other approaches for offering technical assistance are being used is unclear, but comments made by
QIO staff suggest that strategies depend in part on budgetary constraints as well as geographic
distribution of providers, the presence of field offices, type of provider and task.
Ultimately, regardless of the types of assistance, many QIOs indicated that unless “buy-in” from
senior management and physician leaders is available, it is difficult to effectively implement any
intervention.
Some QIOs felt much more freedom to experiment and develop their own tools than other QIOs that relied solely on tools available from the MedQIC site and support available from the QIOSCs. Although not necessarily proven to enhance quality improvement, some of the more innovative strategies cited by QIO respondents include:
• Purchase of computer equipment for rural providers without the resources to do so on their own;
• Five QIOs collaborating in a “secret shopper” program to identify problems with the beneficiary help line program;
• Staff at one QIO giving providers an “engagement score” to assess their involvement with QIO activities in order to develop strategies for working with providers (a stylized illustration of such a score appears after this list); and
• The “I Found a Stage 1” campaign, which provides CNAs with a nominal monetary payment for identifying pressure ulcers at an early stage.
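The report does not describe how this engagement score was constructed. As a purely illustrative sketch in Python, a score of this kind could be a weighted tally of a provider’s touchpoints with the QIO; every activity type and weight below is a hypothetical assumption, not the QIO’s actual scoring rule.

# Hypothetical illustration of a provider "engagement score." The activity
# types and weights are assumptions for demonstration only.
ACTIVITY_WEIGHTS = {
    "attended_workshop": 2,
    "joined_conference_call": 1,
    "one_on_one_consultation": 3,
    "submitted_progress_data": 4,
}

def engagement_score(activity_counts):
    """Sum weighted counts of a provider's interactions with the QIO."""
    return sum(ACTIVITY_WEIGHTS.get(activity, 0) * count
               for activity, count in activity_counts.items())

# Example: a provider who attended two workshops and submitted data once.
print(engagement_score({"attended_workshop": 2, "submitted_progress_data": 1}))  # prints 8

A ranked score of this sort would allow staff to flag weakly engaged providers for targeted outreach, consistent with the stated purpose of developing strategies for working with providers.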

However, it is important to note that some QIOs reported feeling restricted in their ability to be
innovative and responsive to the unique needs of their communities. This is due to what a few
QIOs labeled as increasing micromanagement by CMS. QIOs reported that both the number of deliverables and the number of evaluation measures have increased between the 6th and 8th SOWs, thereby further
hampering their freedom to create unique quality improvement techniques and approaches tailored
to their local providers.
Tracking provider progress and day-to-day project management varies across QIOs. In most cases
it appears that QIOs use both formal tracking mechanisms and less formal processes, such as
regular team meetings, to monitor provider performance. A few QIOs have implemented formal
electronic systems that, among other data, maintain information on each provider’s plan of action,
expected date of completion of individual activities, actual dates of completion and results. QIO
staff use these systems to track provider progress in meeting selected quality improvement
objectives and to assist QIO staff in identifying when a specific provider needs assistance in
completing specific tasks or meeting goals. Many QIOs appear to use less formal means to track
provider progress. For instance, on an as-needed basis staff may call providers that are not meeting
quality improvement goals to identify factors that may be impeding progress and determine what
interventions (e.g., visit from the QIO medical director, additional tools or training) are needed to
get the provider back on track. Regularly scheduled team meetings may be held as a way to remain
current on provider progress and discuss ways to deal with issues encountered by providers in
meeting quality improvement goals.
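The report does not specify the data model underlying these formal electronic tracking systems. The following minimal Python sketch assumes only the fields mentioned above (plan of action, expected completion dates, actual completion dates, and results); all names are hypothetical.

# Minimal sketch of a provider-tracking record; fields mirror the description
# above and are not drawn from any specific QIO's system.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ActionItem:
    description: str
    expected_completion: date
    actual_completion: Optional[date] = None
    result: Optional[str] = None

@dataclass
class ProviderPlan:
    provider_id: str
    action_items: list = field(default_factory=list)

    def overdue_items(self, as_of):
        """Items past their expected date with no recorded completion --
        candidates for a follow-up call or a medical director visit."""
        return [i for i in self.action_items
                if i.actual_completion is None and i.expected_completion < as_of]

A query like overdue_items() corresponds to the informal practice described above of calling providers that are falling behind to identify what is impeding progress.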
QIOs interviewed indicated that “trial and error” is required to determine whether a particular
intervention is more effective in promoting quality improvement than other interventions.
However, the short time frame that CMS allows an intervention to be active prior to evaluating QIO performance, combined with lags in the availability of data used to measure performance, makes it
difficult for QIOs to evaluate the impact of an intervention and “change course” during the 3-year
period of the contract.
QIOs' ability to conduct “real time” tracking of the effectiveness of specific interventions is
significantly restricted by CMS’s data lag. For example, QIOs indicated that when initiating an
intervention they often do not have baseline data until well after the start of the contract. A
member of one QIO’s nursing home project staff indicated that “we are barely seeing data that
demonstrate our interventions and we are halfway through the contract.” Some QIOs appear to be
better at getting around data lag issues than others; however this is limited primarily to the hospital
setting, where providers have been capturing and recording electronic data for several years now.
One QIO indicated that it had two approaches to tracking data: data from the hospital data warehouse, which may be several months old, and voluntary “real time” data submitted by hospitals to allow for cross-hospital comparisons. These data often serve as a proxy data set until CMS data are
made available. The same QIO was working on regression models that used quality improvement
data derived from specific interventions employed in the 7th SOW or first quarter of the 8th SOW to
predict provider improvement (and whether they would meet CMS performance targets) in future
months.
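The report does not give this QIO’s model specification. The sketch below illustrates the general idea under stated assumptions: an ordinary least squares regression of later performance on an early-contract quality score, applied to synthetic data (the variable names, the data, and the 0.85 target are all hypothetical).

# Illustrative only: regress future provider performance on early-SOW quality
# data, then flag providers predicted to miss a performance target.
import numpy as np

rng = np.random.default_rng(0)
baseline = rng.uniform(0.5, 0.9, size=50)                  # early-contract scores
future = 0.2 + 0.8 * baseline + rng.normal(0, 0.03, 50)    # later observed scores

X = np.column_stack([np.ones_like(baseline), baseline])    # intercept + slope
coef, *_ = np.linalg.lstsq(X, future, rcond=None)          # ordinary least squares

target = 0.85                                              # hypothetical CMS target
predicted = X @ coef
print(f"Providers predicted to meet the target: {(predicted >= target).sum()} of 50")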
In recruiting providers for their IPGs, most QIOs report “cherry-picking” in order to meet CMS’s performance targets; in other words, QIOs explicitly choose to work with those providers that will enable them to receive a passing score on the evaluation. Most QIOs indicated that, unless required by the SOW (as for Task 1a, in which QIOs are required to work with poorly performing nursing homes), they tend to avoid working with poor performers, because these providers may lack the resources or the motivation to meet quality improvement requirements in the SOW.
Likewise, while QIOs recognize the importance of including high performing providers and leaders
(to motivate providers and provide peer mentoring, as well as to meet quality improvement
requirements) in the IPG, they recognize that in some cases there is a “ceiling effect” that may limit
how much improvement a provider is able to achieve. It is therefore not in the best interest of the
QIO to include solely high performing or low performing providers in their IPGs.
IPG selection tends to be based on whether the provider participated in prior SOWs, data gathered
from prior experience with the providers (e.g., level of quality improvement achieved in prior
SOWs), QIO-designed surveys and assessments, recommendations from provider trade associations
and board members, and willingness to meet the time commitment required to make changes.
Despite the potential cherry-picking, all QIOs indicated that they provided technical support and,
indeed, had a contractual responsibility to offer assistance to all providers (whether or not they were
part of the IPG) in a state. The type of assistance offered to non-IPG providers is not entirely clear,
but discussions with QIO representatives suggest that it may exclude consultative activities.
Generally, non-IPG providers appear to be referred to the QIO’s website for tools and have access to
various collaborative activities including conference calls, regional meetings, and webinars. Financial
resources, CMS evaluation criteria (i.e. whether or not QIOs are evaluated on statewide evaluation
measures) and the weights associated with meeting these targets appear to be among the factors that
determine the extent to which technical assistance is available to non-IPG providers.
In a couple of instances, when QIOs received many more requests to participate in an IPG than
were stipulated under the SOW, the QIOs established “formal” and “informal” IPGs. Those in the
“formal” group were identified in the IPG lists sent to CMS and served as the basis for evaluation of
QIO performance. Providers in the “informal” group received the same types of assistance;
however, their data are not included in CMS’s evaluation of QIO performance.
All QIOs reported not only that solid relationships with stakeholders and providers are essential in
conducting their QIO work but also that they had successfully established these relationships. Most
QIOs indicated that relationships require years to solidify and, given the number of years that they
have been working with hospitals, these relationships tend to be somewhat more established than
those with nursing homes and home health agencies. As one interesting example of how a relatively
new QIO went about building relationships, the QIO conducted a “road show” where key staff
traveled as a group across all parts of the state in order to introduce themselves to providers and
inform them about resources available through the QIO.
Some QIOs reported that relationships with providers are, on occasion, hampered by CMS’s
“arbitrary” and frequent modifications to both the SOW tasks and evaluation measures. As SOWs
change and CMS either changes or adds measures to its evaluation component, QIOs must adapt
and begin modifying their QI approach accordingly. Similarly, because QIOs tend to select IPGs in a
way that will maximize their ability to meet CMS’s evaluation criteria, certain providers with whom
they have worked may receive less assistance as SOWs change and, in some cases, may need to be
excluded from the IPG. Nonetheless, all QIOs indicated that they always offer as much assistance
as possible to any provider regardless of contractual changes and thought it was important to not
“burn bridges” as the need to work with the provider may arise again in the future.
In addition, relationships with state provider and professional associations are often critical to
ensure that QIOs’ quality improvement messages and resources are disseminated across the state.
QIOs reported working with professional associations to assist in recruitment of providers for the
IPG and to assist in the performance of physician health information exchange. Additionally, at
least one QIO found it necessary to provide technical assistance to state regulatory agencies since
state nursing homes were receiving information from surveyors that was contrary to best practices
disseminated by the QIO. The QIO worked with the Director of Long Term Care at the state surveyor’s office to ensure that surveyors were aware of best practices and sent the same message to all nursing homes in the state. A training program for state surveyors was also implemented at that time.
Some QIOs expressed frustration over the limited number of tools available for providing technical
assistance and, specifically, what they believed was the lack of validation of these tools. For
example, a majority of QIOs expressed concern about the Office of Minority Health’s Culturally and Linguistically Appropriate tool, which QIOs are expected to promote under Task 1d2. In addition to concerns about the validity of the tool, the majority of QIOs indicated that providers found the tool time-consuming and that 9 CME credits were unlikely to motivate providers to take the course. Some QIOs questioned whether another tool, which offered similar information and was less burdensome to providers, might be available. QIOs made similar comments with regard to certain
CMS surveys (i.e. cultural change), which are required deliverables for many settings, and some
resources developed by the QIOSCs. Respondents noted that some of these tools are based on
certain assumptions that have not been tested or have never been shown to be a “best practice.”

5.4 Results: Case Review, Beneficiary Protection and Program Integrity

Few differences were noted in how QIOs carried out Task 3 activities; one QIO indicated that few
differences across QIOs should be expected because most of the processes related to Task 3 were
prescribed by CMS or regulation. Although difficult to quantify or assess from the site visit
interviews, one area where QIOs may differ is in the extent to which they educate beneficiaries and
promote the use of alternative dispute resolution. All QIOs indicated that most beneficiary
complaints received were not true quality of care issues. Rather, complaints tended to deal with
service problems, such as long wait times, rude staff, etc. Some QIOs have medical staff review all
complaints, even if they are not initially identified as being a quality of care problem, whereas other
QIOs attempt to counsel beneficiaries and offer opportunities for alternative dispute resolution
before they channel these complaints to a physician reviewer. Some of the QIOs that we spoke with
indicated that staff received training in counseling beneficiaries on dealing with interpersonal and
service problems that may occur with their provider.
All QIOs indicated that the number of beneficiary complaints that they receive tends to be rather small. Reasons for the small number of complaints included characteristics of the population, such as level of education, cultural attitudes, and location in rural areas. (According to some respondents, rural residents may be less apt to complain because of the small number of providers available in the community and the importance of maintaining good relationships to ensure that they are able to utilize provider services in the future.) Another reason that QIOs offered for the small number of complaints was that CMS eliminated funding for communication activities in the 8th SOW, so that sufficient resources are not available to make beneficiaries aware of the process for dealing with complaints.
All QIOs interviewed indicated that the perceptions that regulatory and quality improvement
functions should not be performed by the same organization are erroneous and that, in fact, they
maintain strong working relationships with providers. QIOs indicated that case review activities
directly feed into quality improvement activities. Nonetheless, few examples were obtained and it is
not clear whether feedback between case review and quality improvement tasks occurs in a systematic or organized fashion, or whether it is a more haphazard process. All QIOs indicated that case
review activities should remain with the QIO not only because case review fed into their
improvement activities, but also because dealing with a separate case review organization would
impose another burden on providers.

5.5 Results: QIO Support Center Functions

QIOs differed substantially in their perceptions of the effectiveness of different QIOSCs. QIOs
identified some QIOSCs that they depended upon heavily to develop and provide tools and to offer assistance in addressing specific questions. Materials available from these QIOSCs could be used
“off the shelf” without modifications. One QIOSC that received high praise by several QIOs was
the nursing home QIOSC. On the other hand, some respondents thought certain QIOSCs
produced materials that were of little value and required substantial modification to meet the QIO’s
particular needs. Certain QIOs also expressed frustration over the QIOSC selection process, as it
appeared as though some QIOSCs had little understanding of or experience in the topic they were
contracted to provide assistance in.
Furthermore, QIOs recommended that CMS initiate a QIOSC contract at least 6 months prior to the base QIO contract. Currently, QIOSCs must develop tools and resources on the same contractual schedule as the QIOs. Given the time required to prepare materials, there could be a 6-month or longer delay in a QIO’s ability to access these resources. As a result, several QIOs reported developing their own materials because QIOSC resources were unavailable at the time they needed to initiate an intervention. Another respondent indicated that they were “in limbo” and, in developing their own materials, attempted to anticipate the approach that the QIOSC would take.
Despite the problems with the current support centers, QIOs endorsed the idea of such centers and
remarked that some of them were instrumental in supporting their QI efforts.

5.6 Results: CMS Program Management and Evaluation Issues

All QIOs expressed frustration at the number of times CMS has made modifications to the 8th SOW. Two QIOs indicated that at least 11 or 12 substantive modifications have occurred since the start of the current SOW, and at least one QIO mentioned that there had been at least 30 modifications to the 8th SOW. According to the QIOs, frequent modifications have led to inefficient use of resources, difficulties in implementing quality improvement interventions, and confusion as to how they will be held accountable for performance under the contract. As examples, QIOs indicated that they have been forced to reconsider staffing decisions as well as project work plans to ensure that they are able to meet revised criteria. Additionally, staff must be re-educated as to new performance expectations. Interviewees indicated that, given the many contract modifications, improvement goals in the 8th SOW, as well as the criteria that a QIO was required to meet to receive a passing score, were “moving targets.” Furthermore, QIOs indicated that contract modifications have increased the scope of work without a concomitant increase in funding levels.


Both QIOs with a QIOSC contract and those without a QIOSC contract were interviewed as part
of this project. On more than one occasion, we heard QIOSCs referred to as “an extension of CMS
staff” rather than an independent source for information on best practices. Two QIOs that held
QIOSC contracts indicated that most of their staff’s time is spent in addressing and attempting to
obtain answers to the various contractual modifications to the SOW. Two other QIOs – one with
and one without a support center contract – indicated that QIOSCs function as support staff for
Government Task Leaders, who may have limited internal staff support and are often inexperienced
in the area they are expected to lead. Many of the persons interviewed questioned whether QIOSCs’
staffs are actually knowledgeable and among the most qualified to provide assistance on the topics
for which they are responsible.
On a related note, QIOs indicated that CMS’s approach for identifying special study topics, funding
unsolicited studies, and evaluating competing contracts was not clear. The seeming lack of external
review was deemed a problem. None of the QIOs that NORC spoke with indicated that they were
routinely aware of which QIOs held special studies or that they received data or findings from
special studies. More than one QIO respondent raised concerns that these procurements were not
subject to full and open competition. Additionally, some of the special studies that have been
awarded are unrelated to current QIO work, raising questions about their utility to the QIO
community. When probed on how special study findings were disseminated to the QIO community
or analyzed to improve future SOWs, most QIOs were uncertain. Two QIOs that indicated that their study results were used to develop a future SOW were surprised to find that results had been incorporated into the SOW before their studies were even completed and reviewed.
QIOs commented on the measures used to evaluate QIO performance; substantial differences of
opinion were noted in perceptions about the appropriateness of measures that CMS uses to evaluate
organizational performance. Some QIO staff thought that measures were valid, appropriate and the
best measures available, while others indicated that their performance in certain settings, such as
nursing home and home health (Tasks 1a and 1b), should be evaluated more on the basis of process
as opposed to outcome measures; QIOs thought that this was more appropriate given the amount
of work that goes into developing these quality improvement approaches, which is not considered a
part of CMS evaluation criteria. Others questioned whether certain measures (e.g., management of
depression symptoms in nursing homes) should be employed given the lack of best practices in this
area. As another example, a couple of QIOs indicated that the Appropriate Care Measure in Task
1c1 should be reconsidered since “there is no clinical science” supporting this measure. Many QIOs
involved in the site visits expressed concern about the manner in which performance in the area of
beneficiary complaints is measured (beneficiary satisfaction with the process and outcome of
review). Specifically, they mentioned that while the QIO has control over the process and that it is
appropriate to measure performance based on satisfaction with the process of dealing with a
complaint, the QIO has less control over beneficiaries’ satisfaction with the outcome.
Other QIOs indicated that measures are too numerous and too broad, making it difficult for them
to concentrate on all areas of performance. Several QIOs suggested that, since outcomes reflect
care that occurs in both acute and post-acute settings, CMS consider development of measures that
cover the continuum of care as opposed to setting-specific measures. A couple of other QIOs that
performed well on selected measures indicated that the CMS contract should allow them the
flexibility to work on other measures (outside the SOW) where state performance could be
improved.


Another area in which most QIOs voiced concern was in the lack of transparency in the manner in
which CMS sets performance improvement targets for individual tasks. In many cases, QIOs
expected to reach the stated targets; however, they nonetheless expressed concern about whether
any evidence base existed to support the reasonableness of each target. Several QIOs indicated that
targets for particular tasks were unrealistic or that the bar for performance was set too high.
Similarly, the lack of transparency and seemingly random assignment of scoring weights were issues
of concern to some QIO staff interviewed.
On a related note, one QIO indicated that it was against state law for physicians to implement e-prescribing at the same time that CMS’ contractual requirements mandated that QIOs work with
physicians to adopt this and other forms of health IT. This QIO voiced concern that CMS’
selection of quality improvement measures disregarded potentially conflicting state regulations.
Finally, another issue related to evaluation that was of concern to QIOs was the timing of
evaluations. QIOs indicated that changing provider behavior is a slow process and that given the
amount of time needed to identify, recruit, and implement interventions, insufficient time is available
to demonstrate improvement in an area.
As previously mentioned, questions about Government Task Leaders’ knowledge about the specific
topic area for which they assume responsibility were raised by several QIO staff members
interviewed. One CEO stated, some “GTLs are ‘green’, they have no clinical experience, or worked
outside an academic facility, ever worked with a provider, or done anything in QI.” Some QIOSC
staff indicated that one of the most important roles that they serve was providing information on
their topic area to the GTL.
Although not discussed by all QIOs, those that did comment thought that CMS Project Officers
were supportive and aware of the activities that the QIO was conducting. A couple of QIOs did
indicate, nonetheless, that information obtained from POs and GTLs was often inconsistent. Of
substantial concern, staff from one or two QIOs suggested that POs and GTLs made modifications
to their contracts without formal review by the CMS office responsible for QIO contracts.
With one exception, all QIOs indicated that funding for tasks in the 8th SOW was insufficient to meet performance expectations. In addition to funding levels, many QIOs expressed frustration about the CMS budgetary process. According to QIOs interviewed, CMS’s guidelines for distribution of funds across tasks do not reflect the actual resources required to successfully
complete the tasks. This is of particular concern in the 8th SOW because of the limitations in
fungibility between Task 1 and Task 3 activities. Several QIOs indicated that funds for Task 1
activities were insufficient to complete the broad set of activities in the 8th SOW, whereas funds for
Task 3 activities were excessive relative to the scope of the tasks. Particularly given funding cuts
between the 7th and 8th SOW, QIOs generally believed that they should have the flexibility to move
resources across tasks as needed to improve quality.


6.0 EVALUATION DESIGNS & CONSIDERATIONS

Despite a budget of about $1.3 billion for the 8th Statement of Work (SOW), evidence of the
effectiveness of the Quality Improvement Organization (QIO) program is mixed. Previous
evaluations have found contradictory results and have been limited by methodological problems,
including confounding and selection bias. As indicated in previous sections of this report, the
primary objective of this study was, as identified by ASPE in its Request for Proposal, “to develop
several methodologies to determine whether the QIOs are effective in improving the quality of care
for Medicare beneficiaries and accomplishing their other tasks” (ASPE 2005). This section describes
general approaches that may be employed to evaluate both the core QIO program and
supplementary components of the program, which include special studies and QIOSC contracts.
In addition, this section describes non-evaluative studies which could be used to gather information
to enhance future evaluations of the QIO program as well as to gain a more refined understanding
of the program and gather insight on how well the program serves as a catalyst for quality
improvement.
In proposing evaluation options we build heavily on prior evaluations that have been conducted and
that are described in Section 3.0 of this report. We also build upon findings from our QIO
inventory and site visits to QIOs. A major resource in shaping our recommendations was the 2006
report “Medicare’s Quality Improvement Organization Program: Maximizing Potential,” issued by
the IOM Committee on Redesigning Health Insurance, Performance Measures, Payment, and
Performance Improvement Programs. In structuring evaluation options we attempted to build
upon one of the IOM’s key recommendations, namely that:
“CMS should develop four types of evaluation to assess the QIO program.
CMS should conduct three of these four types of evaluations internally to
assess QIO performance against predetermined goals and priorities at the
following levels: (1) the program as a whole; (2) individual QIOs with respect
to the core contract, and (3) selected quality improvement interventions
implemented by QIOs. DHHS should periodically commission the fourth
type of evaluation – independent, external evaluations of the QIO program’s
overall contributions.”
Finally, the evaluation options described in this section were informed and shaped by the input of an
eight-member Technical Expert Panel (TEP).

6.1 TEP Contributions to the Evaluation Design Process

NORC identified and recruited eight experts to respond to and offer feedback and guidance on draft
evaluation design options. The main purpose of the TEP was to ensure that the evaluation designs
NORC proposed were as rigorous and appropriate as feasible considering the scope of the project,
the availability of data, and the constraints facing the government and an eventual evaluator of the
QIO program. Furthermore, the TEP was recruited to provide NORC with input regarding how
best to frame an evaluation in light of the IOM’s recommendations.


6.1.1 Selection of TEP Members

Several considerations guided the process for selecting TEP members. First, it was important to recruit technical experts who not only were knowledgeable about the QIO program, but who also had collective experience in federal program evaluation, quality improvement, and measurement.
Second, given that evaluation designs could potentially affect the types of data collected, the nature
of activities conducted, and resources expended, we sought to include representatives from the QIO
community to better ensure that recommendations were, in fact, feasible options. Third, we sought
to include quality improvement experts who not only could contribute innovative ideas, but also
who could communicate the ideas and recommendations discussed by the panel to others
knowledgeable about QIOs and health care quality across communities of practice.
An initial list of 24 candidates for the TEP was submitted to ASPE for consideration. (Additional
candidates were added to this list at a later point.) From the candidates nominated, a total of eight
were invited and agreed to participate in the TEP. The NORC consultant to this project, who
participated in the IOM subcommittee on QIOs, also was included as a member of the expert panel.
Names of these individuals and brief biographies are presented in Appendix D to this report.
CMS staff were also invited to attend the TEP meeting; a representative from CMS’ Quality Improvement Group (QIG) in the Office of Clinical Standards and Quality (OCSQ) attended as a guest. Other guests included representatives from the GAO and the Congressional Research Service. The ASPE Project Officer and other ASPE staff also attended the meeting.
6.1.2 Preparation of TEP Materials: Evaluation Options

Prior to the one-day TEP meeting, NORC prepared and distributed background materials to all
members, including meeting objectives, a literature review, copies of published QIO evaluation
studies, and summaries of potential evaluation options for discussion at the meeting.
Evaluation options were prepared by NORC staff. In designing these options NORC drew upon
findings from the literature review, the QIO database, and site visits. Experts in evaluation and
statistical analyses were consulted. NORC considered evaluation designs that could be conducted
with existing data as well as those that would require primary data collection. Some of the options
were, in fact, not evaluations, but data collection activities and short-term studies that could inform
the QIO program and perhaps provide a foundation for future program evaluations. Options
incorporated qualitative and quantitative techniques as well as retrospective and prospective
analyses.
Given that NORC’s access to CMS data was limited and the quality of QIO-related data maintained
by CMS was unknown, many of the options presented to the TEP were described in general terms.
Where appropriate, caveats concerning the availability of selected data were provided throughout the
meeting.
TEP members were asked to consider evaluation designs that would address the following research questions:
• How do IPG and non-IPG providers differ in terms of performance on quality improvement measures?
• To what extent has the Special Studies Program (including QIOSCs) contributed to QIO performance?
• What are the best practices in the provision of technical assistance in each task area?
• What quality gains could be made by having QIOs focus improvement activities on poor performing providers?
• Are QIO performance expectations (measures and improvement targets) appropriate and realistic?

6.1.3 TEP Proceedings

Following introductions, NORC staff began the day’s discussion with an overview of meeting
objectives, including five questions that TEP members were asked to consider throughout the
course of the meeting:
(1) Is the research question or evaluation area a priority, or are there priority areas that are of greater importance in assessing or understanding the performance of the QIO program?
(2) Is the general approach (evaluation design) appropriate to answer the question(s) or are there other approaches that may be more effective?
(3) What are the advantages and limitations to each approach?
(4) Are there ways to address these limitations?
(5) Which data sources are appropriate and available to answer the evaluation question(s) adequately?

Following a discussion of the meeting objectives and a summary of the QIO program, NORC
initiated discussion of various strategies that could be used to evaluate or better understand the
performance of the QIO program.
6.1.3.1 Global TEP Insights
The TEP identified several different frameworks that NORC should consider in proposing options
to ASPE for evaluating the QIO program. One framework, for instance, could focus on CMS policy
evaluation questions; when carried out these evaluations could result in regulatory or statutory
recommendations. A second type of evaluation could focus on evaluation of the program and
related implications for the performance of individual QIOs. Findings from such an evaluation may
lead to recommendations for enhancing QIOs’ processes, thereby enhancing the program overall.
The TEP noted that the techniques used to evaluate or promote change in each of these two areas

are different. Among their key recommendations, TEP members emphasized that, ideally, short-term data collection activities should be completed as soon as possible so that longer-term activities could begin prospectively for the 9th SOW (in 2008).
The TEP emphasized the importance of the evaluator establishing a close working relationship with
CMS in order to achieve success in conducting the evaluation as well as achieving credibility among
those with policymaking authority. Presented below are additional global ideas and
recommendations emerging from discussion at the TEP meeting.
• The QIO contract and the individual organizations are not the same. TEP members pointed out aspects of the QIO contract, including how organizations compete for the contract, how they organize their staff, and other organizational structures. Emphasis was placed on the fact that QIOs do work outside of their core QIO contract. Thus, there are policy, programmatic, and organizational issues to consider in designing a robust evaluation.

• An evaluation of the QIO program should incorporate the IOM’s recommendation to conduct smaller-scale studies. The question was raised as to whether “the whole [of the QIO program] is equal to the sum of its parts”, i.e., is it possible to make an overall statement about the effectiveness of the program nationwide by aggregating evidence from multiple small-scale evaluations? TEP members noted that smaller studies may, in fact, be “more interesting” and more feasible than a larger program evaluation, such as well-designed case-control studies or randomized controlled trials of specific interventions to examine providers’ responses to different interventions (motivation). These types of studies could 1) potentially minimize attribution issues and 2) yield results that are more actionable. Members cautioned, however, that a change in regulation may be necessary to permit random assignment of providers for special study purposes.

• Ultimately, any evaluation of the QIO program should be able to speak to the larger national picture and related implications for policymaking. NORC staff asked TEP members what approach they would recommend if funding were available to conduct only one type of evaluation. Among the responses, one member indicated that several small impact studies could be used to address the question “Is the program effective today?” However, this member was of the opinion that this was a less interesting question than: “If we redesign the whole program to focus on the ones that work, how effective would it be in the future?” Another member echoed the feeling that the question of whether the program is effective as it currently exists is not a useful question to answer and that examination of the bulk of the core contract task by task would only bring us back to the notion of IPGs vs. non-IPGs, which is a limited way of looking at the issue. Another member added that there is good reason to avoid doing an overall evaluation of the core program due to the likelihood that it could end up with the same conclusion as every other previous study: “yes, there is improvement in quality but it is unclear how much is attributable to QIOs.” Several members suggested that instead of the historic approach to QIO program evaluation, which has always been a “one shot” approach, a shift in paradigm to continuous quality improvement would be more informative and better enable organizations to shift course to make necessary programmatic changes.


6.1.3.2 IPG Selection Issues
Questions concerning the process QIOs use to select providers to participate in an IPG (or how providers decide to participate), and the effect of the selection decision on the evaluation, dominated much of the TEP discussion.
NORC staff initiated discussion by describing the design of a study that could be used to examine
differences in quality improvement achieved by IPG and non-IPG providers. It was mentioned that
there is a need to better understand the selection process in order to address the issues of selection
bias and confounding that have plagued similar studies in the past. NORC staff added that there are
several factors that drive selection and that additional information is needed on provider motivation to improve quality and work with the QIO, as well as on the provider’s quality infrastructure (e.g., staff to conduct quality improvement activities). It was stated that providers may receive technical
assistance from many sources, such as the American Hospital Association and the Institute for
Healthcare Improvement. Even among those in an IPG, the QIO may not be the primary source
for quality improvement information.
• Members of the TEP described the limitations of basing evaluation studies on the IPG/non-IPG dichotomy. It was suggested that in the short term, the issue of selection bias must be addressed and that a more thorough understanding of how IPG and non-IPG providers differ is needed. Members pointed out that CMS has created incentives for QIOs to choose a set of providers who they believe can be helped to improve; they may even select organizations that will improve without any technical assistance from the QIO. However, it was suggested by one member of the TEP that selection bias “might not be a bad thing” and that it may be that the criteria used to identify those who participate create the necessary incentives for QIOs to target “high-profile” organizations with the ability to influence other providers to change. Members noted the importance of talking with providers in order to understand not only how they select areas for improvement, but also what motivates them to participate in an IPG or, regardless of whether they are in an IPG, to participate in QIO activities.

• IPG vs. non-IPG comparisons may be flawed by a potential spill-over effect, thereby making it difficult to distinguish between providers who do and do not receive assistance. The TEP highlighted the importance of understanding the IPG selection process well enough to model it econometrically. To this end, members of the TEP generally agreed that measures of the intensity of an interaction may be a more significant determinant of quality improvement than an IPG/non-IPG distinction and should be considered in the evaluation (a stylized sketch of such an intensity-based model follows this list).

• QIO program evaluations should examine the role of motivation in terms of whether providers are internally motivated to reach quality improvement goals and whether QIOs are able to serve as external sources of motivation. Members of the TEP discussed the extent to which provider motivation both to work with the QIO and to achieve performance improvement could be affected by QIOs. Questions were raised as to how to measure or quantify motivation. Suggestions from members included interviewing or surveying providers to determine, for example, the extent to which quality of care issues are discussed at board meetings. An alternative suggestion was to interview providers who had an opportunity but elected not to participate, to gather insight on factors that motivate providers to opt out of the IPG.
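As a stylized illustration of the TEP’s point about modeling intensity rather than a binary IPG flag, the Python sketch below regresses improvement on a count of technical assistance contacts while controlling for baseline performance. The data are synthetic and all variable names are hypothetical; as the TEP cautioned, a credible analysis would also have to model selection into higher intensity.

# Stylized sketch: intensity of QIO-provider interaction as the explanatory
# variable, rather than an IPG/non-IPG dichotomy. Synthetic data throughout.
import numpy as np

rng = np.random.default_rng(1)
n = 200
baseline = rng.uniform(0.4, 0.9, n)              # baseline quality score
intensity = rng.poisson(3, n).astype(float)      # e.g., count of TA contacts
improvement = 0.02 * intensity - 0.1 * baseline + rng.normal(0, 0.02, n)

# OLS of improvement on intensity, controlling for baseline performance.
X = np.column_stack([np.ones(n), intensity, baseline])
coef, *_ = np.linalg.lstsq(X, improvement, rcond=None)
print(f"Estimated change in improvement per additional TA contact: {coef[1]:.3f}")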


6.1.3.3 Technical Assistance and QIO Interventions

Among the study design options that received the greatest support among members of the panel
were those addressing questions about the effectiveness of alternative technical assistance strategies.
NORC staff inquired about the feasibility of evaluating alternative approaches using a case-control
study design in which randomization occurs at the level of IPGs (within a QIO), at the level of the
QIO, or the possibility of conducting studies in which QIOs are matched on factors such as baseline
performance or beneficiary characteristics. One member of the panel suggested that efficacy trials
were necessary in order to determine which interventions work and to answer the question of
attribution. Although one member of the TEP indicated that it is probably feasible to conduct a
descriptive inventory of different interventions, most members of the TEP appeared to agree that
the possibility of doing an evaluation of technical assistance (a smaller scale study) using rigorous
methodological designs is substantially greater than the feasibility of conducting an overall program
evaluation. The TEP offered a number of considerations for designing studies that would examine
the impact of different technical assistance strategies.
• Interventions are highly variable within and across states, and there are many variables in the environment that could influence the effectiveness of a QIO’s technical assistance program. One member indicated that it might be interesting to determine which forms of technical assistance are driven by CMS and which are coming from “the community close to the ground” who are working with providers. The member added that training of some of the CMS staff during the 6th SOW came from the Institute for Healthcare Improvement (IHI) under contract with CMS; thus, many of the QIOs have adapted IHI’s Breakthrough Series design. The question was raised whether examples could be drawn from other industries or disciplines, such as systems engineering, which potentially could be applied to evaluating the program as a whole. A member of the TEP offered support for this idea, indicating that many of the change concepts employed by the IHI and used by QIOs are drawn from industries outside of health care. In addition, the TEP drew attention to the fact that interventions varied across QIOs and across tasks. In identifying areas for evaluation of alternative technical assistance approaches, members made several suggestions, including focusing on those tasks that QIOs have not yet begun working on and avoiding tasks where national performance is high, as in performance on Acute Myocardial Infarction process rates.

• The impact of non-QIO factors or influences on quality cannot be overlooked when evaluating alternative technical assistance approaches. A member suggested that there should be an understanding of other factors that influence quality; otherwise a statement of attribution cannot be made. At the QIO level, it is important to differentiate between the intervention (provider level) and the mode of delivery of technical assistance (tools, education material, etc.).
6.1.3.4 Evaluation of Special Studies and the QIOSC Program

NORC staff provided a brief overview of the QIOSC/special studies program. In particular, it was
emphasized that CMS views QIOSCs as a type of special study, and the purpose of each is to reduce
duplication of effort among QIOs, i.e., to prevent QIOs from “reinventing the wheel” with respect
to developing technical assistance materials. In discussing options for evaluating the extent to which
special studies “add value” to the QIO program, the TEP offered a few key recommendations.
• One member suggested conducting case studies for an in-depth comparison of
“successful” versus “unsuccessful” special studies. More specifically, a TEP member
indicated that the evaluator should identify five or six examples of each type of special study to
determine the conditions or structures that led to their success or failure.

• One member of the TEP suggested that NORC identify tasks that were neither unique
nor varied from state to state. This approach would enable us to determine whether the
QIOSC enabled QIOs to reduce redundancy of effort and devote their CMS resources to other
technical assistance activities.

• Another member suggested comparing the performance of QIOs in their first and third contracting cycles to ascertain the impact of the QIOSC. In theory, QIO performance during the first contracting cycle, when QIOSCs are starting up and have not had an opportunity to develop tools/resources, should be worse (or improvement should be slower) compared to that of QIOs in the third cycle, because the QIOSC has had an opportunity to develop and disseminate tools/resources.
6.1.3.5 Data and Measurement Issues

NORC devoted a portion of the TEP meeting to a discussion of the availability of data for
evaluation, especially CMS data and the limitations of these data. (A summary of these data sources
is presented in Appendix E.) Members discussed the various methods by which QIOs report their
activities, including the PARTner system and the Dashboards, and discussed, at length, the issue of
data lags. Questions were raised regarding the use of an experimental pre-post design to evaluate a
program that is attempting to continuously improve and accelerate the rate of improvements by
looking at multiple data points and trends, which would require ongoing access to continuous data.
It was stated that in most cases it would be necessary to shorten the lag in availability of data.
Shortening the data lag would mean having partial data, but it would be feasible with non-claims
based data, such as the Minimum Dataset and the Outcome and Assessment Information Set.
NORC also discussed the various surveys that are conducted as part of the QIO program and
possible opportunities for building on these surveys to fill gaps in data necessary for a robust
evaluation. The question was posed as to whether items from these surveys could be used or
whether modifications to these surveys could be made in order to more efficiently obtain
information necessary to conduct the evaluation. TEP members agreed that these surveys could be
useful tools, but noted that some questions in the survey do not produce “actionable” information. These tools
could, however, be modified or refined to serve as an efficient vehicle for gathering additional data
for the evaluation.
TEP members discussed concerns about the QIO activity codes that are used by CMS (in the
PARTner system) to report on work that QIOs have done with providers. Among the concerns
voiced was that these codes were of limited use in evaluation of the QIO program because they were
“substance-free” and offered limited insight into the technical assistance that QIOs offer.

6.2 Assumptions, Scope and Organization of Evaluation Designs

Prior to describing evaluation options, several assumptions framing our design recommendations
must be specified, the most important being that the options recommended are intended to evaluate
the program as it is currently structured. Although NORC is aware that CMS is considering various
structural changes to the program, our decision to base designs on the current structure of the
program - as opposed to a potentially re-structured program - is grounded in the principle that, for
purposes of policy-making and program operations, it is necessary to understand a program’s
performance prior to investing a substantial amount of resources into a redesign effort, in order to determine (a) whether a redesign is actually necessary, (b) which elements of the program should be restructured, and (c) following restructuring, the cost-effectiveness or impact of the changes made to the program.
Other major assumptions are that the performance measures in the SOW are, in fact, appropriate measures of quality within specific task areas and that the populations, with regard to IPGs and the IPG selection process, reflect CMS’s strategic decisions concerning the providers that they seek to influence. As such, most of the design options presented assume that the performance targets and provider populations “touched” by QIOs are de facto indicators of CMS’s policy objectives.
Nevertheless, given the significant level of concern (among both QIO staff participating in site visits
and members of the TEP) about CMS’s approach for setting performance targets and the SOWs’
lack of focus on low-performing providers, we included evaluation options to assist in examining
changes to these policies.
The TEP suggested and project staff agreed that, instead of a retrospective “one-shot” evaluation,
design alternatives should focus on establishing the processes and systems for the on-going,
prospective evaluation of the QIO program. Each of the evaluations in this section was designed
with these criteria in mind. As appropriate, however, retrospective activities and those that could
produce data to inform the development of the 9th SOW are mentioned.
6.2.2 Scope of Evaluation Designs

Despite interest in understanding how well the overall QIO program performs, many non-evaluative research projects and developmental activities are also discussed in this section.
Discussion of these smaller-scale projects or activities is essential because they serve as building
blocks in evaluating the QIO program or selected components of the program. These projects,
several of which were developed and discussed as part of the TEP meeting, may also provide
fundamental information necessary to understand how the QIO program operates or, in the future,
to re-structure the program if it is determined that the present design fails to achieve the intended
quality improvement objectives. The designs described, and particularly the activities that are the
foundation for these evaluations, are expected to assist in building an on-going QIO evaluation
program that may be conducted in the 9th and future SOWs.
Of note, each of the evaluative and non-evaluative designs or activities described in this section of
the report is presented in rather general terms. Typically, specific tasks, subtasks, measures and
analytical approaches are not detailed. This is not to suggest that a “one-size-fits-all” approach
should be undertaken. On the contrary, we recommend that the QIO program be evaluated at the
subtask level and that the design, data collection process, statistical and analytical approach be
customized to address the unique elements inherent in the separate subtasks and corresponding
measures. Due to the many tasks over which QIOs assume responsibility, it is not feasible within
the scope of this report to propose separate models or evaluation frameworks for each subtask.

6.3 Designs for Evaluating the Core Program

The primary objective of this project is to describe options for evaluating the core QIO program,
focusing on Task 1.xii We begin this discussion by describing one approach for evaluating the core
program based on a national, provider-level analysis which incorporates a case-control panel design
to assess differences in IPG and non-IPG providers’ performance. This design option is described
in three stages:
• First, the long-term evaluation goal and related approach is presented – in this case, an overall evaluation of core QIO subtasks using econometric modeling techniques.

• Second, primary and secondary data collection activities that are necessary to carry out the proposed approach are described.

• Third, supplementary short-term studies are described that will assist in primary and secondary data collection and strengthen the overall program evaluation in general.

Previous national-level evaluations of the QIO program (as detailed in Section 3.0 of this report)
have been based on outcomes analyses, where outcomes are operationalized using specific clinical
quality indicators and clinical care process measures. The conclusions drawn from these evaluations
are based on comparisons of whether and how the performance of IPG and non-IPG providers
differ from baseline to remeasurement on the select measures that were examined in each study.
The design presented below for evaluating the core program builds on these earlier studies but incorporates several refinements. First, as suggested by members of the TEP, we examine
performance at several time periods in the life of the QIO contract (at least annually) to observe
trends in performance. Second, we attempt to address issues of selection bias and attribution, two
factors that have limited evaluations of the QIO program to date. Third, in recognition of the fact
that an IPG/non-IPG dichotomy may not accurately reflect the technical assistance that providers
obtain from QIOs, we attempt to incorporate a measure of provider-QIO engagement into the
design.

xii Members of the TEP supported the focus on Task 1 (quality improvement) activities and, with ASPE’s understanding, Task 3 activities were not specifically addressed in this section. Section 7.0 of this report describes research questions that may be studied to gain a further understanding of the impact of Task 3 activities.

6.3.1 Using Econometric Modeling Techniques to Assess Differences in IPG and non-IPG Provider Performance

The first option uses econometric modeling techniques to examine differences in IPG and non-IPG provider performance on clinical quality and process of care measures. Similar to past analyses, it is hypothesized that, for each health care setting under Task 1, performance on quality measures (e.g., restraint use in nursing homes, on-time prophylactic antibiotic administration in hospitals)
is related directly to provider engagement with the QIO. A simple test of this hypothesis, however, is compromised by
the presence of selection bias. In theory, QIOs focus largely, albeit not exclusively, on assisting
providers in an IPG and, thus, it is expected that IPG providers would improve on performance
indicators at a faster rate than non-IPG providers. However, IPG providers are not selected
randomly. Rather, for most subtasks in each health care setting, QIOs are required to identify a
subset of providers with whom they will work intensely. As a result, there may be inherent
differences between providers who were selected (or volunteered) to participate in an IPG and
providers who were not selected (or did not volunteer) to participate. Due to non-random selection,
and the likelihood that IPG providers are selected to participate because they are the most likely to
improve (or they volunteer to participate because they are the most motivated to improve), estimates
of a QIO’s impact on performance are likely to be biased.
In situations where a randomized controlled trial cannot be used, a two-stage econometric model can often be used to estimate program effects. This technique – which can be applied to take into account factors that may influence a provider’s likelihood of working with a QIO – is proposed in order to address the two methodological barriers that have hindered previous QIO program evaluations: selection bias and confounding, or attribution.
Equations 1 and 2 below depict how the effectiveness of the QIO program could be modeled using
econometric techniques. The actual specification of this impact model may differ in significant ways
from that described below, in large part, because our current understanding of the IPG selection
process and the structure of data to conduct this study is limited. A refined version of this model
may be developed as results from analyses described in later sections are obtained.
Equation 1: p(S) = f (provider characteristics, environmental characteristics, QIO characteristics)
Equation 2: P = f (p(S), provider characteristics, environmental characteristics, year, QIO)
Equation 1 models the selection mechanism by estimating the probability that a provider of a particular type (e.g., nursing home, home health agency, etc.) participates or is selected to participate in a QIO’s IPG. The likelihood of selection is modeled as a function of provider, environmental and QIO characteristics. Individual subtasks per health care setting are modeled separately using the measures and IPG providers associated with each subtask.
Equation 2 addresses selection bias by estimating provider performance as a function of the
likelihood of selection into an IPG as well as other variables that include provider, environmental,
and QIO characteristics. As in equation 1, performance would be measured separately for each
subtask and therefore the actual specification of equation 2 may vary across subtasks. Finally,
performance for IPG versus non-IPG providers is measured continuously (e.g., annually or semi-annually) throughout the life of the SOW in order to observe trends in performance (denoted as
year).
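
To make the two-stage structure concrete, the following sketch illustrates how Equations 1 and 2 might be estimated for a single subtask. It is a minimal illustration under stated assumptions rather than a specification of the actual impact model: the analytic file and column names (ipg, perf, for_profit, rural, system_member, hmo_penetration, year, qio_id) are hypothetical, and a formal selection correction (e.g., an inverse Mills ratio term supported by an exclusion restriction) would likely be needed in practice.

    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical analytic file: one row per provider per year for one subtask.
    # Columns assumed: ipg (1 if in the IPG), perf (subtask quality measure),
    # for_profit, rural, system_member, hmo_penetration, year, qio_id.
    df = pd.read_csv("subtask_providers.csv")

    # Equation 1: probability of IPG participation (the selection mechanism).
    X1 = sm.add_constant(df[["for_profit", "rural", "system_member"]])
    selection = sm.Probit(df["ipg"], X1).fit()
    df["p_select"] = selection.predict(X1)

    # Equation 2: performance as a function of the estimated selection
    # probability plus provider, environmental, year, and QIO terms.
    X2 = pd.get_dummies(
        df[["p_select", "for_profit", "rural", "system_member",
            "hmo_penetration", "year", "qio_id"]],
        columns=["year", "qio_id"], drop_first=True).astype(float)
    X2 = sm.add_constant(X2)
    outcome = sm.OLS(df["perf"], X2).fit(
        cov_type="cluster", cov_kwds={"groups": df["qio_id"]})
    print(outcome.summary())

Entering the estimated selection probability as a regressor is a control-function device; its adequacy depends on correctly specifying Equation 1, which is precisely the limitation discussed in Section 6.3.3.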


6.3.2 Variables, Data Sources and Data Needs

This section describes the key variablesxiii and data sources that are necessary to model the
relationships described above. Both primary and secondary data collection would be required to
conduct these analyses. Some of the data to conduct this study, for instance, are collected already in
the usual course of operating the QIO program and are available from CMS administrative files.
However, given gaps and limitations in the existing data, we anticipate that primary data collection
also will be required to conduct this evaluation. Table 6.1 provides a summary of the discussion in
this section.
6.3.2.1 Data to Construct Dependent Variables

Equation 1 – Provider Participation in an IPG: Data on the providers that comprise each QIO’s
IPGs, by subtask, are a required deliverable, reported to the CMS Project Officer, Contracting
Officers and QIOSC; these data were stored in the PARTner system during the 7th SOW.xiv CMS
restricts access to the unique identifiers necessary to determine whether a provider is a member of
an IPG. Currently it is necessary for the evaluator to work either through individual QIOs or
QIOSCs to gather de-identified data to determine which providers are included in and excluded from a
QIO’s IPGs. The status of the PARTner system during the 8th SOW is unclear, but presumably
identifiers associated with providers in each QIO’s IPG will continue to be available electronically.
Equation 2 - Provider Performance: Performance on subtask quality measures is collected as a
standard part of the QIO program and should continue to be available through CMS or the
QIOSCs. In fact, for many subtasks, the performance measures by which QIO performance is
evaluated are the same measures reported in the hospital, home health, and nursing home
COMPARE databases or that can be derived from sources such as the Nursing Home Minimum
Data Set or the Home Health Outcome and Assessment Information Set.
6.3.2.2 Data to Construct Independent Variables
Provider Characteristics
Variables: Although the provider-level characteristics associated with participation in an IPG are
not completely known, we believe that the probability of selection (equation 1) or participation in an
IPG could be driven by:
• provider profit status;
• rural vs. urban location;
• system membership;
• willingness and motivation to work on quality improvement issues with the QIO;
• resource availability (e.g., cost structure, staffing mix, infrastructure supporting quality improvement);
• the extent to which a provider uses other available quality improvement resources; and
• provider performance and QIOs’ perceptions of the ability of a provider to achieve improvement.

The last two factors are particularly important. To the extent that providers obtain quality improvement resources from non-QIO trade organizations (e.g., the American Hospital Association, the American Medical Association) and other quality improvement organizations (e.g., the Institute for Healthcare Improvement), they may be less likely to participate in an IPG if they are fulfilling their demand for quality improvement support elsewhere.

xiii Variables considered to be of lesser importance in these models may have been omitted.

xiv The PARTner system is being restructured for the 8th SOW and it is assumed that these data will be available in the next generation of this system. Interviews with QIO representatives as well as accounts from the IOM report (2006) raise questions about the quality of the data in this repository. Examination of the quality of the data was beyond the scope of this project, and while CMS data sources are considered to be key components of these analyses, it is necessary to thoroughly determine the availability, quality and limitations of the data prior to inclusion in these analyses.
Data Sources: CMS administrative databases – including the Providers of Services file, the
Medicare Cost Reports, the Standard Analytical Files, and the Provider Enrollment, Chain and
Ownership System data – may each be used to extract information on provider characteristics, such
as profit status, membership in a system, rural/urban location, and staffing. Private sector
databases, such as the American Hospital Association Annual Survey, may supplement information
that is not available in CMS administrative databases.
A subset of provider-level variables that we believe to be critical for modeling the QIO’s role in
advancing provider performance is not available from these sources. Specifically, 1) information to
ascertain the provider’s level of motivation and willingness to work with the QIO on quality
improvement issues, 2) the extent to which the provider has the internal infrastructure to support
quality improvement efforts, and 3) utilization of non-QIO quality improvement resources are not readily available and must be obtained through primary data collection.
The concepts of “willingness” and “motivation” are difficult to define and measure. For purposes
of this evaluation it is possible that proxies will need to be identified in order to measure these
concepts. Proxies could include measures of management, staff and/or resources that have been
committed to working with the QIO or participating in quality improvement initiatives. Other
proxies may be defined by whether the provider engages in specific quality improvement activities.
A potential instrument for collection of these data is the CMS “Survey of Provider Satisfaction with
Quality Improvement Organizations” (referred to in this report as the Provider Satisfaction Survey).
This survey is administered to both IPG and non-IPG providers primarily as a means to gather
information on their satisfaction with QIO assistance. Also included in this survey are items related
to broad categories of quality improvement assistance (i.e., internet access to websites, site visits,
training, and workshops). Inclusion of a module to address issues of motivation, willingness to
work with the QIO, quality improvement infrastructure, and use of quality improvement resources
from non-QIO organizations may serve as a cost-effective means to gather the provider-level data
necessary to conduct these analyses. Although the same providers are in an IPG during the three-year contract period, a provider’s motivation or quality improvement resources may change during
the course of the SOW. Because one of our objectives is to promote continuous monitoring, the
survey would need to be conducted at least annually in order to track these changes.
Environmental Characteristics
Variables: Many managed care organizations require providers to participate in selected quality
improvement initiatives and, arguably, providers in areas dominated by managed care may be less
inclined to participate in an IPG. (It is also possible that providers engaged in quality improvement
initiatives with managed care organizations are more inclined to work with the QIO – or the QIO is
more likely to solicit their participation – since the added burden of working with the QIO may be
minimal for these providers.) Although the extent to which providers compete on the basis of
quality is unclear, it is possible that market competitiveness is predictive of participation in an IPG, with greater competitiveness associated with a higher likelihood of participation.
Data Sources: Resources to characterize environmental features that may drive participation in an
IPG and other quality improvement activities are available from public and private sources. Among
these are the Bureau of Health Professions’ Area Resource File, the Medicare Denominator File (for
use in estimating managed care penetration in the elderly population), the Kaiser Family Foundation
State Health Facts database, and others.
QIO Characteristics
Variables: It is probable that quality improvement is driven not solely by whether a provider is an
IPG member, but also by the types, intensity, and frequency of technical assistance that QIOs offer
to providers. Although, in theory, IPG providers interact with QIOs at a higher level of intensity
than non-IPGs, it is important to note that non-IPGs can – and do – obtain technical assistance
from QIOs, and that IPG providers do not necessarily receive or seek technical assistance. The
concepts of “technical assistance” and “intensity” are difficult to define and measure, but should be
considered key determinants of providers’ performance improvement. QIOs may render technical
assistance in numerous ways depending upon the type of provider, the QIOs’ particular
circumstance, and CMS expectations of the QIO in each subtask. Although the most appropriate
means for measuring intensity of assistance is unclear and requires additional study, attention should
be given to the fact that the relationship between intensity of assistance and performance may be
non-linear.
Data Sources: For program evaluation purposes (either retrospective or prospective)xv the
PARTner system could be used to obtain data on the type of technical assistance offered by QIOs
to providers. Specifically, there are several fields designed to capture information on the types of
technical assistance activities that QIOs offer providers. Examples of these activity fields include:
• Explanation about measures
• Information about quality improvement
• On-site support
• Stand-alone workshops on quality measures
• Stand-alone workshops on quality improvement
• Planned multi-contact intervention on quality improvement

These codes convey little substantive information about the technical assistance provided or the
intensity of interaction; in fact, under the 7th SOW providers were not required to enter more than
one code a month.xvi
xv This assumes that these codes will continue to be collected in future versions of the PARTner system.

xvi CMS. PARTner User’s Guide. Appendix G, pp. 14-26.

To strengthen inferences drawn from this evaluation, as we move forward with the QIO program it
may be advantageous to invest resources in the development of a substantive and more rigorous
approach for measuring technical assistance and intensity. In theory, measures or scales could be
created using detailed descriptions about the means by which QIOs render technical assistance, the
types of information conveyed, and the number of times that technical assistance is provided.
Ideally, for purposes of continuous monitoring, these measures/scales would be incorporated into
the next generation of the PARTner system. It may also be feasible to use the Provider Satisfaction
Survey as a vehicle for collecting these data.
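
Purely as an illustration of how such a scale might combine these dimensions, the sketch below scores a provider’s contacts with the QIO by type and frequency. The contact categories, weights, and logarithmic scaling are assumptions made for demonstration, not proposed measures.

    import math
    from collections import Counter

    # Hypothetical weights for the relative intensity of each contact type;
    # real values would come from the measure-development work described above.
    WEIGHTS = {"mailing": 1, "workshop": 2,
               "multi_contact_intervention": 4, "site_visit": 5}

    def intensity_score(contacts):
        """Combine contact type and frequency into one intensity value."""
        counts = Counter(contacts)
        # log1p gives diminishing returns per repeated contact of one type,
        # reflecting the possibly non-linear intensity-performance relationship.
        return sum(w * math.log1p(counts[kind]) for kind, w in WEIGHTS.items())

    # Example: one provider's logged contacts with the QIO over a year.
    print(round(intensity_score(
        ["workshop", "workshop", "site_visit", "mailing"]), 2))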
Other Key Characteristics
Year/time: Year or time period is included in this model because, as suggested by a member of the
TEP, an effort should be made to examine continuous improvements in quality. As such, it is
recommended that performance be measured on at least an annual basis. Inclusion of this variable
thus enables us to track time trends in provider performance.
6.3.3 Limitations in Evaluating the Core Program

We believe that the model described above is a robust framework that could be adapted
for the ongoing evaluation of the core QIO program. The above approach, however, suffers from
several limitations. Among these limitations is the potential inability to fully specify equations 1 and
2, resulting in omitted variables bias and, possibly, erroneous conclusions about the effectiveness of
the program.
To some degree specification error may be addressed with greater understanding of both the IPG
selection process and QIOs’ processes for rendering technical assistance. Nonetheless, this type of
evaluation would be technically complex. Adding another layer of complexity, it may be necessary
to make changes to federal regulations, and specifically to regulations that limit access to the
names/identifiers of providers that are included in an IPG, in order to ensure that necessary data are
available to evaluate individual subtasks. Indeed, a few members of the TEP were not entirely
convinced that an econometric evaluation such as described in this section was the most feasible to
conduct. Another option for prospective evaluations would be random assignment of providers to address selection bias; however, an evaluator’s ability to randomly assign providers into treatment and control groups may be limited by the prohibition on identifying providers in the IPG.
6.3.4

Studies and Activities Contributing to Core Evaluation

As members of the TEP indicated, investments in an evaluation of the overall QIO program may be
risky at this time because of our limited ability to adequately model the IPG selection process and to
define and measure key QIO- and provider-specific variables, such as interaction with the QIO, the
intensity of technical support and provider “motivation.”
Despite the TEP’s concerns about the feasibility of such a design, the fact is that in absolute terms,
investments in the QIO program are large. Policy-makers are concerned about the impact and the
value of expenditures on this program. To restructure the program without considering its impact
could be costly and, without baseline information on performance, it would be impossible to
determine the cost-effectiveness of restructuring. Therefore, while we recognize the shortcomings
of this evaluation option, we believe that many of these limitations could be addressed over time,
through investments in short and mid-term studies, additional data collection, and changes in policy
that currently impede a thorough evaluation of the QIO program.
Much of the remaining discussion in this section of the report is aimed at describing the short-term
studies and data collection activities that need to occur in order to conduct a sound program
evaluation. It is important to note that some of these short-term studies, data collection instruments, and systems are
unlikely to be available by the time that the 9th SOW starts and, to the extent that CMS implements structural
changes to the QIO program, modifications to the overall evaluation design, data, and instruments will be required.
Short-term Data Collection Activities and Studies to Understand the IPG Selection Process:
Results from site visits suggest that several factors – level of provider performance, motivation, and
access to resources – influence QIOs’ decision to include providers in an IPG as well as providers’
willingness to work with QIOs. Despite the information gathered from these visits, there is a dearth
of information on the mechanisms that drive inclusion (from the perspective of a QIO) or
participation (from the perspective of the provider) in an IPG. A more complete
understanding of this relationship is necessary to fully specify the models described above and to
accurately control for selection bias in estimating differences in quality improvement for IPG and
non-IPG providers.
A short-term study, combining both qualitative and quantitative techniques, may provide a better
understanding of the IPG selection process as well as evidence to better ascertain whether IPGs are
the most effective means by which to continue to offer the majority of technical assistance to
providers. As part of this study the evaluator could examine:
• Processes by which the QIOs determine which providers to include/invite or exclude/not invite to participate in an IPG;
• Reasons why some providers seek to participate in an IPG and others do not; and
• Factors that determine the amount and type of assistance that providers – both in and out of the IPG – seek from the QIO as opposed to other organizations.

Three options for gathering information to understand the selection process are described below.
These include data gathering strategies based on: (1) QIO staff and provider interviews, (2)
exploratory analyses of existing data, and (3) primary data collection with a provider survey.
(1) Interviews with QIO Staff and Providers: Interviews could be conducted with QIO staff
members who lead all subtasks to understand a) the criteria that QIOs use to identify
providers as candidates for an IPG and, to the degree that more IPG candidates than
required under the SOW are identified, b) how QIOs make the final IPG selection decision. Interviews could focus on how provider motivation, infrastructure requirements and baseline performance factor into the selection decision. As appropriate, data and tools used in selecting providers should be collected and reviewed.
Interviews with providers – both IPG and non-IPG providers – may offer perspective on
why certain providers opt in and why certain providers who are invited to participate in an
IPG decline. Interviews may also be conducted with providers that were not invited to
participate in an IPG to gauge their awareness of the QIO and interest in receiving technical
assistance. For both IPG and non-IPG providers, alternative sources of technical assistance
and the relative importance of these alternate sources of assistance could be identified.
(2) Exploratory Analyses: Using existing data sources, such as CMS databases (Providers of
Services File, Cost Reports, etc.) and publicly available databases (e.g., AHA Annual
Survey), exploratory analyses could be conducted to assess how IPG and non-IPG
providers differ on basic structural and organizational measures. Comparison of IPG and
non-IPG providers will be limited by the types of information contained in the available
data but, at minimum, data on providers’ financial performance, service lines, cost
structure, staffing mix, rural/urban location, system membership, and
ownership are expected to be available.
(3) Survey Providers: As previously mentioned, data on several provider-level characteristics
that may drive participation are not readily available, including motivation, interest in
participating and working with the QIO, availability of infrastructure to support quality
improvement efforts, and use of non-QIO quality improvement resources. These data
could be obtained through primary data collection using the Provider Satisfaction Survey as
the collection mechanism. Before this can occur, however, investments must be made in
defining these concepts and designing suitable questions to gather this information using
the existing survey.
Short-term Study of QIO Interventions and Intensity: The type, frequency, and intensity of
technical assistance that QIOs offer providers are important factors to identify when assessing the
impact of QIO support on provider performance. However, scant data exist on the range of
technical assistance offered by QIOs and little has been done to characterize the intensity and
frequency of QIO interactions with providers. The PARTner data system activity codes were
identified as a possible source of information on QIO engagement with the provider. This
information was collected for selected task areas in the 7th SOW; however, it is unclear whether this
information will continue to be gathered in the 8th SOW (updated) version of PARTner or its
replacement. In either case, activity codes provide limited substantive information regarding the
types of technical assistance that QIOs offer and the intensity of that technical assistance.
In the short-term, investments in the development of measures or scales by which QIO technical
assistance can be categorized, both in terms of substance and intensity, will further our ability to
evaluate the QIO program. Among the data collection approaches that could be used to gather
information to construct a provider-QIO engagement measure are: (1) semi-structured interviews with QIOs
and providers and (2) a provider survey.
(1) Semi-Structured Interviews with QIOs and Providers: To understand how technical
assistance is offered at the sub-task level, semi-structured interviews could be conducted
with QIOs and providers to catalog the numerous types of technical assistance strategies
and interventions that are employed across all QIOs, and to ascertain whether certain
provider or environmental factors—such as provider’s baseline performance or location in
a rural setting—influence the decision to use certain types of assistance over others.

(2) Provider Survey: Additional information from the provider’s perspective may also be ascertained by using the Provider Satisfaction Survey as a vehicle to gather detailed
information to clarify the nature of the specific intervention, substantive information
conveyed, and intensity of interaction with the QIO. These data are particularly important
because membership in an IPG is a relatively nebulous measure of QIO support. On the
one hand, members of an IPG decide the extent to which they utilize QIO resources and
some members of the IPG may have little to no contact with the QIO. On the other hand,
QIOs are required to work with all providers in the state and, in theory, it is possible that
some non-IPG providers need and receive more technical assistance from the QIO than
members of the IPG. For purposes of evaluation, understanding the type and intensity of
technical assistance will assist in addressing the fact that classification in an IPG is an
imperfect measure of the level of support that a provider receives from the QIO.

Table 6.1: Econometric Modeling Variables, Examples of Data Sources and Data Needs to Assess Differences in IPG vs. Non-IPG Performance

Independent Variables

Provider Characteristics (examples: profit status; rural/urban location; system membership; motivation; willingness; resource availability)
  Primary data collection: Provider Satisfaction Survey (motivation; willingness to work with the QIO; infrastructure for QI; use of non-QIO QI resources)
  Secondary data collection: CMS administrative databases (Providers of Services file; Medicare Cost Reports; Standard Analytical Files; Provider Enrollment, Chain & Ownership System data); private sector databases (AHA Survey)

Environmental Characteristics (examples: managed care penetration; market competitiveness)
  Primary data collection: None
  Secondary data collection: Area Resource File; Medicare Denominator File; Kaiser Family Foundation State Health Facts database

QIO Characteristics (examples: technical assistance type; QIO criteria for inclusion in IPG; intensity of technical assistance)
  Primary data collection: Development of new measures for PARTner; Provider Satisfaction Survey; study on selection process; intensity of interaction scales or measures
  Secondary data collection: PARTnerxvii

Year/Time Period
  Primary data collection: n/a
  Secondary data collection: n/a

Dependent Variables

Participation in an IPG (yes or no)
  Primary data collection: None
  Secondary data collection: PARTner; data requests through QIOSCs & individual QIOs

Provider Performance (on individual quality/process measures)
  Primary data collection: None
  Secondary data collection: Data requests through QIOSCs & individual QIOs; CMS COMPARE databases; Nursing Home MDS; Home Health OASIS

xvii Limitations in the use of the PARTner system may exist since its status in the 8th SOW is unknown.

6.4 Designs for Evaluating the Special Studies Program

As mentioned in Section 2.2.2, the Special Studies Program consists of two different types of special
studies—Quality Improvement Organization Support Centers (QIOSCs) and all other special
studies. During the 7th SOW, CMS spending on the Special Studies Program amounted to more than
$130 million. Approximately $67 million went to the support of QIOSCs, which are considered to
be a type of special study. Funding for special studies is separate from the funding of core tasks
(Tasks 1 and 3) and is granted to QIOs on the basis of responses to calls for proposals or
unsolicited proposals that QIOs submit to CMS.
Based on site visits and according to the IOM, CMS disseminates special study results through such forums as e-mail listservs and national conferences; CMS has also proposed using QIONet in the future. Beyond this, and despite the amount spent on special studies, little is known about how
the results of special studies are used to support QIO functions or advance the quality of care. For
instance, it is unknown whether:
• QIOs that receive special study funds apply results to address statewide quality issues;
• results assist other QIOs in offering technical assistance to providers; and
• CMS uses results to structure, guide or inform the QIO program.

As with other special studies, there also is little systematic information on how QIOSCs assist QIOs
in meeting their contractual obligations to CMS. This includes a lack of information on the
materials, products, analyses, and data that QIOSCs deliver or make accessible to the QIO, as well
as the technical issues and questions to which QIOSCs respond. Indeed, the nature of the
relationship between QIOs, QIOSCs, and CMS is not well understood.
Presented below are options for collecting data and evaluating the Special Studies and QIOSC
programs.
6.4.1 Development of a Special Studies Inventory

Ideally, an evaluation of special studies would provide insight into how study results contribute to:
(1) QIOs’ ability to offer technical assistance to providers, (2) providers’ willingness or ability to
improve quality of care and to work with QIOs, or (3) CMS’ program redesign or improvement.
However, little systematic evidence to address these issues or to conduct such an evaluation is
available. The objective of this study is to better understand where, or on what issues, special study
funds are currently being expended as well as how the results from special studies are being
disseminated and used. This short-term project, which is designed to result in an inventory of QIO special studies, is a key component of the longer-term evaluation of the impact of special studies.
Key pieces of information that could be collected in the short-term and on an ongoing basis during
each SOW for this inventory include:
• Special study identifying information;
• Status of the special studies funded during the 9th SOW;
• Results obtained from special studies; and
• Dissemination methods and audiences targeted in dissemination efforts.

This information could be obtained using the qualitative research techniques that are presented
below.
Interviews with QIOs: To better understand the purpose and progress to date of special studies,
semi-structured interviews could be conducted with all QIOs that previously received (for a retrospective analysis) or currently receive special study funds in the 8th and future SOWs. QIOs may be queried to
gather information on various products emerging from special studies, including reports,
manuscripts, presentations, toolkits, training guides, technical assistance funded as a special study,
the avenues by which these products were distributed, and the potential (or actual) contribution of
study results to the QIO program.
Survey of QIOs: Alternatively or in addition to interviews with QIOs, a survey that requests QIOs
receiving special study funds in the 9th SOW to provide detailed information on the project status
and to submit materials prepared with those funds could be readily conducted. Telephone follow-up may be required to achieve the response rate necessary for gathering the information to complete
the inventory using survey techniques. Of note, while a survey is likely to produce data that are
more systematic and generalizable, costs associated with conducting the survey are likely to be high.
CMS Administrative Data: Annual or quarterly progress reports that document the status, results,
or impact of special studies—to the extent that CMS maintains and is willing to provide these
materials—should be reviewed and incorporated into the larger inventory.
6.4.2 Options for Evaluating the Special Studies Program: Case Studies, Key Informant Interviews and Surveys

The inventory of QIO special studies described above cannot fully answer the question of whether
investments in special studies contribute materially to quality improvement, QIOs’ ability to offer
technical assistance, or enhancements to the QIO program. As described below, case studies as well
as QIO and provider surveys are among the various approaches that could be used to gather
information to determine whether and/or how special studies contribute to material improvements
in the quality of care.
Case Studies: The case study approach is one mechanism that may be used to understand the
impact of investments in special studies. In the simplest terms, cases for in-depth examination may
be gathered from information collected as part of the special studies inventory referenced in Section
6.4.1. For comparison purposes, eight to ten cases may be included in the case studies, including
QIOs with special studies that have been deemed to produce a “good return on investment (ROI)”
and those deemed to produce a poor “return on investment.” ROI may be measured in a variety of
ways, including the degree of dissemination or the extent to which findings led to substantive
improvements in program design or QIO technical assistance. On-site and telephone interviews
with staff directly involved in planning and conducting the special study could provide information
to determine the conditions, structure, methodology or other factors that contributed to a good ROI
in some cases and a poor ROI in other cases.
Interviews with CMS Staff: CMS staff have oversight over special studies; therefore, they also could
be interviewed to assess their perspective on QIO special study performance. Questions that could
be addressed in the course of these interviews include why certain studies were selected for funding,
how CMS anticipated using and actually used results, and the characteristics of special studies which
CMS staff have deemed most “successful”. Specific categories of individuals to interview include
members of the Office of Clinical Standards and Quality’s Science Council, Central and Regional
Office staff, Project Officers, and others serving on the Special Studies Review Panel.
Surveys of QIOs and Providers: Another convenient and relatively low-cost alternative strategy
for gathering information on how special studies add value to the QIO program is to incorporate a
related set of questions or a module into the QIO Satisfaction with QIOSC survey that is administered by CMS. This instrument primarily is a satisfaction survey that queries QIOs about the utility and quality of data and products that QIOSCs deliver, QIOSCs’ subject matter expertise,
and QIOs’ preferred modes of communication. (This survey is one component in the evaluation of
QIOSC performance.) Information contained in the Special Studies Inventory could prove useful in
constructing survey items that question QIOs on their familiarity with particular special studies and
the value that they obtain from these efforts.
The utility of special studies to providers is more difficult to assess because many are geared toward
gathering information and results to inform CMS and/or QIOs about specific programmatic or
performance issues, rather than focusing on provider- or technical assistance-related issues.
Nonetheless, to the extent that special studies are designed to produce provider-level resources or to
alter provider behavior patterns, it may be critical to measure the value of special studies to
providers. Several qualitative and quantitative approaches may be used to gather data concerning
specific special studies and how providers have used study results or products in practice. A brief
module could be added to the existing Provider Satisfaction Survey to gather information on
provider awareness of special studies as well as whether special study findings resulted in any
changes in the provider’s practice or performance.
6.4.3 Evaluating the QIOSC Program: Data Sources and Data Needs

Little is known about the nature of the relationship between QIOs and QIOSCs. More specifically,
it is unknown how, if at all, QIOSCs advance QIOs’ ability to carry out quality improvement
activities with providers and meet CMS performance targets. Thus, similar to the information that is
needed to evaluate special studies, an initial exploratory study of the QIOSC program is required;
the goal of this study would be to gather information on the types and levels of engagement between
QIOs and QIOSCs (both topic-specific and cross-cutting). Conducted as a first step towards
evaluating this component of the QIO program, the objectives of this study would be to describe
QIOSC activities, assess the materials that QIOs receive to ascertain their timing and availability
relative to QIO performance requirements, and to determine the impact of QIOSC activities on
QIOs and provider performance. As the foundation for an eventual evaluation, the following
information could be collected:
• Materials, resources, or assistance tools developed and/or made available by QIOSCs;
• Frequency of use of selected QIOSC materials, resources and tools;
• Perceived utility of materials and other resources available from QIOSCs;
• Reasons why QIOs use/do not use resources available from QIOSCs (e.g., a QIO has sufficient internal expertise and does not require assistance from the QIOSC);
• Extent to which QIOSCs motivate and support QIOs to improve quality in each subtask;
• Types and frequency of interaction occurring between QIOs and QIOSCs; and
• Unmet technical assistance needs that QIOSCs could offer QIOs.

6.4.4 Options for Evaluating the QIOSC Program
In theory, a quantitative study could be conducted at the level of the QIO to examine how the
relative intensity of QIO engagement with a QIOSC is associated with the likelihood that the QIO
meets CMS’ performance expectations for work conducted with IPGs. Such an evaluation
could measure the probability that the QIO will have met the target requirement (e.g., 15 percent
reduction in pressure ulcer rates; 25 percent or greater improvement on SCIP measures) as a
function of technical assistance obtained from different QIOSCs. Unfortunately, given the relatively
small number of QIOs, this analysis would lack the statistical power to provide significant information on how QIOSCs contribute to QIO performance. Therefore, other, qualitative options would likely prove more fruitful.
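
A rough power calculation illustrates the problem. The sketch below assumes hypothetical numbers - roughly 53 QIOs split into higher- and lower-engagement groups, with 80 versus 60 percent meeting a performance target.

    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    # Hypothetical: ~53 QIOs split into higher- vs. lower-engagement groups,
    # where 80% of high-engagement and 60% of low-engagement QIOs meet a target.
    effect = proportion_effectsize(0.80, 0.60)  # Cohen's h
    power = NormalIndPower().power(effect_size=effect, nobs1=26,
                                   alpha=0.05, ratio=1.0)
    print(f"Power: {power:.2f}")

Under these assumptions, even a 20-percentage-point difference in success rates would be detected only about a third of the time, far below the conventional 0.80 benchmark.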
Environmental Scan: A systematic environmental scan could provide detailed information to
better understand how QIOSCs use their resources to support QIOs. An environmental scan could
consist of a systematic online review of QIOSC materials and the tools, guides, and reports that are
prepared and made available to QIOs. Although many of these materials are available on QIOSC
web pages, QualityNet or MedQIC, in some cases materials are located in “members only” sites and
access must be secured from the organizations operating the site. NORC’s experience in conducting
an environmental scan of QIO interventions and activities suggests that it is unlikely that this
approach, alone, will offer the detailed level of information necessary to understand the types of
support that QIOSCs offer QIOs.
Requests for information could also be made to all QIOSCs. This request could include copies of
written reports, tools, analyses and other products that are offered to QIOs either on their website,
by request, or through other means. Any documentation that QIOSCs maintain on the number of
interactions with individual QIOs, including logs of issues, questions, or requests for information
posed by these QIOs could be obtained and reviewed by the evaluator.
Interviews with QIOSCs: In conjunction with the environmental scan, site visits to all QIOSCs
could be made. As part of these visits, semi-structured interviews with QIOSC staff could be
conducted to gather additional information to understand the nature of the QIO-QIOSC
relationship and how QIOSCs attempt to support QIOs. These visits may offer an opportunity to
question QIOSCs about their staff’s training, knowledge and use of best practices in the provision
of technical assistance to QIOs, and internal expertise available to conduct the many activities
required of them under the QIOSC contract. Furthermore, more refined information could be gathered on the nature of the products and services provided to QIOs, as well as the time and resources devoted to specific activities, including the amount of support provided to CMS staff.
Scale of QIO Engagement with QIOSCs: In the future, it may be valuable to conduct a more
rigorous quantitative evaluation of the QIO program as well as the QIOSC program, using data on
the relationship between individual QIOSCs and QIOs. It may therefore be desirable to invest
resources in developing a QIO engagement scale. An “engagement scale” could combine
information on the substance or nature of technical assistance obtained from QIOSCs with
information on the intensity of assistance received, in order to systematically estimate the level of
support QIOSCs provide to specific QIOs.xviii
In concept, the developer of this scale would need to determine an appropriate approach for
classifying the range of QIOSC-QIO interactions, options for “valuing” different types and numbers
of interactions, and a strategy for combining these dimensions into an intensity “scale.”
Conceivably, data to develop this scale could be obtained from multiple sources, including
information gathered from the environmental scan and the semi-structured interviews. Another
potential resource is the “QIO Survey on QIOSCs.” This survey queries QIOs about the degree to
which QIOSCs prepare and deliver products that meet the needs of the QIO community. As
important as satisfaction with QIOSC products is, this survey does not provide sufficient
information for estimating the level of engagement with the QIO or substantive data on the nature
of that support. Nonetheless, it may be feasible to use this survey as a vehicle to gather information
on these issues.
Having developed this scale, data to estimate QIOSC-QIO “scores” could be collected on an on-going basis by requiring QIOSCs or QIOs to systematically compile and submit data on these interactions to CMS.

xviii It may also be possible to use this scale in future evaluations of individual QIOSC performance.

6.5 Designs for Evaluating Technical Assistance Approaches

Findings from the site visits suggest that QIOs’ strategies for meeting performance criteria differ,
but that, for the most part, QIOs employ two major approaches for offering technical assistance.
Some QIOs use “consultative” approaches (for a subset or all tasks), while others offer technical
assistance in the form of collaborative activities. Consultative models rely more heavily on one-on-one interactions, which may occur in person or via telephone or e-mail. Collaborative learning
models, on the other hand, are premised on providers sharing best practices in a group environment.
For instance, many QIOs use the Institute for Healthcare Improvement’s “Breakthrough Series
Model” (or a modified version of this model) to drive quality improvement. Organized around one
or several quality improvement objectives, collaborative models incorporate group learning sessions,
such as seminars, teleconferences, Web-Ex (webinars), conferences, or regional meetings in order to
promote learning and effect change in specific areas.
Based on findings from our site visits, we have gathered limited information about why certain
QIOs’ technical assistance approaches differ. By way of example, QIOs located in small states may
encounter fewer barriers than QIOs in large states to offering consultative technical assistance, due
in large part to easier statewide travel and fewer providers. Overall, however, there is a dearth of
evidence as to (1) the best approaches for “delivering” technical assistance and (2) the content or
substance that is most effective in driving quality improvement in particular settings, with particular
types of providers and for particular measures.
6.5.1 Evaluating Technical Assistance Approaches: Data Sources and Needs

In the short-term and prior to evaluating the effectiveness of alternative technical assistance strategies, there is a need to better understand the following:
(1) The mode or platform by which assistance is delivered (e.g., telephone, seminars, etc.);
(2) The substantive information that is conveyed to providers; and
(3) Factors that drive the use of selected modes and substance of assistance, such as provider
and environmental characteristics.
Interviews with and Direct Observation of QIOs: One potential method for gathering this type
of information is to conduct semi-structured interviews with various QIOs, including both high- and
low-performing QIOs. Interview questions could be designed to identify the range of interventions
that QIOs use to offer technical assistance in each task area. Based on our experience interviewing
QIOs about specific assistance interventions, respondents likely will focus on the mode of delivery
(e.g., site visits, seminars) and not on content. The importance of obtaining detailed information on
content cannot be overemphasized. Therefore, the evaluator may find it useful to obtain copies of
materials used in the provision of technical assistance, some of which may also be found on QIO or
QIOSC websites. Assuming that issues of confidentiality are addressed, “shadowing” QIO staff as
they conduct site visits, seminars, or other training activities could provide a perspective that may be
unavailable from interviews alone.
Interviews with IPG Providers: Information from QIOs may be supplemented with interviews
with IPG providers who directly receive technical assistance and those IPG providers who choose
not to participate in the technical assistance activity in order to better understand in what ways the
mode of delivery and content is either meeting or not meeting providers’ needs.
6.5.2 Options for Evaluating Alternative Technical Assistance Approaches

Case Control Cross-Over Design Special Study: CMS’ special study mechanism offers many
opportunities for engaging QIOs in the study of the effectiveness of technical assistance using more
robust randomized case-control cross-over designs. The objectives of such studies would be to
determine which delivery mechanisms and what substantive information or content produces the
greatest improvement in performance among IPG providers. At minimum, this approach would
examine three strategies for rendering technical assistance: (1) consultative models, (2) group learning models, and (3) provider pay-for-performance models.
A case-control study could be structured in several ways with randomization of providers into
“cases” and “controls” occurring at either the IPG or QIO level.
• At the IPG level, a QIO could randomly identify three subgroups of providers. Each subgroup could receive technical assistance using a different technique: the QIO’s usual approach to technical assistance and two alternative models. (Although three subsets of providers would be selected from each QIO, more than one QIO may participate in this special study.)

• At the QIO level, one or more QIOs would offer technical assistance employing one approach and other QIOs could offer technical assistance using another approach.
Use of a cross-over design adds an extra layer of rigor to the study design. As depicted below, with an “O” symbolizing the point in time when provider performance measurement occurs and an “X” symbolizing a technical assistance intervention, during the second intervention period the study controls are exposed to the same type of technical assistance approach as the cases.

             Baseline   Intervention   Post-             Intervention   Post-
                        Period #1      Intervention #1   Period #2      Intervention #2
Cases           O           X                O                               O
Controls        O                            O                X              O

Sub-task level performance measures, as identified in the SOW, could be collected to ascertain provider performance and how much improvement was achieved by providers enrolled in the case and control groups. Measurement could occur at three different points in time: at baseline, when cross-over occurs (Post-Intervention #1), and at the conclusion of the study (Post-Intervention #2); in some instances, measurement could occur at additional points in time.
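To make the randomization and measurement logic of this design concrete, the following is a minimal sketch, written in Python, of how improvement under the cross-over schedule might be tabulated. All specifics are hypothetical illustrations: the provider counts, baseline scores, and assumed effect sizes are invented for the example, and the simulated measure() step merely stands in for the sub-task performance measures that would actually be collected at each “O” point.

    import random

    random.seed(42)  # fixed seed so the illustration is reproducible

    # Hypothetical IPG providers with baseline performance scores (0-100 scale).
    providers = {f"provider_{i}": random.uniform(40, 70) for i in range(20)}

    # Randomize providers into "cases" and "controls" (IPG-level randomization).
    ids = list(providers)
    random.shuffle(ids)
    cases, controls = ids[:len(ids) // 2], ids[len(ids) // 2:]

    ASSUMED_TA_EFFECT = 8.0  # hypothetical average gain from technical assistance
    SECULAR_TREND = 1.0      # hypothetical background improvement per period

    def measure(baseline, periods_with_ta, total_periods):
        """Simulate a provider's score after a given number of periods."""
        return baseline + periods_with_ta * ASSUMED_TA_EFFECT + total_periods * SECULAR_TREND

    results = {}
    for pid in ids:
        base = providers[pid]
        if pid in cases:
            # Cases receive assistance in Intervention Period #1 only.
            post1 = measure(base, periods_with_ta=1, total_periods=1)
            post2 = measure(base, periods_with_ta=1, total_periods=2)
        else:
            # Controls cross over and receive assistance in Intervention Period #2.
            post1 = measure(base, periods_with_ta=0, total_periods=1)
            post2 = measure(base, periods_with_ta=1, total_periods=2)
        results[pid] = (base, post1, post2)

    def mean_change(group, start, end):
        """Average change in the group's scores between two measurement points."""
        return sum(results[p][end] - results[p][start] for p in group) / len(group)

    # Period 1 contrast: cases (assisted) versus controls (not yet assisted).
    print("Period 1 change, cases:    %.1f" % mean_change(cases, 0, 1))
    print("Period 1 change, controls: %.1f" % mean_change(controls, 0, 1))
    # Period 2: controls now receive the same assistance (the cross-over).
    print("Period 2 change, controls: %.1f" % mean_change(controls, 1, 2))

In an actual special study, the primary contrast would be the Period 1 difference between cases and controls, while the Period 2 change among controls would offer a within-group check once they cross over to receive the same assistance.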
Considerations: The above discussion describes this study in a rather simplified manner; in reality, many factors complicate the design and require considerable thought prior to implementation.
First, randomization of providers within a QIO requires that the QIO offer assistance in at least two or three different ways. Not only is it possible for contamination across groups to occur, but providers in the control group may be unwilling to accept delays in receiving technical assistance or may be displeased with the type of assistance received, choosing instead to obtain assistance from other sources.
Second, randomization across QIOs, particularly when multiple QIOs are involved, may be confounded by inconsistencies in implementation approaches, the content of the material, or factors
beyond direct observation, such as the ability of the QIO staff to communicate with providers and
shape performance. Training of QIO staff may be valuable in order to ensure that consistency is
maintained in the provision of technical assistance to both the case and control groups.
Third, comparison of technical assistance approaches used by high- and low-performing QIOs may
offer specific hypotheses for testing and, while data may be analyzed in a variety of ways, analyses
could examine which approaches are most effective for specific tasks, providers and situations. On
a related note, the provision of certain types of technical assistance is prescribed in the SOW; thus,
QIOs may have little discretion in how they provide assistance in these areas. However, a basic
understanding of the relative effectiveness of varying approaches to technical assistance could affect
how future QIO tasks are designed.
Finally, investments in analyzing alternative approaches are best spent on those subtasks for which
there is large variation in performance as opposed to those with little variation in performance.

6.6 Designs for Supporting Poor-Performing and Less Motivated Providers

In the course of this study, project staff and members of the TEP identified and raised concerns
about policies governing the QIO program. One concern was whether the QIO program targets
the appropriate provider population and, specifically, whether the QIO contract should re-focus
requirements to encourage QIOs to work with providers who stand to benefit the most from
technical assistance, such as the poorest performers. Another possible target population includes
providers who are not among the poorest performers, but who could significantly improve their
performance if it were not for a lack of motivation to engage in quality improvement activities or to
work with the QIO.
Historically, QIOs typically invited providers to participate in an IPG and providers voluntarily
chose not only whether or not to participate, but also the level of technical assistance that they
received from the QIO.xix In the future, CMS may want to consider the benefits of providing
technical assistance to those providers who could most benefit from intensive interaction with
QIOs. Indeed, CMS already has shown an interest in moving toward this objective. In the 8th
SOW, QIOs were required to collaborate with their state survey agency to identify poor performing
nursing homes and to work on selected performance issues, such as reducing pressure ulcers and the
use of physical restraints, or improving the collection of data on employee and resident satisfaction.
6.6.1 Case Studies of Poor-Performing Nursing Homes

To understand the potential of QIOs to serve as a catalyst for quality improvement among poor performers, it is first critical to gather input about QIOs’ experiences in offering technical assistance
to these providers. As previously indicated, during the 8th SOW, QIOs were required to offer
technical assistance to a maximum of 3 nursing homes that were determined by the State Survey
Agency to be “persistently poor” performing homes. A case study approach—more specifically, a
short turnaround study of QIO experiences working with poor performing nursing homes during
the 8th SOW—offers the opportunity to gather information on QIOs’ performance and experiences
working with these providers.
QIOs that met Task 1a performance targets for IPG2 (persistently poor performing nursing homes)
and those that did not could be identified for case study. Site visits to each QIO could be
conducted and members of the nursing home team interviewed about:
• the factors that contributed to the nursing home’s poor performance;
• their strategy for assisting nursing homes in meeting performance objectives;
• barriers encountered (e.g., motivational issues, availability of resources);
• how barriers were addressed; and
• the technical assistance strategies that were most effective.

Nursing homes participating in the IPG could also be interviewed to obtain insight about the
technical assistance provided by the QIO, the types of assistance they found particularly helpful in
improving performance, and how internal processes, systems, or operations were redesigned to achieve performance improvement. In situations where performance failed to improve, the factors
(those internal to the nursing home as well as those related to QIO delivery of technical assistance)
that were associated with this lack of progress should be identified from both the QIO and the
nursing home perspective. The same information could be collected regarding situations in which
performance improved, thereby enabling a comparison of factors associated with improvement – or
failure to improve.

xix In reality, even providers that participate in an IPG were not required to work with the QIO.
6.6.2 Identification of Strategies to Improve Provider Motivation and Performance

Entering into the 9th SOW, several other quasi-experimental approaches may be used to assess how the provider selection process affects performance and how selection criteria could be varied to maximize program performance. Through the special study mechanism, CMS could empower QIOs to develop alternative approaches for selecting and motivating providers, as well as to explore creative solutions for working with providers to achieve selected performance objectives.
Rather than authorizing QIOs to identify the providers that they will work with on this particular
special study, CMS (or its agent) may identify subsets of providers for which there is specific interest
in promoting performance improvement. As mentioned before, one such set of providers is the poor performers. As in Task 1a of the 8th SOW, CMS may identify setting-specific providers who are
considered among the “poorest performers” based on standardized criteria (such as data from the
hospital, home health, and nursing facility COMPARE databases) and contract with QIOs to
develop innovative approaches to assist these providers.
Another such set of providers comprises those that may lack motivation to improve. As one member of
the TEP aptly indicated, provider motivation to improve performance is endogenous and can
potentially be influenced by QIOs. Although many of the poorest performers lack motivation to
improve performance, there is likely to be a group of providers whose performance is “average” (or
even slightly above average) who could, nonetheless, make significant quality improvement gains
were it not for their lack of motivation to participate in QIO activities or to improve their
performance. Special study funds could be used to develop strategies to motivate selected providers
to improve performance. Although identification of this group of providers will prove challenging,
QIOs, accreditation and survey agencies, as well as a variety of performance improvement experts
could offer suggestions for identification of these providers.
6.6.2.1 Mechanisms for Improving Performance
QIOs could be given the latitude to explore various strategies, including those that involve financial
and non-financial incentives, so that it may be possible to ascertain which strategies are most
effective in motivating providers or encouraging them to achieve selected quality improvement
objectives.
These approaches may include QIO use of:

• Financial incentives, with the amount of the incentive varied to ascertain the level necessary to achieve significant improvement;

• Non-financial mechanisms, such as the receipt of public awards or national recognition; and

• Alternative technical assistance approaches, including consultative models and group learning models.

Additionally, in some circumstances, QIOs that have demonstrated particular innovation in their technical assistance strategies could be offered funding to develop their own novel approaches for engaging poor-performing or unmotivated providers.
6.6.2.2 Analytical Approach
After having identified subsets of providers in selected task areas (e.g., nursing home, home health)
randomized case-control studies may be conducted to determine whether selected approaches are
more or less effective in assisting these providers to meet performance improvement objectives.
Assuming that these studies are conducted prospectively – during the course of the 9th and future
SOWs – it is feasible to design randomized case-control studies to assess differences in the impact
of alternative technical assistance approaches. Methodologically, the design of these studies parallels the design described in Section 6.4.2 and will not be repeated here.
6.6.2.3 Selection of QIOs for Participation in Special Study
Careful consideration should be given to selection of QIOs to participate in this special study.
Optimally, several of the best performing QIOs in each subtask area that have also demonstrated innovation in meeting SOW objectives would be selected to participate and act as “change agents” in those task areas in which they excel. Although we assume that high performing QIOs
QIOs will be motivated to participate in this special study because of their interest in testing the
impact that alternative IPG selection strategies will have on quality improvement as well as the
additional funding that they receive, CMS may offer incentives to elicit greater participation in this
study. As one example, QIOs that fully participate in this special study may be assured a renewal in
the next contract cycle or may even receive a bonus payment in addition to the costs associated with
participating in the special study.

6.7 Designs for Evaluating CMS Performance Targets

Site visit respondents and members of the TEP voiced a number of concerns regarding CMS’
performance measures and related targets for improvement. Many of the QIOs that participated in
NORC site visits indicated that they could not meet CMS performance targets because they were
“unrealistic”—in large part because there is no scientific evidence to suggest that CMS’ current
targets could be achieved within the time frame used to evaluate performance and, in some cases,
because QIOs believed that particular characteristics of their beneficiary or provider population
made these targets less feasible or less appropriate. As mentioned previously, one QIO indicated
that it was against state law for physicians to implement e-prescribing at the same time that CMS’
contractual requirements mandated that QIOs work with physicians to adopt this and other forms
of health IT. This QIO voiced concern that CMS’ selection of quality improvement measures
disregarded potentially conflicting state regulations. QIOs and TEP members underscored that, in
general, it is unclear how CMS identifies its quality improvement benchmarks and, overall, CMS’
approach for setting performance measures and targets must become more transparent if QIOs are
to understand more fully the goals they are expected to achieve.
6.7.1 Evaluating CMS Performance Targets: Data Sources and Data Needs

Although there are exceptions, the extent to which performance targets are based on evidence from the peer-reviewed literature or on findings from special studies conducted under previous SOWs is unclear; grounding targets in such evidence should be a priority in setting the performance benchmarks for the 9th and future SOWs.
addition to interviews with CMS staff to determine the process by which performance targets are
set, it would be valuable to conduct a systematic review of the literature to document ranges of
performance improvement that have been achieved by specific types of providers and in given time
frames. To the extent that these data exist, the approaches used to achieve performance
improvement and the resources expended in achieving improvement should be documented.
A report that describes the degree to which scientific evidence supports quality improvement targets
and the areas where evidence is unavailable should be prepared and made available to QIOs and
other interested parties. In addition to promoting transparency in the target-setting process, this report could (1) provide a research agenda that potentially could be carried out by funding special studies in future SOWs (for those areas where evidence is unavailable) and (2) lead to refinements in the QIO contract (for those areas where scientific evidence exists to support quality improvement targets).
6.7.2 Consensus Panel to Examine Performance Targets

In those cases where evidence is unavailable to support CMS performance benchmarks, a two-step approach, albeit a longer-term one, is recommended. First, the distribution of
performance on these tasks across QIOs could be examined to identify those tasks with the greatest
variation. Case studies of QIOs that perform at different points in this distribution, such as the top
or bottom quartile, could be conducted to determine whether selected characteristics are associated
with or drive differences in performance. Among the characteristics of importance are those that
are mutable, such as the type of interventions employed, and those that are immutable, such as
attributes of the provider population (e.g., size of physician practices, number of providers located in
rural locations) and environmental factors (e.g., managed care presence). To the extent that
differences in performance are immutable, consideration may need to be given to setting different
performance targets across QIOs.
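To illustrate the first step, the brief sketch below (in Python) ranks subtasks by the spread of QIO performance, using the coefficient of variation as one possible dispersion measure. The subtask labels, QIO names, and scores are hypothetical placeholders; in practice, the scores would be drawn from CMS performance data.

    from statistics import mean, stdev

    # Hypothetical performance scores by subtask: {subtask: {QIO: score}}.
    performance = {
        "1a_nursing_home": {"QIO_A": 62.0, "QIO_B": 48.5, "QIO_C": 71.2, "QIO_D": 55.0},
        "1b_home_health":  {"QIO_A": 80.1, "QIO_B": 78.9, "QIO_C": 81.4, "QIO_D": 79.6},
        "1c_hospital":     {"QIO_A": 66.0, "QIO_B": 59.3, "QIO_C": 90.2, "QIO_D": 44.8},
    }

    def coefficient_of_variation(scores):
        """Spread of QIO performance relative to the mean (unitless)."""
        return stdev(scores) / mean(scores)

    # Rank subtasks from most to least variable; high-variation subtasks are the
    # candidates for case studies of top- and bottom-quartile QIOs.
    ranked = sorted(
        performance,
        key=lambda task: coefficient_of_variation(list(performance[task].values())),
        reverse=True,
    )
    for task in ranked:
        cv = coefficient_of_variation(list(performance[task].values()))
        print(f"{task}: CV = {cv:.2f}")

A task such as the hypothetical "1c_hospital" above, where QIO scores range widely, would be flagged for case study, whereas a task with uniformly high (or uniformly low) performance would be a poor candidate for studying differences in approach.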
This information will guide CMS in the next suggested step, which is to convene a consensus
building panel to establish performance targets for each subtask. The panel should be comprised of
subject matter experts as well as researchers and experts in performance measurement,
improvement, and the QIO program. Members of the consensus panel could be asked to review
evidence from the literature and from QIO experiences to assist CMS in establishing realistic
performance measures and setting appropriate ranges of performance improvement targets. As
appropriate, the consensus panel may take into consideration those immutable characteristics that
influence QIO performance on specific tasks to determine whether different targets should be
established for providers who differ on selected characteristics.xx

xx Understanding which immutable characteristics influence performance is essential from another perspective; that is, future investments in quality improvement research, to be funded through special studies, could be structured to address these issues.

6.8 Other Areas for Study

In the process of conducting this study, NORC staff identified many other research and evaluative
questions, answers to which could significantly advance our understanding of the QIO program and
potentially assist in identifying programmatic areas that could be structured more efficiently. In this
section, we identify these questions and discuss their importance relative to the QIO program.
However, evaluation designs are not presented, as the sheer complexity of these issues and the lack of data to address them preclude us from devising sufficient options at this point in time.
6.8.1 Beneficiary Quality Complaints

As mentioned in Section 2.2.1, under Task 3, QIOs are responsible for reviewing beneficiary
complaints regarding quality of care concerns. Although case review is a relatively large proportion
of a QIO’s budget (over 30 percent), measured in terms of the number of cases, it appears that there
is very little activity in this area. One specific area where QIOs have received criticism is in the
number of beneficiary complaints (one component of case review) that they handle. Per the IOM
(2006), in the two-year period between 2002 and 2004, the 53 QIOs handled a total of about 5,900
complaints. During the course of site visits, QIOs provided a number of reasons why so few
complaints are submitted, including beneficiaries’ unwillingness to complain (particularly in rural
areas where the number of providers may be limited) and reduced funding for beneficiary
communication activities during the 8th SOW, which may mean that beneficiaries are unaware of the
complaint process.
Questions for further investigation include the following:
(1) Are Medicare beneficiaries aware of the process for filing complaints?
(2) Are QIOs proactively reaching out to beneficiaries to notify them of processes and opportunities for filing and dealing with complaints?

6.8.2 To what extent do quality complaints lead to systemic changes in the structure and organization of QIOs’ quality improvement programs?

The IOM committee recommended that case review activities be severed from the core QIO
contract and, instead, released for competitive bid by organizations not holding QIO contracts. The
rationale underlying this recommendation is that QIOs cannot credibly uphold relationships with
providers that are both regulatory and collaborative in nature. In response to this recommendation,
the QIO community has voiced concern that, by severing case review from the SOW, opportunities
to identify and solve quality problems that may exist system-wide would be eliminated. Another
concern that has received less attention is related to the resources associated with having each state
QIO handle review activities and the economies that may be achieved by consolidating review
functions.
The potential restructuring of the QIO program so that case review activities are removed from the
core contract raises several questions:
• To what extent do case review findings actually feed back into the QIOs’ quality improvement activities?

• What are the costs per case and resources consumed in conducting case review, and how would costs be affected by having a smaller subset of QIOs conduct all review functions? (A simple illustration of this consolidation question follows this list.)

• Relative to other organizations that conduct case review in the private sector, how do the costs of QIO case review compare?
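To illustrate the consolidation question in the second bullet above, the following toy fixed-plus-variable cost model (in Python) shows how per-case costs fall when review is concentrated in fewer organizations. The fixed and per-case cost figures are entirely hypothetical assumptions; only the roughly 5,900-complaint volume comes from the IOM (2006) figure cited earlier.

    # Toy cost model (all dollar figures hypothetical) comparing case review
    # spread across all 53 QIOs versus consolidated in a smaller subset.
    FIXED_COST_PER_ORG = 250_000.0   # hypothetical annual staffing/infrastructure floor
    VARIABLE_COST_PER_CASE = 900.0   # hypothetical marginal cost of reviewing one case
    TOTAL_CASES = 5_900              # two-year complaint volume reported by the IOM (2006)

    def total_cost(n_review_orgs: int) -> float:
        """Total cost if review is spread evenly across n organizations."""
        return n_review_orgs * FIXED_COST_PER_ORG + TOTAL_CASES * VARIABLE_COST_PER_CASE

    for n in (53, 10, 4):
        cost = total_cost(n)
        print(f"{n:>2} review organizations: total ${cost:,.0f}, per case ${cost / TOTAL_CASES:,.0f}")

Under these invented parameters, fixed costs dominate when all 53 QIOs maintain review capacity for a small caseload, which is precisely the economies-of-scale question an evaluator would want to test with actual QIO cost data.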

6.8.3 Regionalization of QIOs

Currently, there are 53 QIOs. As we move toward national guidelines, the question arises as to
whether this is the most efficient structure or whether it would be more cost-effective to consolidate
QIO activities into an even smaller number of contracts. Site visit interviews at multi-state
organizations provided conflicting perspectives as to the benefits and feasibility of consolidating
QIO activities within a smaller number of organizations. On the one hand, some decentralized
multi-state QIOs indicated that differences across states (e.g., providers, regulatory environments)
made it difficult to consolidate functions. On the other hand, some centralized multi-state QIOs
indicated that standards of practice are (or should be) comparable across states and, for this reason,
functions could be centralized. On a related note, decisions to centralize and decentralize varied in
single state QIOs, with some pointing out the importance of field offices and others emphasizing
the efficiencies to be gained by maintaining only one office.
A key consideration for future restructuring of the QIO program is whether economic and performance gains can be achieved by centralizing or regionalizing QIO activities so that, through a competitive approach:
• One QIO is selected to offer technical assistance to providers in a given region.

• One or a limited number of QIOs – those who are the “best performers” in a given task area – serve as the QIO for a specific task.

Indeed, it may be feasible to implement and assess the effects of regionalization in one or two
different sections of the country before re-structuring the entire program.

7.0 RECOMMENDED FIRST STEPS

Section 6.0 of this report offers ASPE several evaluation options that may be feasible over the short
and long term. Given that data necessary to conduct or inform many of these evaluations are
unavailable, short-term options, designed to serve as “building blocks” for a larger and more robust
evaluation, were presented. The investments that CMS has made in the QIO program are significant and the policy environment is constantly shifting; we therefore recommend an ongoing or continuous program for evaluating QIOs, which has the potential to make the program more effective.
Ideally, the data collection tools and processes used to evaluate a program are developed
concurrently with the program. This ensures that the information necessary to adequately conduct the evaluation is available at the time that the evaluation occurs. Evaluation of the 8th SOW will
require the use of retrospective approaches. Moving towards the 9th SOW and beyond, prospective
approaches, which may enable the use of more rigorous methodological techniques, such as
randomized case-control designs, may be feasible if data and systems necessary to conduct these
evaluations are in place.

7.1 Inventory CMS Data Systems & Develop Systems for On-going Evaluation of the QIO Program

Several of the evaluation designs described in this report call for primary data collection. Others
refer to existing data systems. In actuality, NORC staff had limited access to information that QIOs
report to CMS or that CMS collects. CMS did provide NORC staff with access to selected database
codebooks, such as that for the PARTner system. Nonetheless, anecdotal evidence gathered from
respondents interviewed during site visits suggests that PARTner data may be incomplete or of poor
quality. Indeed, the IOM 2006 report indicated that “CMS staff warned IOM, however, that some
of the data sets were not complete and consistent enough for analytical purposes…” Because it is
our understanding that the PARTner system is being updated as part of the 8th SOW, it is unclear
whether additional data will be available for use in evaluation, or whether data referenced in this
document will no longer be available.
To facilitate future evaluations, data collection tools must be developed, validated, and incorporated
into the QIO program prior to the start of the SOW. Training of QIOs on use of these tools may
be required to ensure consistency in the data collected. Tools should further be updated so that, as changes to the program occur (such as the addition or elimination of SOW tasks), they continue to capture the data needed to evaluate QIO performance.
Prior to initiating any evaluation, it is critical to conduct a thorough review of CMS data systems
associated with the QIO program. This review should look beyond the codebooks to the actual data to best understand how the data are collected and to assess their quality. As the re-design of data systems
progresses, meetings between evaluators, database experts and CMS staff involved in the re-design
should occur to ensure that data, tools, and systems necessary to meaningfully evaluate the 8th, 9th
and future SOWs are available or can be established.

7.1.1 Understanding Reasons for and Reducing Data Lags

One of the issues that should be considered in this review relates to the lags in accessing
performance data, particularly for those tasks and settings in which performance is assessed using
claims data, namely, for hospitals and physician offices.xxi Data lags make it difficult for QIOs to
achieve quality improvement objectives because data may not reflect performance at the time when
technical assistance is rendered to providers or when monitoring of the impact of technical
assistance is being conducted. Lags in collecting and preparing databases that are ultimately used to
report to individual QIOs make it difficult for QIOs to track the progress of the providers they are
assisting and to determine when a different technical assistance approach or strategy may be
warranted.
As consideration is given to development of an on-going evaluation process, thought should be
given to convening a panel of public and private sector data experts to work with those CMS staff
members who are most knowledgeable about data systems to identify opportunities for shortening
data lags.

7.2 Address Limitations in Access to Provider-Identifying Data

One of the reasons why NORC’s access to data was limited is that regulations prohibit the release of data with provider identifiers; this includes information on whether a provider is a
member of an IPG. Confidential information is defined in 42 CFR Section 480.101(b) and includes
“information that explicitly or implicitly identifies an individual patient, practitioner or reviewer” and
“quality review study information which identify patients, practitioners and institutions” (Farley and
Hammel 2004).
It is likely that these stringent provider confidentiality policies derive from the earlier period when
PSROs and PROs were more focused on conducting provider reviews of utilization and practice
patterns. Today, access to information on provider performance is available from multiple sources,
including the CMS COMPARE databases that are made available to the public, as well as multiple
other public and private organizations and programs attempting to assist consumers in choosing
providers on the basis of quality. In an effort to foster and facilitate evaluation of the QIO
program, consideration must be given to whether or not such stringent provider confidentiality
regulations should continue to be in place. Without a change in regulation, even the most basic
provider information required by an evaluator, such as whether a provider is a member of an IPG,
must continue to be identified and processed through a QIOSC or a QIO.xxii Although the process
of working through QIOSCs and QIOs to obtain data to conduct an evaluation is cumbersome and
may make the evaluation process more time-consuming, it is a feasible alternative provided the
evaluator is able to obtain the cooperation of the QIOSC or QIO. More problematic, however, is
that because data are collected by the QIOSCs or QIOs – organizations with vested interests in
demonstrating improvement – the validity of the results may be questioned.
xxi Site visit respondents and TEP members indicated that these lags are substantially longer whenever data are pulled from claims and, in general, data are available only for the baseline and remeasurement periods. Performance data derived from the MDS or OASIS systems would be expected to be available much sooner.

xxii Certain government agencies, such as the Office of the Inspector General or the Government Accountability Office, have the power to access, review, or requisition this information.

7.3 Maintain Transparency in Designing and Conducting Evaluation

In conducting this study, QIOs expressed frustration with many programmatic issues. QIOs were
disconcerted about the number of changes to the 8th SOW that occurred in the short period of time since it had been issued. Many others were skeptical about the appropriateness of performance
improvement targets or the complicated formula used to evaluate performance, noting ambiguity in
how they were developed and whether or not they were realistic expectations.
The success of an evaluation will, to a great extent, depend on the cooperation obtained and the
ability of the evaluator to work effectively with CMS, the QIOs and providers; each of these
stakeholders may be asked to contribute information on their operations, collect/submit data, and
participate in specific evaluation projects. For these reasons, we highly recommend that the
evaluator maintain transparency in designing and conducting the evaluation. This includes offering
as much information as is feasible concerning the purpose, design, and results of the evaluation. In
particular, regardless of which agency or organization conducts an evaluation, CMS staff, as well as the QIO community, should be continuously apprised of the status and results in order to assist in
understanding the extent to which program objectives have been met and preparing future SOWs.


REFERENCES
Abel, RL, Warren K, Bean G, Gabbard B, Lyder CH, Bing M, McCauley C. 2005. “Quality
Improvements in Nursing Homes in Texas: Results from a Pressure Ulcer Prevention Project.”
Journal of the American Medical Directors Association 6 (3): 181-188.
Allen BL, Burt P, Roychoudhury C, Chen B. 2004. “Analysis of OBQI Outcomes in Participating
Michigan Home Health Agencies.” Journal of Nursing Care Quality 19 (2):149-155.
American Health Quality Foundation. March 2006. “Quality Improvement Organizations and
Health Information Exchange.” Prepared by the e-Health Initiative.
http://www.ahqa.org/pub/uploads/QIO_HIE_Final_Report_March_6_2006.pdf (accessed July
29, 2006): 1-65.
ASPE (Office of the Assistant Secretary for Planning and Evaluation). Task Order Request for
Proposal #05EASPE001013, Issued July 29, 2005.
Baier RR, Gifford DR, Patry G, Banks SM, Rochon T, DeSilva D, Teno JM. 2004. “Ameliorating
Pain in Nursing Homes: A Collaborative Quality Improvement Project.” Journal of the American
Geriatrics Society 52 (12):1988-1995.
Ballard DJ, Nicewander D, Skinner C. 2002. “Health Care Provider Quality Improvement
Organization Medicare Data-Sharing: A Diabetes Quality Improvement Initiative.” American Medical
Informatics Association Symposium: 22-25.
Bhatia AJ, Blackstock S, Nelson R, Ng TS. 2000. “Evolution of Quality Review Programs for
Medicare: Quality Assurance to Quality Improvement.” Health Care Financing Review 22(1):69-74.
Boards of Trustees, Federal Hospital Insurance and Federal Supplementary Medical Insurance Trust
Funds. The 2006 Annual Report of the Boards of Trustees of the Federal Hospital Insurance and Federal
Supplementary Medical Insurance Trust Funds. May 2006. http://www.cms.hhs.gov/
ReportsTrustFunds/downloads/tr2006.pdf (Accessed November 15, 2006).
Bradley EH, Carlson MDA, Gallo WT, Scinto J, Campbell MK, Krumholz HM. 2005. “From Adversary to Partner: Have Quality Improvement Organizations Made the Transition?” Health Services Research 40 (2): 459-476.
Bratzler DW. October 26, 2005. “Letter to the Editor: Quality Improvement Organizations and
Hospital Care.” Journal of the American Medical Association 294 (16): 2028-29.
CMS (Centers for Medicare and Medicaid Services). How to Become a QIO-like Entity.
http://www.cms.hhs.gov/QualityImprovementOrgs/03_HowtoBecomeaQIO.asp#TopOfPage.
(Accessed November 15, 2006a).
CMS (Centers for Medicare and Medicaid Services). 8th Round SOW Contract. Version 080105-2.
http://www.cms.hhs.gov/QualityImprovementOrgs/downloads/8thSOW.pdf. (Accessed
December 15, 2006b).

CMS (Centers for Medicare and Medicaid Services). Overview of Home Health Initiatives.
http://www.cms.hhs.gov/HomeHealthQualityInits/01_Overview.asp#TopOfPage (accessed
August 9, 2006c).
Chisholm DL and Murdock K. 2002. “The Outcome-Based Quality Improvement Pilot Project: A
Perspective from Maryland.” Home Health Care Management & Practice 14 (3): 179-184.
Cortes, LL. 2004. The Impact of Quality Improvement Programs in Long-Term Care: Are State and Federal
Quality Improvement Initiatives for Nursing Homes Redundant? Texas Department of Human Services,
Division of Long Term Care.
Farley A and Hammel M. “Confidentiality and QIOs.” A presentation to the Institute of
Medicine, June 22, 2004.
Gaul GM. “Once Health Regulators Now Partners: Private Groups Limit Patient Access to Medical
Files, Rarely Punish Doctors.” The Washington Post. 26 July 2005a. A01.
Gaul GM. “Medicare Officials' Attendance at Lavish Contractor Meetings Probed.” The Washington
Post. 6 Jan 2006. A07.
Gaul GM. “Wider Probe of Medicare Firms.” The Washington Post. 10 Dec 2005b. A04.
Gaul GM. “Grassley Seeks Medicare Data on Response to Complaints; Overseers of Quality
Control Criticized for Laxity, Secrecy.” The Washington Post. 13 Aug 2005c.
Gould BE, Grey MR, Huntington CG, Gruman C, Rosen JH, Storey E, Abrahamson L, Conaty
AM, Curry L, Ferreira M, Harrington KL, Paturzo D, Van Hoof TJ. 2002. “Improving Patient
Outcomes by Teaching Quality Improvement to Medical Students in Community-Based Practices.”
Academic Medicine 77 (10): 1011-1018.
Hannah KL, Schade CP, Cochran R, Brehm JG. 2005. “Promoting Influenza and Pneumococcal
Immunization in Older Adults.” Joint Commission Journal on Quality and Patient Safety 31 (5): 286-293.
Holmboe E, Kim N, Cohen S, Curry M, Elwell A, Petrillo MK, Meehan TP. 2005. “Primary Care
Physicians, Office-Based Practice, and the Meaning of Quality Improvement.” The American Journal
of Medicine 118 (8): 917-922.
Hsia DC. 2003. “Medicare quality improvement: bad apples or bad systems?” Journal of the American Medical Association (editorial) 289 (3): 354.
IOM (Institute of Medicine), 2006. “Medicare’s Quality Improvement Organization Program:
Maximizing Potential.” Washington, DC: National Academies Press, 47.
IOM (Institute of Medicine). 1990. Quality Assurance in Medicare. Washington DC: National
Academies Press.
Jencks SF. October 26, 2005. “Letter to the Editor: Quality Improvement Organizations and
Hospital Care.” Journal of the American Medical Association 294 (16): 2028.

Jencks SF, Huff ED, Cuerdon T. 2003. “Change in the Quality of Care Delivered to Medicare Beneficiaries, 1998-1999 to 2000-2001.” Journal of the American Medical Association 289 (3): 305-312.
Jencks SF, Cuerdon T, Burwen DR, Fleming B, Houck PM, Kussmaul AE, Nilasena DS, Ordin DL,
Arday DR. October 2000 (reprinted). “Quality of Medical Care Delivered to Medicare
Beneficiaries: A Profile at State and National Levels.” Journal of the American Medical Association 284
(13): 1670-1676.
Jencks SF, Wilensky GR. 1992. “The Health Care Quality Improvement Initiative: A New
Approach to Quality Improvement in Medicare.” Journal of the American Medical Association
268(7):900-903.
Kulkarni C. 2005. “QIO Group Disputes JAMA Article Findings.” United Press International Science
News (accessed September 23, 2005).
Levy C, Carter C, Priloutskaya G, Gallegos G. “Critical Elements in the Design of Culturally
Appropriate Interventions Intended to Reduce Health Disparities: Immunization Rates Among
Hispanic Seniors in New Mexico.” 2003. Journal of the Health and Human Services Administration 26 (3):
201-234.
Massing MW, Henley N, Biggs D, Schenck A, Simpson RJ. “Prevalence and Care of Diabetes
Mellitus in the Medicare Population of North Carolina.” 2003. North Carolina Medical Journal 64 (2):
51-57.
McClellan WM, Millman L, Presley R, Cousins J, Flanders WD. 2003. “Improved Diabetes Care by
Primary Care Physicians: Results of a Group-Randomized Evaluation of the Medicare Health Care
Quality Improvement Program (HCQIP).” Journal of Clinical Epidemiology 56: 1210-1217
Meehan TP, Tate JP, Holmboe ES, Teeple EA, Elwell A, Meehan RR, Petrillo MK, Huot SJ.
May/June 2004. “A Collaborative Initiative to Improve the Care of Elderly Medicare Patients with
Hypertension.” American Journal of Medical Quality 19 (3): 103-111.
Michalowski KM, Gold JA, Morse DL, Bluestein JN. “Reducing Disparities in Lipid Testing for
African-Americans with Diabetes: Interim Report.” 2003. Journal of the Health and Human Services
Administration 26 (3): 363-381.
Rollow W, Lied TR, McGann P, Poyer J, LaVoie L, Kambic RT, Bratzler DW, Ma A, Huff ED,
Ramunno LD. September 2006. “Assessment of the Medicare Quality Improvement Organization
Program.” Annals of Internal Medicine 145 (5): 342-353.
Schade, CP, Cochran BF, Stephens MK. 2004. “Using Statewide Audit and Feedback to Improve
Hospital Care in West Virginia.” Joint Commission Journal on Quality and Patient Safety 30 (3): 143-151.
Schulke D. Letter to Mark McClellan. April 27, 2006.
McClellan_Ltr_QIO_Pgm_Modernization_060427.pdf (Accessed November 15, 2006.)
Shefer A, McKibben L, Bardenheier B, Bratzler DW, Roberts H. 2005. “Characteristics of Long-Term Care Facilities Associated with Standing Order Programs to Deliver Influenza and
Pneumococcal Vaccinations to Residents in 13 States.” Journal of the American Medical Directors
Association 6 (2): 97-104.
Shefer A, Bardenheier B, Bratzler DW, McKibben L, Roberts H, and Stange P. May 2004. “Impact
of State Quality Improvement Organizations on Use of Standing Order Immunization Programs
among Long-Term Care Facilities in the U.S.: Results of the Immunization Standing Orders Project,
1999-2002.” The 38th National Immunization Conference, Centers for Disease Control and Prevention.
Snyder C and Anderson G. 2005a. “Do Quality Improvement Organizations Improve the Quality
of Hospital Care for Medicare Beneficiaries?” The Journal of the American Medical Association 293 (23):
2900-2907.
Snyder C and Anderson G. October 26, 2005b. “In Reply to Letter to the Editor: Quality
Improvement Organizations and Hospital Care.” Journal of the American Medical Association 294 (16):
2030.
Sobel ER and Mannis C. “Reporting a Health Quality Improvement Project for Reducing the
Disparity in Screening Mammograms among Senior African American Women.” 2003. Journal of the
Health and Human Services Administration 26 (3): 350-361.
Sprague L. 2002. “Contracting for Quality: Medicare’s Quality Improvement Organizations.”
National Health Policy Forum Issue Brief 774.
Sugarman JR. October 26, 2005. “Letter to the Editor: Quality Improvement Organizations and
Hospital Care.” Journal of the American Medical Association 294 (16): 2029.
GAO. (United States Government Accountability Office). December 2005. Nursing Homes: Despite
Increased Oversight, Challenges Remain in Ensuring High-Quality Care and Resident Safety. GAO-06-117. Washington, D.C.
GAO. (United States Government Accountability Office). April 2002. Medicare: Beneficiary Use of
Clinical Preventive Services. GAO-02-422. Washington, D.C.


APPENDIX A
8TH SOW QUALITY IMPROVEMENT ORGANIZATIONS

STATE  QIO NAME
AK     Mountain-Pacific Quality Health Foundation
AL     Alabama Quality Assurance Foundation
AR     Arkansas Foundation for Medical Care
AZ     Health Services Advisory Group
CA     Lumetra
CO     Colorado Foundation for Medical Care
CT     Qualidigm
DC     Delmarva Foundation
DE     Quality Insights of Delaware
FL     Florida Medical Quality Assurance, Inc.
GA     Georgia Medical Care Foundation (GMCF)
HI     Mountain-Pacific Quality Health Foundation
IA     Iowa Foundation for Medical Care
ID     Qualis Health
IL     Illinois Foundation for Quality Health Care
IN     Health Care Excel
KS     Kansas Foundation for Medical Care
KY     Health Care Excel
LA     Louisiana Health Care Review, Inc.
MA     MassPRO
MD     Delmarva Foundation
ME     Northeast Health Care Quality Foundation
MI     MPRO
MN     Stratis Health
MO     Primaris
MS     Information and Quality Healthcare
MT     Mountain-Pacific Quality Health Foundation
NC     The Carolinas Center for Medical Excellence (CCME)
ND     North Dakota Health Care Review, Inc.
NE     CIMRO of Nebraska
NH     Northeast Health Care Quality Foundation
NJ     Healthcare Quality Strategies, Inc. (HQSI)
NM     New Mexico Medical Review Association
NV     Health Insight
NY     IPRO
OH     Ohio KePRO
OK     Oklahoma Foundation for Medical Quality
OR     Acumentra Health
PA     Quality Insights of Pennsylvania
PR     Quality Improvement Professional Research Organization
RI     Quality Partners of Rhode Island
SC     The Carolinas Center for Medical Excellence (CCME)
SD     South Dakota Foundation for Medical Care
TN     Qsource
TX     Texas Medical Foundation
UT     Health Insight
VA     Virginia Health Quality Center
VI     Virgin Islands Medical Institute, Inc.
VT     Northeast Health Care Quality Foundation
WA     Qualis Health
WI     Metastar
WV     West Virginia Medical Institute
WY     Mountain-Pacific Quality Health Foundation

APPENDIX B
SCREENSHOT OF QIO INVENTORY: CHARACTERISTICS


SCREENSHOT OF QIO INVENTORY: ACTIVITIES


APPENDIX C
SELECTED SPECIAL STUDIES FOR THE 8th SOW

AL Alabama Quality Assurance Foundation
Remaking Alabama Medicine
This AQAF special study is early in development. The study will help recognize those who are
working to enhance the health care received by the state's Medicare beneficiaries. Plans for
“Remaking Alabama Medicine” include offering continuing education opportunities as well as
making the program available for use as in-service training.
CA Lumetra
Prevention of Unnecessary One-Day Admissions Project
The purpose of this project is to reduce unnecessary one-day admissions by intervening with hospitals estimated to be contributing to the greatest number of CA's unnecessary one-day admissions; to recover and prevent improper Medicare Trust Fund payments due to unnecessary one-day admissions among high-error and high one-day stay volume hospitals; and to examine the feasibility of QIP implementation among a broader hospital population.
CO Colorado Foundation for Medical Care

Investigate Billing Error of Outpatients Billed as Inpatients
Data suggest that outpatients billed as inpatients cost the Medicare Trust Fund nearly $8 million
annually in the state of Colorado. This project will work collaboratively with five participating
hospitals that account for approximately half of the identified billing errors, to investigate the
problem and implement solutions. Chart abstraction to calculate baseline error rates and to identify
root causes will be conducted. Interventions will be developed and implemented for a specific time,
after which chart abstractions will be repeated to calculate remeasurement error rates and assess
changes from baseline. The goal is to reduce the errors by 50 percent and share lessons learned.
This project was funded at $171,325.
DE Quality Insights of Delaware
One-Day Stay Inpatient Admissions
Through this project, the QIO hopes to realize a 10 percent absolute reduction in one-day length of
stays for DRG 143, 182/183 in DE acute care hospitals (monitored by FATHOM). One-day stay
claims have realized a 31 percent increase from FY 2003 to FY 2004. In particular, DRG 143 (Chest
Pain), 182 (Esophagitis, gastroenteritis & miscellaneous digestive disorders age > 17 with cc) and
183 (Esophagitis, gastroenteritis & miscellaneous digestive disorders age > 17 without cc), when
trended, show a rate higher than the national average. Hospitals will be provided with educational
materials on alternative levels of care, such as observation, compliance program monitoring and
billing, and OPPS education provided by the fiscal intermediary. Hospitals will be provided with
assistance in developing individual intervention strategies where appropriate.

IA Iowa Foundation for Medical Care
Decreasing Payment Errors Associated with DRG 416 (Septicemia) in Iowa
This project will attempt to reduce coding errors for DRG 416 (septicemia) in two targeted hospitals with significant over-coding. Results will be based upon analysis of FATHOM, CRT, and PEPPER data, and the actual baseline error rate at one of the targeted hospitals. Case reviews, intermediate chart audits, coding education, and distribution of PEPPER reports to all IA hospitals will be performed. The QIO will conduct regular conference calls with the 2 targeted hospitals and, as needed, with other hospitals. A final report and publication of an article are expected.

MA MassPRO
One-Day Stays for Percutaneous Coronary Interventions
This study will examine and reduce inappropriate one-day hospital stays in MA involving
percutaneous coronary intervention (PCI). DRG 527 (Percutaneous Cardiovascular Procedure with
Drug-Eluting Stent) was the second leading DRG for one-day stays in MA with 1,967 of 3,276
patients admitted for one day or less. These procedures are being performed more and more in the
outpatient setting. Given this, there may be many inpatient DRG stays being paid for by Medicare
that should be paid in the more efficient and less costly outpatient setting. MassPRO will examine
the most recent guidelines and information provided by the American Heart Association and the
American College of Cardiology. Hybrid data collection, full case review and provider education to
reduce both the number of one-day stay admissions as well as the number of inappropriate one-day
stay admissions for the targeted DRG will be planned. The project will target all 14 hospitals in MA
that perform PTCA.
MI MPRO
Decrease the Proportion of Payment Error Cases in One-Day Lengths of Stay
The objective of this study is to decrease the proportion of payment error cases in one-day lengths
of stays (LOS) by 5% for acute care prospective payment system (PPS) hospitals in the state of
Michigan and increase provider and physician awareness of the one-day LOS payment error pattern
and trend related to inappropriate one-day stays. Focused interventions (i.e. data dissemination,
guideline distribution, and staff education) targeted at 15 hospitals identified from data analysis as
having a high volume of discharges and a high proportion of one-day stays will be conducted.
MN Stratis Health
Chest Pain Short Stay
Stratis will attempt to reduce the number of unnecessary admissions by 15 percent from baseline to
re-measurement in a targeted group of hospitals in the population of patients defined by DRG 143 –
chest pain, with a length of stay of one day. It will collaborate with the targeted group of hospitals
to develop an understanding of cases that are appropriate for inpatient admission, promote
understanding of billing for observation services, and develop tools and resources to assist hospitals
in appropriate assignment of level of care and provide a forum for hospitals to share best practices.
Case review, FATHOM and PEPPER data analysis, teleconferences, and a Web-Ex session will be
used to review findings.


MS Information and Quality Healthcare
Prescription Continuity of Care Project, July 2004 - June 2005
This project will integrate CMS databases with external sources to form an amalgamation of patient-level databases that allow CMS to continuously identify, monitor, address, and evaluate prescription medication use in the Medicare elderly.
MS Information and Quality Healthcare
One- & Two-Day Stays for DRG 127 - Heart Failure and Shock
The study objective is to reduce the number of one- and two-day stays for DRG 127 by 25% in 7 targeted hospitals. Per data analysis from 1/01/04 through 6/30/05, DRG 127 accounts for 21 percent of one- and two-day stays. The QIO will provide hospitals with specific interventions to ensure more appropriate use of the observation level of care. Further, it will provide one-on-one education, conducted by the project leader, for hospital administrative, physician (in particular emergency room physicians), and utilization staff. Telephone contacts, posters, leaflets, a workshop, 4 teleconferences, and a final meeting for hospitals to present their storyboards and lessons learned/best practices will also be made available.
NH Northeast Health Care Quality Foundation
Reducing Medically Unnecessary One-Day and Two-Day Stay Admissions for DRG 182/183 for One Hospital in New Hampshire
No description available.
NV Health Insight
Re-Defining Billing/Level of Care - Nevada
Health Insight will attempt to decrease the number of inappropriate Medicare short-stay admissions
by 25% in an accelerated fashion by focusing on systems-level re-design and incorporating human
factors science. To this end, it aims to bridge the gap between quality assurance and quality
improvement activities within the QIO system. A series of workshops will provide fundamental
concepts of human factors and safety management principles to project participants, and
applications of these principles to the admission process. This project will target 8 large urban
hospitals in southern Nevada (which accounted for 60 percent of one-day admissions in the state
during FY2005). Three workshops in each hospital and an Outcomes Congress will be conducted.
This study follows on work from the Special Study on Human Factors in the 7th SOW.
NY IPRO
MAQRO-QAPI Special Study
IPRO is one of three QIOs selected to help CMS review managed care organizations' quality
improvement efforts.
OH Ohio KePRO
Remedial Quality Improvement Plan - QIP
To reduce the payment error rate (PER) by helping providers identify areas of inefficiency in their processes related to admission and documentation of observation services, KePRO will target six Ohio providers with one-day length of stay (LOS) admission errors where services rendered could have been provided in an observation setting. The QIO will train the facilities on how to collect and
analyze their data to determine root causes of their errors, develop a focused QIP based on the root
cause, and monitor the improvements to determine if they are reducing the payment errors.
Further, KePRO will help providers assess their own needs, investigate processes, improve their
QIP, develop monitoring tools, and implement process improvements. A final report and a toolkit
will be developed.
OR Acumentra Health (formerly OMPRO)
One-Day Stay
The objective of this project is to reduce the proportion of unnecessary inpatient admissions or
associated coding and billing errors with a length of stay of one day or less by 8 percent (a 25
percent relative reduction) in 13 hospitals. Case selection will be based on targeted DRGs 014
(Intracranial Hemorrhage or Cerebral Infarction "Stroke/Intracranial Hemorrhage”), 127 (Heart
Failure and Shock), 143 (Chest Pain), and 182 (Esophagitis, Gastroenteritis and Miscellaneous
Digestive Disorders, Age Greater than 17 with/without CC). Educational tools, hospital
improvement plans, and use of a collaborative model in which successes, challenges, and best
practices are shared among participants will be developed.
SD South Dakota Foundation for Medical Care
Unnecessary Admission
The project goal is to reduce improper payments by evaluating the validity of one-day acute admission necessity in South Dakota’s thirteen PPS hospitals for DRG 182 (gastroenteritis and
miscellaneous digestive disorders age>17 with complication or comorbidity), DRG 183
(gastroenteritis and miscellaneous digestive disorders age>17 without complication or comorbidity),
or DRG 143 (chest pain). The QIO will use admission screening criteria, appropriate use of
observation, as well as identification of types of patients whose care may be provided efficiently and
safely on an outpatient basis. Also it will focus on billing practices/interventions needed, and
documentation appropriate for DRG. Hospitals will be given a list of suggested intervention
materials, and the medical director will visit hospitals as requested. Data analysis and writing of a
peer-reviewed paper are expected.
TX Texas Medical Foundation
One-Day Stays for medical DRGs
The Texas Medical Foundation aims to decrease the number of inappropriate one-day stays for
medical DRGs by 3 to 5 percent within a target group of hospitals by promoting changes in the
hospital admission process. From FY 2001 through FY 2004, almost one-third of all one-day stays
for medical DRGs in TX were found to be inappropriate admissions. A systems approach is
required to address one-day stays for all medical DRGs. Statewide education will be conducted with
the support of partner organizations. The QIO will develop a collaborative model to facilitate
change and spread process change throughout multi-hospital systems by working with corporate
offices of 25 target hospitals with potential to impact another 53 hospitals. Quarterly statewide
tracking (monthly for participating hospitals), training materials statewide, educational materials for
a variety of audiences statewide through TMF and partner organization web sites, journals and
newsletters, physician training (with CME) and train-the-trainer materials will be developed.
UT Health Insight
Re-Defining Billing/Level of Care
The goal of this project is to decrease the number of inappropriate Medicare short-stay admissions
by 25% in an accelerated fashion by focusing on systems-level re-design and incorporating human
factors science. Further, this project aims to bridge the gap between quality assurance and quality
improvement activities within the QIO system. A series of workshops will provide fundamental
concepts of human factors and safety management principles to project participants, and
applications of these principles to the admission process. This project will target 8 hospitals in Utah; three workshops will be conducted in each hospital, along with an Outcomes Congress. The work of this project follows on work from the Special Study on Human Factors in the 7th SOW.
WI Metastar
Reduction of Unnecessary One-Day Stays Through Use of a Case Management Protocol
This project aims to reduce unnecessary one-day acute care hospital stays through the use of a case management protocol. Hospitals that employ a comprehensive case management program more accurately classify patients than do hospitals relying completely on the physician placing the appropriate order in the record. This project will broaden the use of a case management protocol by assisting WI PPS hospitals in the development of a protocol acceptable to the hospital, their medical staff, and the fiscal intermediary (FI). The case management protocol will be used as an educational tool to improve physician knowledge, and its use will result in more accurate placement of the patient in the appropriate care/payment setting. FATHOM data analysis, a pre-pilot webinar, a pilot protocol, a post-protocol webinar on lessons learned from the pilot, and regional meetings will be conducted and summarized in a final report.


APPENDIX D
TECHNICAL EXPERT PANEL BIOGRAPHIES
Gerard F. Anderson, Ph.D. is professor at the Department of Health Policy and Management at
Johns Hopkins Bloomberg School of Public Health. He was the National Program Director for the
Robert Wood Johnson Foundation sponsored program “Partnership for Solutions: Better Lives for
People with Chronic Conditions”. Dr. Anderson is a professor of health policy and management
and international health at the Johns Hopkins University Bloomberg School of Public Health,
professor of medicine at the Johns Hopkins University School of Medicine, director of the Johns
Hopkins Center for Hospital Finance and Management, and co-director of the Johns Hopkins
Program for Medical Technology and Practice Assessment. Dr. Anderson is currently conducting
research on chronic conditions, comparative insurance systems in developing countries, medical
education, hospital payment reform, and technology diffusion. He has directed reviews of health
systems for the World Bank in Korea, Mexico, Taiwan, and Ecuador. Prior to his arrival at Johns
Hopkins in 1983, Dr. Anderson held various positions in the Office of the Secretary, U.S.
Department of Health and Human Services, where he helped to develop Medicare prospective
payment legislation. He has authored two books on health care payment policy, has published over
200 peer-reviewed articles, testified before Congress over 30 times as an individual witness, and serves on
multiple editorial committees.
Dale Bratzler, DO, MPH currently serves as the Medical Director of the Hospital Interventions
Quality Improvement Organization Support Center and the Hospital Quality of Care Measures
Special Study located at the Oklahoma Foundation for Medical Quality. In these roles, he provides
clinical and technical support for local and national quality improvement initiatives including the
Medicare National Pneumonia Project and the National Surgical Care Improvement Project. He is a
Past President of the American Health Quality Association and was recently appointed by the
Secretary of Health and Human Services to the National Advisory Council for the Agency for
Healthcare Research and Quality. Dr. Bratzler has published and presented locally and nationally on
topics related to healthcare quality, particularly improving care for pneumonia, increasing
vaccination rates, and reducing surgical complications. Dr. Bratzler received
his Doctor of Osteopathic Medicine degree at the University of Health Sciences College of
Osteopathic Medicine in Kansas City, Missouri, and his Master of Public Health degree from the
University of Oklahoma Health Sciences Center College of Public Health. He is board certified in
internal medicine. Dr. Bratzler is an adjunct associate professor of health administration and policy
at the University of Oklahoma College of Public Health.
Allyson Ross Davies, PhD, MPH is a Principal at ARD Consulting LLC. In September 2004, Dr.
Davies reactivated her consulting practice, having completed a year as interim CEO at MassPRO,
Inc., the health care quality improvement organization that provides quality oversight for Medicare
and Medicaid programs in Massachusetts. She had spent the preceding two years as Executive Vice
President at QualityMetric Incorporated, a privately held corporation focused on advancing
consumer-based assessment technologies for improving health care, where she led both business
development and product and service development activities. A nationally recognized expert in the
measurement and use of patient-reported outcomes, quality of care assessment, and quality
improvement, Dr. Davies spent the preceding 13 years consulting to and working in hospitals,
managed care companies, and health systems on outcomes measurement and monitoring systems,
and another 12 years in health services and quality of care research at the RAND Corporation and
UCLA. She received her MPH and PhD, both in health services research, from UCLA. A two-term
director of the American Health Quality Association, Dr. Davies served on MassPRO’s board for
nine years, four of them on the Executive Board and two as the board’s first non-physician chair.
She is also a founding director of QualityMetric Incorporated.
Kelly J. Devers, Ph.D. conducts research on a wide range of health care organization, delivery, and
policy issues. Her recent research has focused on hospital and medical group responses to changing
market and policy forces and their impact on cost, quality, and patient safety. She also is an
expert in qualitative and mixed research methods. Currently, she is a co-investigator on an Agency
for Healthcare Research and Quality (AHRQ) funded project examining physician office-based
supports to improve smoking cessation counseling and a Robert Wood Johnson Foundation
(RWJF) funded project to improve the delivery of preventive health care services in primary care
practices. She also serves as task leader on an AHRQ funded H-CAHPS project on physician-patient
communication about hospital quality data and its implications for hospital referral and choice and
as a qualitative consultant on a National Cancer Institute (NCI) study of barriers to colon cancer
screening. Dr. Devers also recently served as a co-investigator on a congressionally mandated study
conducted through the Centers for Medicare & Medicaid Services (CMS) on physician-owned
specialty hospitals. She has also served as a temporary member of the Health Services Organization
and Delivery (HSOD) study section at the National Institutes of Health (NIH) and of Agency for
Healthcare Research and Quality (AHRQ) special emphasis panels. Dr. Devers has published widely in major
health services research and policy journals, served as guest editor of Health Services Research,
edited a book on managed care, and is currently co-authoring a textbook on mixed methods
research. Dr. Devers teaches courses in health care organization theory, qualitative and mixed
research methods, and quality and patient safety. She holds a joint appointment in the VCU School
of Medicine, Department of Family Practice.
Mary Jane Koren, M.D., assistant vice president, joined the Commonwealth Fund in 2002 and
leads the Picker/Commonwealth Program on Quality of Care for Frail Elders. Dr. Koren, an
internist and geriatrician, began her academic career at Montefiore Medical Center, in the Bronx,
where she helped to establish one of the early geriatric fellowship programs in New York. Dr.
Koren also practiced in both nursing home and home care settings and was the associate medical
director of the Montefiore Home Health Care Agency. She later joined the faculty of Mt. Sinai's
Department of Geriatrics and served as associate chief of staff for extended care at the Bronx V.A.
Medical Center. Leaving academic practice, she was appointed as director of the New York State
Department of Health's Bureau of Long Term Care Services, where she ran the nursing home
survey and certification programs, led the state's implementation of OBRA'87 (the Nursing Home
Reform Law) and participated in many of the state's long term care policy initiatives. Following that,
she served as principal clinical coordinator for the New Jersey Peer Review Organization, which
administered the federal Health Care Quality Improvement Program. In 1993, she joined the Fan Fox
and Leslie R. Samuels Foundation, first as an advisor and later as vice president of a grant making
program in the field of health services and aging. Throughout her career she has been active as a
health services researcher in the area of long term care quality.
Myles Maxfield, Ph.D., is a vice president and director of health research at Mathematica's
Washington, D.C. office. Dr. Maxfield specializes in the design and evaluation of health care quality
programs and disease management programs. He currently directs the development of physician
performance measures for Medicare's Physician Voluntary Reporting Program (PVRP), in
partnership with the American Medical Association and the National Committee for Quality
Assurance. In a related study, he directs the design of the PVRP data validation system, estimation
of physician costs of participating in PVRP, and investigation of the need for DSH payments in
PVRP. He directs the performance measurement system for CMS's Medicare Health Support
program, a pilot of a disease management component of the Medicare program. In recent years, he
directed an assessment of the Hospital Quality Alliance program for measuring the quality of care
provided by acute care hospitals. He is a senior advisor to the design and analysis of the ongoing
health care survey of DoD beneficiaries, a CAHPS-like survey of beneficiaries of the military health
system, TRICARE. He is also a senior advisor to the development and operation of the Medicare
Quality Monitoring System (MQMS), which produces outcome measures such as risk-adjusted
hospital mortality rates. Dr. Maxfield received his Ph.D. in economics from the University of
Maryland. He has presented his recent findings on hospital transformational change in quality of
care at a number of conferences including those of the American Public Health Association, the
American Health Quality Association, and AcademyHealth. He has published recently in the
Journal of Patient Safety and the Journal of Behavioral Health Services and Research.
William Scanlon, Ph.D. is a senior policy advisor with Health Policy R&D. He serves as a
consultant to the National Health Policy Forum and is a Research Professor with the Health Policy
Institute, Georgetown University. He is also currently a member of the Medicare Payment Advisory
Commission, the National Committee on Vital and Health Statistics, the National Commission for
Quality Long-Term Care, and the White House Conference on Aging Advisory Committee. Until April
2004, he was the Managing Director of Health Care Issues at the U.S. General Accounting Office
(GAO). He has been engaged in health services research since 1975. Before joining GAO in 1993,
he was the Co-Director of the Center for Health Policy Studies and an Associate Professor in the
Department of Family Medicine at Georgetown University and had been a Principal Research
Associate in Health Policy at the Urban Institute. At GAO, he oversaw Congressionally requested
studies of Medicare, Medicaid, the private insurance market and health delivery systems, public
health, and the military and veterans’ health care systems. His research at Georgetown and the
Urban Institute focused on the Medicare and Medicaid programs, especially provider payment
policies, and the provision and financing of long-term care services. Dr. Scanlon has published
extensively and has served as a frequent consultant to federal agencies, state Medicaid programs, and
private foundations. Dr. Scanlon received his Ph.D. in Economics from the University of
Wisconsin-Madison.
Shoshanna Sofaer, D.P.H., is the Robert P. Luciano Professor of Health Care Policy at the School
of Public Affairs, Baruch College, in New York City. She completed her master's and doctoral
degrees in public health at the University of California, Berkeley; taught for six years at the
University of California, Los Angeles School of Public Health; and served on the faculty of George
Washington University Medical Center, where she was a professor, associate dean for research of the
School of Public Health and Health Services, and director of the Center for Health Outcomes
Improvement Research. Dr. Sofaer's research interests include providing information to individual
consumers on the performance of the health care system; assessing the impact of information on
both consumers and the system; developing consumer-relevant performance measures; and
improving the responsiveness of the Medicare program to the needs of current and future cohorts
of older persons and persons with disabilities. In addition, Dr. Sofaer studies the role of community
coalitions in pursuing public health and health care system reform objectives and has extensive
experience in the evaluation of community health improvement interventions.
Leon Wyszewianski, Ph.D. is an associate professor in health management and policy at the
University of Michigan School of Public Health. Dr. Wyszewianski's research focuses on strategies
for changing physicians' clinical behaviors to improve quality of care and efficiency. Most of his
publications are on the definition, measurement, and improvement of quality of health care. He has
also published on a number of other topics in health care delivery and health policy. He teaches a
graduate seminar on quality of care as well as a graduate course on the constituent parts and overall
functioning of the health care system in the United States. Professor Wyszewianski received his
Ph.D. in Medical Care Organization from the University of Michigan.


APPENDIX E
POTENTIAL SOURCES OF DATA FOR USE IN EVALUATION
Case Review Information System (CRIS)

Created for the 7th SOW, CRIS allows QIOs to track medical records
within their own organization and perform online case review for
Diagnosis Related Groups (DRGs), Quality, Utilization, Beneficiary
Complaint, Hospital Issued Notice of Noncoverage, and EMTALA.
The CRIS Helpline module allows QIOs to record and track their
Medicare beneficiary Helpline activities. The timeliness of review by
QIO staff is tracked through reports generated by CRIS.

Chronic Condition Data Warehouse (CCW)

The CCW contains existing CMS beneficiary data (from multiple data
sources) linked by a unique identifier, allowing researchers to analyze
information across the continuum of care. The CCW currently
contains data from fee-for-service Institutional and Non-institutional
claims, enrollment/eligibility, and assessment (all payers) data
(Minimum Data Set, Outcome and Assessment Information Set, Swing bed
assessments, and Inpatient Rehabilitation Facility Patient Assessment Instrument)
from January 1, 1999 forward for a random 5% sample of the
Medicare beneficiary population. Researchers may request CCW data by
specifying reference time periods, diagnosis and procedure codes,
number/type of qualifying claims (e.g., must have 2 Carrier claims
during the reference time period), coverage (e.g., must have Part A and
Part B coverage), geography, and exclusions. Researchers may also
submit requests for control populations at the same time as initial data
requests.
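
For illustration only, the sketch below expresses the kinds of selection criteria a CCW request
specifies as a simple cohort filter. The request process itself is administrative rather than
programmatic, and all field and function names below are hypothetical, not actual CCW variables.

    from datetime import date

    # Hypothetical beneficiary records drawn from CCW-linked files;
    # keys are illustrative, not actual CCW variable names.
    beneficiaries = [
        {"id": "B001", "part_a": True, "part_b": True, "state": "WI",
         "carrier_claims": [date(2004, 3, 1), date(2004, 9, 15)]},
        {"id": "B002", "part_a": True, "part_b": False, "state": "UT",
         "carrier_claims": [date(2004, 5, 2)]},
    ]

    def in_cohort(b, start, end, states, min_carrier_claims=2):
        """Apply CCW-style criteria: Part A and B coverage, geography, and
        a minimum number of qualifying Carrier claims in the period."""
        if not (b["part_a"] and b["part_b"]):
            return False
        if b["state"] not in states:
            return False
        claims = [d for d in b["carrier_claims"] if start <= d <= end]
        return len(claims) >= min_carrier_claims

    cohort = [b["id"] for b in beneficiaries
              if in_cohort(b, date(2004, 1, 1), date(2004, 12, 31), {"WI", "UT"})]
    print(cohort)  # ['B001']; B002 is excluded for lacking Part B coverage

A control-population request might apply a parallel filter with different qualifying-claims criteria.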

Financial Inventory Vouchering System (FIVS)

The QIO program financial reporting system, which includes data on
QIO expenditures by task and subtask area, average monthly cost per
provider, and average monthly cost per identified participant.
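
For illustration only, the per-provider and per-participant figures described above reduce to simple
ratios over the reporting period; the numbers and names below are hypothetical, not actual FIVS
fields.

    # Hypothetical subtask spending over a 12-month reporting period.
    spending = 120_000.0    # dollars expended on one task/subtask area
    months = 12
    providers = 40          # providers engaged under the subtask
    participants = 25       # identified participants

    # Average monthly cost per provider and per identified participant.
    print(f"${spending / months / providers:,.2f} per provider per month")        # $250.00
    print(f"${spending / months / participants:,.2f} per participant per month")  # $400.00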

Nursing Home Minimum Data Set (MDS)

The MDS is a standardized, primary screening and assessment tool of
health status; it measures physical, medical, psychological and social
functioning of nursing home residents in Medicare or Medicaid
certified nursing and long-term care facilities. The general categories of
data and health status items in the MDS include demographics and
patient history, cognitive, communication/hearing, vision, and
mood/behavior patterns, psychosocial well-being, physical
functioning, continence, disease diagnoses, health conditions,
medications, nutritional and dental status, skin condition, activity
patterns, special treatments and procedures and discharge potential.
Since 1991, CMS has required all applicable nursing homes to
administer the MDS on admission, quarterly, annually, whenever the
resident experiences a significant change in status, and whenever the
facility identifies a significant error in a prior assessment. Also,
residents receiving Medicare SNF PPS payment require more frequent
assessments (5-, 14-, 30-, 60-, and 90-day). The nursing home quality
measures are calculated from the MDS.
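
To illustrate how a facility-level quality measure can be derived from MDS assessments, the sketch
below computes a simple prevalence-style rate from hypothetical records. It is a minimal sketch:
actual MDS quality measure specifications add detailed inclusion, exclusion, and risk-adjustment
rules, and the item names here are not real MDS codes.

    # Hypothetical MDS-like assessment records; keys are illustrative only.
    assessments = [
        {"resident": "R1", "facility": "F01", "pressure_ulcer": True},
        {"resident": "R2", "facility": "F01", "pressure_ulcer": False},
        {"resident": "R3", "facility": "F01", "pressure_ulcer": False},
    ]

    def prevalence_rate(records, facility, item):
        """Percent of a facility's assessed residents triggering the item."""
        cohort = [r for r in records if r["facility"] == facility]
        if not cohort:
            return None
        return 100.0 * sum(1 for r in cohort if r[item]) / len(cohort)

    print(f"{prevalence_rate(assessments, 'F01', 'pressure_ulcer'):.1f}%")  # 33.3%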

Outcome and Assessment Information Set (OASIS)

A group of data elements that represent core items of a comprehensive
assessment for an adult home health care patient, and form the basis
for measuring home health patient outcomes for purposes of
outcome-based quality improvement (OBQI). OASIS items were
designed for the purpose of enabling the rigorous and systematic
measurement of patient home health care outcomes, with appropriate
adjustment for patient risk factors affecting those outcomes.
Outcomes have been defined in many ways, but those derived from
OASIS items have a very specific definition: they measure changes in a
patient's health status between two or more time points. Outcome
measures include those related to utilization (acute care hospitalization,
discharge to community, emergent care) and activities of daily living (ambulation,
grooming, management of oral medications), as well as those that are physiologic
(e.g., pain, dyspnea, speech, urinary tract infection), emotional/behavioral (e.g.,
anxiety, behavioral problem frequency), and cognitive (e.g., confusion frequency)
in nature.
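
Because OASIS-derived outcomes are defined as change in health status between two time points,
an improvement measure reduces to a before/after comparison, as in the minimal sketch below.
The item name and scale are hypothetical; actual OBQI measures use standardized OASIS items
with risk adjustment.

    # Hypothetical start-of-care (soc) and discharge (dc) scores, where a
    # lower value indicates greater independence (illustrative scale only).
    episodes = [
        {"patient": "P1", "ambulation_soc": 3, "ambulation_dc": 1},
        {"patient": "P2", "ambulation_soc": 2, "ambulation_dc": 2},
    ]

    def improved(ep):
        """An episode counts as improvement when discharge status is
        better (here, a lower score) than status at start of care."""
        return ep["ambulation_dc"] < ep["ambulation_soc"]

    rate = 100.0 * sum(improved(e) for e in episodes) / len(episodes)
    print(f"Improvement in ambulation: {rate:.0f}%")  # 50%
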
Program Activity Reporting Tool (PARTner)

PARTner allows QIOs to collect the information requested by CMS
for identified participants in quality improvement activities, hospital
payment monitoring activities, deliverables, and narratives for the tasks
in the 7th SOW. Information about publications and about collection
activities is recorded in PARTner. Users are able to access and track
information only in the modules to which they have been granted
access.

Program Resource System (PRS)

PRS is the storage area for all physician, health service provider,
beneficiary, MA and FI/Carrier information for every state. It is
considered the center of the QualityNet data system because all other
applications link to PRS as a data source. QIOs have the ability and
responsibility to update fields in the PRS to keep it as current as
possible. PRS Task 2b HGD (Hospital Generated Data) allows QIOs
to record and track Hospital Generated Data Survey results.

QIO Clinical Data Warehouse

The information repository for the clinical quality-of-care measures
collected and submitted by hospitals. The QIO Clinical Data
Warehouse contains data uploaded from hospitals across the nation.


QIO Survey about Quality Improvement Organization Support Centers (QIOSCs)

A survey administered to QIOs about the performance of QIOSCs.
Respondents are asked about their level and mode of interaction with
specific QIOSCs; their perception of data, materials, instructions,
information, products delivered, strategies, and other tools and
resources that are produced by the QIOSCs; and their perception of
QIOSC expertise.

Stakeholder Survey Questionnaire

A voluntary survey, administered by CMS, of organizations and
agencies that work with medical providers or patients with Medicare
coverage. The survey elicits information from respondents related to
their knowledge of and degree of interaction with their state QIO.

Standard Data Processing System (SDPS)

The information system for the Quality Improvement Organization
(QIO) program, SDPS contains many data and reporting tools and was
designed and developed in response to the ongoing information needs
of the QIOs and other affiliated partners in fulfilling their contractual
requirements with CMS. The system interfaces with CMS, 41 QIOs,
and Clinical Data Abstraction Centers.
