Contract No.: HHSM-500-2005-00025I
MPR Reference No.: 6514-170

Assessment of the Eighth Scope of Work of the Medicare Quality Improvement Organization Program

Draft Report

March 18, 2009

Andrew Clarkwest
Sue Felt-Lisk
Sarah Croake
Arnold Chen

Submitted to:
Centers for Medicare and Medicaid Services
Room S3-10-04
7500 Security Blvd.
Baltimore, Maryland 21244

Government Task Leader: Robert Kambic
Project Officer: Cynthia Pamon

Submitted by:
Mathematica Policy Research, Inc.
P.O. Box 2393
Princeton, NJ 08543-2393
Telephone: (609) 799-3535
Facsimile: (609) 799-0005

Project Director: Myles Maxfield

CONTENTS

Chapter                                                                     Page

        EXECUTIVE SUMMARY ................................................... ix

I       INTRODUCTION ......................................................... 1

        A. BACKGROUND AND POLICY CONTEXT ..................................... 2
        B. BRIEF DESCRIPTION OF THE EIGHTH SCOPE OF WORK ..................... 3
           1. Nursing Homes .................................................. 4
           2. Home Health Agencies ........................................... 5
           3. Hospitals ...................................................... 6
           4. QIOs' Activities ............................................... 7
        C. OVERVIEW OF STUDY COMPONENTS AND RESEARCH QUESTIONS ............... 7
           1. Analysis of Medicare Compare Data .............................. 8
           2. Mechanisms and Case Study Analysis for Improvements in
              Hospital SCIP Measures ......................................... 9
           3. Analysis of Provider Survey Data ............................... 9
        D. CHALLENGES TO THE CURRENT ASSESSMENT OF THE EIGHTH SOW ........... 10
        E. THE REMAINDER OF THIS REPORT ..................................... 10

II      RESULTS OF DESCRIPTIVE AND IMPACT ANALYSES .......................... 11

        A. HAS THE QUALITY OF CARE RECEIVED BY PATIENTS SERVED BY
           MEDICARE PROVIDERS IMPROVED NATIONWIDE? .......................... 14
           1. Nursing Homes ................................................. 14
           2. Home Health Agencies .......................................... 16
           3. Hospitals ..................................................... 19
        B. DID MOST STATES IMPROVE OVER THE EIGHTH SOW? ..................... 21
           1. Nursing Homes ................................................. 21
           2. Home Health Agencies .......................................... 23
           3. Hospitals ..................................................... 23
        C. WHICH STATES DID WELL IN MEASURES FOR ONE SETTING AND
           FOR MULTIPLE SETTINGS? ........................................... 23
           1. States with High Performance in Nursing Home Measures ......... 23
           2. States with High Performance in Home Health Agency Measures ... 27
           3. States with High Performance in Hospital Measures ............. 27
           4. States with High Performance in Measures for More Than
              One Setting ................................................... 30
        D. DO PROVIDERS AND STATES THAT DID WELL IN ONE SET OF
           MEASURES ALSO DO WELL IN OTHERS? ................................. 30
           1. Provider Level ................................................ 30
           2. State Level ................................................... 34
        E. WAS THERE AN IMPACT OF QUALITY IMPROVEMENT ORGANIZATIONS'
           WORK WITH IDENTIFIED PARTICIPANT GROUP PROVIDERS ON
           IMPROVEMENT IN QUALITY MEASURES? ................................. 36
           1. Nursing Homes ................................................. 37
           2. Home Health Agencies .......................................... 42
           3. Hospitals ..................................................... 44
        F. CONCLUSIONS ...................................................... 49

III     MECHANISMS AND CASE STUDY ANALYSIS .................................. 53

        A. SUMMARY .......................................................... 53
        B. RESULTS BY RESEARCH QUESTION ..................................... 55
           1. Did QIO actions play a role in some states' rates of
              dramatic improvement? ......................................... 55
           2. Were there differences in the timing of how hospitals
              participated in or viewed other national-level surgical
              infection-prevention initiatives that might help explain
              the different pattern of improvement? ......................... 58
           3. Did hospitals in the states with low initial rates face
              barriers to improvement that were overcome? ................... 59
           4. What other factors might explain the different patterns of
              improvement in the high-improving versus high-baseline
              states? ....................................................... 59
           5. What factors do the QIOs and hospital associations say are
              associated with improvements at the hospital level? ........... 61

IV      PROVIDER SATISFACTION ............................................... 63

        A. TOPIC AREAS AND GROUPING OF PROVIDERS ............................ 64
        B. RESULTS .......................................................... 67
           1. Awareness of the Local QIO and of Other CMS Initiatives ....... 67
           2. Providers' Satisfaction with Their QIOs ....................... 72
           3. Perceived Value of QIO Assistance Among Providers ............. 72
           4. Providers' Preferences for Interactions with Their QIO ........ 77
           5. Providers' Sources for Quality Improvement Information ........ 77
        C. DISCUSSION ....................................................... 81

V       CONCLUSIONS ......................................................... 83

        A. SUMMARY OF RESULTS ............................................... 83
        B. POTENTIAL LIMITATIONS ............................................ 85
        C. CONCLUSIONS ...................................................... 85

        REFERENCES .......................................................... 87

        APPENDIX A: METHODS FOR ANALYSES OF MEDICARE COMPARE DATA .......... A.1
        APPENDIX B: METHODS FOR CASE STUDY ANALYSIS ........................ B.1
        APPENDIX C: METHODS FOR ANALYSIS OF PROVIDER SATISFACTION
                    SURVEY ................................................. C.1
        APPENDIX D: SUPPLEMENTAL TABLES FOR CHAPTER II (ANALYSES OF
                    MEDICARE COMPARE DATA) ................................. D.1
        APPENDIX E: SUPPLEMENTAL TABLES FOR CHAPTER IV (ANALYSES OF
                    PROVIDER SURVEY) ....................................... E.1

TABLES

Table                                                                       Page

I.1    REQUIRED NUMBER OF HOME HEALTH AGENCY IPGs, EIGHTH SOW ................ 6

I.2    REQUIRED NUMBER OF HOSPITAL IPGs, EIGHTH SOW .......................... 7

II.1   STUDY QUALITY MEASURES ............................................... 12

II.2   NATIONAL AVERAGES OF NURSING HOME QUALITY MEASURES,
       QIO 8TH SOW .......................................................... 15

II.3   NATIONAL AVERAGES OF HOME HEALTH QUALITY MEASURES,
       QIO 8TH SOW (PERCENTAGES) ............................................ 17

II.4   PROVIDER-LEVEL NATIONAL AVERAGES OF HOSPITAL QUALITY
       MEASURES, QIO 8TH SOW (PERCENTAGES) .................................. 20

II.5   STATE-LEVEL AVERAGES OF NURSING HOME QUALITY MEASURES,
       QIO 8TH SOW (PERCENTAGES) ............................................ 22

II.6   STATE-LEVEL AVERAGES OF HOME HEALTH QUALITY MEASURES,
       QIO 8TH SOW (PERCENTAGES) ............................................ 24

II.7   STATE-LEVEL AVERAGES OF HOSPITAL QUALITY MEASURES,
       QIO 8TH SOW (PERCENTAGES) ............................................ 25

II.8   ADJUSTED z-SCORES OF QUALITY CHANGE IN CONSISTENTLY
       HIGH-IMPROVING STATES, 8TH SOW NURSING HOME OUTCOMES ................. 26

II.9   ADJUSTED z-SCORES OF QUALITY CHANGE IN CONSISTENTLY
       HIGH-IMPROVING STATES, 8TH SOW HHA OUTCOMES .......................... 28

II.10  ADJUSTED z-SCORES OF QUALITY CHANGE IN CONSISTENTLY
       HIGH-IMPROVING STATES, 8TH SOW HOSPITAL OUTCOMES ..................... 29

II.11  HIGH-PERFORMING STATES IN MULTIPLE PROVIDER SETTINGS AND
       IN ONE SETTING ONLY .................................................. 31

II.12  PROVIDER-LEVEL CORRELATIONS OF CHANGE ACROSS 8TH SOW
       NURSING HOME OUTCOMES ................................................ 32

II.13  PROVIDER-LEVEL CORRELATIONS OF CHANGE ACROSS 8TH SOW
       HOME HEALTH OUTCOMES ................................................. 33

II.14  STATE-LEVEL CORRELATIONS OF CHANGE ACROSS 8TH SOW
       NURSING HOME OUTCOMES ................................................ 35

II.15  IPG PENETRATION RATES FOR PROVIDER SETTINGS AND MEASURES ............. 38

II.16  SUR ESTIMATES OF ASSOCIATION BETWEEN IPG PENETRATION AND
       CHANGE IN NURSING HOME OUTCOMES DURING THE 8TH SOW ................... 40

II.17  OLS ESTIMATES OF ASSOCIATION BETWEEN IPG PENETRATION AND
       CHANGE IN ACUTE CARE HOSPITALIZATION RATES OF HHAs, 8TH SOW .......... 43

II.18  AVERAGE IMPROVEMENT ON ELECTIVE HOME HEALTH OUTCOMES, BY
       QIOs' MEASURE SELECTED FOR STATEWIDE IMPROVEMENT ..................... 45

II.19  SUR ESTIMATES OF EFFECTS OF QIO STATEWIDE WORK ON HHA
       OUTCOMES ............................................................. 46

II.20  ESTIMATES OF ASSOCIATION BETWEEN IPG PENETRATION AND CHANGE
       ON THE APPROPRIATE CARE MEASURE INDEX ................................ 48

IV.1   NUMBER OF RESPONSES AND RESPONSE RATES (PERCENTAGES), BY
       PROVIDER TYPE AND IPG STATUS ......................................... 62

IV.2   PROVIDER SATISFACTION SURVEY TOPICS AND QUESTIONS .................... 65

IV.3   PROVIDERS' REPORTED RECEIPT OF QIO ASSISTANCE AND USE OF
       INTERNET TO ACCESS QUALITY INFORMATION ............................... 67

IV.4   PROVIDERS' SATISFACTION WITH THEIR LOCAL QIOS ........................ 70

IV.5   PROVIDERS' SATISFACTION WITH THEIR LOCAL QIOS ........................ 73

IV.6   PROVIDERS' PERCEPTIONS OF QIOS' VALUE ................................ 75

IV.7   PERCENTAGES OF PROVIDERS EXPRESSING PREFERENCES FOR TYPES OF
       INTERACTIONS WITH QIOS ............................................... 78

IV.8   PROVIDERS' SOURCES OF INFORMATION FOR QUALITY IMPROVEMENT
       (PERCENTAGES) ........................................................ 79

EXECUTIVE SUMMARY

The Quality Improvement Organization (QIO) Program is a key component of the Centers for Medicare & Medicaid Services' (CMS) agenda for assuring and improving quality of care for Medicare beneficiaries. CMS executes three-year contracts (each called a Scope of Work, or SOW) with a nationwide network of independent QIOs to help health care providers deliver high-quality care to Medicare beneficiaries. The Eighth SOW ended in July 2008; the Ninth SOW began on August 1, 2008. With budgets of roughly $1.2 billion and $1.1 billion for the Eighth and Ninth SOWs, respectively, the QIO program is the single largest investment in quality improvement infrastructure—public or private—in the nation.
As evidenced by several recent federal agency reports, members of Congress and federal
policy analysts have become increasingly interested in studying the effectiveness and value of
the QIO Program. These reports include a congressionally mandated assessment of the QIO
program by the Institute of Medicine (IOM) published in 2006; CMS’ response to the IOM
report in a 2006 Report to Congress; a Government Accountability Office (GAO) review
requested by Congress of the QIO Program’s efforts to improve nursing home quality during the
Seventh SOW; and a report sponsored by the Assistant Secretary for Planning and Evaluation
(ASPE) on future directions for evaluating the QIO Program published in 2007. In addition,
researchers have published many articles studying various aspects of the QIO Program in the
academic medical and health policy literature.
CMS has contracted with Mathematica Policy Research, Inc. (MPR) to conduct an
independent limited study of the already completed Eighth SOW, followed by the design and
implementation of a full evaluation of the Ninth SOW. This report presents the findings of our
study of the Eighth SOW.
STUDY METHODS
The study included three components: (1) an analysis of Medicare Compare data, (2) case
studies of five states’ experiences with the Surgical Care Improvement Project (SCIP), and (3) an
analysis of survey data on provider satisfaction with QIOs.
Analyses of Medicare Compare Data
We analyzed Medicare Compare data for nursing homes, home health agencies (HHAs), and hospitals, the three provider settings covered by the data available to us. Federal regulations on QIO data precluded our access to any data that identified providers' involvement with QIOs; only the proportions of providers participating with each state's QIO were available. For the upcoming Ninth SOW evaluation, we are working to access these data in a way that satisfies those regulations.


Descriptive Trend Analyses. We produced descriptive statistics of the magnitudes and directions of changes in quality measures in the Medicare Compare data between the beginning and the end of the Eighth SOW. The baseline and follow-up periods for the three provider settings varied because of differences in the data collection schedules for Medicare Compare: for nursing homes, data collection occurred in the second quarter of 2005 and the first quarter of 2008; for home health agencies, from September 2004 through August 2005 and from March 2007 through February 2008; and for hospitals, from July 2004 through June 2005 and from October 2006 through September 2007. We performed analyses at both the provider and state levels, weighting by provider size.
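In code, the core computation is simple. The sketch below shows the provider-level change calculation and the size-weighted national average; the file and column names are hypothetical stand-ins, not the actual Medicare Compare variable names.

```python
import pandas as pd

# Hypothetical extract: one row per provider, with baseline and follow-up
# rates and a size measure; actual Medicare Compare files differ.
df = pd.read_csv("nursing_home_compare_extract.csv")

# Change over the Eighth SOW: follow-up minus baseline, so a negative
# change is an improvement for an adverse-outcome measure.
df["change"] = df["rate_followup"] - df["rate_baseline"]

# National average of provider-level change, weighted by provider size.
weights = df["n_residents"]
national_avg_change = (df["change"] * weights).sum() / weights.sum()
print(national_avg_change)
```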
Correlation Analyses. We assessed the correlations between quality measures within each
of the three measure sets (for nursing homes, HHAs, and hospitals) at both the provider and state
levels. If measures are highly correlated with one another, quality improvement efforts might be
able to focus on the limited number of providers and states that tend to perform poorly in several
measures. There might also be a group of providers performing well in several measures that
have developed a set of best practices worth replicating. If there is little correlation across
measures, however, quality improvement efforts will need to work with a larger set of providers
and be prepared to assist each provider with its specific handful of measures that need
improvement.
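A minimal sketch of the provider-level correlation computation follows; again, the file and column names are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical wide file: one row per home health agency, one column per
# change in each of the seven patient functioning measures.
changes = pd.read_csv("hha_measure_changes.csv", index_col="provider_id")

# Pairwise Pearson correlations among the change measures; the report
# summarizes these coefficients (for example, their range and mean).
corr = changes.corr()
print(corr.round(2))

# Mean of the off-diagonal coefficients: one summary of how strongly
# improvement on one measure tracks improvement on the others.
off_diag = corr.values[~np.eye(len(corr), dtype=bool)]
print(off_diag.mean())
```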
Impact Analyses. Using multiple regression, we estimated the association between changes in quality measures and the IPG penetration rate, the percentage of providers in each state that
participated with QIOs as Identified Participant Groups (IPGs). The percentage of providers that
each QIO was to recruit as IPGs under the Eighth SOW was a function of the numbers of
providers in each state. QIOs in states with few providers were expected to work with a high
percentage of providers; QIOs in large states were expected to work with only a fraction of all
providers. We can reasonably assume IPG penetration to be independent of underlying provider
capabilities or likelihoods of good or bad outcomes, and thus to represent a measure of QIO
intervention that is not confounded by unobserved provider characteristics or selection bias.
(That is, when the very providers with the strongest interests and capacity for quality
improvement are those most likely to sign up to work with QIOs, we run the risk of mistakenly
attributing their improvements to the QIO, when the QIO might in fact have had very little
effect.) IPG penetration rates ranged from 8 percent to 100 percent for nursing homes, 14 percent
to 55 percent for home health agencies, and 12 percent to 100 percent for hospitals. Where
possible, we controlled for changes in quality measures that were not the focus of QIO
interventions, thus controlling for underlying trends separate from any QIO effects. To assess the
impacts of selected statewide efforts by QIOs to improve specific HHA measures for all
providers in the state (as opposed to efforts focused on IPG providers), we compared states in
which QIOs had chosen to pursue such projects with states in which QIOs had not.
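The following sketch illustrates the kind of state-level regression described above, using the statsmodels formula interface. The variable names are hypothetical, and the report's actual models (which include both OLS and SUR specifications) may differ in their controls.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical state-level file: one row per state with the change in a
# QIO-targeted measure, the IPG penetration rate (percent of providers
# that are IPGs), and the change in a non-focus measure as a control
# for underlying trends, as described above.
states = pd.read_csv("state_level_changes.csv")

# The coefficient on ipg_penetration is the impact estimate of interest.
model = smf.ols(
    "change_pressure_ulcers ~ ipg_penetration + change_nonfocus",
    data=states,
).fit()
print(model.params["ipg_penetration"], model.pvalues["ipg_penetration"])
```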
Case Study Analysis
We selected for study three states that started out with low SCIP measures and had large
improvements, and two states that started out with high SCIP measures and had modest
improvements. Following a prespecified discussion guide, we interviewed national experts in
hospital quality improvement as well as staff in both QIOs and state hospital associations in the
selected states.

Provider Satisfaction Survey Analysis
We analyzed a nationally representative survey of all providers on their perceptions of
QIOs, conducted by Westat, Inc. in mid-2007, under contract to CMS. We analyzed results for
nursing homes, HHAs, and hospitals, categorizing providers into (1) IPGs, (2) non-IPGs that
reported receiving quality improvement assistance from their local QIO, and (3) non-IPGs that
said they had received no such help.
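The three-way grouping is straightforward to express in code; the sketch below uses hypothetical variable names, not Westat's actual survey fields.

```python
import numpy as np
import pandas as pd

# Hypothetical survey extract with an IPG flag and a self-reported
# "received QIO assistance" item.
svy = pd.read_csv("provider_survey.csv")

# Three analysis groups: IPGs, non-IPGs that reported QIO assistance,
# and non-IPGs that reported none.
svy["group"] = np.select(
    [svy["is_ipg"], svy["received_qio_help"]],
    ["IPG", "non-IPG, assisted"],
    default="non-IPG, no assistance",
)

# Tabulate the groups by provider type before comparing satisfaction.
print(svy.groupby(["provider_type", "group"]).size())
```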
SUMMARY OF FINDINGS
Descriptive Analyses of Medicare Compare Data
National averages for nearly all of the quality measures studied showed improvement. In
national averages of provider-level changes in quality measures, three of the four nursing home
measures (the exception being the percentage of residents with worsening depression or anxiety),
all home health measures, and both of the hospital indexes improved.
Most states also showed improvement in most measures. State-level averages of the
changes in the nursing home measures showed that all states improved in the pain measure, all
but two improved in the physical restraint measure, and more than two-thirds improved in the
pressure ulcer measure, but there was little movement in the anxiety/depression measure. Forty-eight states improved in the home health patient functioning index and 35 improved in the acute
care hospitalization (ACH) measure. All states improved in both of the hospital indexes.
Improvements in most states on the nursing home and home health measures were modest
(generally one or two percentage points); the improvements in the hospital index scores were
somewhat larger. No states showed noteworthy worsening in any of the quality measures.
In general, few states had large improvements in measures for more than one provider type.
One state did well on both nursing home and home health measures, and two did well on both
nursing home and hospital measures. Two other states did well on both home health and hospital
measures. Only one state did well in all three sets of measures.
Correlational Analyses of Medicare Compare Data—Providers and States
Home health agencies tended to do well across several measures; this was not true for
nursing homes and hospitals. For HHAs, the correlation coefficients for the seven patient
functioning measures were moderately sized, ranging from 0.26 to 0.57 with a mean of 0.43. The
patient functioning measures did not correlate highly with the discharge to community or ACH
measures, however. The four nursing home measures had small correlation coefficients (all less than 0.06); the correlation between the two hospital indexes was modest (around 0.25).
States also tended to do well on several home health measures, but not across several
nursing home measures nor across several hospital measures. State-level correlations exhibited
patterns similar to the provider-level ones described above.


Analysis of Medicare Compare Data—Impact Results
Higher rates of IPG penetration are associated with larger improvements in quality
measures for nursing homes and home health agencies, but not for hospitals. For example, for
the nursing home measures, a one percentage point increase in the IPG penetration rate (the
percentage of all providers in a state that are IPGs) was associated with a roughly 0.03
percentage point greater reduction in pressure ulcers, a 0.02 percentage point greater reduction in
physical restraints, and a 0.01 percentage point greater reduction in chronic pain (with no
association for the depression or anxiety measure). Among HHAs, a one percentage point
increase in the IPG rate was associated with a 0.13 percentage point larger reduction in the ACH
measure. However, there was no significant association between IPG penetration and the hospital
appropriate care measure (ACM) index.
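Read as a state-level linear model, the specification behind these coefficients is roughly the following (our sketch, not the report's exact estimating equation). Plugging in the pressure ulcer coefficient shows the implied difference between the least and most penetrated nursing home states (8 and 100 percent):

```latex
\Delta Q_s = \alpha + \beta \, \mathrm{IPG}_s + \varepsilon_s,
\qquad
\hat{\beta}_{\text{pressure ulcers}} = -0.034
\;\Rightarrow\;
(100 - 8) \times 0.034 \approx 3.1 \ \text{percentage points}
```

where $\Delta Q_s$ is the change in state $s$'s quality measure and $\mathrm{IPG}_s$ is its IPG penetration rate.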
QIO statewide efforts on home health measures are associated with larger improvements.
States in which the QIO opted to focus on the home health measure of dyspnea saw a nearly two
percentage point greater improvement in dyspnea than states that did not. Likewise, states
focusing on management of oral medications had a one percentage point larger improvement in
that measure than other states.
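A rough two-group comparison of this kind can be sketched as follows. The report's estimates come from regression models (including SUR), so this t-test is only an illustrative first pass, and the variable names are hypothetical.

```python
import pandas as pd
from scipy import stats

# Hypothetical state-level file: change in the dyspnea measure plus a
# flag for whether the state's QIO selected dyspnea for statewide work.
states = pd.read_csv("state_hha_changes.csv")

focus = states.loc[states["selected_dyspnea"], "change_dyspnea"]
other = states.loc[~states["selected_dyspnea"], "change_dyspnea"]

# Difference in mean improvement between the two groups of states,
# with a two-sample Welch t-test as a rough significance check.
print(focus.mean() - other.mean())
print(stats.ttest_ind(focus, other, equal_var=False))
```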
Table 1 summarizes these findings for the impact analyses and also summarizes those for
the descriptive analyses discussed above. Although nearly all measures improved nationwide,
not all improvements could be tied to QIO efforts.
Case Study Results
The quality improvement environment is complex, and both QIO and non-QIO activities
might contribute to observed improvements. Among the three states that started out with low
SCIP measures and had large improvements and the two states that started out with high SCIP
measures and had modest improvements, QIOs undertook a wide range of activities to improve
perioperative care. These included engaging the dominant local health system to foster
improvement; convening a hospital collaborative; and pursuing a complex intervention of
intensive site visits, regional in-person meetings, and a letter from an influential surgeon to all
surgeons statewide to encourage support for the SCIP initiative. States also noted many other
concurrent influences, however, including the 100,000 Lives Campaign conducted by the Institute for Healthcare Improvement (IHI); the Reporting Hospital Quality Data for Annual Payment
Update (RHQDAPU) initiative; public reporting of the SCIP measures; and quality improvement
efforts by state hospital associations, health system organizations, and the Voluntary Health
Association (VHA). Finally, many of the states noted how Seventh SOW activities and their
results could affect both the starting points for the Eighth SOW and the effectiveness of Eighth
SOW interventions. These case studies demonstrate the complexity of the quality improvement
environment and the challenges of disentangling QIO effects from the multitude of other
influences, as well as distinguishing the effects of any specific SOW from those of preceding SOWs.
The case studies also pointed out specific QIO interventions that might warrant further study in
the evaluation of the Ninth SOW.


TABLE 1

SUMMARY OF STATISTICALLY SIGNIFICANT RESULTS FROM DESCRIPTIVE AND IMPACT ANALYSES OF MEDICARE COMPARE DATA

                                                     Average        Estimated Impact     Estimated Impact of
                                                     Improvement    of IPG Penetration   Statewide Efforts

Nursing Homes
  Pressure Ulcers                                    -0.6***        -0.034***            n.a.a
  Physical Restraints                                -2.0***        -0.022***            n.a.
  Worsening Depression/Anxiety                        0.1**         --b                  n.a.
  Moderate to Severe Pain                            -1.5**         -0.011**             n.a.

Home Health Agencies
  Acute Care Hospitalization                         -0.5***        -0.13***             n.a.
  Improvement in Bathing                              2.4***        n.a.                 n.a.
  Improvement in Transferring                         1.4***        n.a.                 n.a.
  Improvement in Ambulation/Locomotion                5.4***        n.a.                 n.a.
  Improvement in Management of Oral Medications       6.5***        n.a.                 1.3***
  Improvement in Pain Interfering with Activity       3.0***        n.a.                 --b
  Improvement in Dyspnea                              2.7***        n.a.                 1.8***
  Improvement in Urinary Incontinence                 1.7***        n.a.                 n.a.
  Discharge to Community                              0.6***        n.a.                 n.a.

Hospitals
  ACM Index                                           7.1***        --b                  n.a.
  SCIP Index                                         16.6***        n.a.                 n.a.

Source: Medicare Compare data.

Note: IPG penetration is the proportion of providers in a state who are identified participant group (IPG) providers working with their QIO to improve quality. Estimated impacts are the coefficients on the IPG penetration variable from multiple regression analyses in which changes in the quality measures are regressed upon the IPG penetration rate. Estimates of statewide efforts (relevant only for home health measures) compare states in which QIOs chose to work with all home health agencies in the state on improving a quality measure to states in which QIOs did not choose to do so.

a Statewide analyses could be conducted only for selected measures for home health agencies—this was the only setting in which states could choose a quality measure on which to focus in a statewide project, and in which the design of the Eighth SOW permitted these analyses.

b Statistically insignificant result.

*Significantly different from zero at the .10 level, two-tailed test.
**Significantly different from zero at the .05 level, two-tailed test.
***Significantly different from zero at the .01 level, two-tailed test.


Provider Satisfaction Results
IPG providers are highly satisfied with QIOs, non-IPGs that received no QIO assistance
are least satisfied, and non-IPGs that received QIO assistance fall between these two. In the
provider survey, all three types of IPG providers (nursing homes, HHAs, and hospitals) were
highly satisfied with their state QIOs and found QIO assistance valuable. Non-IPG providers that
had received no QIO assistance generally showed little awareness of their QIOs and of CMS
initiatives such as pay for performance (P4P); they were generally neutral toward their QIOs
(neither satisfied nor dissatisfied, and neither strongly agreeing nor disagreeing with various
positive statements about QIOs). Respondents also named many different sources of information
for quality improvement, confirming the case study findings that there are many influences on
providers’ quality of care.
A substantial portion of non-IPG providers received QIO assistance, complicating
comparisons between IPGs and non-IPGs. The percentages of non-IPG providers that reported
receiving assistance from their QIOs were 71 percent for nursing homes, 82 percent for HHAs,
and 94 percent for hospitals. This means that the control, comparison, or “untreated” provider
group has actually received some of the intervention being evaluated (QIO assistance) and thus
complicates any interpretation of the IPG indicator as a “treatment” indicator. If this engagement
of non-IPG providers with QIOs improves their performance, this spillover or contamination
effect will lead to underestimates of QIOs’ real impact.
CONCLUSIONS
Our limited assessment of the Eighth SOW found overall improvements in quality measures
over the period of the Eighth SOW, and some evidence that QIO efforts contributed to these
improvements. Interviews of key stakeholders made clear the complicated context in which
QIOs operate and highlighted specific potential QIO interventions for further study. Providers,
especially IPG providers, were generally quite satisfied with QIOs, although we found that the
distinction between IPG and non-IPG providers is blurred in practice. These preliminary findings point out the
importance of detailed data, both qualitative and quantitative, for the upcoming evaluation of the
Ninth SOW, and will help guide us in the design of the evaluation.


I. INTRODUCTION

The Quality Improvement Organization (QIO) Program is a key component of the Centers for Medicare & Medicaid Services' (CMS) agenda for assuring and improving quality of care for Medicare beneficiaries. As required by Sections 1152 through 1154 of the Social Security Act, CMS contracts with a nationwide network of independent QIOs to help health care providers deliver high-quality care to Medicare beneficiaries. 1 The contracts last for three years, with each contract cycle called a scope of work, or SOW. The Eighth SOW ended in July 2008, and the Ninth SOW began on August 1, 2008. With budgets of roughly $1.2 billion and $1.1 billion for the Eighth and Ninth SOWs, respectively, the QIO program is the single largest investment in quality improvement infrastructure—public or private—in the nation.
CMS has contracted with Mathematica Policy Research, Inc. (MPR) to conduct an independent, limited study of the Eighth SOW, followed by the design and implementation of a full
evaluation of the Ninth SOW. This report presents the findings of our study of the Eighth SOW.
In conducting this study of the Eighth SOW, we encountered the same statutory and regulatory restrictions on the release of QIO data that others have identified as a major challenge to a full evaluation of the QIO Program (Institute of Medicine 2006; U.S. Government Accountability Office 2007; Sutton et al. 2007). These restrictions prohibit a QIO from disclosing any data identifying individual health care providers to any outside organization (including CMS or even other QIOs) except under a very limited set of circumstances (Section 1160 of the Social Security Act; 42 CFR Part 480; Chapter 10 of the QIO Manual). The proscription thus applies to information on whether a provider was working with the QIO on any quality improvement projects. Our Eighth SOW study therefore uses either publicly available or aggregated and de-identified data. As described later, we are working with CMS and the QIO community on how we can access identifiable provider-level QIO data in a way that satisfies the regulatory requirements.

1 The current report focuses on the impacts of the QIO Program on quality improvement. Other missions of the QIO Program include protecting beneficiaries' rights by reviewing and investigating complaints and appeals, and protecting the Medicare Trust Funds by ensuring that Medicare pays only for services and goods that are reasonable, necessary, and provided in the most appropriate setting.
A. BACKGROUND AND POLICY CONTEXT
The importance of the QIO Program’s functions and the magnitude of its budget make
evaluation of its effectiveness essential. Understanding the program’s overall effectiveness and
identifying its most successful components or activities are prerequisites to improving the
program as a whole. Moreover, given the influence of the Medicare program on the American
health care system, the QIO Program can lead to better care not only for Medicare beneficiaries
but for all Americans.
In the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (P.L. 108-173), Congress mandated the Institute of Medicine (IOM) to conduct an overview of the QIO Program, including a review of "the extent to which quality improvement organizations improve the quality of care for Medicare beneficiaries" (Institute of Medicine 2006). Following an extensive review of scientific literature published between 1995 and 2005, the IOM concluded that "although the quality of care received by Medicare beneficiaries has improved somewhat, researchers have been unable to attribute these changes to the QIO program." The IOM could not determine whether this lack of evidence for QIO impacts was due to the methodological limitations of many of the studies reviewed, to the difficulty of disentangling the effects of QIO activities from the many other concurrent quality improvement efforts, or to a true lack of program effectiveness (IOM 2006). The IOM report also recommended that CMS periodically

commission independent, external evaluations of the QIO Program, and in its 2006 Report to
Congress responding to the IOM’s recommendations, CMS agreed on the need for strengthened
methods of program evaluation.
At about the same time that IOM was preparing its report, the Assistant Secretary for
Planning and Evaluation (ASPE) was studying options for evaluating the effectiveness of the
QIO Program. ASPE contracted with the National Opinion Research Center (NORC) to develop
a richer inventory and description than previously available of QIOs’ activities and strategies,
and to assess alternative designs for potential future evaluations of the QIO Program. NORC’s
literature review for this project on the impacts of the QIO program reached the same
conclusions as IOM’s, namely, that the literature is ambiguous on the effectiveness of the
program and that previous studies have suffered from a variety of methodological problems.
NORC’s report concluded with several design options and recommendations for further research
on the QIO Program (Sutton et al. 2007).
B. BRIEF DESCRIPTION OF THE EIGHTH SCOPE OF WORK
QIO Program activities in the Eighth SOW were carried out by 43 QIO contractors under 53
contracts (one for each of the 50 states, the District of Columbia, Puerto Rico, and the U.S.
Virgin Islands). 2 The QIOs’ mandate was to help health care providers in their catchment areas
improve quality of care through technical assistance—assisting them with “root-cause analysis,
implementation of interventions and systems changes, … knowledge transfer, … data collection,
and [coordination of] efforts with other stakeholders” (IOM 2006).

2 Throughout the remainder of this report we will use the term "QIO" interchangeably with "state," even though a few QIOs in the Eighth SOW held contracts for more than one state, and the District of Columbia

With the data available to us, we were able to study three of the quality improvement areas
that the QIOs worked on during the Eighth SOW: (1) nursing homes (NHs), (2) home health
agencies (HHAs), and (3) hospitals. 3 For each of these areas, the QIOs were to work with two
main groups of providers: (1) all providers statewide on certain quality measures; and (2)
“identified participant group providers” (IPGs or IPs), which were recruited by the QIOs and
voluntarily agreed to receive intensive technical assistance and engage in a number of quality
improvement projects. 4 The SOW specified the numbers of IPG providers the QIOs were
expected to recruit, which were based mainly on the number of providers in a state.
1. Nursing Homes

a. Statewide Activities

Each QIO had to set statewide improvement targets for two required quality measures—pressure ulcers and use of physical restraints—but could choose to set targets for more. The QIOs were also to assist all nursing homes statewide in setting their own improvement targets for at least the same two quality measures, and for more if the nursing homes desired. The assistance consisted of information on how to set targets and referral to the Nursing Home STAR (Setting Targets, Achieving Results) website (http://www.nhqi-star.org/STAR_index.aspx) for submitting targets.

3 We will not study the subtasks of Critical Access & Rural Hospitals, Physician Practices, Underserved Populations, and Part D Prescription Drug Benefits, either because they were not a focus of our evaluation or because data are not available.

4 QIOs were also required to help any providers that requested assistance in improving quality in any of the topics covered by the SOW. We call providers that were not IPGs "non-IPGs."

b. IPGs

IPG nursing homes could be recruited by QIOs through channels such as previous collaborations in prior SOWs or professional or trade associations. 5 The QIOs were to work with the IPG facilities on four measures: (1) depressive symptoms management, (2) pain management, (3) pressure ulcers among high-risk residents, and (4) physical restraints. 6 The required number of IPG nursing homes was: in states with 30 or fewer nursing homes, all nursing homes; in states with 31 to 300 nursing homes, 30 to 45 nursing homes; and in states with more than 300 nursing homes, 10 to 15 percent of homes.
2. Home Health Agencies

a. Statewide Activities

The Eighth SOW listed statewide improvement targets for all HHA quality measures. All QIOs had to work on the Acute Care Hospitalization measure (the percentage of patients in home health care who had to be admitted to the hospital) and one other measure of their choosing.

b. IPGs

QIOs were also to work with individual IPG HHAs on the Acute Care Hospitalization measure (in addition to the QIOs' statewide efforts on this measure) plus one other measure selected by each agency. 7 Table I.1 shows the required numbers of IPG home health agencies for QIOs to recruit. 8

5 There was actually a second nursing home IPG, called IPG2, which consisted of a much smaller group (a minimum of one to three per state, depending on the number of facilities in the state) of "persistently poor performing nursing homes" that was not to overlap with IPG1. QIOs were to identify these nursing homes in collaboration with state survey agencies.

6 In addition, QIOs were also expected to work with IPG nursing homes on staff/resident satisfaction, employee turnover, resident experience of care/satisfaction, and staff experience of care/satisfaction.

7 There was also a second IPG, called Systems Improvement and Organizational Culture Change (SIOC), with which QIOs were to help implement and use home telehealth and promote organizational culture change. Interested HHAs could choose to participate in the clinical IPG only (focusing on the clinical quality measures), the SIOC IPG only, or both.

8 There were also requirements for the distribution of small, medium, and large agencies.

TABLE I.1

REQUIRED NUMBER OF HOME HEALTH AGENCY IPGs, EIGHTH SOW

Number of HHAs in the State    Number of IPG HHAs QIOs Were Required to Work With
14 or fewer                    6
15 to 25                       8
26 to 45                       10
46 to 65                       14
66 to 90                       16
91 or more                     20 percent

HHA = home health agency; IPG = identified participant group providers; SOW = scope of work; QIO = quality improvement organization.
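These recruitment requirements are a simple step function of state size. The sketch below encodes Table I.1; the rounding rule for the "20 percent" band is an assumption on our part, as the SOW's exact rule is not stated here.

```python
import math

def required_hha_ipgs(n_hhas: int) -> int:
    """Required number of IPG home health agencies, per Table I.1.

    The "20 percent" row for states with 91 or more agencies is
    interpreted here as 20 percent rounded up (an assumption).
    """
    if n_hhas <= 14:
        return 6
    if n_hhas <= 25:
        return 8
    if n_hhas <= 45:
        return 10
    if n_hhas <= 65:
        return 14
    if n_hhas <= 90:
        return 16
    return math.ceil(0.20 * n_hhas)

print(required_hha_ipgs(50))   # 14
print(required_hha_ipgs(120))  # 24
```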

3. Hospitals

a. Statewide Activities

The QIOs were to work with all acute care hospitals to (1) increase hospital participation in clinical performance measurement reporting and (2) improve hospital performance on clinical performance measurement results. For the reporting subtask, the QIOs were to encourage hospitals statewide to submit data on a set of 22 process-of-care measures for four clinical conditions: (1) acute myocardial infarction, (2) heart failure, (3) pneumonia, and (4) surgical care.
b. IPGs

The QIOs were required to recruit and work with two main IPGs: (1) one focused on the first three clinical conditions listed above—acute myocardial infarction, heart failure, and pneumonia—also called the Appropriate Care Measure or ACM IPG; and (2) one focused on the fourth condition, surgical care (the Surgical Care Improvement Project or SCIP IPG). 9 Each IPG was to be the same size. Table I.2 shows the required IPG sizes.

9 There was a third hospital IPG, the organization culture IPG, whose membership could overlap with those of the first two IPGs.
TABLE I.2

REQUIRED NUMBER OF HOSPITAL IPGs, EIGHTH SOW

Number of Hospitals in the State/Jurisdiction    ACM IPG          SCIP IPG
6 or fewer                                       All hospitals    All hospitals
7 to 40                                          6                6
41 to 240                                        15 percent       15 percent
241 or more                                      36               36

ACM = appropriate care measure; IPG = identified participant group providers; SCIP = Surgical Care Improvement Project; SOW = scope of work.

4. QIOs' Activities

Both the IOM and NORC noted that the activities that QIOs pursue to achieve both statewide and IPG-level quality improvement targets are not well documented (Institute of Medicine 2006; Sutton et al. 2007). The two reports developed descriptions of QIO activities only by sifting through many websites, brochures, and so on, and by conducting telephone interviews and site visits. The reports described the wide range of activities that QIOs perform to accomplish their quality improvement objectives—developing and disseminating educational publications to providers; conducting conferences or workshops for providers in person, by telephone, or online; organizing provider quality improvement collaboratives; and providing one-on-one technical assistance consultations.
C. OVERVIEW OF STUDY COMPONENTS AND RESEARCH QUESTIONS
We pursued three main study components: (1) an analysis of Medicare Compare data, (2) a case study analysis of telephone interviews with key informants, and (3) an analysis of de-identified data from a national survey of providers. The Medicare Compare datasets (Nursing Home Compare, Home Health Compare, and Hospital Compare) are publicly available and contain data on the performance of individual providers on quality measures that are also used by the QIO program. Under contract to CMS, Westat surveyed providers in 2007 on their perceptions of their local QIOs. We outline below the broad research questions addressed by each of these components.
1. Analysis of Medicare Compare Data

Our analysis of Medicare Compare data included both descriptive analyses and impact analyses. Some of the analyses study all providers nationally; others aggregate providers to the state level and present these state-level results.

a. Descriptive Study Questions
1. Have quality-of-care measures improved nationwide? Given the nationwide
scope of the Medicare program and of the quality measure reporting system put in
place over the past several years, it is important to document that quality measures
are, in fact, improving over time, regardless of whether we can tie any such changes
to the QIO Program.
2. Did most states improve over the Eighth SOW? The QIO Program operates
through state-level organizations, and it is important to know whether most states
are, in fact, improving.
3. Does provider- and state-level performance in one set of measures correlate
with performance in other measures? This question is important for two reasons.
First, the answer to this question might be useful in designing and targeting quality
improvement initiatives. If several different sets of measures are highly correlated
with one another, providers and states with low performance in one group of
measures will tend to perform poorly in all measures. Quality improvement efforts
will then need to (1) focus on that limited set of poor performers and (2) address the
entire range of measures. Conversely, if there is little correlation, quality
performance efforts will need to (1) work with a larger set of providers and (2) assist
each provider with the specific handful of measures with which a provider needs
help. Second, if different measures are highly correlated, high performers in one set
of measures will also tend to do well across all measures, and studying these high
performers might help to identify best practices that can then be disseminated to
providers and states.


b. Impact Analyses: Do QIO efforts appear to lead to greater quality improvements?
This is the basic program impact evaluation question—whether or not a specific program or
initiative leads to the intended favorable effects. Lacking provider-level data on providers’
participation with QIOs, we addressed this question using cross-state variations in measures of
QIO activities.
2. Mechanisms and Case Study Analysis for Improvements in Hospital SCIP Measures

For five states with two contrasting patterns of state-level improvement over the Eighth SOW in the hospital SCIP measures, we explored possible QIO contributions to these patterns. We selected states that either had (1) relatively poorer baseline performance but among the largest improvements (three states) or (2) high baseline performance but relatively small improvements (two states). We examined the following questions:
1. Did states’ rates of improvement appear to be associated with QIO actions?
2. Were there differences in the timing of how hospitals participated in or viewed other
national-level surgical infection-prevention initiatives that might help explain the
different pattern of improvement?
3. Did hospitals in the states with poorer baseline performance face barriers to
improvement that were overcome?
4. What other factors might explain the different patterns of improvement in the two groups of states (those with poorer baseline performance but larger improvements versus those with strong baseline performance and modest improvements)?
5. What factors do the QIOs and hospital associations say are associated with
improvements at the hospital level?
3. Analysis of Provider Survey Data

Finally, the analysis of provider survey data addressed questions on providers' awareness and knowledge of their local QIOs and providers' perceptions of their QIOs:
1. Are providers aware of their QIOs, and have they worked with a QIO?


2. Are providers satisfied with their QIOs, and do they view QIO services and
information as valuable?
3. What types of training or assistance do providers desire from their QIOs?
4. Besides QIOs, what other resources on quality improvement do providers consult
and which do they prefer?
D. CHALLENGES TO THE CURRENT ASSESSMENT OF THE EIGHTH SOW
Although there were limitations to each of the study components, we note two major
challenges to the impact analyses of the Medicare Compare data. First, our lack of individual
provider-level data on which providers worked or did not work with their QIOs precluded many
analytic approaches for isolating program impacts from quasi-experimental data. We were able
to develop estimates of QIO impacts from state-level variations in QIO efforts, however, that are
valid under certain assumptions. Second, as described in the chapter on the provider survey
results (Chapter IV), a substantial proportion of non-IPGs reported receiving assistance from
QIOs, in effect “contaminating” the group of providers against which the IPGs are being
compared. This contamination complicates interpretation of estimated IPG and non-IPG effects.
E. THE REMAINDER OF THIS REPORT
Chapter II presents the results of the analyses of the Medicare Compare data. Chapter III
describes our findings from the mechanisms and case study analysis. Chapter IV contains the
descriptive analysis of provider satisfaction data. Chapter V summarizes our conclusions. Each
chapter briefly outlines the study methods for that component; the appendixes contain detailed
descriptions of study methodology.


II. RESULTS OF DESCRIPTIVE AND IMPACT ANALYSES

In this chapter we present results by provider type for each of the research questions for the
descriptive and impact analyses of the Medicare Compare data. We analyzed changes in quality
measures (that is, the difference between follow-up values collected toward the end of the Eighth
Scope of Work (SOW) and initial values collected near the beginning of the Eighth SOW
contract). 1 There were four quality measures for nursing homes; nine measures for home health
agencies (seven measures of patient functioning, one measure for whether the home health
episode ended with an acute care hospitalization (ACH), and one measure for whether the home
health episode ended with a discharge from home health care with the patient at home); and 12
measures of hospital care (10 Appropriate Care Measures [ACMs] for treatment of heart attacks,
heart failure, and pneumonia, and two measures of perioperative care from the Surgical Care
Improvement Project or SCIP). For home health agencies, the seven patient functioning
measures were combined into a single index. For hospitals, the 10 ACM measures were averaged into one index and the two SCIP measures into another. Table II.1 lists the measures.
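The index construction is a simple per-hospital average of each measure set. The sketch below illustrates it with hypothetical column names (the handling of missing component measures is not shown here).

```python
import pandas as pd

# Hypothetical hospital-level file with the 12 study measures; the
# column names are stand-ins, not the actual Hospital Compare names.
hosp = pd.read_csv("hospital_measures.csv")

acm_cols = [
    "ami_aspirin_arrival", "ami_aspirin_discharge", "ami_acei_arb",
    "ami_bb_discharge", "ami_bb_arrival", "hf_lvs_eval", "hf_acei_arb",
    "pn_oxygenation", "pn_vaccination", "pn_antibiotic_4hr",
]  # the 10 ACM measures
scip_cols = ["scip_abx_1hr_pre_op", "scip_abx_stopped_24hr"]

# Each index is the simple average of its component measures.
hosp["acm_index"] = hosp[acm_cols].mean(axis=1)
hosp["scip_index"] = hosp[scip_cols].mean(axis=1)
```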
Provider-level results for nursing homes and hospitals were weighted by the number of patients in each facility (information contained in the Nursing Home Compare and Hospital Compare datasets). Home health agency results are not weighted, because Home Health Compare contains no information on numbers of patients served or numbers of home health episodes (each home health agency is thus given a weight of one). We calculated state-level descriptive results by averaging the provider-level results, weighted by number of patients in the case of nursing homes and hospitals. Appendix A contains a detailed description of the study sample, study measures, and analytic approaches.

1 The baseline and follow-up periods for the three provider settings varied because of differences in the data collection schedules for Medicare Compare: for nursing homes, data collection occurred in the second quarter of 2005 and the first quarter of 2008; for home health agencies, from September 2004 through August 2005 and from March 2007 through February 2008; and for hospitals, from July 2004 through June 2005 and from October 2006 through September 2007.
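The weighting convention can be expressed compactly; in the sketch below, the function and column names are hypothetical.

```python
from typing import Optional

import numpy as np
import pandas as pd

def state_average_change(group: pd.DataFrame, weight_col: Optional[str]) -> float:
    """Average provider-level change within one state.

    Nursing homes and hospitals are weighted by patient counts; home
    health agencies get equal weight (weight_col=None) because Home
    Health Compare reports no patient or episode counts.
    """
    weights = group[weight_col] if weight_col else None
    return np.average(group["change"], weights=weights)

# Usage with hypothetical frames, each holding a state code and a
# per-provider "change" column:
# nh.groupby("state").apply(state_average_change, weight_col="n_patients")
# hha.groupby("state").apply(state_average_change, weight_col=None)
```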

TABLE II.1

STUDY QUALITY MEASURES

Nursing Homes

Percent of High-Risk Long-Stay Residents who have Pressure Ulcers
    Pressure sores are skin wounds that usually develop on bony parts of the body. They may be painful, and may take a long time to heal or cause other complications, such as skin and bone infections.

Percent of Long-Stay Residents who were Physically Restrained
    A physical restraint is any device, material, or equipment that keeps a resident from moving freely. A resident who is restrained daily can become weak and develop other medical complications.

Percent of Long-Stay Residents who Experience Depression
    Depression is a medical problem of the brain that can affect how you think, feel, and behave. Anxiety is excessive worry and can include trembling, muscle aches, and irritability. Nursing home residents are at a high risk for developing depression and anxiety for many reasons, such as loss of a spouse, separation from family members, illness, chronic pain, difficulty adjusting to the nursing home, and frustration with memory loss.

Percent of Long-Stay Residents who Experience Chronic Pain
    Residents in pain may become depressed or have an overall poor quality of life. In most cases, a resident in pain can be made more comfortable.

Home Health Agencies

Acute Care Hospitalization
    Percentage of patients who were admitted to the hospital.

Improvement in Bathing
    Percentage of patients who get better at bathing.

Improvement in Transferring
    Percentage of patients who get better at getting in and out of bed.

Improvement in Ambulation/Locomotion
    Percentage of patients who get better at walking or moving in a wheelchair safely.

Improvement in Management of Oral Medications
    Percentage of patients who get better at taking their medicines correctly (by mouth).

Improvement in Pain Interfering with Activity
    Percentage of patients who have less pain when moving around.

Improvement in Dyspnea
    Percentage of patients whose level of shortness of breath has improved.

Improvement in Urinary Incontinence
    Percentage of patients who get better at getting to and from the toilet.

Discharge to Community
    Percentage of patients who are discharged and continue to live at home.

Hospitals

Heart Attack (ACM Measures)

Aspirin at Arrival
    Patients without aspirin contraindications who received aspirin within 24 hours before or after hospital arrival.

Aspirin Prescribed at Discharge
    Patients without aspirin contraindications who are prescribed aspirin at hospital discharge.

ACE Inhibitor or ARB for LVSD
    Patients with left ventricular systolic dysfunction (LVSD) and without both ACE inhibitor and ARB contraindications who are prescribed an ACE inhibitor or ARB at hospital discharge. For purposes of this measure, LVSD is defined as chart documentation of an LVEF less than 40% or a narrative description of left ventricular systolic (LVS) function consistent with moderate or severe systolic dysfunction.

Beta Blocker Prescribed at Discharge
    Patients without beta blocker contraindications who are prescribed a beta blocker at hospital discharge.

Beta Blocker at Arrival
    Patients without beta blocker contraindications who received a beta blocker within 24 hours after hospital arrival.

Heart Failure (ACM Measures)

Evaluation of LVS Function
    Patients with documentation in the hospital record that LVS function was evaluated before arrival, during hospitalization, or is planned for after discharge.

ACE Inhibitor or ARB for LVSD
    Patients with LVSD and without both ACE inhibitor and ARB contraindications who are prescribed an ACE inhibitor or ARB at hospital discharge. For purposes of this measure, LVSD is defined as chart documentation of an LVEF less than 40% or a narrative description of LVS function consistent with moderate or severe systolic dysfunction.

Pneumonia (ACM Measures)

Oxygenation Assessment
    Patients whose arterial oxygenation was assessed by arterial blood gas or pulse oximetry within 24 hours prior to or after hospital arrival.

Pneumococcal Vaccination
    Patients, age 65 and older, who were screened for pneumococcal vaccine status and vaccinated prior to discharge, if indicated.

Initial Antibiotic Received within 4 Hours of Hospital Arrival
    Patients who received their first dose of antibiotics within 4 hours after arrival at the hospital.

Surgical Care Improvement (SCIP)/Surgical Infection Prevention

Receipt of Prophylactic Pre-operative Antibiotic
    Percent of surgery patients who were given an antibiotic at the right time (within one hour before surgery) to help prevent infection.

Discontinuation of Prophylactic Pre-operative Antibiotic
    Percent of surgery patients whose preventive antibiotics were stopped at the right time (within 24 hours after surgery).

Source: Centers for Medicare and Medicaid Services. (2006, January). Medicare Quality Improvement Organization Program Priorities.

ACE = angiotensin-converting enzyme; ARB = angiotensin receptor blocker; LVEF = left ventricular ejection fraction; ACM = appropriate care measure.

A. HAS THE QUALITY OF CARE RECEIVED BY PATIENTS SERVED BY
MEDICARE PROVIDERS IMPROVED NATIONWIDE?
As detailed below, there was substantial improvement nationwide across the three provider
settings in the great majority of measures.
1. Nursing Homes

Three of the four nursing home change measures—reduction in the prevalence of pressure ulcers, reduction in the use of physical restraints, and reduction in the percentage of patients experiencing chronic pain—improved nationwide during the period covered by the Eighth SOW (Table II.2). A negative change in any of these measures is an improvement, as it means that a smaller percentage of patients experienced the adverse outcome in the follow-up period.
At the beginning of the SOW (baseline), relatively small proportions of patients—between 5
and 14 percent—were experiencing these outcomes. The largest reductions occurred for use of
physical restraints and prevalence of chronic pain. Those raw reductions were 2.0 and 1.5
percentage points respectively, from corresponding baseline levels of 6.5 percent and 5.4
percent. The reduction in pressure ulcer prevalence was substantially smaller—0.56 percentage
points—from a baseline level of 13.3. No reduction was observed in the fourth measure, the
percentage of residents experiencing worsening depression or anxiety. In fact, that figure
increased slightly, about 0.15 percentage points, from a baseline level of 13.6 percent. 2
2 All results were produced by weighting providers by the total number of residents in their facilities. Results are similar if each provider is weighted equally, regardless of size.


TABLE II.2

NATIONAL AVERAGES OF NURSING HOME QUALITY MEASURES, QIO 8TH SOW

Quality Measure—Percentage                                  Percentage of Providers  10th   25th           75th   90th
of Long-Stay Residents:      Baseline   Followup   Change   Improved/Declined(b)     Pctl.  Pctl.  Median  Pctl.  Pctl.

Who Have Pressure Ulcers(a)   13.33      12.77     -0.56***      50.7/43.0            -9     -5     -1      4      8
  (St. Dev.)                  (7.05)     (6.88)    (7.31)
  Number                      7,594      7,594     7,594

Who Were Physically            6.54       4.51     -2.04***      53.0/25.0            -9     -4     -1      1      3
Restrained
  (St. Dev.)                  (7.34)     (5.68)    (6.02)
  Number                      10,676     10,676    10,676

With Worsening                13.62      13.76      0.14**       47.0/47.0           -10     -5      0      5     11
Depression/Anxiety
  (St. Dev.)                  (8.47)     (8.83)    (8.94)
  Number                      10,537     10,537    10,537

Who Experience Moderate        5.40       3.85     -1.54***      56.5/29.2            -7     -3     -1      1      3
to Severe Pain
  (St. Dev.)                  (5.09)     (3.88)    (5.10)
  Number                      10,548     10,548    10,548

Source: Nursing Home Compare, second quarter of 2005 (baseline) and first quarter of 2008 (followup).

Note: Results weighted by each nursing home's total number of patients. Negative changes are improvements; for example, a -2.0 percentage point change in the restraint measure means a smaller proportion of patients were physically restrained at followup than at baseline. Percentiles refer to the distributions of the change variables.

**p<.05; ***p<.01 (two-tailed tests of change between baseline and follow-up).

a High-risk residents only.

b Figures do not sum to 100 percent because some providers had no change between baseline and followup. The percentage with no change is particularly high for physical restraints; most of those providers reported no use of physical restraints at either point in time. Rates in Nursing Home Compare are rounded to the nearest one percent.

QIO = quality improvement organization.
SOW = statement of work.
St. Dev. = standard deviation.

To provide a better sense of the magnitudes of change and the proportion of nursing homes experiencing change, Table II.2 also presents selected quantiles of the change distributions. For example, for the pain measure, 10 percent of nursing homes improved by seven percentage points or more, 25 percent improved by three points or more, half improved by at least one point (the median), 25 percent worsened by one percentage point or more, and 10 percent worsened by three percentage points or more. Because the median fell at a one percentage point improvement, slightly more than half (57 percent) of the facilities showed at least some improvement (that is, a negative change in the measure).
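These quantile figures can be reproduced mechanically from provider-level changes; a minimal sketch with illustrative (not actual) numbers:

    import numpy as np

    # Illustrative provider-level changes (followup minus baseline, in percentage
    # points); negative values are improvements for the nursing home measures.
    change = np.array([-9.0, -5.0, -3.0, -1.0, -1.0, 0.0, 1.0, 3.0, 4.0, 8.0])

    # Selected quantiles of the change distribution, as in Table II.2.
    for q in (10, 25, 50, 75, 90):
        print(f"{q}th percentile: {np.percentile(change, q):+.1f}")

    # Shares of providers improving (negative change) and declining (positive).
    print("improved:", (change < 0).mean(), "declined:", (change > 0).mean())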
2. Home Health Agencies

Nationwide, improvement occurred on all measures studied: acute care hospitalization (ACH), discharge to community from the home health agency (HHA), and the patient functioning measures (Table II.3). Each of the patient functioning measures represents the percentage of patients who improved in the specified area of functioning between admission and discharge from home health care. We analyzed these agency-level data as reported during the study baseline and follow-up periods (that is, toward the beginning and end of the Eighth SOW).3 To say that an HHA improved on the improvement-in-bathing measure, for example, means that a higher proportion of the agency's patients in the study follow-up period made progress in bathing while in home health care than did so in the study baseline period.
Average improvements were larger for the functioning measures than for ACH and discharge to community (note that lower scores are better for ACH, and higher scores are better for the remaining measures), ranging from 1.4 to 5.4 percentage points. Those scores represent gains of 0.11 to 0.58 standard deviations relative to the baseline distribution.
3 The study baseline period was September 2004 through August 2005. The study follow-up period was March 2007 through February 2008.


TABLE II.3

NATIONAL AVERAGES OF HOME HEALTH QUALITY MEASURES, QIO 8TH SOW (PERCENTAGES)

                                                             Percentage of Providers  10th   25th           75th   90th
Quality Measure               Baseline   Followup   Change   Improved/Declined(b)     Pctl.  Pctl.  Median  Pctl.  Pctl.

Acute Care Hospitalization     31.06      30.58     -0.48***      48.0/46.1           -11     -5      0      4      9
  (St. Dev.)                  (11.00)     (9.46)    (8.67)
  Number                       6,308      6,308     6,308

Improvement in Bathing         60.18      62.57      2.39***      59.0/36.9           -10     -4      2      8     15
  (St. Dev.)                  (10.60)    (10.87)   (10.95)
  Number                       5,767      5,767     5,767

Improvement in Transferring    50.26      51.69      1.43**       52.5/43.9           -13     -6      1      8     17
  (St. Dev.)                  (12.31)    (12.38)   (12.82)
  Number                       5,532      5,532     5,532

Improvement in Ambulation/     36.84      42.40      5.36***      72.1/23.9            -6      0      5     11     17
Locomotion
  (St. Dev.)                   (9.28)     (8.84)    (9.97)
  Number                       5,751      5,751     5,751

Improvement in Management      37.79      41.26      3.46***      62.0/34.4           -10     -3      3     10     17
of Oral Medications
  (St. Dev.)                  (10.48)    (10.98)   (11.59)
  Number                       5,329      5,329     5,329

Improvement in Pain            60.05      63.01      2.96***      58.0/37.9           -12     -4      2     10     19
Interfering with Activity
  (St. Dev.)                  (12.46)    (13.01)   (13.22)
  Number                       5,603      5,603     5,603

Improvement in Dyspnea         55.76      58.44      2.68***      57.9/38.3           -13     -5      2     10     19
  (St. Dev.)                  (13.60)    (14.07)   (13.82)
  Number                       5,539      5,539     5,539

Improvement in Urinary         45.68      47.39      1.70***      50.8/46.1           -17     -8      1     11     21
Incontinence
  (St. Dev.)                  (14.88)    (14.95)   (15.73)
  Number                       4,869      4,869     4,869

Discharge to Community         64.71      65.31      0.60***      49.0/45.9           -10     -5      0      5     12
  (St. Dev.)                  (11.89)    (10.28)    (9.21)
  Number                       6,298      6,298     6,298

Source: Home Health Compare, baseline data collected September 2004-August 2005 and follow-up data collected March 2007-February 2008.

Note: All home health agencies are weighted equally. With the exception of Acute Care Hospitalization, positive changes are improvements; for example, a 2 percentage point change in the improvement-in-bathing measure means that more patients improved in their ability to bathe during their home health care episode in the follow-up period than in the baseline period. In contrast, a negative change in the Acute Care Hospitalization measure is an improvement, as it means fewer patients were hospitalized in the follow-up period than in the baseline period. Percentiles refer to the distributions of the change variables.

*p<.10; **p<.05; ***p<.01 (two-tailed tests of change between baseline and follow-up).

b Figures do not sum to 100 percent because some providers had no change between baseline and followup. Rates in Home Health Compare are rounded to the nearest one percent.

QIO = quality improvement organization.
SOW = statement of work.
St. Dev. = standard deviation.

The improvements in ACH and discharge were each only about 0.5 percentage points—gains of roughly 0.05 standard deviations relative to the baseline distributions of each.

Although average performance improved on all measures, outcomes for a substantial number of HHAs did not improve during the Eighth SOW, as shown in the percentage improved/declined column of Table II.3. Nearly three-quarters of HHAs reported higher levels on the patient ambulation/locomotion measure at followup. But on each of the other measures, at least one-third of HHAs reported lower levels at followup than they had at baseline. For acute care hospitalization, 48 percent of HHAs improved, 46 percent declined, and 6 percent reported no change; as expected, the median change is around zero.
3. Hospitals

Table II.4 presents data on quality of care as measured by the 10-item ACM index and the two-item SCIP index—timely provision of antibiotics before and after surgery.
Substantial improvement occurred nationwide on both the ACM and SCIP indexes, with the
magnitude of the average change during the three-year period equaling roughly one standard
deviation of the baseline distribution (Table II.4). The variance in levels of performance on each
outcome also narrowed between baseline and followup. This is partially due to ceiling effects, as
performance on several of the component items surpassed 95 percent by the end of the SOW,
and performance on all items was at least 80 percent (see Appendix Table D.4 for descriptive
statistics on individual items). Improvement was widespread across providers, with nearly 95
percent reporting a better ACM index at followup than at baseline. The corresponding figure for
the SCIP index was 93 percent.
These numbers are calculated by weighting providers by the total number of relevant patients.

TABLE II.4

PROVIDER-LEVEL NATIONAL AVERAGES OF HOSPITAL QUALITY MEASURES, QIO 8TH SOW (PERCENTAGES)

Quality                                              Percentage of Providers   10th   25th           75th   90th
Measure       Baseline   Followup   Change           Improving/Declining(a)    Pctl.  Pctl.  Median  Pctl.  Pctl.

ACM            85.71      92.79      7.09***              94.8/4.6              1.5    4.7    9.4    85.0   98.0
  (St. Dev.)   (6.20)     (4.46)     (5.19)
  Number       2,377      2,377      2,377

SCIP           70.65      87.21     16.56***              93.1/6.0              0.0    6.5   15.0    27.5   61.0
  (St. Dev.)  (14.53)     (8.31)    (12.74)
  Number       1,257      1,257      1,257

Source: Hospital Compare, baseline data collected July 2004-June 2005 and follow-up data collected October 2006-September 2007.

Note: The ACM (Appropriate Care Measure) scale is a summative index of the percentage of relevant patients receiving each of 10 treatments, respectively, for acute myocardial infarction, heart failure, and pneumonia (see list in Table II.1). The SCIP (Surgical Care Improvement Project) index is the average of two measures: the percentage of cases in which an antibiotic was provided within an hour prior to surgical incision, and the percentage in which antibiotics were discontinued in a timely manner after the end of surgery. Providers are weighted by the average number of patients whose records were used to create the component outcome measures of each index.

***p<.01 (two-tailed tests).

a Figures do not sum to 100 percent because some providers had no change between baseline and followup. Rates in Hospital Compare are rounded to the nearest one percent.

ACM = appropriate care measure.
SCIP = Surgical Care Improvement Project.
SOW = statement of work.
St. Dev. = standard deviation.

In analyses in which hospitals are weighted equally, the levels at both baseline and followup are slightly (one or two percentage points) lower, suggesting that larger hospitals might have had somewhat higher performance.
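As a sketch of how a summative index like the ACM is formed and then averaged nationally, consider the following; the item names and counts are hypothetical, and the actual index uses the 10 items listed in Table II.1.

    import pandas as pd

    # Illustrative hospital-level data: each item is the percentage of relevant
    # patients receiving that treatment; n_items_avg is the average number of
    # patients across the component items (used as the provider weight).
    hospitals = pd.DataFrame({
        "aspirin_arrival":   [92.0, 85.0],
        "aspirin_discharge": [90.0, 80.0],
        "oxygenation":       [99.0, 97.0],
        # ... the remaining component items would appear here ...
        "n_items_avg":       [250, 60],
    })

    item_cols = ["aspirin_arrival", "aspirin_discharge", "oxygenation"]

    # Summative index: the mean of the component item percentages.
    hospitals["acm_index"] = hospitals[item_cols].mean(axis=1)

    # National average, weighting hospitals by their patient counts.
    w = hospitals["n_items_avg"]
    national_acm = (hospitals["acm_index"] * w).sum() / w.sum()
    print(round(national_acm, 2))  # ~92.44 for these illustrative values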
B. DID MOST STATES IMPROVE OVER THE EIGHTH SOW?

Most states did, in fact, improve on state-level averages of the measures, although improvement was not universal.
1. Nursing Homes

The state-level changes show improvement on the same three measures as in the preceding provider-level analyses, and slight deterioration in worsened depression or anxiety; state-level changes were calculated by averaging across all providers in a state (Table II.5). The average improvements differ slightly from the averages presented in Table II.2 because each state is weighted equally, so smaller states have relatively more influence; however, the results are similar in both cases.
The largest average improvements were observed in the rate of moderate-to-severe pain and the use of physical restraints. Nursing homes in all states improved on the pain measure, and facilities in all but two states lowered the prevalence of physical restraint use. On both measures there was relatively little state-to-state variation around the average value, with an interquartile range of only one to two percentage points. Two-thirds of the states (35 of 51) experienced a reduction in pressure ulcers, with a relatively narrow interquartile range (from a one percentage point reduction to a quarter point increase) suggesting that few states made substantial progress.

There was a lack of overall progress across states in reducing worsened depression and anxiety among nursing home residents, with only 24 of 51 states improving. States tended to show little movement in either direction, with all states in the interquartile range experiencing changes of less than one percentage point either way.

TABLE II.5

STATE-LEVEL AVERAGES OF NURSING HOME QUALITY MEASURES, QIO 8TH SOW (PERCENTAGES)

                          Baseline    Follow-Up                          25th Pctl.    75th Pctl.    Number of States
Quality                   Level,      Level,       Change,     RFR       of Change     of Change     Improving
Measure                   Average     Average      Average     Average(c) Distribution  Distribution  (Out of 51)(b)

Pressure Ulcers(a)         12.24       11.96       -0.29        0.02       -1.05          0.23            35
  (St. Dev.)               (2.73)      (2.39)      (1.61)

Physical Restraints(a)      5.98        3.97       -2.01**      0.34       -2.19         -0.96            49
  (St. Dev.)               (3.46)      (2.16)      (1.66)

Depression/Anxiety(a)      14.93       14.95        0.02       -0.001      -0.56          0.81            24
  (St. Dev.)               (0.95)      (5.09)      (1.64)

Pain(a)                     5.79        3.99       -1.80**      0.31       -2.35         -1.06            51
  (St. Dev.)               (1.88)      (1.33)      (0.85)

Source: Nursing Home Compare, second quarter of 2005 (baseline) and first quarter of 2008 (follow-up).

Note: Sample size is 51 (50 states plus the District of Columbia). State averages are calculated as averages of provider-level change, weighted by number of patients. States are weighted equally in national averages across states. As a result of the equal weighting, providers in smaller states have proportionally greater influence on the averages, causing the figures to differ slightly from the provider-level calculations in Table II.2.

**p<.05 (two-tailed tests).

a Pressure Ulcers = percentage of high-risk long-stay residents with pressure sores; Physical Restraints = percentage of long-stay residents who were physically restrained; Depression/Anxiety = percentage of long-stay residents who have become more depressed or anxious; Pain = percentage of long-stay residents who have moderate to severe pain.

b Reflects the number of states in which average performance on the measure was higher at followup than at baseline.

c RFR is the reduction in the failure rate. It represents the average change as a proportion of the total possible improvement. For outcomes such as those in this table, in which lower levels are better, it is calculated as change divided by baseline level and multiplied by (-1).

St. Dev. = standard deviation.
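The RFR calculation in footnote c is simple arithmetic; as a sketch, using the physical restraint figures from the table:

    # Reduction in the failure rate (RFR) for a lower-is-better outcome, as in
    # Table II.5: the average change as a share of the total possible improvement.
    baseline = 5.98    # physical restraints, state-average baseline (percent)
    change = -2.01     # average change in percentage points

    rfr = -1 * change / baseline
    print(round(rfr, 2))  # 0.34: about a third of the possible improvement realized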

2. Home Health Agencies

Quality generally improved across states, as measured by the HHA summary index of patient functioning (which combines the seven individual items) and by the ACH measure (Table II.6). The improvements were widespread across states, especially for the patient functioning index, which improved in 48 of 51 states. Across the interquartile range, increases were between 1.9 and 4.0 percentage points. Most states experienced improvements in the ACH measure, although roughly one-third—16 of 51—did not.
3. Hospitals

When aggregated to the state level, hospitals' performance improved on both the ACM and SCIP indexes in all 51 states (Table II.7). Improvement did vary somewhat across states, but substantial progress was observed in nearly all. On average, performance on the ACM index rose by 6.7 percentage points, with the 25th and 75th percentiles for growth being 5.9 and 7.5 percentage points, respectively. Average improvement on the SCIP index was 15.4 percentage points, with an interquartile range of 11.9 to 18.6 percentage points. Results are similar whether or not providers are weighted by size to create state-level aggregates (results not shown). Once the aggregate figure for each state was produced, all states were weighted equally when averaging across states; that equal weighting causes the means in Table II.7 to differ slightly from those in Table II.4 (the provider-level national averages for hospitals).
C. WHICH STATES DID WELL IN MEASURES FOR ONE SETTING AND FOR MULTIPLE SETTINGS?

1. States with High Performance in Nursing Home Measures

Although the states' performance across areas was generally not strongly correlated, some states did establish a fairly strong record of improvement. Table II.8 lists the 12 such states.

TABLE II.6

STATE-LEVEL AVERAGES OF HOME HEALTH QUALITY MEASURES, QIO 8TH SOW (PERCENTAGES)

                          Baseline    Follow-Up                          25th Pctl.    75th Pctl.    Number of States
Quality                   Level,      Level,       Change,     RFR,      of Change     of Change     Improving
Measure                   Average     Average      Average     Average(a) Distribution  Distribution  (Out of 51)(c)

Acute Care                 29.42       28.84       -0.58*       0.02       -1.47          1.10            35
Hospitalization
  (St. Dev.)               (5.05)      (4.07)      (2.27)

Patient Functioning(b)     49.55       52.33        2.78**      0.06        1.88          4.02            48
  (St. Dev.)               (2.75)      (2.68)      (1.99)

Source: Home Health Compare, baseline data collected September 2004-August 2005 and follow-up data collected March 2007-February 2008.

Note: These figures might differ from the provider-level calculations. All states are weighted equally; as a result, smaller states have proportionally more influence than they do on the averages presented in Table II.3.

*p<.10; **p<.05 (two-tailed tests).

a RFR is the reduction in the failure rate. It represents the average change as a proportion of the total possible improvement. The RFR for acute care hospitalization equals the average change divided by the average baseline level, multiplied by (-1). The RFR for patient functioning equals the average change divided by 100 minus the baseline average level.

b Patient functioning is a scale capturing the mean of the items measuring improvement in bathing, transferring, ambulation/locomotion, management of oral medications, pain interfering with activity, dyspnea, and urinary incontinence.

c Reflects the number of states whose average performance on the measure was higher at followup than at baseline.

St. Dev. = standard deviation.
QIO = quality improvement organization.
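A sketch of the patient functioning composite and the higher-is-better RFR formula described in the notes above, with illustrative agency-level values (the column names are hypothetical):

    import pandas as pd

    # Illustrative agency-level data: the seven functioning items, each the
    # percentage of patients who improved during their home health episode.
    items = ["bathing", "transferring", "ambulation", "oral_meds",
             "pain", "dyspnea", "urinary_incontinence"]
    agencies = pd.DataFrame([[62, 51, 42, 41, 63, 58, 47],
                             [55, 48, 39, 36, 58, 52, 44]], columns=items)

    # Patient-functioning composite: the mean of the seven items.
    agencies["functioning"] = agencies[items].mean(axis=1)

    # RFR for a higher-is-better outcome: change divided by (100 - baseline).
    baseline, change = 49.55, 2.78   # state-average values from Table II.6
    print(round(change / (100 - baseline), 3))  # ~0.055, i.e., the 0.06 reported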


TABLE II.7

STATE-LEVEL AVERAGES OF HOSPITAL QUALITY MEASURES, QIO 8TH SOW (PERCENTAGES)

                 Baseline    Follow-Up                          25th Pctl.    75th Pctl.    Number of States
Quality          Level,      Level,       Change,     RFR,      of Change     of Change     Improving
Measure          Average     Average      Average     Average(a) Distribution  Distribution  (Out of 51)(b)

ACM               86.23       92.88        6.65        0.48**      5.91          7.47            51
  (St. Dev.)      (2.56)      (1.96)      (1.34)

SCIP              70.82       86.26       15.44        0.53**     11.89         18.59            51
  (St. Dev.)      (7.65)      (4.27)      (4.85)

Source: Hospital Compare, baseline data collected July 2004-June 2005 and follow-up data collected October 2006-September 2007.

Note: Sample size equals 51 (50 states plus the District of Columbia). State averages are calculated as averages of provider-level change, weighted by numbers of patients. States are weighted equally in national averages across states. As a result of the equal weighting, providers in smaller states have proportionally greater influence, causing the figures to differ slightly from the provider-level calculations in Table II.4.

**p<.05 (two-tailed tests).

a RFR is the "reduction in the failure rate." It represents the average change as a proportion of the total possible improvement. For outcomes such as SCIP and ACM, in which higher scores represent more positive outcomes, the RFR is calculated as the average change divided by 100 minus the baseline average level.

b Reflects the number of states whose average performance on the measure was higher at followup than at baseline.

ACM = appropriate care measure; SCIP = Surgical Care Improvement Project; SOW = statement of work; St. Dev. = standard deviation.


TABLE II.8

ADJUSTED z-SCORES OF QUALITY CHANGE IN CONSISTENTLY HIGH-IMPROVING STATES, 8TH SOW NURSING HOME OUTCOMES

                     Improvement (Standardized) in Percentage of Long-Stay Residents:
                     With          Who Were       With Worsening      With Moderate   Average
                     Pressure      Physically     Depression/         to Severe       Across
State                Ulcers(a)     Restrained     Anxiety             Pain            Measures

Nevada                -0.97         -1.60          -2.43                0.17           -1.21
Delaware              -2.29         -0.36          -1.22               -0.84           -1.18
North Carolina        -0.44         -2.25          -0.16               -1.03           -0.97
Arkansas               0.78         -2.00          -0.89               -1.75           -0.97
Nebraska              -0.97         -0.38           0.35               -1.30           -0.57
Arizona               -0.08         -2.02          -0.38                0.22           -0.56
New Hampshire         -0.93         -1.34           0.39               -0.37           -0.56
South Carolina         0.06         -0.59          -0.14               -1.08           -0.44
New Mexico            -0.40          1.79          -1.66               -1.40           -0.42
Wyoming               -1.12         -0.93           0.65               -0.21           -0.40
Alabama               -0.45         -0.35           0.17               -0.81           -0.36
Connecticut           -0.74         -0.56          -0.25                0.17           -0.34

Source: Nursing Home Compare, second quarter of 2005 (baseline) and first quarter of 2008 (followup).

Note: Performance is calculated using three-year change adjusted for baseline. The adjusted change measure is then standardized to have a mean of zero and a standard deviation of one. "Consistently high improving" is defined as performing above the mean on at least three of the four outcomes of pressure ulcers, physical restraints, depression, and chronic pain, and having average improvement across all four at least one-fifth of a standard deviation better than the mean (average z-score of -0.2 or below). State-level means are calculated by weighting nursing homes by their total number of residents.

a High-risk residents only.

SOW = statement of work.

These 12 states (1) had above-average improvement (across all states) on at least three of the four measures and (2) saw average improvement across the four measures at least one-fifth of a standard deviation better than the mean for all states. The table also shows the standardized performance of the 12 states relative to the mean on each of the four measures, as well as the overall average. The states vary in size and region of the country, although several mountain west states (Arizona, Nevada, New Mexico, and Wyoming) appear on the list. Two states (Delaware and North Carolina) experienced above-average improvement on every measure.
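The baseline adjustment described in the note to Table II.8 can be sketched as follows: regress three-year change on the baseline level, take the residuals, and standardize them. The data below are synthetic, for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative state-level data: baseline levels and three-year changes.
    baseline = rng.normal(12.0, 2.5, size=51)
    change = -0.3 - 0.1 * baseline + rng.normal(0, 1.5, size=51)

    # Regression-adjust change for baseline: residuals from an OLS fit.
    slope, intercept = np.polyfit(baseline, change, 1)
    adjusted = change - (intercept + slope * baseline)

    # Standardize to mean zero, standard deviation one.
    z = (adjusted - adjusted.mean()) / adjusted.std()
    print(z.mean().round(6), z.std().round(6))  # ~0.0 and 1.0 by construction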
2. States with High Performance in Home Health Agency Measures

Table II.9 lists 11 states whose improvement was above the mean for all states' averages on both the patient functioning index and ACH. Their average performance on the two measures was also at least one-fifth of a standard deviation above the cross-state mean. The states are geographically dispersed and vary from large to small in population. The District of Columbia stood out with particularly large improvement on both indicators, moving from a middle-to-low-ranked performer at baseline to the top 10 at followup.
3. States with High Performance in Hospital Measures

Table II.10 lists consistently high-improving states in the hospital measures. The fifteen states each experienced above-average growth—adjusted for baseline—on both the ACM and SCIP indexes, and their mean growth on the two was at least one-fifth of a standard deviation above average. States on the list represent a range of regions. Large states—including California, Florida, Illinois, Pennsylvania, and Texas—are disproportionately represented on the list. South Carolina is on the list of highest-improving states for hospitals and is the only state to be among the consistently high improvers for all three provider types.


TABLE II.9

ADJUSTED z-SCORES OF QUALITY CHANGE IN CONSISTENTLY HIGH-IMPROVING STATES, 8TH SOW HHA OUTCOMES

                           Improvement (Standardized)
                           Acute Care            Patient          Average Across
State                      Hospitalization(a)    Functioning(b)   Measures

District of Columbia          -4.29                 2.30              3.29
Utah                          -2.01                 1.10              1.56
South Carolina                -0.83                 1.52              1.18
Georgia                       -0.73                 1.46              1.10
New Jersey                    -0.02                 1.29              0.66
Arkansas                      -0.35                 0.82              0.58
North Dakota                  -0.88                 0.27              0.57
Massachusetts                 -0.44                 0.70              0.57
Idaho                         -0.84                 0.19              0.51
Missouri                      -0.28                 0.64              0.46
Michigan                      -0.10                 0.34              0.22

Source: Home Health Compare, baseline data collected September 2004-August 2005 and follow-up data collected March 2007-February 2008.

Note: State-level means are calculated weighting all providers equally. Performance is calculated using change adjusted for baseline. The adjusted change measure is then standardized to have a mean of zero and standard deviation of one. "Consistently high improving" is defined as performing above the mean on both outcomes, and having average improvement on both outcomes of at least one-fifth of a standard deviation above the mean (average z-score of 0.2 or above).

a Acute care hospitalization is reversed (multiplied by -1) in calculating the average, so that a higher figure represents greater improvement on both measures and on the average.

b Patient functioning is an index capturing the mean of the items measuring improvement in bathing, transferring, ambulation/locomotion, management of oral medications, pain interfering with activity, dyspnea, and urinary incontinence.

HHA = home health agency.
SOW = statement of work.
QIO = quality improvement organization.


TABLE II.10

ADJUSTED z-SCORES OF QUALITY CHANGE IN CONSISTENTLY HIGH-IMPROVING STATES, 8TH SOW HOSPITAL OUTCOMES

                     Improvement (Standardized)
                                              Average Across
State                ACM          SCIP        Measures

Vermont              2.24         1.19          1.72
Minnesota            0.17         2.16          1.16
California           1.28         0.85          1.07
Texas                1.65         0.25          0.95
New Hampshire        0.10         1.78          0.94
Pennsylvania         0.31         1.20          0.76
Florida              0.97         0.40          0.68
North Carolina       0.47         0.82          0.64
Massachusetts        0.17         1.09          0.63
South Carolina       1.06         0.06          0.56
Oregon               0.37         0.41          0.39
Oklahoma             0.61         0.07          0.34
Illinois             0.33         0.32          0.32
Georgia              0.03         0.58          0.31
Ohio                 0.11         0.33          0.22

Source: Hospital Compare, baseline data collected July 2004-June 2005 and follow-up data collected October 2006-September 2007.

Note: State-level means are calculated by weighting hospitals by their average total number of patients used to report each component item in the measure. Performance is calculated using three-year change regression-adjusted for baseline levels. The adjusted change measure is then standardized to have a mean of zero and standard deviation of one. "Consistently high improving" is defined as (1) improving more than the nationwide average on both the ACM and SCIP indexes and (2) having average improvement across the two that was at least one-fifth of a standard deviation greater than the mean (average z-score of 0.2 or above).

ACM = appropriate care measure.
SCIP = Surgical Care Improvement Project.
SOW = statement of work.


4. States with High Performance in Measures for More Than One Setting

Among the states identified above as high performing in one of the settings, five had high performance in two settings, and one was a high-performing state in all three (Table II.11). Table II.11 also lists the states that were high performing in only one of the settings.
D. DO PROVIDERS AND STATES THAT DID WELL IN ONE SET OF MEASURES
ALSO DO WELL IN OTHERS?
Correlations across measures depended on provider setting and measure. For some
measures, there were at most moderate-sized correlations; other measures exhibited little
correlation.
1. Provider Level

a. Did nursing homes that did well in one measure also do well in others?
Improvement on one measure does little to predict improvement on others. Although not strong, the correlations of improvement across the four measures are positive in all but one case (Table II.12). The average correlation is 0.03 and the largest (between pressure ulcers and depression) is 0.057. This suggests that the measures are essentially independent of one another. Measurement error or imprecision in the measures can also reduce the correlations between them.4

4 Results are substantively unchanged by the choice to weight or not weight providers by size.
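The correlations in Table II.12 weight providers by their number of patients. A weighted Pearson correlation can be computed directly; the function below is a generic sketch, run here on synthetic data rather than the study's files.

    import numpy as np

    def weighted_corr(x, y, w):
        """Pearson correlation of x and y with observation weights w."""
        mx = np.average(x, weights=w)
        my = np.average(y, weights=w)
        cov = np.average((x - mx) * (y - my), weights=w)
        sx = np.sqrt(np.average((x - mx) ** 2, weights=w))
        sy = np.sqrt(np.average((y - my) ** 2, weights=w))
        return cov / (sx * sy)

    rng = np.random.default_rng(1)
    ulcer_change = rng.normal(-0.6, 7.3, 5000)       # illustrative provider changes
    depression_change = rng.normal(0.1, 8.9, 5000)
    patients = rng.integers(20, 300, 5000)           # provider size (weight)
    print(weighted_corr(ulcer_change, depression_change, patients))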
b. Do home health agencies that did well in one measure also do well in others?

Improvement on one measure of patient functioning was moderately associated with improvement on other functioning measures. All pairs of measures are correlated in the expected direction, with an average magnitude of 0.43 (Table II.13). The ACH and patient discharge measures are very highly correlated (r = -0.93).

TABLE II.11

HIGH-PERFORMING STATES IN MULTIPLE PROVIDER SETTINGS AND IN ONE SETTING ONLY

        High-Performing in       High-Performing in       High-Performing in
State   Nursing Home Outcomes    Home Health Outcomes     Hospital Outcomes

AR              X                        X
NC              X                                                 X
NH              X                                                 X
GA                                       X                        X
MA                                       X                        X
SC              X                        X                        X
AL              X
AZ              X
CT              X
DE              X
NE              X
NM              X
NV              X
WY              X
DC                                       X
ID                                       X
MI                                       X
MO                                       X
ND                                       X
NJ                                       X
UT                                       X
CA                                                                X
FL                                                                X
IL                                                                X
MN                                                                X
OH                                                                X
OK                                                                X
OR                                                                X
PA                                                                X
TX                                                                X
VT                                                                X

Source: Medicare Compare data.

Note: State-level means of measures for each provider setting were calculated weighting all providers equally. Performance was calculated using change adjusted for baseline. The adjusted change measures were then standardized to have a mean of zero and standard deviation of one. "Consistently high improving" was defined as performing above the mean on all outcomes for the provider setting, and having average improvement on all outcomes of at least one-fifth of a standard deviation above the mean (average z-score of 0.2 or above).

TABLE II.12

PROVIDER-LEVEL CORRELATIONS OF CHANGE ACROSS 8TH SOW NURSING HOME OUTCOMES

                       Pressure      Physical       Depression/
                       Ulcers(a)     Restraints(a)  Anxiety(a)     Pain(a)

Pressure Ulcers          1.00
Physical Restraints     -0.005         1.00
Depression/Anxiety       0.057         0.019          1.00
Pain                     0.042         0.030          0.055         1.00

Source: Nursing Home Compare, second quarter of 2005 (baseline) and first quarter of 2008 (followup).

Note: Sample size for each pair depends on the number of nonmissing observations. The least reported measure is pressure ulcers; sample sizes for correlations between pressure ulcers and other measures range from 7,579 to 7,591. Sample sizes used to produce the other correlations range from 10,492 to 10,676. Providers are weighted by total numbers of patients.

a All outcomes are for long-stay residents. Pressure ulcers are for high-risk residents only.

SOW = statement of work.


TABLE II.13

PROVIDER-LEVEL CORRELATIONS OF CHANGE ACROSS 8TH SOW HOME HEALTH OUTCOMES

                                   Ambu-   Trans-   Urin.
                                   lation  ferring  Incont.  Pain   Bathing  Oral Meds  Dyspnea  Discharge  ACH

Improvement in Ambulation/          1.00
Locomotion
Improvement in Transferring         0.55    1.00
Improvement in Urinary              0.26    0.35     1.00
Incontinence
Improvement in Pain Interfering     0.37    0.36     0.34    1.00
with Activity
Improvement in Bathing              0.59    0.51     0.37    0.44    1.00
Improvement in Management of        0.47    0.42     0.39    0.35    0.57     1.00
Oral Medications
Improvement in Dyspnea              0.41    0.43     0.43    0.48    0.50     0.41       1.00
Discharge to Community              0.13    0.12     0.11    0.09    0.16     0.12       0.11     1.00
Acute Care Hospitalization         -0.09   -0.09    -0.10   -0.01   -0.12    -0.09      -0.09    -0.93      1.00

Source: Home Health Compare, baseline data collected September 2004-August 2005 and follow-up data collected March 2007-February 2008.

Note: Sample size for each pair depends on the number of nonmissing observations, and ranges from 5,145 to 6,027. All providers are weighted equally.

ACH = acute care hospitalization.
SOW = statement of work.
QIO = quality improvement organization.

This might be expected because a patient is not discharged from HHA care to home if he or she is hospitalized. However, the functioning measures are only weakly correlated with the ACH and discharge measures, with coefficients around 0.10. This suggests that the ACH and discharge measures and the patient functioning measures assess two distinct domains of HHA quality, with performance in one not necessarily translating into performance in the other.

c. Did hospitals that performed well in one measure also do well in others?
There was only a modest association across hospitals in improvement among different
measures of quality of care. The correlation between improvement on the ACM index and the
SCIP index was 0.25. Correlations between the individual items from both indexes were also
nearly all positive, though generally not large (see Appendix Table D.5).
2. State Level

a. Nursing homes: Did states that did well in one domain of quality also do well in others?
There was also little correlation at the state level between improvement in one measure and improvement in others (Table II.14). These are state-level correlations between rates of improvement on the different measures, adjusted for baseline levels and weighted by providers' total numbers of patients. None of the correlations is large and some are negative, demonstrating that there was little consistency across measures in which states improved most. Equal weighting of providers produces slightly larger correlation coefficients (bottom panel of Table II.14),5 but the magnitudes are all still less than 0.25, with an average coefficient of only 0.14. Overall, the results suggest that, as with the provider-level results, state-level improvement in one area is not a strong predictor of improvement in others.

5 We present the unweighted results here to show the differences from the weighted results. In other cases we omit the unweighted results because they are qualitatively the same as the weighted results.


TABLE II.14

STATE-LEVEL CORRELATIONS OF CHANGE ACROSS 8TH SOW NURSING HOME OUTCOMES

                       Pressure      Physical       Depression/
                       Ulcers        Restraints     Anxiety        Pain

Weighted Correlations
Pressure Ulcers          1.00
Physical Restraints      0.164         1.00
Depression/Anxiety      -0.090        -0.076          1.00
Pain                     0.113         0.115          0.117         1.00

Unweighted Correlations
Pressure Ulcers          1.00
Physical Restraints      0.166         1.00
Depression/Anxiety       0.132         0.100          1.00
Pain                     0.248         0.137          0.072         1.00

Source: Nursing Home Compare, second quarter of 2005 (baseline) and first quarter of 2008 (followup).

Note: Sample size is 51 (50 states plus the District of Columbia). State averages used in the top panel are calculated as averages of provider-level change, weighted by total numbers of patients. Those in the bottom panel weight all providers equally when creating state averages. States are weighted equally in calculating the correlations in both cases. Pressure Ulcers = percentage of high-risk long-stay residents who have pressure ulcers; Physical Restraints = percentage of long-stay residents who were physically restrained; Depression/Anxiety = percentage of long-stay residents with worsening depression/anxiety; Pain = percentage of long-stay residents with moderate to severe pain.

SOW = statement of work.

b. Did states that did well in one domain of home health quality also do well in others?
Across states, increases in patient functioning were modestly associated with reductions in ACH. The state-level correlation between the two measures was -0.28. State-level correlations between the individual items were broadly similar to the correlations at the provider level in Table II.13 and are not shown. That the composite measures are more highly correlated than the individual items is consistent with the composites containing less measurement error than the individual items; averaging across items reduces noise, which otherwise attenuates correlations.
c. Did states that improved more in one domain of quality also improve more in others?

As with the correlations at the provider level, improvements on the ACM and SCIP indexes between baseline and followup were positively, but not strongly, correlated (r = 0.21). This suggests that gains in one area of care do not necessarily translate into gains in others.
E. WAS THERE AN IMPACT OF QUALITY IMPROVEMENT ORGANIZATIONS’
WORK WITH IDENTIFIED PARTICIPANT GROUP PROVIDERS ON
IMPROVEMENT IN QUALITY MEASURES?
We find that results, again, vary by provider setting. There is some evidence that Quality
Improvement Organizations (QIOs) had favorable effects on nursing home and HHA measures,
but we found no evidence for a QIO effect on hospital measures.
The IPG penetration rate for each state is the percentage of providers in that state that have
agreed to work with the QIO on quality improvement; the providers are called identified
participant group (IPG) providers. The IPG penetration rates differ by provider setting (nursing
homes, home health agencies, and hospitals), and for nursing homes, by measure, since nursing
homes could agree to work with the QIO on pressure ulcers and physical restraints, or on
depression and pain (or both).
The Eighth SOW specified the minimum number of IPG providers each QIO was to recruit in each provider setting and group of quality measures; these minimums were a function of the number of providers in each state. QIOs in smaller states were thus to work with a higher proportion of providers (or even all of them), whereas QIOs in larger states were to work with only a fraction. The IPG penetration rate might then provide an indicator of the intensity of QIO activity that is not confounded by underlying provider willingness or ability to improve quality of care, and possibly not subject to the selection bias inherent in direct comparisons of IPGs with non-IPGs.6 However, the validity of the IPG penetration rate as a treatment indicator, though reasonable, remains an untestable assumption. Table II.15 shows the IPG penetration rates of individual states for the different settings and measures.
1. Nursing Homes

Higher identified participant group (IPG, or IP) penetration7 is associated with significantly greater reductions in pressure ulcers, physical restraints, and chronic pain among nursing home residents (Table II.16). Those estimates are obtained from seemingly unrelated regression (SUR) analyses controlling for a range of provider- and county-level characteristics, including provider performance on nonfocus outcomes.8 A one percentage point increase in IPG penetration is associated with a 0.034 percentage point decrease in pressure ulcers, a 0.022 percentage point decrease in the use of physical restraints, and a 0.011 percentage point reduction in the prevalence of moderate-to-severe pain. No significant association is observed with worsening depression and anxiety. The joint test of association between IPG penetration and all four outcome measures is, however, statistically significant (p < 0.01).
6 In addition, we did not have access to individual providers' IPG status for this analysis.

7 As shown in Table II.15, the IPG penetration rate for nursing homes ranged from roughly 10 percent to 100 percent across states, with the majority clustered between 10 and 20 percent.

8 Details on the associations among the various control variables and nursing home quality measures are provided in Appendix Table D.1. SUR is a technique for jointly estimating several regression models that reduces the likelihood of Type I (false positive) results from multiple statistical tests. Appendix Table D.9 shows the means of the control variables used in the regressions.
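A minimal sketch of the SUR setup described in footnote 8, using the third-party linearmodels package; the variable names and data-generating process are illustrative only, not the study's actual specification.

    import numpy as np
    import pandas as pd
    from linearmodels.system import SUR  # third-party package: linearmodels

    rng = np.random.default_rng(2)
    n = 1000

    # Illustrative provider-level data (column names are hypothetical).
    df = pd.DataFrame({
        "ipg_penetration": rng.uniform(10, 100, n),   # state IPG rate, percent
        "baseline_ulcers": rng.normal(13, 7, n),
        "baseline_restraints": rng.normal(6.5, 7, n),
    })
    df["change_ulcers"] = -0.03 * df["ipg_penetration"] + rng.normal(0, 7, n)
    df["change_restraints"] = -0.02 * df["ipg_penetration"] + rng.normal(0, 6, n)

    # One equation per outcome; SUR estimates them jointly, allowing the
    # errors to be correlated across equations for the same provider.
    exog1 = pd.DataFrame({"const": 1.0, "ipg": df["ipg_penetration"],
                          "baseline": df["baseline_ulcers"]})
    exog2 = pd.DataFrame({"const": 1.0, "ipg": df["ipg_penetration"],
                          "baseline": df["baseline_restraints"]})
    equations = {
        "ulcers": {"dependent": df["change_ulcers"], "exog": exog1},
        "restraints": {"dependent": df["change_restraints"], "exog": exog2},
    }
    res = SUR(equations).fit()
    print(res.params)  # per-equation coefficients, including the IPG terms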


TABLE II.15

IPG PENETRATION RATES FOR PROVIDER SETTINGS AND MEASURES

Nursing Homes: Pressure Ulcers and Physical Restraints (State, IPG Penetration Rate)
AK 100.0   DC 85.0   DE 78.6   VT 78.1   HI 71.7   NV 68.1   WY 56.4   NM 42.3   ID 40.0
NH 39.0    ND 38.6   UT 38.0   RI 35.9   MT 32.0   WV 30.5   SD 28.8   AZ 28.4   ME 27.6
OR 25.4    SC 19.9   MS 18.6   OK 16.6   MA 15.7   PA 15.6   VA 15.5   CO 15.4   FL 15.3
TX 15.3    AL 14.9   NE 14.2   CT 14.2   AR 14.0   TN 13.7   MD 13.4   WA 13.4   NJ 13.0
LA 12.8    MN 12.7   NC 12.6   KS 12.6   GA 12.4   KY 12.2   IN 11.9   IA 11.4   MI 11.4
IL 11.4    MO 11.0   NY 10.8   WI 10.8   CA 10.6   OH 10.4

Nursing Homes: Depression and Pain (State, IPG Penetration Rate)
AK 100.0   DC 75.0   DE 73.8   VT 73.2   HI 67.4   NV 63.8   WY 51.3   NM 38.5   ID 37.5
NH 36.6    ND 36.1   UT 35.9   RI 32.6   MT 30.0   WV 29.0   SD 27.0   AZ 26.1   ME 25.9
OR 23.2    SC 18.8   MS 17.7   OK 15.8   MA 15.2   PA 15.0   VA 14.9   CO 14.9   FL 14.8
TX 14.5    AL 13.4   NE 13.3   CT 13.1   AR 13.1   TN 12.8   MD 12.6   WA 12.6   NJ 12.2
LA 12.0    MN 11.7   NC 11.6   KS 11.5   GA 11.3   KY 10.9   IN 10.9   IA 10.8   MI 10.5
IL 10.4    MO 10.4   NY 10.3   WI 10.1   CA 10.0   OH 8.2

Home Health Agencies: ACH Measure (State, IPG Penetration Rate)
DE 54.6    VT 50.0   HI 47.1   AK 46.2   DC 46.2   WY 40.0   ND 38.5   RI 38.1   ME 34.5
NH 31.3    MT 29.4   ID 28.6   NJ 28.0   NM 26.9   WA 26.4   SD 25.0   OR 24.6   SC 23.9
WV 23.7    NE 23.3   NV 23.3   MD 22.7   UT 22.7   AZ 21.9   MS 20.7   GA 20.4   TN 20.3
AR 20.2    MN 20.2   PA 20.2   IN 20.1   VA 20.1   IA 20.1   NY 20.1   FL 20.1   AL 20.0
CA 20.0    CO 20.0   KY 20.0   OH 20.0   OK 20.0   TX 20.0   IL 19.9   MI 19.9   NC 19.9
CT 19.8    MO 19.7   KS 19.7   WI 19.6   MA 19.6   LA 13.7

Hospitals (State, IPG Penetration Rate)
VT 100.0   WY 92.0   SD 90.0   AK 84.0   ND 80.0   DE 77.0   MT 69.0   DC 66.0   HI 56.0
NH 53.0    RI 51.0   NE 49.0   ME 43.0   AR 38.0   NV 38.0   OK 38.0   SC 36.0   MN 35.0
ID 33.0    WV 33.0   IA 32.0   KS 31.0   MO 31.0   TX 31.0   MS 30.0   AL 29.0   MD 29.0
NM 29.0    UT 29.0   IL 28.0   IN 28.0   PA 28.0   WA 28.0   AZ 27.0   FL 27.0   GA 25.0
MA 24.0    NJ 23.0   TN 22.0   CO 21.0   MI 21.0   OH 21.0   OR 21.0   VA 21.0   NY 20.0
WI 20.0    KY 19.0   LA 18.0   NC 17.0   CA 13.0   CT 12.0

Source: CMS de-identified QIO Dashboard Data.

Note: The IPG penetration rate for each state is the percentage of providers in that state that have agreed to work with the QIO on quality improvement; the providers are called identified participant group (IPG) providers. The IPG penetration rates differ by provider setting (nursing homes, home health agencies, and hospitals), and for nursing homes, by measure, since nursing homes could agree to work with the QIO on pressure ulcers and physical restraints, or on depression and pain (or both).

ACH = acute care hospitalization.


TABLE II.16

SUR ESTIMATES OF ASSOCIATION BETWEEN IPG PENETRATION AND CHANGE IN NURSING HOME OUTCOMES DURING THE 8TH SOW

Quality Measure            (1) Baseline Outcome   (2) All Provider and         (3) Add Controls for
(Dependent Variable)       Control Only           County Baseline Controls(a)  Nonfocus Quality Measures(b)

Pressure Ulcers(c)
  B                          -0.034**               -0.032**                     -0.034**
  (St. Err.)                 (0.012)                (0.008)                      (0.009)
  [R2]                       [0.29]                 [0.32]                       [0.34]
  Number                     7,594                  7,507                        6,854

Physical Restraints(c)
  B                          -0.022**               -0.023**                     -0.022**
  (St. Err.)                 (0.007)                (0.006)                      (0.007)
  [R2]                       [0.43]                 [0.43]                       [0.44]
  Number                     10,676                 10,574                       8,547

Depression/Anxiety(c)
  B                           0.006                  0.009                       -0.011
  (St. Err.)                 (0.023)                (0.015)                      (0.016)
  [R2]                       [0.24]                 [0.28]                       [0.32]
  Number                     10,537                 10,434                       8,545

Chronic Pain(c)
  B                          -0.007                 -0.008                       -0.011*
  (St. Err.)                 (0.007)                (0.005)                      (0.006)
  [R2]                       [0.50]                 [0.51]                       [0.52]
  Number                     10,548                 10,445                       8,546

Joint Statistical
Significance                  0.0014***              0.0000***                    0.0000***

Source: Nursing Home Compare, second quarter of 2005 (baseline) and first quarter of 2008 (followup).

Note: Results weighted by facilities' total number of patients. Results were estimated simultaneously using seemingly unrelated regression (SUR). All analyses controlled for baseline levels of the measure. Inclusion of other controls varies across models. The coefficients indicate the average percentage point change in the dependent variable for a one percentage point change in IPG penetration. If that association is causal, then the results imply that QIO work with IPs led to an average change in the outcome for each IP equal to the coefficient multiplied by 100.

*p<.10; **p<.05; ***p<.01 (two-tailed tests).

a Controls for binary indicators of ownership type (for-profit; government; nonprofit, corporate; nonprofit, religious; nonprofit, other—for-profit is the omitted category); facility size (indicator for being in the largest quartile of nursing homes); whether situated within a hospital; and presence of both resident and family councils. Specification also controls for county-level characteristics: whether or not the provider's county is part of a metropolitan area; number of active physicians per 1,000 population; number of nurses per 1,000 population; (log) per capita income; poverty rate; and percentage of the population ages 0 to 19, ages 65 or more, with four or more years of college, Hispanic, non-Hispanic black, and without health insurance, respectively.

b Specification includes all controls from the model in the second column, plus baseline and change measures of the percentage of residents (1) whose ability to move about independently has declined, (2) whose need for help with daily activities has increased, (3) who spent most of their time in a bed or chair, and (4) who had a urinary tract infection, respectively.

c Pressure Ulcers = percentage of high-risk long-stay residents with pressure sores; Physical Restraints = percentage of long-stay residents who were physically restrained; Depression/Anxiety = percentage of long-stay residents who have become more depressed or anxious; Chronic Pain = percentage of long-stay residents who have moderate to severe pain.

B = beta coefficient on the IPG penetration variable from the regression model.
IP = identified participant.
IPG = identified participant group.
SOW = statement of work.
St. Err. = standard error.
QIO = quality improvement organization.


If the observed associations do, in fact, reflect causal impacts of QIO work with IPs, the results suggest that, for individual providers, working with a QIO reduces the prevalence of pressure ulcers by more than three percentage points and the use of physical restraints by more than two percentage points—each a substantial fraction of the mean baseline rates (13.3 percent and 6.5 percent on those two measures, respectively; see Table II.2). The implied per-IP impact on chronic pain is somewhat smaller, a reduction of 1.1 percentage points, though still a substantial fraction of the 5.4 percent baseline average rate.
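The per-IP scaling used in this interpretation follows directly from the units of the penetration variable; a worked sketch:

    # The coefficient is the change in the state-average outcome per one-point
    # change in IPG penetration (percent of providers that are IPs). If the
    # association is causal and the effect is concentrated among IPs, moving
    # penetration from 0 to 100 spreads the total effect over the IPs, so the
    # implied per-IP effect is the coefficient multiplied by 100.
    coef = -0.034                  # pressure ulcer coefficient, Table II.16
    per_ip_effect = coef * 100
    print(round(per_ip_effect, 1))  # -3.4 percentage points per participating home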
2. Home Health Agencies

a. Effects of IPG Penetration
Changes in ACH were strongly and negatively associated with IPG penetration rates (Table II.17). ACH rates are estimated to decline by about 0.13 percentage points for every one percentage point increase in the proportion of HHAs that are in IPGs.9 Thus, all else being equal, a state with a 20 percentage point higher IPG penetration than another state would be expected to have experienced 2.6 percentage points fewer hospitalizations. If the observed association does reflect a causal impact, then it implies that, for individual HHAs, being an IP led to an average reduction in ACH of 13 percentage points relative to what their ACH rates would have been otherwise. This is a large point estimate; the confidence interval is fairly wide, however, ranging from 0.066 to 0.200, and includes values that are substantially smaller.

9 Descriptive statistics for the control variables included in the regression models are in Appendix Table D.10.
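The quoted interval is the conventional 95 percent confidence interval around the Table II.17 estimate:

    # 95% confidence interval for the HHA estimate (coefficient 0.133 in
    # absolute value, standard error 0.034): estimate +/- 1.96 * SE.
    coef, se = 0.133, 0.034
    lo, hi = coef - 1.96 * se, coef + 1.96 * se
    print(round(lo, 3), round(hi, 3))   # ~0.066 to 0.200, matching the text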
b. Impacts of QIOs' Statewide Efforts on Improvements in Home Health Quality Measures

TABLE II.17

OLS ESTIMATES OF ASSOCIATION BETWEEN IPG PENETRATION AND CHANGE IN ACUTE CARE HOSPITALIZATION RATES OF HHAs, 8TH SOW

Quality Measure              (1) Controls Only for      (2) All Provider and
(Dependent Variable)         Baseline ACH Level         County Controls(a)

Acute Care Hospitalization      -0.156**                   -0.133**
  (St. Err.)                    (0.037)                    (0.034)
  R2                             0.32                       0.33
  Number                         6,308                      6,265

Source: Home Health Compare, baseline data collected September 2004-August 2005 and follow-up data collected March 2007-February 2008.

Note: Results are estimated using ordinary least squares (OLS). Providers are weighted equally. Coefficients reflect the percentage point change in the outcome associated with a one percentage point change in IPG penetration. If that association is causal, then the results imply that QIO work with IPs led to an average change in the outcome for each IP equal to the coefficient multiplied by 100.

**p<.05 (two-tailed tests).

a Includes controls for binary indicators of ownership type, date of certification (pre-1990, 1990s, 2000s), and provision of medical social services. The specification also includes controls for county-level characteristics, including whether or not the provider's county is part of a metropolitan area; active physicians per 1,000 population; number of nurses per 1,000 population; per capita income; the poverty rate; and the percentage of the population ages 19 or younger, ages 65 and over, with four or more years of college, who are Hispanic, who are black, and without health insurance, respectively.

HHA = home health agency.
IP = identified participant.
IPG penetration = percent of providers in a state who are in an identified participant group (i.e., who are IPs).
St. Err. = standard error.

The design of the Eighth SOW for the home health care setting also permitted a comparison of QIOs that chose a quality measure for which to undertake statewide improvement efforts with QIOs that did not choose that measure. We found evidence that QIOs' statewide efforts were associated with improvement in at least some of the quality measures we were able to examine. As described in Chapter I, QIOs were required to engage in statewide efforts to improve one selected measure; each QIO chose one measure from among the nine available. We estimated impacts of those efforts for the most commonly selected outcomes: management of oral medications (selected by 30 QIOs), pain interfering with activity (10 QIOs), and dyspnea (9 QIOs). Table II.18 suggests that there is at least a simple bivariate relationship between QIO selection of an outcome and greater improvement on that outcome: the greatest improvements on each outcome were achieved by HHAs in states in which that outcome was selected for improvement.
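The bivariate comparison behind Table II.18 can be sketched as a simple grouped mean; the data and column names below are illustrative only.

    import pandas as pd

    # Illustrative agency-level data: which measure the state's QIO selected,
    # and each agency's improvement on the dyspnea measure.
    df = pd.DataFrame({
        "qio_selected": ["dyspnea", "oral_meds", "pain", "dyspnea", "oral_meds"],
        "dyspnea_improvement": [4.5, 2.0, 2.3, 3.8, 2.1],
    })

    # Mean improvement among agencies whose QIO selected the measure,
    # versus all other agencies.
    selected = df["qio_selected"] == "dyspnea"
    print(df.loc[selected, "dyspnea_improvement"].mean(),
          df.loc[~selected, "dyspnea_improvement"].mean())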
Holding constant provider- and county-level characteristics—including improvement on other
quality outcomes—QIOs’ statewide efforts were associated with a statistically significant
improvement in both dyspnea and management of oral medications. As shown in the third
column of Table II.19, HHAs in states in which the QIO selected dyspnea for statewide
improvement averaged 1.8 percentage points greater improvement than HHAs in states without
that focus, adjusting for all other factors. For management of oral medications the estimated
impact is 1.3 percentage points. We observed no impact of statewide efforts on improvement in
pain interfering with activity. The association of QIO measure selection with improvement
across the three measures is jointly statistically significant.
3. Hospitals

Although hospital quality measures improved across all states, we found no evidence that QIOs were effective in improving appropriate care through their work with IP providers. Associations between IPG penetration and the ACM index were small and not significantly different from zero (Table II.20), regardless of the set of control variables included.

TABLE II.18

AVERAGE IMPROVEMENT ON ELECTIVE HOME HEALTH OUTCOMES, BY QIOS' MEASURE SELECTED FOR STATEWIDE IMPROVEMENT

                                   Improvement in:
Statewide Measure                            Management of      Pain Interfering
Selected                         Dyspnea     Oral Medications   with Activity       Average

Management of Oral Medications    2.12          3.60                2.68              2.80
Pain Interfering with Activity    2.12          3.53                3.71              3.12
Dyspnea                           4.12          3.05                3.21              3.46

Source: Home Health Compare, baseline data collected September 2004-August 2005 and follow-up data collected March 2007-February 2008.

Note: The number of providers in states selecting each of the above measures is 1,916, 4,278, and 761 for dyspnea, management of oral medications, and pain, respectively. Of the providers in states for which the QIOs selected dyspnea, improvement data were available for and calculated on 1,513 providers for dyspnea; 1,474 providers for oral medications; and 1,526 providers for pain. Of the providers in states for which the QIOs selected management of oral medications, improvement data were available for and calculated on 3,238 providers for dyspnea; 3,148 providers for oral medications; and 3,324 providers for pain. Of the providers in states for which the QIOs selected pain interfering with activity, improvement data were available for and calculated on 530 providers for dyspnea; 519 providers for oral medications; and 542 providers for pain.

QIO = quality improvement organization.

TABLE II.19

SUR ESTIMATES OF EFFECTS OF QIO STATEWIDE WORK ON HHA OUTCOMES

Quality Measure
(Dependent Variable)                 (1)          (2)          (3)

Management of Oral Medications
  B(a)                                0.79         1.17**       1.25**
  (St. Err.)                         (0.72)       (0.041)      (0.048)
  [R2]                               [0.26]       [0.28]       [0.58]
  Number                              5,539        5,498        5,490

Pain Interfering with Activity
  B(a)                               -0.03         0.45         0.31
  (St. Err.)                         (0.64)       (0.44)       (0.39)
  [R2]                               [0.24]       [0.25]       [0.46]
  Number                              5,329        5,289        5,287

Dyspnea
  B(a)                                2.02*        2.09**       1.83**
  (St. Err.)                         (0.92)       (0.76)       (0.37)
  [R2]                               [0.23]       [0.26]       [0.54]
  Number                              5,603        5,562        5,562

Model Includes Controls for:
  Baseline Level of Selected
  Quality Measure                     Yes          Yes          Yes
  All Provider and County
  Baseline Characteristics(b)         No           Yes          Yes
  Nonfocus Quality Measures(c)        No           No           Yes

Joint Statistical Significance(d)     0.0026***    0.0000***    0.0001***

Source: Home Health Compare, baseline data collected September 2004-August 2005 and follow-up data collected March 2007-February 2008.

Note: Results weighted all providers equally. Results were estimated simultaneously using seemingly unrelated regression (SUR). Coefficients indicate the average percentage point difference in change in the outcome measure between providers in states in which that measure was selected for statewide improvement and providers in states in which the measure was not chosen (holding other observables constant). All specifications control for baseline levels of the outcome measure.

*p<.10; **p<.05; ***p<.01 (two-tailed tests).

a SUR coefficient for the indicator of whether the provider is in a state where the QIO selected the outcome in question as the focus of statewide improvement efforts. It captures the average difference in baseline-to-follow-up change in the outcome between providers in states where the outcome was selected by the QIO for quality improvement and providers in states where it was not, holding other covariates constant.

b Controls for binary indicators of ownership type (for-profit; government; nonprofit, private; nonprofit, religious; nonprofit, other); date of certification (pre-1990, 1990s, 2000 or later); and whether the agency provides medical social services. The specification also includes controls for county-level characteristics: whether or not the provider's county is part of a metropolitan area; active physicians per 1,000 population; number of nurses per 1,000 population; per capita income; the poverty rate; and the percentage of the population ages 19 or younger, ages 65 or over, with four or more years of college, who are Hispanic, who are black, and without health insurance, respectively.

c Includes all controls in Model 2, plus average baseline and change in nonselected measures of percentages of patients improving in bathing, transferring in and out of bed, urinary incontinence, and moving around independently; and the percentage discharged from HHA care and living at home.

d P-value of joint significance of the three statewide measure selection indicator coefficients.

B = beta coefficient on the measure selection indicator from the regression model.
HHA = home health agency.
IP = identified participant.
QIO = quality improvement organization.
St. Err. = standard error.

TABLE II.20
ESTIMATES OF ASSOCIATION BETWEEN IPG PENETRATION AND CHANGE ON THE
APPROPRIATE CARE MEASURE INDEX
(1)
Quality Measure
ACM
(St. Err.)

Controls Only for Baseline ACM
-0.0008
(0.0102)

(2)
All Provider and County
Controlsa
0.0013
(0.0097)

0.499
2,377

0.532
2,353

R2
Number
Source: Hospital Compare, baseline data collected July 2004-June 2005 and follow-up data collected October 2006-September 2007.

Note: Results estimated using ordinary least squares (OLS). Providers weighted by total numbers of patients. Coefficients indicate the average percentage point change in the dependent variable for a one percentage point change in IPG penetration. If that association is causal, then the results imply that QIO work with IPs led to an average change in the outcome for each IP equal to the coefficient multiplied by 100.

a Controls for binary indicators of ownership type (for-profit; government; nonprofit, private; nonprofit, religious; nonprofit, other); hospital type (acute care or critical access); and hospital size (indicator for being in the largest quartile of facilities). The specification also includes controls for county-level characteristics: whether or not the provider's county is part of a metropolitan area; active physicians per 1,000 population; number of nurses per 1,000 population; per capita income; the poverty rate; and the percentage of the population ages 19 or younger, ages 65 and over, with four or more years of college, who are Hispanic, who are black, and without health insurance, respectively.
*p<.10; **p<.05; ***p<.01 (two-tailed tests).
ACM = appropriate care measure.
IP = identified participant.
IPG = identified participant group.
St. Err. = standard error.
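The Note's interpretation of the coefficient can be illustrated with a minimal sketch (not the authors' code) of the model (1) specification, assuming a hospital-level pandas DataFrame df with hypothetical column names:

    # Minimal sketch, not the authors' code; column names are hypothetical.
    import statsmodels.api as sm

    X = sm.add_constant(df[["baseline_acm", "ipg_penetration"]])
    # Providers weighted by total patient counts, per the Note above.
    res = sm.WLS(df["change_acm"], X, weights=df["total_patients"]).fit()

    beta = res.params["ipg_penetration"]
    # Penetration is measured in percentage points, so under a causal reading
    # the implied average change in the outcome for each IP is beta * 100.
    print(res.summary())
    print("implied per-IP change:", beta * 100)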


variable descriptive statistics are presented in Appendix Table D.11. 10 Note that in addition to
the lack of an association between IPG penetration and the composite ACM scale, we also found
no association between IPG penetration and any of the individual ACM items (see Appendix
Table D.7).
F. CONCLUSIONS
Nursing homes’ performance improved during the Eighth SOW for three of the four focus
outcomes—pressure ulcers, physical restraints, and pain. The percentage of patients with the
fourth outcome, becoming more anxious or depressed, changed little. At the state level, nearly all
states saw improvements in the physical restraint and pain measures, and about two-thirds of
states had reductions in the prevalence of pressure sores. As in the provider level results, there
was little change across states in the depression/anxiety measure. The IPG penetration analyses
provided evidence that QIOs contributed to the improvements in the pressure ulcer, physical
restraints, and pain measures.
Home health agencies improved performance on most publicly reported outcomes during the Eighth SOW. The provider level improvements translated to the state level as well, with nearly all states experiencing improvements in the index of patient functioning measures and more than two-thirds in the ACH measure. Higher IPG penetration rates were associated with larger improvements in the ACH measure. We also found evidence to support favorable effects from QIOs' statewide efforts; these were associated with improvements in two of the three outcomes examined (the measures of oral medication management and of dyspnea, the exception being the pain measure).

10 Appendix Table D.6 contains ordinary least squares (OLS) regression results for associations between those outcomes and change in ACM during the Eighth SOW. Results are also provided for SCIP improvement. Corresponding results for predictors of improvement on individual ACM and SCIP items are presented in Appendix Tables D.7 and D.8, respectively.


Hospitals made large gains in care quality as gauged by the indexes of the ACM and SCIP measures. Improvement occurred in all states and in all but a small fraction of facilities. However, we found no evidence that those improvements were attributable to QIOs' work with identified participants.
There are potential limitations to our analyses. Providers missing data at either the baseline or follow-up points could introduce bias of unknown size and direction into both the descriptive and impact analyses. The rates of incomplete data ranged from 40 to 50 percent for nursing homes, and roughly 10 to 30 percent for home health agencies. For hospitals, rates of missing data were very low for the ACM measures, but somewhat higher for the SCIP measures, as these had only recently been introduced during the baseline period. The nursing homes and hospitals with missing data tended to be smaller providers, and thus represented fewer patients.
Although we had measures of provider size to use as weights for the nursing home and hospital analyses, we lacked such weights for the home health analyses. Performing unweighted analyses (that is, weighting each agency with a weight of one) tends to give more weight to smaller agencies; the effect of this is unknown. The results for the nursing home and hospital analyses did not differ much between the weighted and unweighted analyses.
Another potential limitation is the validity of the assumption underlying the impact analyses based on IPG penetration rates, namely that they are measures of QIO involvement that are independent of other unobserved factors that might also influence quality improvement. Table II.15 showed that, as expected, smaller and more sparsely populated states had higher IPG penetration rates; whether such states also tend to have providers that are more highly motivated and capable of improving quality, or have health care environments more conducive to quality improvement, is unclear. If such associations exist, it is also unclear whether the several regional characteristic variables in our regressions would completely control for such effects (though it


seems unlikely). In most nonexperimental study designs the key assumptions may be highly
plausible or reasonable, but generally cannot be confirmed with certainty.
A limitation of our data is that there are only 51 observations for variation in IPG penetration rates. Table II.15 also reveals little variability in IPG penetration rates at the lower end of the distribution in some of the provider settings; for example, for home health agencies, there are 18 states with IPG penetration rates ranging from 20.0 to 20.7, while at the higher end of the distribution there are a few states with very high rates. Our estimates are thus subject to these distributions of the IPG penetration rates.
Finally, as described further in the discussion of the provider survey analysis in Chapter IV, providers' IPG status is not a sharp indicator of involvement with their QIOs. Many non-IPGs in the provider survey indicated they had worked with their QIO. Others have noted the potential for measurement error when using IPG status as a binary indicator of exposure to the QIO program (Jencks 2005). Although the effect on our results of substantial QIO involvement with non-IPGs is unknown, in general, such "contamination" or "spillover" of the intervention to the control or comparison group tends to bias the estimated effects downward (that is, the intervention effects appear smaller than they truly are).
In summary, our analyses of the Medicare Compare data found widespread and substantial
improvement in the quality measures nationwide for providers and states. Although the
magnitudes of our estimates may be affected by study and data limitations, we do find evidence
that QIOs have contributed to at least some of these improvements.


III. MECHANISMS AND CASE STUDY ANALYSIS

The key research question addressed in this chapter is “What mechanisms might underlie
performance improvement in selected states?” Case studies of five states with different patterns
of improvement during the Eighth Scope of Work (SOW) on two measures of hospital surgical
infection prevention provide insights on this question (details of how the states were selected are
contained in Appendix B). We present our results by specific research questions after a brief
summary.
In analyzing the discussions with the Quality Improvement Organizations (QIOs) and hospital associations that participated in the case studies, the reported timing of actions in relation to a state's pattern of improvement was a critical factor that enabled us to distinguish more likely from less likely explanations. In other words, states with high baseline performance and relatively less improvement during the Eighth SOW (high-baseline states) should be able to point to actions prior to the start of the Eighth SOW that likely contributed to the high baseline we observed. States with low baseline performance (low-baseline states) but substantial improvements of 14 percentage points or more (high-improving states) should be able to point to reasons for their substantial improvements during the Eighth SOW period.
A. SUMMARY
Although the story in each state was unique, respondents helped us identify several factors
likely to have influenced the state-level trends in the surgical care measures, namely QIO
actions, hospital association activities, and actions of large health systems. More active use of
Institute for Healthcare Improvement (IHI) resources, such as educational teleconferences during its
100,000 Lives Campaign, in the high-improving states also might have been a factor. In addition,
many of the respondents noted that including the two surgical infection prevention measures of interest in the set of measures hospitals must publicly report to receive their full Medicare payment update led to near-universal reporting by the middle of the Eighth SOW. Respondents believed that this public reporting had boosted improvement both in their states and nationally during the Eighth SOW.
Each QIO pointed to some actions it took that logically might have contributed to
improvement on the relevant time frame (pre-Eighth SOW for the high-baseline states and
during the Eighth SOW for the high-improving states), but the nature of the actions varied
widely. These actions ranged from encouraging the dominant local health system to use its own
relatively sophisticated quality improvement infrastructure to foster improvement on surgical
measures; to convening a hospital collaborative; to an effort consisting of several parts, including
intensive site visits to hospitals, regional in-person meetings with/open to all hospitals, and
instigating letters to be sent from a highly respected surgeon to all surgeons in the state urging
their support for the measures.
QIO actions were not the only relevant factors contributing to improvement in these states, we were told. All five had active hospital associations, and in three cases hospital association activities had likely played a role in the measure improvement; that is, one or more of the respondents told us of relevant hospital association activities targeted to these measures that preceded the high rate. Also, health system organizations and/or VHA, Inc. (formerly Voluntary Hospitals of America) were credited with a role in measure improvement during the relevant periods in three states. In two cases, the hospital associations did not believe the QIOs' efforts were likely to have been a major reason for the improvement; although this casts some doubt on the QIOs' belief that their actions contributed to the improvement, the hospital associations did not seem to be specifically aware of the activities that the QIOs described to us as contributing.

Therefore we draw no conclusion either way, and all the activities that were mentioned by respondents as potentially contributing to the improvements are considered below.
The relatively low baselines in the high-improving states were not likely due to differences in the barriers they faced to improvement. Rather, it seems likely that the high baselines in the high-baseline states were the result of quality improvement-related activities by QIOs and others that occurred earlier than similar activities in other states.
The discussions might provide helpful insights into possible reasons for improvement, but they are not foolproof, as some other perceptions of the respondents do not seem to be reflected in the data. For example, one hospital association from a high-baseline state explained that state-based public reporting pushes hospitals to improve more than the national public reporting effort does, because the media more frequently report on quality on the basis of state-produced data. However, given the timing of the state's public reporting initiative, we would have expected to see more improvement during 2005-2007 than we did (the state's rate of improvement was unremarkable during this period). A high-improving state's QIO described with great pride the Surgical Care Improvement Project (SCIP) collaborative undertaken in the Seventh SOW, when in fact the state's baseline prior to the Eighth SOW was relatively low.
B. RESULTS BY RESEARCH QUESTION
1. Did QIO actions play a role in some states' rates of dramatic improvement?
A QIO role in improvement is plausible (although not proven) in each of the states; that is, some QIO actions were consistent in timing with improvements prior to the high baseline or consistent with improvement from the low baseline in all five states. The more modest improvements of high-baseline performers corresponded with a shift of QIO efforts toward fostering improvement on other quality measures in the Eighth SOW.


a. High-Improving States: Actions During the Eighth SOW
During the Eighth SOW, the QIO in high-improving State A made site visits to 30 hospitals with relatively high surgical volume (at least 200 cases per year), visits in which staff performed a concurrent chart review on the surgical infection prevention measures. The visits were meant to convince the hospitals of the value of this type of review and to discuss measures that represent current performance and are based on patients whom the relevant staff can still recall. Although this QIO also engaged in other quality improvement activities around the surgical measures, the other activities were similar to those in the Seventh SOW, which might not have been very effective, given that the state's baseline at the start of the Eighth SOW was relatively low. The QIO's actions are not the only factor that might have encouraged improved performance on these measures in the state; for example, the hospital association's efforts might also have played a role.
The QIO in high-improving State B took advantage of the unusual structure of the state’s
health care system, in which more than 40 percent of the state’s hospitals are owned by a single
health system. At the start of the Eighth SOW, the QIO met with officials at the major health
system to request that they work with their hospitals on improving performance on the surgical
infection-prevention measures. The health system agreed and established a bimonthly surgical
care improvement program work group that began in 2006 and continues today. This particular
health system is nationally known for its improvement capabilities. Nevertheless, the QIO’s
communication with the health system at the start of the Eighth SOW might have been key to its
quality improvement emphasis on the surgical measures. The health system accounts for two-thirds of the hospitals that were included in our analysis.
The strategy for improvement used by the QIO in high-improving State C during the Eighth SOW combined several components: site visits to 40-45 hospitals, regional meetings, and a letter to convince the state's surgeons to support the surgical infection-prevention measures. The QIO
indicated that physician resistance to the measures was a major barrier to improvement.
Therefore the QIO brought a prominent physician (a former president of the American Medical
Association) with them on the site visits and reported that the visits were fairly successful in
drawing physicians as well as quality improvement personnel in the hospitals to attend the presentations they made. They also suggested to a well-respected surgeon that he write a letter to
all the surgeons in the state, asking them to support the measures. He did so. The QIO also
worked with its hospital association to hold four regional meetings in the state; the meetings
were reportedly well-attended and participants received notebooks of best practices and toolkits
for improvement. Of the 40-45 hospitals visited, two-thirds are included in our analysis.
b. High-Baseline States: QIO Actions Prior to the Eighth SOW
The QIO in high-baseline State D began asking hospitals to abstract data on the surgical care
improvement measures of interest as well as other measures during the Sixth SOW. It is
plausible that hospitals’ early experience with seeing their performance on the measures might
have better prepared them to make meaningful changes sooner than hospitals in other states.
Meaningful changes were reported to be observed by the middle of the Seventh SOW. The idea
that this early data-abstraction effort might have contributed to the state’s high baseline prior to
the start of the Eighth SOW is supported by the fact that State D’s baseline scores were in the top
quartile for 10 of 15 measures. However, other factors in this state were also likely to have been
important (discussed below).
The QIO in high-baseline State E convened a collaborative in 2002-2003 focused on
surgical improvement, including the two measures of interest. Because of the state’s small size,
the collaborative included all 10 large hospitals in the state. The QIO developed the collaborative
just after receiving training by the IHI (through its “Breakthrough Series College”) on how to
convene effective collaboratives for improvement. (Other QIOs nationally also received this
training.) The idea that this effort focused on surgical improvement might have contributed to the
state’s high baseline is supported by the fact that State E was not high at baseline across all the
hospital measures; rather, it was below the national median for 4 of 15 measures.
2. Were there differences in the timing of how hospitals participated in or viewed other national-level surgical infection-prevention initiatives that might help explain the different pattern of improvement?
It is possible that some of the improvement in high-improving states could have been related
to more active use of IHI resources during its 100,000 Lives Campaign, which ran for 18 months
ending in June 2006. One of the campaign’s six components was preventing surgical site
infections, including specific actions to improve perioperative antibiotic timing (captured by the
two measures of interest). Respondents in each of the three high-improving states mentioned
connecting hospitals to IHI speakers (two by encouraging hospitals to participate in upcoming
IHI web-based seminars and one that cited surgical-specific teleconferences with area hospitals).
In addition, the campaign “node” in one of the high-improving states was said to be quite active,
and the QIO even found hospitals citing their involvement with IHI as they expressed reluctance
to participate in yet another improvement activity with the QIO. Respondents in the high-baseline states noted that most hospitals participated, but participation was sometimes in name only.
The national-level SCIP is perceived to have focused on the quality measurement aspects of
surgery rather than on quality improvement, and no respondent thought it played a role in
improvement.


3. Did hospitals in the states with low initial rates face barriers to improvement that were overcome?
Hospitals in states with initially low baselines did not appear to face any unique barriers that were then overcome to produce the high improvement. High-baseline State D and high-improving State C both reported closures and serious financial difficulties, which continue, in some of their hospitals over the past three years.
All states reported that physician disagreement with guidelines on which the measures were
based was a barrier and remains so to some degree, particularly with respect to the Prophylactic
Antibiotic Discontinued After Surgery measure. 1 It appears that actions to overcome this barrier
might have occurred at a later time in the states that had low baselines. Consistent with later
timing of improvement in the low-baseline, high-improving states, one state’s letters from the
prominent surgeon to all the surgeons in the state (noted above) might have had an effect; in
another state, the hospital association’s quality expert has held five to seven calls with key
anesthesiologists over the past five years to try to persuade them to support the measures. In
contrast, high-baseline State D reports it was lucky to have had surgeons who were “on the
cutting edge” early on and advocated for antibiotics to be given one hour prior to surgery.
4. What other factors might explain the different patterns of improvement in the high-improving versus high-baseline states?
Health System Organizations. Two of the three high-improving states credited some of the improvement to the actions of one or more health system firms or the VHA within their state. In
high-improving State B, actions by a single health system representing a high proportion of the
state’s hospitals were said to explain most of the improvement. In high-improving State C,

1 For example, physicians were reportedly concerned about the possibility, albeit remote, that patients could develop infections if antibiotics were stopped on the recommended time line.


respondents noted that hospitals often belonged to the VHA, or were Hospital Corporation of
America (HCA) hospitals, both of which were said to have significant ongoing quality
initiatives. One of the high-baseline states also pointed to VHA and other system-run actions as
potentially important, although the timing of these groups’ activities targeting surgical infection
prevention was unknown.
Media Showing Poor Quality. A 2001 report that ranked hospital quality in high-baseline State D 48th in the country likely contributed to statewide motivation to improve early this decade; the hospital association set up an institute for patient quality and safety in 2002, which has since been active in working with hospitals to improve quality on all the relevant measures.
Active Hospital Associations. In addition to the hospital association in a high-baseline state
setting up an institute for quality and safety as just noted, two of the three high-improving states
had hospital associations that were active in attempting to foster improvement on the surgical
infection-prevention measures as well as others; in one, the hospital association’s senior director
for Quality and Research Initiatives, a registered nurse by background, works full time with
hospitals to help them improve their quality performance, primarily on Centers for Medicare &
Medicaid Services (CMS) core measures. In another, the QIO worked with the hospital
association to achieve high attendance at regional meetings held in four locations around the
state and to disseminate best practices and other quality improvement information; in addition,
the hospital association holds an annual awards program at which it recognizes quality
improvement achievements at top hospitals in the state. (The two other states had hospital
associations that were active in quality improvement in specific niches, but their actions were not
relevant to explaining the patterns discussed here.)
Individual Physician Champions. In two states a prominent physician champion for the
surgical infection prevention measures might have contributed to the improvements. In one case,


the physician champion sent a letter to the other surgeons in the state; in another case, the physician champion met with physicians, sometimes at their offices, to persuade them to follow the guidelines on which the measures were based and to encourage others at their organizations to do so.
5. What factors do the QIOs and hospital associations say are associated with improvements at the hospital level?
Factors mentioned by respondents 2 as important influences on improvement on the surgical measures at the hospital level included:
Motivation
• Public reporting. Respondents indicated that when the measures became part of the set that hospitals had to publicly report in order to receive their full payment update, hospitals devoted a great deal of attention to improving on the measures.
• Talking about the cost of not discontinuing antibiotics in a timely manner after surgery was said to focus hospitals' attention on improving on this measure.
• Quality awards programs might boost some hospitals’ efforts toward improving on
the measures.
• Leadership commitment. Although some hospitals’ leaders might be motivated by
public reporting, cost discussions, and the potential for a quality award, others might
have been motivated by other factors; for example, critical access hospitals are said to
be fiercely interested in protecting against the idea that they might be second-rate
hospitals because of their limited services and size.
Resources
• Adequate staff resources. Staff resources must be available to facilitate
improvements; most hospitals in serious financial difficulties were not included in our
analysis because they did not report these measures in the baseline period.
• Access to best practices and helpful information resources. One hospital association
runs a mentoring program that links high performers to those that need more help.

2 All of these items were discussed by both hospital associations and QIOs.


Physician Support
• Presence of a physician champion. One respondent noted that in her state, hospitals
that had a physician champion with a good relationship with hospital “C-suite”
leadership improved the most. The QIOs required hospitals that participated in an identified participant group provider (IPG) with them to designate a team to work on improvement, a team that had to include a physician champion (from a surgical specialty or anesthesiology).
• Anesthesiologists who support the timely initiation of antibiotics measure. One respondent said the chair of anesthesiology needs to provide guidance to the others; if the anesthesiologists agree to take ownership of the antibiotic administration process, that approach works well to support performance on this measure.
• Support from all the various surgical specialties. Respondents often noted certain specialties that were resistant to stopping antibiotics within 24 hours after surgery; the specifics varied by state, but orthopedic, general, and colorectal surgeons were each mentioned in at least one state as particularly resistant to discontinuation on this time frame.
Solving System Issues and Ensuring Reliability
• Operational system issues vary by hospital and must be analyzed and solved. For
example, the location and process of getting an antibiotic to the bedside at the right
time is often a reason for failing to administer antibiotics in a timely manner prior to
surgery.
• Protocols are often helpful for solving system issues and ensuring the reliability of measure compliance. As one respondent noted, performance tends to plummet when measurement stops (or key individuals leave) in hospitals that rely on people paying close attention to measure compliance rather than on changing a standard process.


IV. PROVIDER SATISFACTION

As explained in Chapter I, we focus only on hospitals, nursing homes, and home health
agencies (HHAs), although Westat also surveyed physician practices, Medicare Advantage
health plans, beneficiaries, and stakeholder organizations. Furthermore, among hospitals, we
analyzed only those listed under the Eighth Scope of Work (SOW) task in which the Quality
Improvement Organizations (QIOs) helped hospitals with care for heart attacks, heart failure,
pneumonia, perioperative patients, and systems and organizational change (Task 1c1). We did
not analyze hospitals listed under the Rural Organization Safety Culture Change task (Task 1c2).
As described in Appendix C, we lacked information with which to calculate standard survey
response rates; we thus defined a completed survey as one in which at least one question was
answered and calculated response rates using this definition. As with the analyses of the
Medicare Compare data, we limited the sample to providers in the 50 states and the District of
Columbia, excluding providers in Puerto Rico, Guam, and so on.
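As an illustration of the response-rate definition above, the following is a minimal sketch (not the authors' code), assuming a hypothetical pandas DataFrame named survey with one row per sampled provider and question columns prefixed q_:

    # Minimal sketch, not the authors' code; names are hypothetical.
    import pandas as pd

    q_cols = [c for c in survey.columns if c.startswith("q_")]
    # A "complete" is any record with at least one answered question.
    survey["complete"] = survey[q_cols].notna().any(axis=1)

    # Response rate = completes / providers in the dataset, by group.
    rates = (survey.groupby(["provider_type", "ipg_status"])["complete"]
                   .agg(responses="sum", rate="mean"))
    rates["rate"] = rates["rate"] * 100
    print(rates)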
Nationwide survey response rates varied by provider type and identified participant group
provider (IPG or IP) status. Across all provider types at the national level, IPGs responded at a
higher rate than non-IPGs (Table IV.1). Despite the constraints on our ability to calculate
standard response rates, the nonresponse gaps between IPGs and non-IPGs in Table IV.1 are
similar to those reported by Westat. 1

1 The response rates reported by Westat for IPGs and non-IPGs, respectively, were: nursing homes, 93 percent and 83 percent; home health agencies, 97 percent and 87 percent; and hospitals, 94 percent and 86 percent.


TABLE IV.1
NUMBER OF RESPONSES AND RESPONSE RATES (PERCENTAGES), BY PROVIDER TYPE AND IPG STATUS

                          Number of IPG              Number of Non-IPG
                          Responses (Percentage)     Responses (Percentage)
Nursing Homes             2,388 (96)                 2,853 (83)
Home Health Agencies      1,866 (99)                 2,397 (88)
Hospitals                 1,495 (96)                 1,843 (89)

Source: Westat de-identified survey of providers May-September 2007; dataset provided to MPR by CMS.

Note: A complete survey is defined as one having a response to at least one survey question; the response rate is calculated as the number of completes divided by the number of providers in the dataset.

CMS = Centers for Medicare & Medicaid Services; IPG = identified participant group provider; MPR = Mathematica Policy Research, Inc.
A. TOPIC AREAS AND GROUPING OF PROVIDERS
We examine the individual survey questions within each of the six main survey topics
developed by Westat, which covered providers’ (1) use of email and the internet to receive,
circulate, or access quality information and QIO resources; (2) knowledge of Centers for
Medicare & Medicaid Services (CMS) programs; (3) satisfaction with their local QIO; (4)
perceptions of the value of their local QIO; (5) interactions with their QIO; and (6) sources of
quality information (Table IV.2). For the topic areas with many questions (for example,
Providers’ Satisfaction with Local QIO and Providers’ Perceptions of QIO’s Value), we present
results for selected questions here with results for the remaining questions in Appendix E.
We used responses to the question “Since August 2005, have you received assistance from
[your state QIO]?” to organize providers into three groups: (1) IPGs, (2) non-IPGs that reported
receiving QIO help, and (3) non-IPGs that reported no help (the number of IPGs that reported
receiving no help was very small and is grouped together with IPGs that received help).


TABLE IV.2
PROVIDER SATISFACTION SURVEY TOPICS AND QUESTIONS
(Response categories shown in parentheses)

Providers' Use of Internet to Access Quality Information
  Use of e-mail to receive or circulate quality improvement information (Yes/no)
  Use of internet to access information from QIO websites (Yes/no)

Providers' Knowledge of QIO and CMS Programs
  Heard of the local QIO (Yes/no or not sure)
  Aware of CMS pay-for-performance programs (Yes/no or not sure)
  Aware that QIOs work with many different health care providers and organizations (Yes/no or not sure)
  Heard of Medicare Compare (Nursing Home, Home Health, and Hospital Compare) (Yes/no or not sure)

Providers' Satisfaction with Local QIO
  Whether since August 2005 had received assistance from their QIOa (Yes/no)
  Whether since August 2005 had received information from their QIO (Yes/no)
  Whether since August 2005 had contacted their QIO for assistance (Yes/no)
  Usefulness of information from QIO (Very useful, useful, somewhat useful, not at all useful)
  Satisfaction with way in which information presented (Satisfaction scale: very satisfied, somewhat satisfied, neither satisfied nor dissatisfied, somewhat dissatisfied, very dissatisfied)
  Methods by which QIO provided help (Endorsement of each of a list of itemsb)
  Frequency of interactions with QIO (Once a week or more, once every two weeks, once per month, less than once per month)
  Satisfaction with amount of contact with QIO (Satisfaction scale as above)
  Satisfaction with ease of access to QIO (Satisfaction scale as above)
  Ability to get through to QIO (Always, usually, sometimes, never)
  Satisfaction with QIOs' timeliness of response to requests for help (Satisfaction scale as above)
  Satisfaction with professionalism, courtesy, and respectfulness of QIO staff (Satisfaction scale as above)
  Overall satisfaction with QIO (Satisfaction scale as above)

Providers' Perceptions of QIO's Value
  Whether assistance was key to providers' quality improvement efforts (Agreement scale: strongly agree, somewhat agree, neither agree nor disagree, somewhat disagree, strongly disagree)
  Used the information provided by the QIO (Agreement scale as above)
  Service from QIO was worth time or effort of provider staff (Agreement scale as above)
  Provider feels better off for having received QIO services (Agreement scale as above)
  Whether provider feels it could not have gotten to current state of quality improvement without the QIO (Agreement scale as above)
  Rating of QIO's contribution (Zero to 10 scale, 0 = "QIO did not contribute at all" and 10 = "QIO contribution indispensable")

Provider's Preferred Means of Interacting with QIO
  Preferred methods of receiving QIO help (Endorsement of each of a list of itemsb)
  Most preferred method of receiving QIO help (Selection of one of the listed items)

Provider's Sources of Quality Improvement Information
  Prefer another source of information or assistance besides QIO (Yes/no/would depend on cost and other factors)
  Which sources does the provider use (Endorsement of each of a list of potential sourcesc)
  Which source does the provider find most useful

Source: Westat survey instruments for nursing homes, home health agencies, and hospitals.

Note: The survey also asked providers whether since August 2005 they had received help from their QIO, had received information from their QIO, or whether they had contacted their QIO for help.

a Assistance from the QIO was defined in the survey to include site visits, one-on-one telephone communication, conference calls, training workshops, emails, or listservs.
b Such as site visits, training workshops or seminars, one-to-one communication, or telephone conferences.
c For example, the Agency for Healthcare Research and Quality (AHRQ), the Institute for Healthcare Improvement (IHI), or provider or trade associations.

CMS = Centers for Medicare & Medicaid Services
QIO = Quality Improvement Organization.


We focus on national-level averages because state-level sample sizes for many states are relatively small.
B. RESULTS
1. Awareness of the Local QIO and of Other CMS Initiatives
As mentioned, nearly all (94 percent to 99 percent) IPG providers reported receiving assistance from their QIO, but large proportions of non-IPG providers also reported receiving help, ranging from 70 percent for nursing homes to 94 percent for hospitals (Table IV.3), so that
the sample sizes of non-IPG providers receiving no help were relatively small, especially for
hospitals. Providers that received QIO assistance, whether IPG or non-IPG, were more likely
than those receiving no assistance to use email to send and receive quality improvement
information and to visit the website of their local QIO for such information.
Furthermore, nursing homes and home health agencies exhibited a gradient, with IPGs more
likely to use email and the web than non-IPGs who received QIO help; non-IPGs receiving help
were in turn more likely to use these electronic resources than non-IPGs receiving no help
(Table IV.3). Non-IPG hospitals receiving QIO help had about the same rate of using these tools
as IPG hospitals, however.
Compared with non-IPGs who received no QIO help, IPGs and non-IPGs receiving help had
greater familiarity with the local QIO, CMS’ pay-for-performance (P4P) programs, and
Medicare Compare tools (Table IV.4). Awareness of CMS’ P4P programs was lowest among
nursing homes (ranging from 43 percent to 70 percent across the three groups). Awareness of
P4P was high among HHAs and hospitals that were IPGs and non-IPGs receiving help (91
percent to 96 percent for HHAs and 95 percent to 98 percent for hospitals), but considerably
lower for non-IPGs receiving no help (60 percent for HHAs and 76 percent for hospitals).


TABLE IV.3
PROVIDERS' REPORTED RECEIPT OF QIO ASSISTANCE AND USE OF INTERNET TO ACCESS QUALITY INFORMATION

Question                                  Number    Percentage Answering Yes

Received Assistance from the Local QIO
Nursing Homes
  IPG                                     2,277     94.3
  Non-IPG                                 2,751     70.5
Home Health Agencies
  IPG                                     1,802     98.3
  Non-IPG                                 2,273     81.4
Hospitals
  IPG                                     1,417     98.8
  Non-IPG                                 1,756     94.1

Use E-mail to Receive or Circulate Information About Quality Improvement
Nursing Homes
  IPG                                     2,261     92.5
  Non-IPG received QIO help               1,933     83.3
  Non-IPG received no QIO help              809     66.3
Home Health Agencies
  IPG                                     1,787     92.8
  Non-IPG received QIO help               1,845     86.1
  Non-IPG received no QIO help              419     56.3
Hospitals
  IPG                                     1,408     99.2
  Non-IPG received QIO help               1,647     97.5
  Non-IPG received no QIO help              103     80.6

Use Internet to Access Information from QIO Website About Quality Improvement
Nursing Homes
  IPG                                     2,260     92.5
  Non-IPG received QIO help               1,932     90.0
  Non-IPG received no QIO help              810     66.3
Home Health Agencies
  IPG                                     1,789     97.3
  Non-IPG received QIO help               1,753     95.2
  Non-IPG received no QIO help              305     73.1
Hospitals
  IPG                                     1,408     97.4
  Non-IPG received QIO help               1,646     97.6
  Non-IPG received no QIO help              102     70.6

Source: Westat de-identified survey of providers May-September 2007; dataset provided to MPR by CMS.

Note: All differences statistically significant at p<0.001, chi-squared test.

CMS = Centers for Medicare & Medicaid Services
IPG = identified participant group provider
MPR = Mathematica Policy Research, Inc.
QIO = Quality Improvement Organization.
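The significance claim in the table notes can be checked from the published counts. Below is a minimal sketch (not the authors' code) for one item, using an illustrative yes/no split implied by the nursing home rows of Table IV.3 (2,277 IPGs at 94.3 percent; 2,751 non-IPGs at 70.5 percent):

    # Minimal sketch; the yes/no splits are derived from the table, not raw data.
    import numpy as np
    from scipy.stats import chi2_contingency

    counts = np.array([
        [2147, 130],   # IPG: yes, no (94.3% of n = 2,277)
        [1939, 812],   # non-IPG: yes, no (70.5% of n = 2,751)
    ])
    chi2, p, dof, _ = chi2_contingency(counts)
    print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")   # p << 0.001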


TABLE IV.4
PROVIDERS' KNOWLEDGE OF QIO AND CMS PROGRAMS

Question                                  Number    Percentage Answering Yes

Heard of the Local QIO
Nursing Homes
  IPG                                     2,268     98.4
  Non-IPG received QIO help               1,932     96.7
  Non-IPG received no QIO help              809     85.9
Home Health Agencies
  IPG                                     1,800     99.1
  Non-IPG received QIO help               1,848     98.0
  Non-IPG received no QIO help              421     80.1
Hospitals
  IPG                                     1,414     99.9
  Non-IPG received QIO help               1,652     99.7
  Non-IPG received no QIO help              103     89.3

Aware of CMS Pay-for-Performance Programs
Nursing Homes
  IPG                                     2,268     71.1
  Non-IPG received QIO help               1,933     63.9
  Non-IPG received no QIO help              809     43.3
Home Health Agencies
  IPG                                     1,799     96.1
  Non-IPG received QIO help               1,844     91.4
  Non-IPG received no QIO help              421     60.1
Hospitals
  IPG                                     1,415     97.7
  Non-IPG received QIO help               1,648     94.7
  Non-IPG received no QIO help              103     77.7

Heard of Medicare Compare (Nursing Home, Home Health, and Hospital Compare)
Nursing Homes
  IPG                                     2,264     92.7
  Non-IPG received QIO help               1,933     89.6
  Non-IPG received no QIO help              809     72.2
Home Health Agencies
  IPG                                     1,799     97.6
  Non-IPG received QIO help               1,846     92.4
  Non-IPG received no QIO help              418     64.4
Hospitals
  IPG                                     1,415     96.3
  Non-IPG received QIO help               1,650     92.4
  Non-IPG received no QIO help              103     59.2

Source: Westat de-identified survey of providers May-September 2007; dataset provided to MPR by CMS.

Note: All differences statistically significant at p<0.001, chi-squared test.

CMS = Centers for Medicare & Medicaid Services
IPG = identified participant group provider
MPR = Mathematica Policy Research, Inc.
QIO = Quality Improvement Organization.


2. Providers' Satisfaction with Their QIOs
Frequency of contacts and satisfaction with QIO information and with QIO relations were highest among IPGs, next highest among non-IPGs receiving QIO help, and lowest among non-IPGs with no QIO assistance (Table IV.5). About a quarter of IPG nursing homes, nearly half of IPG hospitals, and 38 percent of IPG HHAs reported having contact with their QIO at least once every two weeks. The majority of providers across all three groups, even among non-IPGs receiving no help, felt information from the QIO was either useful or very useful, and those providers reported satisfaction with ease of access to the QIO and with their overall relationship with the QIO (Table IV.5).
3. Perceived Value of QIO Assistance Among Providers
The same order—most favorable ratings among IPGs, next most favorable among non-IPGs receiving help, and least favorable among non-IPGs with no help—held again for providers' perceptions of the value of QIO services (Table IV.6). HHAs responded most favorably to the statement "we could not have gotten to where we are with quality improvement without [our state] QIO's help," with 90 percent of IPG agencies either somewhat or strongly agreeing. About one quarter of IPG hospitals and nursing homes did not agree with this statement.
When asked to rate QIO contributions to their own quality improvement efforts on a scale ranging from 0 to 10, with 10 being the greatest contribution, more than three-quarters of IPG nursing homes and hospitals and 90 percent of IPG HHAs gave ratings of 7 or greater (Table IV.6). The proportion of non-IPG home health agencies receiving QIO help who gave a rating of 7 or more was also relatively high (79 percent); the proportions for non-IPG nursing homes and hospitals receiving help were somewhat lower (59 percent and 71 percent, respectively). Finally, non-IPG providers receiving no help had the lowest rates of giving a score of 7 or higher (30 percent, 42 percent, and 32 percent for nursing homes, HHAs, and hospitals, respectively). Table IV.6 also shows numeric averages of the rating scale for the different groups.


TABLE IV.5
PROVIDERS' SATISFACTION WITH THEIR LOCAL QIOs
(Percentages Unless Otherwise Noted)

How Frequently in Contact with QIO
                                   Number   Once a Week   Once Every   Once per   Less than Once
                                            or More       Two Weeks    Month      per Month
Nursing Homes
  IPG                              2,043    9.7           15.9         48.6       25.8
  Non-IPG received QIO help        1,739    0.5           1.1          25.9       72.5
  Non-IPG received no QIO help       498    0.0           0.2          10.8       89.0
Home Health Agencies
  IPG                              1,544    18.0          20.0         43.6       18.5
  Non-IPG received QIO help        1,611    10.5          10.0         35.1       44.4
  Non-IPG received no QIO help       236    3.4           1.7          13.1       81.8
Hospitals
  IPG                              1,111    21.3          27.2         38.4       13.1
  Non-IPG received QIO help        1,414    12.1          19.2         36.9       31.8
  Non-IPG received no QIO help        58    3.5           5.2          6.9        84.5

How Useful Was Information Received from QIO?
                                   Number   Very Useful   Useful       Somewhat   Not at All
                                                                       Useful     Useful
Nursing Homes
  IPG                              2,225    61.7          25.7         11.3       1.4
  Non-IPG received QIO help        1,932    40.3          39.3         18.9       1.5
  Non-IPG received no QIO help       498    13.1          40.6         39.6       6.8
Home Health Agencies
  IPG                              1,787    77.5          16.8         5.4        0.3
  Non-IPG received QIO help        1,848    62.0          27.7         9.7        0.7
  Non-IPG received no QIO help       234    19.2          43.6         34.6       2.6
Hospitals
  IPG                              1,412    68.6          22.1         8.9        0.5
  Non-IPG received QIO help        1,648    61.4          28.2         10.1       0.3
  Non-IPG received no QIO help        61    13.1          37.7         41.0       8.2

How Satisfied with Ease of Access to QIO
                                   Number   Very          Somewhat     Neither Satisfied   Somewhat or Very
                                            Satisfied     Satisfied    Nor Dissatisfied    Dissatisfied
Nursing Homes
  IPG                              2,213    76.0          17.3         5.2        1.6
  Non-IPG received QIO help        1,885    49.4          29.9         18.3       2.3
  Non-IPG received no QIO help       478    20.5          27.4         47.3       4.8
Home Health Agencies
  IPG                              1,783    82.5          13.6         2.9        1.1
  Non-IPG received QIO help        1,820    67.0          22.0         8.4        2.6
  Non-IPG received no QIO help       234    22.7          28.6         38.5       10.3
Hospitals
  IPG                              1,408    75.9          19.8         2.3        2.0
  Non-IPG received QIO help        1,647    68.0          23.1         5.3        3.6
  Non-IPG received no QIO help        65    29.2          27.7         32.3       10.8

How Satisfied with Relationship with QIO
                                   Number   Very          Somewhat     Neither Satisfied   Somewhat or Very
                                            Satisfied     Satisfied    Nor Dissatisfied    Dissatisfied
Nursing Homes
  IPG                              2,220    80.2          14.4         3.6        1.8
  Non-IPG received QIO help        1,912    58.1          30.3         9.5        2.2
  Non-IPG received no QIO help       492    28.9          34.2         33.9       3.1
Home Health Agencies
  IPG                              1,783    87.7          10.3         1.2        0.7
  Non-IPG received QIO help        1,837    74.1          19.9         4.2        1.8
  Non-IPG received no QIO help       232    36.2          30.6         24.1       9.1
Hospitals
  IPG                              1,407    83.0          13.6         1.9        1.6
  Non-IPG received QIO help        1,643    74.6          19.9         3.0        2.5
  Non-IPG received no QIO help        64    37.5          32.8         23.4       6.3

Source: Westat de-identified survey of providers May-September 2007; dataset provided to MPR by CMS.

Note: All differences statistically significant at p<0.001, chi-squared test. Percentages may not sum to 100 because of rounding.

CMS = Centers for Medicare & Medicaid Services
IPG = identified participant group provider
MPR = Mathematica Policy Research, Inc.
QIO = Quality Improvement Organization.

TABLE IV.6
PROVIDERS' PERCEPTIONS OF QIOs' VALUE
(Percentages Unless Otherwise Noted)

QIO Assistance Was Key to Efficient Implementation of Quality Improvement Projects
                                   Number   Strongly   Somewhat   Neither Agree   Somewhat or
                                            Agree      Agree      nor Disagree    Strongly Disagree
Nursing Homes
  IPG                              2,215    46.7       39.9       8.2             5.2
  Non-IPG received QIO help        1,914    26.5       48.9       17.7            7.0
  Non-IPG received no QIO help       501    13.2       34.5       39.3            13.0
Home Health Agencies
  IPG                              1,783    65.3       29.2       3.5             1.9
  Non-IPG received QIO help        1,832    49.2       38.2       9.0             3.7
  Non-IPG received no QIO help       243    20.2       38.7       26.8            14.4
Hospitals
  IPG                              1,404    43.3       40.8       10.6            5.3
  Non-IPG received QIO help        1,644    36.3       43.6       14.1            6.1
  Non-IPG received no QIO help        64    10.9       31.3       39.1            18.8

Service Received from QIO Was Worth Time/Effort on Part of Our Staff
Nursing Homes
  IPG                              2,216    65.0       24.6       5.1             5.3
  Non-IPG received QIO help        1,908    45.0       39.5       10.7            4.7
  Non-IPG received no QIO help       494    21.1       36.2       33.6            9.1
Home Health Agencies
  IPG                              1,782    78.8       17.3       1.9             2.0
  Non-IPG received QIO help        1,827    64.9       27.3       5.1             2.7
  Non-IPG received no QIO help       240    25.8       37.9       29.2            7.1
Hospitals
  IPG                              1,403    66.1       26.0       5.4             2.5
  Non-IPG received QIO help        1,640    58.1       31.4       6.8             3.8
  Non-IPG received no QIO help        64    20.3       31.3       34.4            14.1

Could Not Have Gotten to Where We Are with Quality Improvement Without QIO's Help
Nursing Homes
  IPG                              2,218    32.6       42.9       13.2            11.4
  Non-IPG received QIO help        1,916    18.1       41.7       23.5            16.7
  Non-IPG received no QIO help       502    10.8       26.3       33.9            29.1
Home Health Agencies
  IPG                              1,784    58.0       31.8       5.9             4.3
  Non-IPG received QIO help        1,828    44.9       35.3       11.5            8.2
  Non-IPG received no QIO help       243    18.1       30.5       30.9            20.6
Hospitals
  IPG                              1,404    33.1       40.8       15.5            10.6
  Non-IPG received QIO help        1,640    30.6       38.9       18.2            12.3
  Non-IPG received no QIO help        65    7.7        27.7       29.2            35.4

Average Rating of QIO Contributions to Quality Improvement Projects (0 to 10 Scale)
                                   Number   Rating of 7   Rating        Rating less   Average Rating
                                            or Greater    from 4 to 6   than 4        (numeric average)
Nursing Homes
  IPG                              2,215    77.6          15.7          6.7           7.5
  Non-IPG received QIO help        1,911    58.5          30.7          10.9          6.4
  Non-IPG received no QIO help       496    30.0          35.3          34.7          4.5
Home Health Agencies
  IPG                              1,783    90.0          8.1           1.9           8.4
  Non-IPG received QIO help        1,828    78.7          15.9          5.5           7.6
  Non-IPG received no QIO help       235    41.7          27.2          31.1          5.2
Hospitals
  IPG                              1,403    75.5          19.3          5.2           7.5
  Non-IPG received QIO help        1,639    70.8          21.2          8.0           7.1
  Non-IPG received no QIO help        65    32.3          26.2          41.5          4.5

Source: Westat de-identified survey of providers May-September 2007; dataset provided to MPR by CMS.

Note: All differences statistically significant at p<0.001, chi-squared test. Percentages may not sum to 100 because of rounding. Rating columns show percentages; the final column is the numeric average of the 0-10 ratings.

CMS = Centers for Medicare & Medicaid Services
IPG = identified participant group provider
MPR = Mathematica Policy Research, Inc.
QIO = Quality Improvement Organization.
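The rating summaries in the last panel of Table IV.6 are straightforward to compute; the following is a minimal sketch (not the authors' code), assuming a hypothetical DataFrame named survey with a 0-10 qio_rating column and provider_type and group columns for the three provider groups:

    # Minimal sketch, not the authors' code; names are hypothetical.
    import pandas as pd

    bins = pd.cut(survey["qio_rating"], bins=[-0.5, 3.5, 6.5, 10.5],
                  labels=["<4", "4-6", ">=7"])
    shares = (pd.crosstab([survey["provider_type"], survey["group"]], bins,
                          normalize="index") * 100)          # row percentages
    averages = survey.groupby(["provider_type", "group"])["qio_rating"].mean()
    print(shares.round(1))
    print(averages.round(1))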

4. Providers' Preferences for Interactions with Their QIO
Site visits seemed to be the least popular mode of QIO assistance, and training workshops the most popular, with interest in site visits decreasing from IPGs to non-IPGs receiving assistance to non-IPGs without assistance. Even non-IPGs receiving no help were open to training workshops, with 86 percent to 91 percent of providers expressing willingness to attend workshops (Table IV.7).
When asked to state a favorite form of contact (including choices not asked about earlier, such as conference calls, email, and the web), most providers chose email. Preference for site visits and training workshops was highest among IPGs and lowest among non-IPGs without help; conversely, preference for email was lowest among IPGs and highest among non-IPGs without help.
5. Providers' Sources for Quality Improvement Information
Finally, the majority of providers, even non-IPGs without help, said they did not want to use an alternative organization (rather than their QIO) as a source for quality improvement assistance (Table IV.8). For all three provider types, more than 80 percent of IPGs and non-IPGs with assistance said they would not want to seek help from another organization, regardless of cost and other factors (Table IV.8). Substantial proportions of providers considered their local QIO the most useful source of information and assistance, ranging from 13 percent among non-IPG nursing homes without assistance to 50 percent among IPG nursing homes.


TABLE IV.7
PERCENTAGES OF PROVIDERS EXPRESSING PREFERENCES FOR TYPES OF INTERACTIONS WITH QIOs

                                   IPG             Non-IPG Received     Non-IPG Received No
                                   (total number)  QIO Help             QIO Help
                                                   (total number)       (total number)

Would Like to Receive or Continue to Receive Information or Assistance from QIO Through
Nursing Homes
  Site visits                      81.8 (2,153)    53.5 (1,843)         36.6 (470)
  Training workshops               96.4 (2,205)    96.2 (1,905)         86.1 (490)
  One-to-one telephone calls       85.3 (2,131)    72.5 (1,861)         53.6 (476)
Home Health Agencies
  Site visits                      82.2 (1,711)    60.6 (1,753)         47.0 (232)
  Training workshops               96.7 (1,760)    95.5 (1,823)         89.9 (238)
  One-to-one telephone calls       91.7 (1,723)    85.7 (1,779)         74.6 (236)
Hospitals
  Site visits                      71.2 (1,321)    53.0 (1,546)         32.8 (58)
  Training workshops               97.3 (1,385)    96.0 (1,621)         90.8 (65)
  One-to-one telephone calls       92.8 (1,351)    92.8 (1,602)         81.0 (58)

Most Preferred Method
Nursing Homes
  Number                           2,198           1,900                491
  Site visits                      19.3            7.0                  3.9
  Training workshops               27.8            23.3                 19.6
  One-to-one telephone calls       5.2             4.6                  3.9
  Telephone conference calls       5.4             4.2                  4.3
  E-mail                           37.6            48.6                 48.7
  Website                          2.5             6.3                  6.3
  Other                            2.2             6.1                  13.4
Home Health Agencies
  Number                           1,774           1,827                246
  Site visits                      14.9            9.6                  6.5
  Training workshops               22.7            19.6                 17.1
  One-to-one telephone calls       8.2             7.4                  7.3
  Telephone conference calls       7.6             5.6                  4.9
  E-mail                           39.7            46.0                 44.7
  Website                          4.9             6.4                  7.3
  Other                            2.1             5.3                  12.2
Hospitals
  Number                           1,399           1,631                65
  Site visits                      7.0             5.2                  4.6
  Training workshops               22.7            18.6                 26.2
  One-to-one telephone calls       8.9             10.4                 6.2
  Telephone conference calls       9.4             8.2                  7.7
  E-mail                           49.7            53.6                 47.7
  Website                          1.6             3.3                  3.1
  Other                            0.6             0.8                  4.6

Source: Westat de-identified survey of providers May-September 2007; dataset provided to MPR by CMS.

Note: All differences statistically significant at p<0.001, chi-squared test. Percentages may not sum to 100 percent due to rounding.

CMS = Centers for Medicare & Medicaid Services; IPG = identified participant group provider; MPR = Mathematica Policy Research, Inc.; QIO = Quality Improvement Organization.


TABLE IV.8
PROVIDERS' SOURCES OF INFORMATION FOR QUALITY IMPROVEMENT (PERCENTAGES)

                                            IPG             Non-IPG Received     Non-IPG Received
                                                            QIO Help             No QIO Help

Would Prefer to Use Alternate Organization for Quality Improvement Assistance
Nursing Homes
  Number                                    2,205           1,831                672
  Yes                                       2.9             7.5                  9.1
  No                                        88.0            84.8                 78.3
  Would depend on cost and other factors    4.8             5.0                  7.0
  Don't know                                4.3             2.7                  5.7
Home Health Agencies
  Number                                    1,771           1,788                316
  Yes                                       1.8             3.9                  8.2
  No                                        91.9            90.3                 75.3
  Would depend on cost and other factors    2.9             3.3                  10.4
  Don't know                                3.4             2.6                  6.0
Hospitals
  Number                                    1,384           1,615                87
  Yes                                       6.8             8.5                  20.7
  No                                        80.3            80.1                 59.8
  Would depend on cost and other factors    8.6             7.7                  10.3
  Don't know                                4.3             3.7                  9.2

What other organizations do you turn to when you need information or assistance for quality improvement initiatives?a
Nursing Homes, percentage (Total N)b
  CMS                                       90.1 (2,208)    92.5 (1,902)         81.6 (772)
  NH Compare                                86.9 (2,198)    86.2 (1,885)         72.6 (765)
  Local QIO                                 95.0 (2,228)    90.3 (1,898)         58.2 (761)
  MedQIC                                    40.0 (2,074)    21.4 (1,821)         9.1 (739)
  AHRQ                                      25.7 (2,038)    24.0 (1,818)         10.6 (744)
  IHI                                       21.4 (2,029)    20.9 (1,813)         10.9 (736)
  AHQA                                      34.2 (2,054)    34.5 (1,825)         20.7 (744)
  NQF                                       25.5 (2,035)    26.2 (1,814)         12.1 (737)
  Other association websites                69.2 (2,170)    68.9 (1,875)         49.4 (770)
  Other organizations                       45.4 (1,879)    46.8 (1,773)         41.1 (733)
Hospitals, percentage (number)b
  CMS                                       90.7 (1,370)    92.0 (1,605)         83.5 (91)
  Hospital Compare                          48.3 (1,348)    39.6 (1,587)         22.2 (90)
  Local QIO                                 95.5 (1,388)    94.6 (1,613)         65.9 (88)
  MedQIC                                    57.0 (1,281)    42.3 (1,524)         15.7 (83)
  AHRQ                                      81.0 (1,329)    71.0 (1,574)         44.4 (93)
  IHI                                       91.0 (1,370)    80.6 (1,590)         51.7 (91)
  AHQA                                      55.4 (1,271)    52.8 (1,527)         44.4 (91)
  AHA                                       73.3 (1,303)    75.3 (1,548)         63.3 (90)
  Premier                                   41.4 (1,248)    30.7 (1,501)         17.9 (84)
  VHA                                       40.8 (1,243)    33.0 (1,488)         18.6 (86)
  Other                                     40.9 (1,070)    41.4 (1,392)         55.6 (90)

Which organization provides the most useful information and assistance?a
Nursing Homesb
  Number                                    2,133           1,766                649
  CMS                                       13.4            19.4                 26.8
  NH Compare                                7.1             11.3                 15.9
  Local QIO                                 49.9            29.4                 13.3
  MedQIC                                    4.3             1.8                  1.2
  AHRQ, IHI, AHQA, or NQF                   1.6             2.0                  2.3
  Other association websites                9.9             13.8                 14.0
  Other organizations                       13.7            22.3                 26.7
Hospitalsb
  Number                                    1,316           1,551                84
  CMS                                       8.3             11.0                 23.8
  Hospital Compare                          1.4             1.9                  3.6
  Local QIO                                 35.0            33.6                 14.3
  MedQIC                                    3.4             3.2                  0.0
  AHRQ                                      4.6             3.1                  8.3
  IHI                                       27.8            24.8                 15.5
  AHQA                                      0.8             0.9                  2.4
  AHA                                       1.4             1.5                  2.4
  Premier                                   3.8             2.0                  1.2
  VHA                                       3.0             2.8                  2.4
  Other                                     10.4            15.2                 26.2

Source: Westat de-identified survey of providers May-September 2007; dataset provided to Mathematica Policy Research, Inc. by Centers for Medicare & Medicaid Services (CMS).

Note: All differences statistically significant at p<0.001, chi-squared test. Percentages may not sum to 100 percent because of rounding.

a Home health agencies were not asked this question.
b The nursing home and hospital questionnaires listed slightly different possible sources of quality improvement information.

AHA = American Hospital Association
AHQA = American Health Quality Association
AHRQ = Agency for Healthcare Research and Quality
IHI = Institute for Healthcare Improvement
MedQIC = Medicare Quality Improvement Community website (http://www.medqic.org)
NH Compare = Nursing Home Compare
NQF = National Quality Forum
Premier = Premier, Inc. (Premier Healthcare Alliance)
QIO = Quality Improvement Organization
VHA = VHA, Inc.


Among nursing homes, other important sources of information were CMS itself, associations, other websites, and other organizations. Among hospitals, the most frequently selected sources besides the local QIO were the Institute for Healthcare Improvement (IHI) and other organizations (Table IV.8). CMS was the most commonly selected source among non-IPGs receiving no QIO assistance.
C. DISCUSSION
IPGs and non-IPGs receiving assistance were generally very positive about their QIO, reporting high degrees of satisfaction and giving high ratings of the value of QIO services. Non-IPG providers not receiving assistance were generally neutral or somewhat favorable toward their QIOs. Few providers gave clearly negative ratings. Most providers, even non-IPGs without assistance, were willing to receive assistance from QIOs, and considered the QIO a major resource for quality improvement information. These results are consistent with previous results (Bradley et al. 2005).
The survey results bolster one of the case study findings that the quality improvement
environment is complex, with many different sources of information and assistance in addition to
local QIOs. Different types of providers likely turn to their QIO or to other sources depending on
their prior experiences with various organizations, their network of contacts, characteristics of
the QIO, and features of the local health care market.
It is noteworthy that substantial proportions of non-IPG providers in all three settings (especially hospitals) reported receiving assistance from their local QIO; non-IPG providers who received such help had more favorable perceptions of their QIOs than non-IPGs who had not received help. We cannot tell from these survey data if IPG providers and non-IPGs who worked with QIOs had greater satisfaction because of positive experiences with QIOs during the Eighth SOW, or because providers who already held favorable views of their QIO (possibly as a result
of good relationships from prior SOWs) were more likely to agree to be IPGs or to work with
QIOs as non-IPGs.
The substantial involvement of QIOs with non-IPGs suggests that the IPG variable might
not be a straightforward binary indicator of exposure to QIO interventions, and consequently, the
meaning of our proxy variable for this study, the IPG penetration rate, also becomes unclear. In
general, if the untreated or comparison group is “contaminated” with the intervention under
study, the treatment estimates tend to be biased downward (that is, they underestimate the true
treatment effect).
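To illustrate the direction of this bias, the small simulation below (with purely hypothetical effect sizes and contamination rates, not estimates from our data) compares a simple IPG versus non-IPG contrast when a share of the comparison group also receives QIO help:

```python
# A small synthetic simulation (hypothetical numbers) of the point above:
# when many "comparison" providers also receive the intervention, a simple
# IPG vs. non-IPG contrast understates the true treatment effect.
import numpy as np

rng = np.random.default_rng(8)
n = 10_000
true_effect = 5.0                      # assumed improvement from QIO help

ipg = rng.random(n) < 0.3              # 30% of providers are IPGs
helped = ipg | (rng.random(n) < 0.4)   # 40% of non-IPGs also get QIO help

change = rng.normal(0.0, 10.0, n) + true_effect * helped
estimate = change[ipg].mean() - change[~ipg].mean()
print(f"estimated effect: {estimate:.2f} (true effect: {true_effect})")
```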


V. CONCLUSIONS

We conducted a limited assessment of the Quality Improvement Organization (QIO) Eighth
Scope of Work (SOW) using a variety of methods. We analyzed Medicare Compare data for
nursing homes, home health agencies (HHAs), and hospitals to answer a series of descriptive and
impact questions; we conducted a case study analysis of a small number of states to explore
differences in improvement in hospital surgical care measures; and we performed descriptive
analyses of a national survey of providers.
A. SUMMARY OF RESULTS
The analyses of the Medicare Compare data document improvements across the three
settings on most measures focused on by QIOs during the Eighth SOW. Our results are
consistent with other descriptive studies in the academic and lay press documenting general improvements in quality measures (Agency for Healthcare Research and Quality 2008; Jencks et al. 2003; Appleby and Gillum 2009).
Previous nationwide studies of QIO impacts have had mixed results. Rollow et al. (2006)
performed a direct comparison between IPG and non-IPG providers in the Seventh SOW and
found that IPG providers had better performance on the quality measures. However, the authors
acknowledged the possibility that the results could have been due to selection, whereby the providers that volunteered to be IPGs were those that were more highly motivated, more capable of improving quality, and would have performed better anyway. Snyder and Anderson (2005),
in contrast, compared IPG and non-IPG hospitals in four states during the Sixth SOW and found
no evidence of a QIO effect; however, their study was criticized for measurement error in the
intervention or treatment measure (that is, the IPG indicator) and for lack of statistical power
(Jencks 2005). Measurement error in the treatment or IPG indicator remains a problem. The

literature reviews by the Institute of Medicine (2006) and the National Opinion Research Center
(NORC) (Sutton et al. 2007) concluded that there was insufficient evidence either for or against
the effectiveness of QIOs.
To our knowledge, the current study is the first to use the IPG penetration rate as a measure of QIO exposure or effect. Our IPG penetration analyses suggest positive impacts of QIO
activities with IPGs on most quality measures in the nursing home and home health care settings.
In addition, we found evidence for favorable effects from QIOs’ statewide efforts to improve
care provided by HHAs. However, we found no evidence of impacts of QIOs’ activities on
reduction of worsening depression and anxiety among nursing home residents, and no evidence
that QIO activities improved performance on the heart attack, heart failure, and pneumonia
appropriate care measures (ACM) in hospitals.
Our case study respondents mentioned many possible factors that might have contributed to
state-level trends in the Surgical Care Improvement Project (SCIP) measures, including previous
activities in the Seventh SOW, QIO actions in the Eighth SOW, hospital association activities,
the actions of large health systems, public reporting of hospital quality measures, the
implementation of the Reporting Hospital Quality Data for Annual Payment Update
(RHQDAPU) program, and the 100,000 Lives Campaign conducted by the Institute for Healthcare Improvement (IHI). Specific QIO actions for the Eighth SOW perceived by our interviewees as
particularly effective included engaging a dominant local health system; convening a hospital
collaborative; and complex efforts consisting of intensive site visits, regional in-person meetings,
and letters from surgical opinion leaders sent statewide to all surgeons.
Our analyses of provider survey data indicated that most providers with experience working
with QIOs (IPGs and non-IPGs receiving QIO assistance) had highly favorable perceptions of
QIOs. Non-IPG providers receiving no assistance tended to be neutral toward their QIOs.


Providers of all types were interested in receiving technical assistance from QIOs, with particular
interest in training workshops and email contacts. A substantial proportion of non-IPG providers
reported receiving assistance from QIOs, blurring the distinction between IPGs and non-IPGs.
B. POTENTIAL LIMITATIONS
The main potential limitation of the impact estimates is the possibility that states with high
IPG penetration rates also have unobserved characteristics that are the true causes of their greater
gains in quality. We might then mistakenly attribute the larger observed gains in the quality
measures to the IPG penetration rate rather than to these other underlying factors. Although we
include a variety of control variables in the regressions, the assumption that the IPG penetration
rate is unrelated to other potential causes of quality improvement, though plausible, remains
essentially untestable. In addition, the data on IPG penetration for some of the measures and
provider settings are constrained by the sample size (the 51 states) and the limited variation of
the penetration rates across the states.
C. CONCLUSIONS
Our limited assessment of the Eighth SOW finds generally favorable results for quantitative
estimates of QIO impacts, qualitative analyses of interview data, and descriptive tabulations of
provider satisfaction survey data. However, for the reasons noted above, the quantitative
estimates should be viewed with caution.
Our results (as well as their limitations and the difficulties encountered interpreting them)
highlight the importance of having detailed quantitative provider-level data so that we are better
able to model and address the many potential biases. We are currently working with CMS and
the QIO community to execute agreements through which Mathematica Policy Research, Inc.
(MPR) will become a subcontractor under each of the 53 QIO contracts. It is clear from the
regulations that the QIOs are permitted to release provider-identified data to a subcontractor. Our

results also highlight the need for our planned, detailed interviews with CMS, QIOs, and
provider respondents; the results also demonstrate the need for our surveys of QIOs and
providers. These data collection efforts will help us understand the selection processes through
which providers become IPGs (or non-IPGs who do or do not receive QIO assistance); such an
understanding is in turn essential for our quantitative impact analyses.


REFERENCES

Agency for Healthcare Research and Quality. 2007 National Healthcare Quality Report. AHRQ
Pub. No. 08-0040. Rockville, MD: U.S. Department of Health and Human Services, AHRQ,
February 2008.
American Association for Public Opinion Research. “Standard Definitions: Final Dispositions of
Case Codes and Outcome Rates for Surveys.” Available online at
http://www.aapor.org/uploads/Standard_Definitions_07_08_Final.pdf. Accessed February
26, 2009.
Appleby, Julie, and Jack Gillum. “Fewer Care Facilities Use Restraints for Elderly Residents.”
USA Today, February 16, 2009. Available online at http://www.usatoday.com/news/
nation/2009-02-16-nursing-home-restraints_N.htm. Accessed March 17, 2009.
Bradley, Elizabeth H., Melissa D.A. Carlson, William T. Gallo, Jeanne Scinto, Miriam K.
Campbell, and Harlan M. Krumholz. “From Adversary to Partner: Have Quality
Improvement Organizations made the Transition?” Health Services Research, vol. 40, no. 2,
2005, pp. 459-476.
Essey, Marian. “The QIO Program, Home Health, and the National Acute Care Hospitalization
Priority.” Home Health Care Management and Practice, vol. 20, no. 2, February 2008,
pp. 110-116.
Giambo, Pamela, Vasudha Narayanan, Josh Rubin, Lauren Shrader, and Sushama Rajapaksa.
“Surveys of Quality Improvement: 2007 Provider Satisfaction Survey Methodology
Report.” Rockville, MD: Westat, Inc., November 29, 2007.
Institute of Medicine. Medicare’s Quality Improvement Organization Program: Maximizing
Potential. Washington, DC: National Academies Press, 2006.
Jencks, S.F., E.D. Huff, and T. Cuerdon. “Change in the Quality of Care Delivered to Medicare
Beneficiaries, 1998-1999 to 2000-2001.” JAMA, vol. 289, no. 3, 2003, pp. 305-312.
Jencks, S.F. Letter to the editor regarding “Quality Improvement Organizations and Hospital
Care.” JAMA, vol. 294, no. 16, October 26, 2005, p. 2028.
Narayanan, Vasudha, Pamela Giambo, Stephanie Fry, Sherman Edwards, and Jennifer Kawata.
“Surveys of Quality Improvement: 2007 Provider Satisfaction Survey Analytic Report.”
Rockville, MD: Westat, Inc., March 14, 2008.
Rollow, William, Terry R. Lied, Paul McGann, James Poyer, Lawrence LaVoie, Robert T.
Kambic, Dale W. Bratzler, Allen Ma, Edwin D. Huff, and Lawrence D. Ramunno.
“Assessment of the Medicare Quality Improvement Organization Program.” Annals of
Internal Medicine, vol. 145, 2006, pp. 342-353.


Snyder, Claire, and Gerard Anderson. “Do Quality Improvement Organizations Improve the
Quality of Hospital Care for Medicare Beneficiaries?” JAMA, vol. 293, no. 23, 2005,
pp. 2900-2907.
Sutton, J., L. Silver, L. Hammer, and A. Infante. “Toward an Evaluation of the Quality
Improvement Organization Program: Beyond the 8th Scope of Work.” Final Report.
Washington, DC: U.S. Department of Health and Human Services, Office of the Assistant
Secretary for Planning and Evaluation, 2007.
U.S. Government Accountability Office. “Nursing Homes: Federal Actions Needed to Improve
Targeting and Evaluation of Assistance by Quality Improvement Organizations.”
GAO-07-373. Washington, DC: May 2007.


APPENDIX A

METHODS FOR ANALYSES OF MEDICARE COMPARE DATA

The evaluation of the QIO program’s 8th SOW uses data from multiple sources to document changes in measures of the quality of care over the three-year course of the 8th SOW. The results cover outcomes for three provider types: nursing homes, home health agencies, and hospitals. The approaches vary somewhat across provider types due to variation in the types of data available and in the design of the QIO program across the three types. Table A.1 provides an overview of the analytic approach.
A. SAMPLE AND DATA SOURCES
The analyses are based on provider-level data. For some analyses, those data are aggregated
to the state level. The primary data sources are CMS’s Compare databases (Nursing Home
Compare, Home Health Compare, and Hospital Compare), which contain data reported by
nursing homes, home health agencies (HHAs), and hospitals nationwide. As described further
below, nursing homes and home health agencies are required to report these data. Hospital
reporting is voluntary, though reimbursement rates are now tied to reporting, giving providers an
incentive to submit data. The sample for each analysis consists of all providers for which data are
available in the 50 states and the District of Columbia. 1
The study outcomes are all measures of change between the baseline period near the
beginning of the 8th SOW and the follow-up period towards the end of the 8th SOW. Baseline
and follow-up periods vary slightly across the three provider settings because of differences in
data collection schedules. The analyses include only providers with measurements at both time
points, from which we can calculate the change measures. Additional data sources include the Area Resource File, OSCAR, and QIO administrative data; details of the measures derived from each dataset are provided below.

1 We will hereafter refer to all QIO jurisdictions as “states,” even though the District of Columbia is technically not a state.
TABLE A.1

OVERVIEW OF ANALYSES OF MEDICARE COMPARE DATA

Study measures
  Nursing homes: Four focus measures of change.a
  Home health agencies: A single patient functioning index created by averaging seven individual patient functioning change measures, and two measures of the disposition of the home health episode.b
  Hospitals: One ACM index created by averaging 10 individual ACM measures and one SCIP index created by averaging the 2 SCIP measures.c

Has the quality of care received by patients improved nationwide?
  Nursing homes: Average of raw provider-level changes, weighted by provider size.d Average of standardized provider-level changes, weighted by provider size.g
  Home health agencies: Average of raw provider-level changes, unweighted (each agency receives a weight of one).e
  Hospitals: Average of raw provider-level changes, weighted by provider size.f

Do providers who do well in one domain of measures also do well in others?
  Nursing homes: Correlations of provider-level changes across measures, weighted by provider size.
  Home health agencies: Correlations of provider-level changes across measures, unweighted.
  Hospitals: Correlations of provider-level changes across measures, weighted by provider size.

Are most states improving over the Eighth SOW?
  Nursing homes: Average of state-level changes, where state-level changes are state averages of provider-level change, weighted by provider size. Presentation of interquartile ranges and reductions in failure rate (RFR).
  Home health agencies: Average of state-level changes, where state-level changes are state averages of provider-level change, unweighted. Presentation of interquartile ranges and reductions in failure rate (RFR).
  Hospitals: Average of state-level changes, where state-level changes are state averages of provider-level change, weighted by provider size. Presentation of interquartile ranges and reductions in failure rate (RFR).

Do states that do well in one domain of quality also do well in others?
  Nursing homes: Correlations of state-level changes, where state-level changes are state averages of provider-level change, weighted by provider size. Adjustment for baseline.h
  Home health agencies: Correlations of state-level changes, where state-level changes are state averages of provider-level change, unweighted. Adjustment for baseline.

Which states do well in multiple domains?
  Nursing homes: State-level means were calculated by aggregating individual nursing home means to the state level, weighting by their total number of residents. These means (which were of the changes in measures) were then regression adjusted for baseline performance. The adjusted change measures were then standardized to have a mean of zero and a standard deviation of one (that is, they were converted to z-scores). “Consistently high improving” states were defined as those performing above the mean on at least three of the four outcomes of pressure ulcers, physical restraints, depression, and chronic pain, and whose average improvement across all four was at least one-fifth of a standard deviation better than the mean (average z-score of -0.2 or below).
  Home health agencies: State-level means were calculated by aggregating home health agency means to the state level, unweighted. These means (which were of the changes in measures) were then regression adjusted for baseline performance. The adjusted change measures were then standardized to have a mean of zero and a standard deviation of one (that is, they were converted to z-scores). “Consistently high improving” states were defined as those performing above the mean on both the patient functioning composite index and the ACH measure, and where the improvement was >0.2 s.d. above the mean.
  Hospitals: State-level means were calculated by aggregating hospital means to the state level, weighted by numbers of patients. These means (which were of the changes in measures) were then regression adjusted for baseline performance. The adjusted change measures were then standardized to have a mean of zero and a standard deviation of one (that is, they were converted to z-scores). “Consistently high improving” states were defined as those performing above the mean for both the ACM and SCIP indexes, and whose improvement was >0.2 s.d. above the mean.

Is there an impact of QIOs’ work with IPs on improvement in quality measures?
  Nursing homes: Regression of provider-level changes (four change measures) on IPG penetration rates (51 different values), controlling for provider and region characteristics, weighted by number of patients. Control for non-focus measures.i Use seemingly unrelated regression (SUR) to simultaneously estimate the four models and control Type II error.
  Home health agencies: Regression of provider-level changes in the ACH measure on IPG penetration rates (51 different values), controlling for provider and region characteristics, unweighted. Use seemingly unrelated regression (SUR) to simultaneously estimate four models and control Type II error.

Is there an impact of QIOs’ statewide efforts on improvement in home health quality measures?
  Home health agencies: Analyze changes in three home health measures chosen by some states for statewide improvement projects.j Create a new control variable of five other home health measuresk by first standardizing each to means of zero and standard deviations of one (that is, conversion to z-scores) and then averaging them. Regress the three home health measures of interest on average baseline levels, the new control variable for the five other measures, other standard control variables, and a dummy indicating whether the home health agency is in a state where the state chose the dependent variable as a statewide focus measure. Estimate the three regressions simultaneously using SUR.

Note: Outline of analysis of Medicare Compare data.

a (1) Percent of High-Risk Long-Stay Residents who have Pressure Ulcers, (2) Percent of Long-Stay Residents who were Physically Restrained, (3) Percent of Long-Stay Residents who Experience Depression, and (4) Percent of Long-Stay Residents who Experience Chronic Pain.

b Patient functioning measures: (1) Improvement in Bathing, (2) Improvement in Transferring, (3) Improvement in Ambulation/Locomotion, (4) Improvement in Management of Oral Medications, (5) Improvement in Pain Interfering with Activity, (6) Improvement in Dyspnea, and (7) Improvement in Urinary Incontinence. Disposition of home health episode measures: (1) Acute Care Hospitalization and (2) Discharge to Community.

c Acute care measures (ACM): (Heart Attack) (1) Aspirin at Arrival, (2) Aspirin Prescribed at Discharge, (3) ACE Inhibitor or ARB for LVSD, (4) Beta Blocker Prescribed at Discharge, (5) Beta Blocker at Arrival; (Heart Failure) (6) Evaluation of LVS Function, (7) ACE Inhibitor or ARB for LVSD; (Pneumonia) (8) Oxygenation Assessment, (9) Pneumococcal Vaccination, and (10) Initial Antibiotic Received within 4 Hours of Hospital Arrival. Surgical Care Improvement Project (SCIP) measures: (1) Receipt of Prophylactic Pre-operative Antibiotic, and (2) Discontinuation of Prophylactic Pre-operative Antibiotic.

d Nursing home measure of provider size is number of beds from Nursing Home Compare.

e Measure of home health agency size not available in Home Health Compare or OSCAR.

f Hospital measure of provider size is number of patients for whom the measure is reported, from Hospital Compare.

g Standardized measures are standardized to have a mean of zero and a standard deviation of one (a z-score).

h For each state, the adjusted levels are calculated by regressing the state-level changes in improvement on baseline performance, then taking the residual for each state (that is, subtracting the predicted change from the observed change).

i The four non-focus measures were changes in (1) Improvement in Ambulation, (2) Improvement in Pain Interfering with Activity, (3) Improvement in Transferring, and (4) Improvement in Urinary Incontinence.

j The three measures were (1) Improvement in Management of Oral Medications, (2) Improvement in Pain Interfering with Activity, and (3) Improvement in Dyspnea. The numbers of QIOs selecting each of these were, respectively, 30, 10, and 9.

k These were improvement in (1) bathing, (2) transferring, (3) ambulation, and (4) incontinence; and (5) discharge to community.

RFR = Reduction in Failure Rate.
1. Nursing Home Compare

The database includes information on nursing homes that are certified to participate in
Medicaid and/or Medicare and provide “skilled” care—meaning skilled nursing or rehabilitation
staff is required for care. Those data originate in the Minimum Data Set (MDS), a standardized
assessment that collects data on residents, their medical/functional condition, and the care they
receive. Nursing homes are required to report the information as part of Medicare's nursing home
prospective payment system. We used baseline data collected in the second quarter of 2005.
Follow-up values were collected in the first quarter of 2008. Quality measures included in the
database describe both patient well-being and the care received. The baseline data include 15,979 providers, and the follow-up data include 15,773 providers. The analyses include the 12,511 nursing homes that have data at both baseline and follow-up, for which we were able to calculate change in quality outcomes.
2. Home Health Compare

The data set contains quality measures for home health patients whose care is covered by
Medicare or Medicaid and provided by a Medicare-approved Home Health Agency. Quality of
care data in Home Health Compare are drawn from the Outcome and Assessment Information
Set (OASIS). HHAs are required to report OASIS data to Medicare as part of its prospective
payment system. Baseline data were collected during the period September 2004 through August
2005. Follow-up data cover the period March 2007 through February 2008. The quality measures
describe various aspects of daily patient functioning and well-being, prevalence of needing to be
hospitalized, and prevalence of being able to be discharged from HHA care and remain living at


home. At baseline, 7,740 providers are included in the database, and 9,143 providers are
included at follow-up. There are 7,275 HHAs that have data for both baseline and follow-up.
3. Hospital Compare

Hospital baseline data come from the collection period July 2004 through June 2005 and the
follow-up data come from the collection period October 2006 through September 2007. For the
period we studied, the Compare database contained information on acute care general hospitals
and critical access hospitals. Hospitals, unlike nursing homes and home health agencies,
volunteered to submit data to the Compare database, although since 2004, the year before the
start of the 8th SOW, reporting has been tied to Medicare reimbursement levels. 2 The quality
measures in Hospital Compare are measures of processes of care and focus on four clinical
conditions: heart attack, heart failure, pneumonia, and surgical infection prevention. The
measures are derived from individual patient records. Data are available for 4,238 hospitals at
baseline and 4,469 hospitals at follow-up. There are 4,027 hospitals that have data for both
baseline and follow-up.
4. CMS/QIO Administrative Data

The analyses also use data provided by CMS related to QIO activities. Those include the
number of providers that the QIOs in each state recruited to collaborate with individually to
improve specific quality-related outcomes, and which measures they worked on. Those providers
are known as identified participants (IPs). The CMS measures are used to identify QIO impacts,
using methods described further on in this report.
2 The Reporting Hospital Quality Data for Annual Payment Update (RHQDAPU) initiative was first included in the Medicare Prescription Drug, Improvement, and Modernization Act (MMA) of 2003—initially just incentive payments, but starting in 2006, after the Deficit Reduction Act (DRA) of 2005, hospitals that did not participate in the RHQDAPU initiative saw their Medicare payments reduced by two percent.

5. Area Resource File

The Area Resource File (ARF) is published by the Health Resources and Services
Administration. It contains a wide range of county-level data, including information on health
care providers, personnel, and utilization, along with information on the economic and
demographic characteristics of the population. Because quality of care varies across different
populations and contexts, we adjust for several measures derived from the ARF in our impact
analyses.
B. QUALITY MEASURES USED AS OUTCOMES IN IMPACTS ANALYSES
The 8th SOW contract required QIOs to concentrate on working with providers to improve
specific measures (which we call “focus measures”). As described further on, we take advantage
of the differences in improvement between focus measures and the other quality measures that
QIOs were not required to concentrate on (“non-focus measures”) to tease out the QIOs’
contribution to observed improvement. Due to a lack of reporting of some measures at baseline,
certain focus outcomes could not be included in the analyses.
All measures are percentages of patients, either of eligible patients receiving a recommended
process of care (such as the percentage of hospital patients with a heart attack receiving aspirin)
or experiencing a health outcome (such as the percentage of a nursing home’s patients suffering
a pressure ulcer), and thus range from zero to 100. The change in a measure (follow-up minus
baseline) can thus range from -100 to +100. The change measures, though theoretically bounded,
all have symmetrical distributions with tails that do not reach those bounds.
1. Nursing Homes

QIOs were to work with all IPs on reducing the prevalence of pressure ulcers among high-
risk, long-stay patients and to reduce the number of patients who are physically restrained. They


also had the option of working with IP nursing homes on averting worsening psychological
distress (depression and anxiety) and chronic pain among long-stay residents. The measures
reflect the percentage of residents in the facility that experience the particular condition. Lower
values reflect better outcomes.
2. Home Health Agencies

As described in Chapter I, QIOs were tasked to undertake activities both statewide and with
IPs to reduce the proportion of home health episodes that end with the adverse event of acute
care hospitalization (ACH). Statewide activities include disseminating information on methods
of improving care through conferences and printed materials. They engage with IPs in activities
such as setting targets for improvement, redesigning care processes, and increasing the use of
health information technology (CMS 2006; Essey 2008).
ACHs are costly events that reflect deterioration in patients’ physical conditions that may be
preventable by high-quality home health care. CMS also presented QIOs with a list of optional
outcomes to work on. Those outcomes are listed in Table II.1. Each QIO selected one of those
measures to work on in their statewide activities. They also selected one or more to work on with
IPs. For all but ACH, higher values represent better outcomes.
We describe statewide improvement using a summative scale combining seven measures of
patient functioning. Those include all measures other than ACH and discharge from home care.
Those items create a scale with strong internal consistency (α = .84). 3

3 α refers to Cronbach’s alpha. Internal consistency is measured at baseline. Results are similar for both follow-up and change.
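The alpha statistic can be computed from the item and total-scale variances. The sketch below applies the standard Cronbach's alpha formula to a synthetic stand-in for the agency-by-measure matrix; it illustrates the statistic itself, not our estimation code:

```python
# A sketch of the standard Cronbach's alpha computation (the report does not
# show its formula); `items` is a synthetic stand-in for the (agencies x 7)
# matrix of patient functioning measures.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(6)
shared = rng.normal(size=(500, 1))                   # common quality factor
items = shared + rng.normal(scale=0.7, size=(500, 7))
print(f"alpha = {cronbach_alpha(items):.2f}")
```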


3. Hospitals

QIOs were required to work on improving hospital performance in two broad areas of care:
specific acute medical conditions [acute myocardial infarction (AMI), heart failure (HF), and
pneumonia (PN)] and surgical care safety (the two Surgical Care Improvement Project or SCIP
measures). The five AMI, two HF, and three PN measures were combined into a single
Appropriate Care Measure (ACM) score (see Table II.1 for the 10 component measures).
A hospital’s ACM score reflects whether every patient with any of the three conditions
received each one of the processes of care for which he or she was eligible (Nolan and Berwick
2006). Hospital ACM scores were not publicly reported in Hospital Compare during the 8th
SOW. Since the Hospital Compare data for the study period only contain provider-level
measures of the percent of patients who received each of the 10 procedures, respectively, among
those who should have received them, we could not compute hospital ACMs.
However, to reduce the number of separate measures analyzed we averaged the 10 ACM
component measures into a single index, which we call the average of ACM measures, to
distinguish it from the actual ACM score. Higher (more positive) values represent better
outcomes.
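The distinction between the actual all-or-none ACM score and our average of ACM measures can be illustrated with synthetic data (the data and rates below are hypothetical, not drawn from Hospital Compare):

```python
# A synthetic illustration of the distinction drawn above: the average of the
# 10 component rates is not the all-or-none ACM score, because a patient who
# misses any one eligible process fails the ACM entirely.
import numpy as np

rng = np.random.default_rng(7)
n_patients, n_measures = 1_000, 10
eligible = rng.random((n_patients, n_measures)) < 0.6
received = eligible & (rng.random((n_patients, n_measures)) < 0.9)

# Provider-level average of the 10 component rates (the index used here):
rates = received.sum(axis=0) / eligible.sum(axis=0)
print(f"average of component rates: {rates.mean():.3f}")

# True all-or-none ACM: share of patients receiving every eligible process.
acm = (received == eligible).all(axis=1).mean()
print(f"all-or-none ACM score:      {acm:.3f}")
```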
The SCIP seeks to reduce negative post-operative side-effects of surgical infections, adverse
cardiac events, and deep vein thrombosis. CMS began collecting surgical care measures
relatively recently and only two are available at both baseline and follow-up in the Compare
database: provision of antibiotic within an hour prior to surgical incision, and discontinuing
antibiotics in a timely manner after the end of surgery. The proportion of hospitals reporting each
of those measures (≈30%) is relatively low, however. We averaged those two measures to create
a single average SCIP score.


C. MEASURES OF QIO ACTIVITY
Our multiple approaches to estimating impacts—which are described further on in this
chapter—rely on two key types of measures of QIO involvement or activity. Both are derived
from QIO administrative data provided by CMS. They are state-level data—the lowest level of
data that were available for our analyses—on how many providers QIOs worked with as IPs and,
in the case of HHA outcomes, the specific measures that QIOs selected to work on in their
statewide activities.
1. IPG Penetration

We call the first measure IPG penetration (%IPG). For nursing homes and home health agencies, we define IPG penetration as the percentage of providers in a state who are IPs. For
hospitals, we define IPG penetration as the percentage of patients who are in IPG facilities. The
difference across provider types results from differences in data availability. As will be described
later, one is not necessarily preferable to the other.
For nursing homes, IPG penetration varies across measures. All IPs were required to work
on reducing the use of physical restraints and the prevalence of pressure ulcers. Most (94%),
though not all, also worked to reduce psychological distress (depression and anxiety) and pain
among residents.
Our data allowed us to define IPG penetration for only one HHA outcome—acute care
hospitalization. QIOs were required to work with all IP HHAs on reducing ACH. The
administrative reports also indicated that QIOs selected a subset of other outcomes to work on,
but they did not specify whether QIOs worked with all IPs on every one of these additional
selected measures, nor were we able to clarify this issue after speaking with CMS staff. We thus
did not create a measure of IPG penetration for those other outcomes.


For hospitals we faced a situation similar to that for HHAs in that IPG penetration data are
available only for ACM, not SCIP outcomes. IPG penetration varies by individual item across
the three conditions (AMI, HF, PN) because it is measured at the patient-level and the proportion
of hospital patients who were treated in IP hospitals varies somewhat by condition. We use an
IPG penetration measure that is an average of the IPG penetration rates for the three individual
conditions. As might be expected, however, IPG penetration is very similar across the measures,
with rates between pairs of conditions correlated between .94 and .99.
2. QIO-Selected Outcomes for Home Health Agencies

In their statewide work to improve care in home health agencies, QIOs selected one from
among a list of nine measures to work on, in addition to acute care hospitalization. In our impact
analyses of that statewide work, we documented the extent to which providers, on average,
improved disproportionately on the particular measure selected by their state QIO. Those
analyses incorporate binary indicators for individual quality outcomes, indicating whether the
QIO in the provider’s state selected the given outcome to work on.
D. CONTROL VARIABLES
Observed associations between measures of QIO activities and quality improvement could
potentially be the result of other characteristics that are associated with both. We conducted
multivariate analyses that adjust for a number of provider and county-level characteristics that
may be associated with quality outcomes.
1. Provider Characteristics

We used a number of baseline provider traits that are drawn from the Compare databases.
These are similar across provider types, though there is some variation. In all regressions we
controlled for baseline levels of the outcome in question as baseline levels are the strongest

predictor of improvement and their inclusion adjusts for both underlying performance and
regression toward the mean (in which greater improvement tends to occur among providers with
lower baseline levels and vice-versa). Other traits, by provider type, include:
• Nursing Homes: Binary indicators of ownership type (for-profit, corporate; for-profit, individual or partnership; government; non-profit, corporate; non-profit,
religious; non-profit, other), facility size (indicator for being in the largest quartile of
nursing homes 4); whether situated within a hospital, and presence of both resident
and family councils.
• HHAs: Binary indicators of ownership type (for-profit; government; non-profit,
private; non-profit, religious; non-profit, other), date of certification (pre-1990, 1990s,
2000 or later), and whether the agency provides medical social services. 5
• Hospitals: Binary indicators of ownership type (for-profit; government; non-profit,
private; non-profit, religious; non-profit, other), hospital type (acute care or critical
access), and hospital size (indicator for being in the largest quartile of facilities). 6
2. Local Area Characteristics

Quality of care is known to vary across geographic regions by such regional factors as
population socioeconomic status and race/ethnicity. Other work has suggested variation by
community characteristics such as local supply of nurses or doctors (Jencks, Jencks, and
McGann 2004). The impact analyses control for a range of characteristics of the county in which
the provider is located. The list of controls is identical across provider types and consists of:
• The number of active physicians per 1,000 population, and the number of nurses per
1,000 population,
• The percentage of the population aged 0 to 19 and the percentage 65 years or older,
• The percent of county residents without health insurance,
• Two indicators of economic well-being: the log of per capita income and the poverty rate,
• The percent of the population with four or more years of college,
• The percentage of the population who are Hispanic and the percentage who are Black/African American,
• And an indicator for whether the provider’s county is part of a metropolitan area.

4 Those facilities account for roughly 45% of the total patient population in the sample.

5 The Home Health Compare data contain a range of indicators of service sub-type, including nursing care, physical therapy, occupational therapy, speech pathology, and home health aide services. But the overwhelming majority of HHAs (>90%) report providing each of those, so those traits do little to differentiate providers. Most also report providing medical social services, but nearly 20% do not.

6 Note that those large hospitals serve over half of all patients.
3. Non-Focus Quality Measures

The variables noted above control for a range of observable provider and community
characteristics that might correlate with quality of care. However, unobserved causes of quality
improvement may remain. In order to capture unobserved provider propensities toward quality
improvement, we adjusted for provider performance on measures that were not focused on by the
QIOs. We used these non-focus measures as indicators of the “background” improvement in
quality that might have occurred without QIO intervention. Such outcomes are available only for nursing homes and home health agencies. All of the outcomes in the Hospital Compare data,
however, were targeted for improvement through QIO initiatives that we examine, so it was not
possible to use non-focus comparison outcomes for that provider type. We briefly describe the
non-focus outcomes for the other two provider types below. The measures and their use are
described further in the discussion of methods.
a. Nursing Homes

The Nursing Home Compare data contain a range of measures that QIOs did not focus on
improving during the 8th Scope of Work. However, the specific actions taken to improve care captured by the four focus measures could also lead to improvement on some of the non-focus measures. In order to determine the set of measures for which performance would be most
independent of performance on the focus outcomes, we created a matrix of specific continuous

quality improvement (CQI) activities and the quality outcomes that they would be expected to
influence (see Table A.2). We used that matrix to calculate the extent to which activities to
improve performance on each non-focus measure overlapped with those used to improve focus
measures.
All non-focus outcomes shared a moderate number of CQI activities with focus outcomes.
On average, each activity used to improve a non-focus outcome would also be relevant for between two and three of the four focus outcomes. We identified the four non-focus measures whose CQI activities overlap least with those used to improve focus measures. Those four are: reduction in daily activity, being bed/chairfast, worsening mobility, and the presence of urinary tract infection. We entered both the change and baseline levels of those measures as controls in the multivariate analyses. Note that to the extent that efforts to improve focus outcomes also improve
our non-focus outcomes, this “spillover” will tend to produce underestimates of QIO impacts
because some of those impacts are absorbed by the non-focus outcome controls.
b. Home Health Agencies
We also incorporated controls for non-focus measures in the analyses of impacts of QIO
statewide efforts to improve health care. Each state QIO selected one optional measure to work
on out of a list of nine. Forty-nine of the 51 QIOs selected either improvement in management of
oral medications (30 QIOs), pain interfering with activity (10 QIOs), or dyspnea (9 QIOs). As
noted above, we examined impacts of statewide activities on those three outcomes. Those
analyses include specifications that control for average baseline levels and improvement in the
other five available outcomes (bathing, transferring, ambulation, incontinence, and discharge to

TABLE A.2

CARE QUALITY IMPROVEMENT ACTIVITIES AND THE QIO FOCUS AND NON-FOCUS NURSING HOME OUTCOME MEASURES THEY INFLUENCE

[The body of this landscape table is a matrix mapping each CQI activity to the outcome measures it is expected to influence; the individual cell markings could not be recovered in this extraction.]

CQI activities (columns): Incontinence Management; Frequent Monitoring; Nutrition/Hydration; Medication Management; Fall Prevention; Increased Mobility; Turning/Repositioning; Comprehensive Assessment; Behavioral/Psychosocial Interventions; Care Philosophy; Restorative/Rehab; Family/Staff Education.

Focus Quality Measures (QM)a (long-stay): Physical restraint use; High-risk pressure ulcers; More depressed/anxious; Moderate-severe pain.

Other Available QMs (Non-Focus)a (long-stay): Late-loss ADL worsening; Bedfast; Mobility worsening; Low-risk bowel/bladder incontinence; Indwelling catheter; Urinary tract infections; Low-risk pressure ulcers; Weight loss.

a High-Risk Pressure Ulcers = percent of high-risk long-stay residents with pressure sores; Physical Restraint Use = percent of long-stay residents who were physically restrained; More Depressed/Anxious = percent of long-stay residents who have become more depressed or anxious; Moderate-Severe Pain = percent of long-stay residents who have moderate-to-severe pain. Late-Loss ADL Worsening = percent of residents whose need for help with daily activities has increased; Bedfast = percent of long-stay residents who spent most of their time in a bed or chair; Mobility Worsening = percent of long-stay residents whose ability to move about in and around their room got worse; Low-Risk Bowel/Bladder Incontinence = percent of low-risk, long-stay residents who lose control of their bowels or bladder; Indwelling Catheter = percent of long-stay residents who have/had a catheter inserted and left in their bladder; Urinary Tract Infections = percent of long-stay residents with a urinary tract infection; Low-Risk Pressure Ulcers = percent of low-risk residents with pressure sores; Weight Loss = percent of residents who lose too much weight.

community). 7 In order to assure that each of the items had equal weight, we standardized the
measures to have a mean of zero and standard deviation of one prior to averaging them.
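A minimal sketch of this construction (with hypothetical column names) follows: each measure is converted to a z-score so the items carry equal weight, and the z-scores are then averaged:

```python
# A minimal sketch of the composite construction described above: convert
# each measure to a z-score, then average. Column names are hypothetical.
import numpy as np
import pandas as pd

def zscore_composite(df: pd.DataFrame, cols: list[str]) -> pd.Series:
    z = (df[cols] - df[cols].mean()) / df[cols].std(ddof=1)
    return z.mean(axis=1)          # equal weight for each standardized item

df = pd.DataFrame(np.random.default_rng(3).normal(size=(100, 5)),
                  columns=["bathing", "transferring", "ambulation",
                           "incontinence", "discharge_to_community"])
composite = zscore_composite(df, list(df.columns))
```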
E. ANALYTIC METHODS
The report contains both descriptive and impact analyses. Below we describe our approaches
to answering each of the research questions in the study. For the nursing home and hospital
analyses, we conducted all provider-level analyses weighted by facility size, as measured by the
number of patients/residents. This weighting produces results for quality of care that are
reflective of the care received by the average patient, rather than the care provided by the
average facility. For comparison purposes we also describe unweighted results (results where
each provider is given equal weight). 8 The two sets of analyses generally yield substantively
identical results. The home health agency data do not contain information on the number of
patients served by each agency, so all HHA analyses were conducted without weights.
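The two averaging schemes can be expressed compactly as below; the data are synthetic, with a hypothetical "n_patients" column standing in for the provider-size measure:

```python
# A sketch of the two averaging schemes: weighting provider-level change by
# facility size yields the change experienced by the average patient, while
# equal weights describe the average facility. Data here are synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
df = pd.DataFrame({
    "change": rng.normal(2.0, 5.0, 1_000),
    "n_patients": rng.integers(20, 400, 1_000),   # provider size
})

weighted = np.average(df["change"], weights=df["n_patients"])
unweighted = df["change"].mean()
print(f"patient-weighted: {weighted:.2f}  facility-weighted: {unweighted:.2f}")
```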
The measures we use differ somewhat by the particular question. When we examine the
amount of improvement overall, we focus on raw changes. When we want to compare
improvement across states or providers, we adjust for factors such as levels of performance at the beginning of the Scope of Work, because further improvement can be harder to achieve when starting at an already high level. The adjustments avoid penalizing states or providers in
comparative analyses for their already high performance.

7 The sixth measure, improvement in the status of surgical wounds, was not available in the Compare data at baseline, so it cannot be included.

8 In comparison to the analyses weighted by provider size, by giving all providers an equal weight, these unweighted analyses implicitly weight small providers more heavily and larger providers less heavily.


1. Has the quality of care received by patients improved nationwide?

For these descriptive analyses we present measures of average performance at the beginning and end of the Scope of Work. We measured change using both raw changes and changes
relative to baseline standard deviations to enhance comparability of magnitudes of change across
outcomes.
2. Do providers who do well in one domain of measures also do well in others?

To answer this question we present correlations of performance across measures. High
correlations suggest that quality of care tends to be an institutional characteristic that produces
positive outcomes across domains and that the measures of quality are precise. Low correlations
suggest that either quality of care is very domain-specific or that the available outcome measures
are imprecise as indicators of quality.
3. Are most states improving over the Eighth SOW?

We present several indicators of state-level change in quality, including average
improvement, interquartile ranges, and counts of how many states did and did not improve on
each measure. These quantities reflect the absolute improvement occurring in provision of
quality of care.
4. Do states that do well in one domain of quality also do well in others?

We calculated correlations between state-level improvement on different outcomes to assess
the extent to which states that tend to do well in one area also do well in others. Because
improving by a given amount becomes more difficult when starting from a higher baseline, we
calculated these correlations using improvement adjusted for baseline in order to not penalize
states that started the SOW at a higher performance level. For each state, the adjusted levels are
calculated by regressing the state-level changes in improvement on baseline performance, then


taking the residual for each state (that is, subtracting the predicted change from the observed change). This adjustment is more important for the next research question—assessments of
which states performed best—than it is for the correlations across measures.
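A sketch of this adjustment, using synthetic state-level data, appears below; the adjusted value is the residual from a regression of change on baseline:

```python
# A sketch of the baseline adjustment described above: regress state-level
# change on baseline performance and keep the residual (observed change
# minus predicted change). Data are synthetic, one row per state.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
baseline = rng.uniform(50, 95, 51)
change = -0.2 * (baseline - 70) + rng.normal(0, 2, 51)

X = sm.add_constant(baseline)
fit = sm.OLS(change, X).fit()
adjusted = change - fit.predict(X)    # residual: observed minus predicted
```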
5. Which states do well in multiple domains?

We averaged z-scores of improvement, adjusted for baseline, across multiple outcomes for each provider type—and converted the resulting composite measures to z-scores—to identify states that demonstrated consistent improvement during the years of the 8th SOW. The criteria we used to define “consistently high performing” for each provider type are as follows.
a. Nursing Homes

For nursing homes we list states that performed above the mean on at least three of the four
outcomes of pressure ulcers, physical restraints, depression, and chronic pain, and whose average
improvement across all four was at least one-fifth of a standard deviation above the mean
(average z-score of at least .2).
b. Home Health Agencies

For HHAs we list the states that performed an average of at least one-fifth of a standard deviation above the mean on acute care hospitalization and a composite of seven other elective outcomes related to patient functioning/well-being. To be included, states also had to have
above-average improvement on both outcomes—poor performance on one outcome could not be
outweighed by far above-average performance on the other. We used the patient functioning
composite because of the large number of outcomes. We omit the outcome discharge to the

DRAFT

A.20

community (where discharge indicates discharge from agency care) because it is nearly collinear
with acute care hospitalization, and is only weakly correlated with the other outcomes. 9
c. Hospitals

We defined high-performing states to be those that (1) improved more than the nationwide
average on both the ACM and SCIP indexes and (2) had an average improvement across the two
that was at least one-fifth of a standard deviation greater than the mean.
6. Is there an impact of QIOs’ work with IPs on improvement in quality measures?

The main challenge to deriving accurate estimates of the impacts of QIOs’ work with
individual providers is that IPs and non-IPs may differ from one another in ways, other than whether or not they participated with a QIO, that affect their quality of care improvement. For example, QIOs target participants based, in part, on perceived ability to improve, using information that the QIO believes it knows about specific providers’ capabilities and interests. As a result, it is difficult to disentangle whether differences in performance between IPs and non-IPs are due to the work of the QIOs or to other characteristics that were related to their selection status.
We used an approach to estimating impacts of QIOs’ IPG work that relies on comparisons of
performance across states rather than comparing performance of IPs and non-IPs. In order to
further assure that our impact estimates are not driven by unobserved provider characteristics, we
also conducted analyses that control for providers’ improvement in measures that QIOs did not
work with IPs to improve.

9 It should be noted that the OASIS manual specifically instructs HHAs that discharge to the community and acute care hospitalization are two separate and distinct outcomes.


a. IPG Penetration Approach

Our approach relies on cross-state variation in the percent of providers that are IPs. The
problem with using comparisons of IP providers to non-IP providers to estimate impacts of QIO
efforts is that IPs and non-IPs are likely to differ in ways that influence quality outcomes other
than through their QIO participation. The selection of individual providers to be IPs is based
partially on QIOs’ perceptions of a provider’s need and capacity for improvement. Similarly,
providers’ willingness to be an IP is likely to be a function of their underlying motivation and
ability to improve quality.
However, the design of the QIO program introduces one important influence on the
probability that a provider will end up as an IP that is independent of individual providers’
characteristics. As described earlier, the fraction of providers that QIOs are contractually able
and expected to work with varies substantially across states. This is reflected in variation in the
IPG penetration rates. Consequently, the probability that any given provider will be an IP in one
state may be several times greater than the probability for a similar provider in a different state.
We used the IPG penetration rate as a proxy instrument for individual IPG status.
A typical regression examining impacts of work with IPs, based on differences in
performance between IPs and non-IPs, would be estimated using an equation such as the
following.

Δy_i = α + β T_i + γ X_i + ε_i        (II.1)

The outcome on the left-hand side is the change in the quality outcome, y, for each provider i. T_i is a binary indicator of being in an IPG, and X_i is a set of control variables, including the baseline level of the quality outcome. β is a parameter that captures the average difference in improvement between IPs and non-IPs.

DRAFT

A.22

We replace the dichotomous IP indicator with the IPG penetration rate for each state, s, a
measure that has a potential range from zero to one. For each provider, this represents the
probability of being an IP. As opposed to the binary IP status indicator, that probability is
conditioned solely on this program design element, not on potentially endogenous provider
characteristics. 10

Δy_i = α + λ P_s + γ X_i + ε_i        (II.2)

The value of the parameter λ has the same interpretation as β, but the estimate is
unconfounded by unobserved provider characteristics that may affect both selection status and
quality improvement. Because all providers in a state have the same value for the %IPG variable,
the estimated impacts of QIO work with IPs are identified by cross-state variation. If QIOs are
effective in their work with individual IPs, then overall improvement should be greater in states
where QIOs are able to work with a higher percentage of providers. 11
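A simplified sketch of estimating equation (II.2) follows. It is not our production code: the data are synthetic, only one control variable is included, and the true λ is set by construction. It illustrates the weighting by provider size and the state-level clustering of standard errors described in footnote 11:

```python
# A minimal sketch of equation (II.2): provider-level change regressed on the
# state IPG penetration rate plus a control, weighted by provider size, with
# standard errors clustered by state. All names and values are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2_000
df = pd.DataFrame({
    "state": rng.integers(0, 51, n),
    "baseline": rng.uniform(50, 95, n),
    "n_patients": rng.integers(20, 400, n),
})
df["ipg_penetration"] = df["state"].map(
    pd.Series(rng.uniform(0.1, 1.0, 51)))          # one rate per state
df["change"] = (2.0 * df["ipg_penetration"]        # assumed true lambda = 2
                - 0.1 * (df["baseline"] - 70) + rng.normal(0, 3, n))

model = smf.wls(
    "change ~ ipg_penetration + baseline",
    data=df,
    weights=df["n_patients"],
).fit(cov_type="cluster", cov_kwds={"groups": df["state"]})
print(model.params["ipg_penetration"], model.bse["ipg_penetration"])
```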
This approach does not distinguish between impacts on different types of providers, nor
whether per-provider impacts vary for states with low or high levels of IPG penetration. The
actual levels of IPG penetration vary across states from roughly 10 to 100 percent for nursing
homes and hospitals, and 15 to 55 percent for home health agencies. Most states tended to have
rates in the bottom half of those ranges. Because no states had HHA IPG penetration rates much
above 50 percent, the results provide no information on what impacts might be for that half of
HHAs that were unlikely to work with QIOs as IPs. The results also only reflect estimates of
average impacts across states. It is possible that there is variation in effectiveness across QIOs.
10 The original design of the study called for this approach. We had expected to be able to obtain indicators of IP status for individual providers to conduct analyses such as those in (II.1) for comparison, but we were ultimately unable to obtain those indicators.

11 Note that we adjust our standard errors for clustering at the state level, which is important given the lack of within-state variation in the IPG penetration measure. The clustering correction is applied to standard errors in all of our impact analyses.


The IPG penetration instrument is a source of variation in providers’ probability of being
selected as an IP that is not obviously related to potential causes of improvement in quality of
care, other than QIO efforts themselves. But as with any instrument, it is not possible to guarantee
that there are no unobserved characteristics that are associated both with the instrument and the
outcome (but are not caused by the instrument), and that could consequently cause the results to
be biased.
The primary determinant of IPG penetration is state population size, with IPG penetration
rates tending to increase as state size decreases. It is possible that smaller states differ
systematically from larger states in ways that would affect quality of care. We controlled for a
range of characteristics that might differentiate states of different sizes, including urbanicity,
economic characteristics, and demographic traits. We also controlled for baseline differences in
quality of care—so even if smaller states did tend to have systematically higher or lower levels
of quality of care, that is accounted for as well, and any bias would have to result from
systematic differences related to state size that emerged after baseline, such as quality
improvement efforts led by other organizations that also happened to be more strongly
concentrated in smaller states during the same time period.
b. QIO Focus Measures vs. Those Not Focused On
In order to address potential confounding due to unobserved causes of quality improvement,
we adjusted for baseline performance and change in quality measures that QIOs did not focus on
in their work with IPs in the nursing home analyses. 12 If such confounders exist, they would be
expected to influence quality outcomes broadly, not just those focused on by QIOs. The impact


estimates obtained after adjusting for non-focus quality measures reflect improvement on QIO
focus outcomes over and above what would be expected given improvement on other outcomes.
Because some QIO activities aimed at improving quality on one outcome could have impacts
across other outcomes, adjusting for improvement on non-focus outcomes could net out some
improvement that is actually due to QIOs. As a result, these estimates would be expected to, if
anything, underestimate actual impacts.

12 We did this for nursing homes only because adequate measures were unavailable for the other provider types.
Where there were multiple outcomes for a given provider type, we estimated impacts jointly
using a technique called seemingly unrelated regression. Estimating equations simultaneously
rather than one-by-one permits adjustment of standard errors for correlation across outcomes,
and also provides a better means of determining whether QIOs’ efforts across all outcomes for a
provider type were statistically significant. As the number of statistical tests increases, so does
the probability of finding some statistically significant outcomes by chance, what is known as the
problem of multiple comparisons.13 Tests of joint significance across outcomes allow us to establish whether we can, on the whole, be confident that there are true impacts in instances where we have more than one outcome measure.

13 Concluding that an observed difference is statistically significant when in fact there is no underlying difference is also called a false positive, Type I, or alpha error.
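One way to implement this joint estimation is sketched below using the SUR estimator in the Python linearmodels package; our analyses were not necessarily run in this software, and the outcome names and data here are synthetic and illustrative only:

```python
# A sketch of joint estimation with seemingly unrelated regression, using the
# linearmodels package as one possible implementation (the report does not
# name its software). One equation per nursing home outcome.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from linearmodels.system import SUR

rng = np.random.default_rng(1)
n = 500
exog = sm.add_constant(pd.DataFrame({
    "ipg_penetration": rng.uniform(0.1, 1.0, n),
    "baseline": rng.uniform(5, 30, n),
}))
outcomes = ["d_ulcers", "d_restraints", "d_depression", "d_pain"]
equations = {
    y: {"dependent": pd.Series(rng.normal(0, 2, n), name=y), "exog": exog}
    for y in outcomes
}
res = SUR(equations).fit(cov_type="robust")
print(res.summary)
```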
7. Is there an impact of QIOs’ statewide efforts on improvement in home health quality measures?

In general it is difficult to assess the impact of statewide activities because they affect all
providers and all QIOs engage in them, so there are no comparison groups not exposed to the
statewide activities. However, the design of the home health task does present a good
opportunity to assess statewide efforts aimed at improving care in the HHA setting. In their
statewide work with home health agencies, all QIOs were required to promote reduction in acute

care hospitalization, but as noted in the Measures section above, each also chose one additional
measure from among a list of nine. QIOs did not all choose the same measure, which allows us
to test whether providers tended to improve more on a given measure if their state QIO selected
that measure for statewide improvement, compared to providers in other states.
We estimated equations of the following form:

Δy_i = α + φ C_s + γ X_i + ε_i        (II.3)

where Cs is an indicator that the provider is in a state where the state chose the outcome measure
in the equation to focus on for state-wide improvement. As described earlier, nearly all state
QIOs (49 out of 51) chose one of three outcomes—management of oral medications, pain
interfering with activity, or dyspnea. We simultaneously estimated impacts ( φ ) on those three
outcomes using seemingly unrelated regression. This specification not only avoids selection bias
on unobservable characteristics of providers, but also—on the whole—on unobservable
characteristics of states because nearly all states, regardless of whether they are high or low
performing, selected at least one of the three outcomes in question.
Of the remaining six outcomes, Home Health Compare contains baseline and follow-up data
for five (bathing, transferring, ambulation, incontinence, and discharge to community). We
standardized and then averaged measures of improvement on those five outcomes and added
those to the equation above to adjust for differences in improvement across providers that likely
would have occurred in the absence of QIO intervention.
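The sketch below illustrates equation (II.3) for one of the three outcomes, again with hypothetical file and column names, and shows the construction of the standardized average of improvement on the five non-selected outcomes. The reported estimates were obtained jointly across the three outcomes via seemingly unrelated regression and included the covariates in Table D.3, which this single-equation sketch omits.

    # Single-equation sketch of (II.3); file and column names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("home_health_compare.csv")

    # Standardize improvement on the five non-selected outcomes, then average.
    nonselected = ["bathing", "transferring", "ambulation",
                   "incontinence", "discharge_to_community"]
    z = df[[f"change_{m}" for m in nonselected]].apply(
        lambda s: (s - s.mean()) / s.std())
    df["nonselected_change"] = z.mean(axis=1)

    df = df.dropna(subset=["change_oral_meds", "qio_selected_oral_meds",
                           "nonselected_change", "state"])

    # C_s is the indicator that the state QIO selected this measure; its
    # coefficient is the phi of equation (II.3).
    result = smf.ols(
        "change_oral_meds ~ qio_selected_oral_meds + nonselected_change",
        data=df,
    ).fit(cov_type="cluster", cov_kwds={"groups": df["state"]})
    print(result.params["qio_selected_oral_meds"])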


APPENDIX B
METHODS FOR CASE STUDY ANALYSIS

On November 10, 2008, Mathematica Policy Research, Inc. (MPR) sent a memorandum to
the Centers for Medicare & Medicaid Services (CMS) that described the patterns in statewide
improvement in hospital performance measures, as measured by the patient-weighted mean
scores in each state. That analysis showed substantial variation among states in the rates of
improvement on two Surgical Care Improvement Project (SCIP) measures that were part
of the Eighth Scope of Work (SOW). We focused on the SCIP measures because they are the only
hospital measures that are also part of the Ninth SOW. The baseline period for analysis was July
2004 through June 2005 (just prior to the start of the Eighth SOW), and the follow-up period was
October 2006 through September 2007. The follow-up period thus fell about two-thirds of the
way through the Eighth SOW, and represented the most recent data available at the time of the
analysis. To be included in the analysis, hospitals must have had data in both the baseline and
follow-up periods. Therefore, only a fraction of hospitals were included in the analysis, as
discussed below. To understand the variation in state improvement patterns consistent with the
study time frame and resources, we selected a small number of states and conducted telephone
discussions with their Quality Improvement Organizations (QIOs) and hospital associations to
identify any differences in the provider environment and/or the QIO’s approach that might help
explain the different patterns.
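For reference, the patient-weighted state means underlying that memorandum can be computed as in the following sketch; the file and column names are hypothetical, and, as noted above, the actual analysis retained only hospitals with data in both periods.

    # Hypothetical sketch of patient-weighted state means on a SCIP measure.
    import pandas as pd

    df = pd.read_csv("hospital_compare_scip.csv")
    # Keep hospitals with data in both the baseline and follow-up periods.
    df = df.dropna(subset=["score_baseline", "score_followup", "patients"])

    g = df.assign(wb=df["score_baseline"] * df["patients"],
                  wf=df["score_followup"] * df["patients"]).groupby("state")
    state = pd.DataFrame({
        "baseline": g["wb"].sum() / g["patients"].sum(),
        "end": g["wf"].sum() / g["patients"].sum(),
    })
    state["pp_change"] = state["end"] - state["baseline"]
    # States with the largest and smallest improvements, as in Table B.1.
    print(state.sort_values("pp_change", ascending=False))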
A. SELECTION OF CASE STUDY STATES
Although all states improved on the SCIP measures during the Eighth SOW, we identified
five states for case study that followed one of two patterns in improvement: high improvement
from a low baseline (high-improving states), or lesser improvement from a high baseline (high-
baseline states). The three high-improving states began the study period with relatively low rates
on both of the SCIP measures but then improved dramatically—more than 14 percentage
points—on the two measures in the follow-up period. The three states were selected to provide
geographic diversity from among eight states that were in the bottom quartile on both measures
at baseline. The two states that were high performers at baseline had lower rates of improvement,
roughly matching average improvement on these measures across all states. We selected these
two states to provide one urban and one rural state in different parts of the country from the eight
states that were in the top quartile at baseline for both of the SCIP measures. The data for the
selected states along with the national average are shown in Table B.1.
B. DATA COLLECTION
We developed a discussion guide aimed at exploring a number of potential factors in state
SCIP trends: QIO activities, hospital association activities, the SCIP program, the 100,000 Lives
Campaign conducted by the Institute for Healthcare Improvement (IHI), hospitals' public reporting
of core SCIP measures as part of the Joint Commission on Accreditation of Healthcare
Organizations (JCAHO) accreditation, barriers to improvement, and whether reporting hospitals
were representative of hospitals in the state overall. We used four slightly different versions of
this guide, depending on the entity we were speaking with and the pattern of improvement in the
state (copies of the guides are included in this appendix).
Prior to holding telephone calls with QIOs and hospital associations, we interviewed several
national-level experts from the IHI, representatives of the quality improvement organization
support center (QIOSC), and an expert involved during the study time frame with the national
SCIP program. The purpose of these calls was to identify and prioritize likely explanations, and
to identify any measurement issues pertaining to the SCIP measures that might have affected the
relative rates among states.
From December 2008 through February 2009, we held telephone discussions of about 45
minutes each with the QIO task leaders, and separate calls with knowledgeable hospital
association representatives in each state,1 in order to identify what approaches or provider
characteristics might help explain the different patterns on these measures between the high- and
low-baseline states.

1 In State C, the hospital association was not responsive to our attempts to schedule a call; however, we included this state in the analysis because the hospital association and the QIO worked closely, and thus the QIO was able to tell us about the full range of relevant activities.
TABLE B.1
PERFORMANCE TRENDS OF FIVE CASE STUDY STATES ON THE TWO SCIP MEASURES

                                              Number of Hospitals
                                              Whose Patients       Baseline   End      Percentage
State                                         Were in Analysis     Level      Level    Point Change

High-Improving States that Were Low
Performers at Baseline
Antibiotics one hour before incision
  State A                                     58                   69.9       88.4     18.5
  State B                                     18                   70.1       85.3     15.2
  State C                                     32                   70.3       84.5     14.2
Antibiotics stopped within 24 hours after
surgery
  State A                                     54                   66.1       88.3     22.2
  State B                                     17                   56.4       76.0     19.6
  State C                                     32                   56.0       77.5     21.5

High Performing States at Baseline
Antibiotics one hour before incision
  State D                                     57                   86.3       92.0     5.7
  State E                                     20                   88.6       93.2     4.6
Antibiotics stopped within 24 hours after
surgery
  State D                                     54                   77.4       89.7     12.3
  State E                                     19                   76.9       86.5     9.6

National (Median of States)
Antibiotics one hour before incision          1,281                79.3       88.6     9.2
Antibiotics stopped within 24 hours after
surgery                                       1,259                66.7       83.4     16.7

SCIP = Surgical Care Improvement Project.

DISCUSSION GUIDE FOR QIOS IN HIGH-IMPROVING STATES

In examining the trends in two Hospital Compare measures of surgical care improvement
between July 2004-June 2005 and October 2006-September 2007, we noticed that [state] had one
of the largest improvements in the country. Specifically, for the __ measure, this state showed
positive improvement of __ percentage points, rising from __% meeting the guideline in [time
period] to __% meeting the guideline by ___. For the __ measure, this state showed positive
improvement of __ percentage points, rising from __% meeting the guideline in [time period] to
__% meeting the guideline by September 2007.
1) The trend I just described was based on the __ percent of [state] hospitals that reported
these measures to Hospital Compare in both years. So please look at the list of hospitals
that reported Hospital Compare data on these measures in your state….
a) How many of these hospitals were in your SCIP IPG during the 8th SOW? Did your
SCIP IPG include any hospitals that do not appear on this list? If so, how many?
b) What could you tell us about the hospitals on the attached list in the following areas?
   i. Are their information systems fairly sophisticated as they pertain to quality
      measurement?
   ii. How committed does their leadership appear to be to improving quality? How can
      you tell?
   iii. Do they tend to stand out with respect to leadership in the surgical quality
      improvement area—e.g., having a head surgeon who is a particular believer in
      measures and/or particularly committed to reducing infections?
   iv. Do many of them have an overall philosophy on improving, such as Six Sigma or
      Lean?
   v. Do most of them have sophisticated quality improvement departments, including a
      highly skilled quality improvement director?
   vi. Was there anything that made them more committed to improving on these
      measures than other hospitals around the country might be? For example, a
      particular champion, or some organization subsidizing their participation in the
      100,000 Lives Campaign?
   vii. Do the hospitals on the Hospital Compare list tend to be those who are more
      interested in collaborating on quality relative to others in the state?

c) How large a gap do you think there is between the hospitals on the list and other
   hospitals in the state in terms of:
   i. performance on these measures
   ii. ability to improve

2) Does the positive trend we described for the Hospital Compare-reporting hospitals differ
in size much from the trend you saw in the SCIP IPG data abstracted by CDAC for the
Medicare population between baseline—first quarter 2005—and remeasurement—first
quarter 2007?
3) We assume by definition the hospitals in your SCIP IPG were participating in SCIP.
Attachment B lists hospitals that were participants in SCIP in your state. Can you tell us
about what percent of the hospitals on this SCIP list were not part of your SCIP IPG?
4) Were you aware of the Hospital Compare improvement trend we’ve been discussing
before we contacted you?
5) If yes, when and how did you know hospitals were improving very well on these
measures?
6) Why do you think we see this positive trend?
7) Could you tell us about what you did with your SCIP IPG during the 8th SOW?
a) Did you provide direct technical assistance or facilitate a collaborative? (If so, please
describe.)
b) Did you offer a monitoring tool to the IPG hospitals for tracking quarterly SCIP
measures performance? To any other (non-IPG) hospitals? Do you know to what
extent they used it?
c) What else, if anything, did you do?
8) Are hospitals reporting the SCIP measures as a part of any state or regional initiative, in
addition to Hospital Compare? (If yes, please describe.)
9) Are there any payment incentives that you are aware of that would have caused hospitals
   to focus on improving on these measures? [Specifically discuss any apparently high
   participation in the Premier Hospital Quality Improvement Demonstration based on the
   list of Premier-participating hospitals by state, since these measures are incentivized,
   albeit for CABG and hip/knee surgery only.]


10) How would you characterize the extent to which hospitals in this state are interested in
working with each other to improve on quality measures? In other words, does quality
improvement in this state tend to happen through collaboration, or competition, or both?
a) Please confirm whether this general pattern applies to surgical infection-related
   measures.
11) Summary: To sum up our discussion, please briefly recap which if any of the following
likely played a positive role, and how so:
a) QIO activities
b) The SCIP program more generally
c) IHI's 100,000 Lives Campaign
d) Hospitals’ public reporting of core SCIP measures as part of JCAHO accreditation
e) The Hospital Quality Alliance/Hospital Compare
f) Other things?


DISCUSSION GUIDE FOR QIOS IN LOW-IMPROVING STATES

Through this discussion, we are hoping to gain insights into the trends in Hospital Compare
data, drilling down to study two Hospital Compare measures of surgical care improvement
between a baseline period of July 2004-June 2005 and a follow-up period of October 2006-
September 2007. During that period, we noticed that [state] did not show as much improvement
as most other states did on the antibiotics one hour before incision measure. For this measure,
this state showed positive improvement of only __ percentage points, where the average for the
nation was __%. The ending percentage for this measure was not particularly high either
relative to other states, with this state ending at __%, below average for the nation. We note that
there was also little improvement compared to other states on the antibiotics stopped within 24
hours after surgery measure; however, we recognize the ending percentage was about average for
the nation.
Characteristics of Hospitals on the Hospital Compare List
1) The trend I just described was based on the __ percent of [state] hospitals that reported
these measures to Hospital Compare in both years. So please look at the list of hospitals
that reported Hospital Compare data on these measures in your state….
a) How many of these hospitals were in your SCIP IPG during the 8th SOW?
   i. Did your SCIP IPG include any hospitals that do not appear on this list?
   ii. If so, how many?

b) What could you tell us about the hospitals on the attached list in the following areas?
   i. Organizational characteristics—generally large, urban, nonprofit...?
   ii. Are their information systems fairly sophisticated as they pertain to quality
      measurement?
   iii. How committed does their leadership appear to be to improving quality? How can
      you tell?
   iv. Do they tend to stand out with respect to leadership in the surgical quality
      improvement area—e.g., having a head surgeon who is a particular believer in
      measures and/or particularly committed to reducing infections?
   v. Do many of them have an overall philosophy on improving, such as Six Sigma or
      Lean?
   vi. Do most of them have sophisticated quality improvement departments, including a
      highly skilled quality improvement director?

   vii. Do the hospitals on the Hospital Compare list tend to be those who are more
      interested in collaborating on quality relative to others in the state?
c) How large a gap do you think there is between the hospitals on the list and other
   hospitals in the state in terms of:
   i. performance on these measures
   ii. ability to improve

2) Does the relatively stable trend we described for the Hospital Compare-reporting
hospitals differ in size much from the trend you saw in the SCIP IPG data abstracted by
CDAC for the Medicare population between baseline (first quarter 2005) and
remeasurement (first quarter 2007)?
3) We assume by definition the hospitals in your SCIP IPG were participating in SCIP.
Attachment B lists hospitals that were participants in SCIP in your state. Can you tell us
about what percent of the hospitals on this SCIP list were not part of your SCIP IPG?
Awareness of Performance Trend
4) Were you aware of the Hospital Compare improvement trend we’ve been discussing
before we contacted you?
5) If yes, how did you know how hospitals were doing on these measures?
6) Could you tell us about what you did with your SCIP IPG during the 8th SOW?
a) Did you provide direct technical assistance or facilitate a collaborative? (If so, please
describe.)
b) Did you offer a monitoring tool to the IPG hospitals for tracking quarterly SCIP
measures performance? To any other (non-IPG) hospitals? Do you know to what
extent they used it?
c) What else, if anything, did you do?
7) Are hospitals reporting the SCIP measures as a part of any state or regional initiative, in
addition to Hospital Compare? (If yes, please describe.)
8) Are there any payment incentives that you are aware of that would have caused hospitals
   to focus on improving on these measures? [Specifically discuss any apparently high
   participation in the Premier Hospital Quality Improvement Demonstration based on the
   list of Premier-participating hospitals by state, since these measures are incentivized,
   albeit for CABG and hip/knee surgery only.]


9) How would you characterize the extent to which hospitals in this state are interested in
working with each other to improve on quality measures? In other words, does quality
improvement in this state tend to happen through collaboration, or competition, or both?
a) Please confirm whether this general pattern applies to surgical infection-related
   measures.
Barriers to Improvement
10) What if any barriers are you aware of that hospitals on the Hospital Compare list faced in
trying to improve on these measures during the 2005-2008 time period? [or hospitals in
general if Hospital Compare list is too precise, but be sure to clarify which]
a) Financial barriers—were most hospitals doing OK during this period financially?
b) Other foci—were hospitals pre-occupied with other concerns during this period? If
so, what types?
c) Lack of leadership in the hospital community around surgical infection measures—or
around QI more generally?
d) Were there other barriers to improvement?
11) To what extent do you think these barriers persist going forward?
Summary of Influences on Improvement
12) To sum up our discussion, please briefly recap which, if any, of the following played a
    positive role, and whether you can think of anything that may have limited the
    effectiveness of their role in this state compared with others when it came to the surgical
    infection prevention measures:
a) QIO activities
b) Hospital association activities
c) The SCIP program more generally
d) IHI's 100,000 Lives Campaign
e) Hospitals’ public reporting of core SCIP measures as part of JCAHO accreditation
f) The Hospital Quality Alliance/Hospital Compare
g) Other things?


DISCUSSION GUIDE FOR HOSPITAL ASSOCIATIONS
IN HIGH-IMPROVING STATES

In examining the trends in two Hospital Compare measures of surgical care improvement
between __ and ___, we noticed that [state] had one of the largest improvements in the country.
Specifically, for the __ measure, this state showed positive improvement of __ percentage points,
rising from __% meeting the guideline in [time period] to __% meeting the guideline by ___. For
the __ measure, this state showed positive improvement of __ percentage points, rising from
__% meeting the guideline in [time period] to __% meeting the guideline by ___.
Characteristics of Hospitals on the Hospital Compare List
1) The trend I just described was based on the __ percent of [state] hospitals that reported
these measures to Hospital Compare in both years. So please look at the list of hospitals
that reported Hospital Compare data on these measures in your state…what could you tell
us about the hospitals on the attached list?
   i. Organizational characteristics—generally large, urban, nonprofit?
   ii. Are their clinical information systems fairly sophisticated?
   iii. Are there many that see quality improvement as a key part of their business
      strategy?
   iv. Are many of them using Six Sigma, Lean, or other paradigms to improve quality?
   v. Do most of them have sophisticated quality improvement departments, including a
      highly skilled quality improvement director?
   vi. Can you think of anything that may have made them more committed to improving
      on these measures than other hospitals around the country might be? For example, a
      particular champion, or some organization subsidizing their participation in the
      100,000 Lives Campaign?
   vii. Do you know if the hospitals on the Hospital Compare list tend to be those who are
      more interested in collaborating on quality relative to others in the state?
a) Do you have a sense for whether there is much of a gap between the hospitals on the
   list and other hospitals in the state in terms of:
   i. performance on these measures and/or
   ii. ability to improve

Awareness of Performance Trend
2) Were you aware of the Hospital Compare improvement trend we’ve been discussing
before we contacted you?
3) If yes, when and how did you know hospitals were improving very well on these
measures?
Hospitals’ Involvement in Various Related Initiatives
4) Do you have any ideas about why we see this positive trend? To the extent you can,
please tell us which if any of the following may have played a positive role, and how so:
a) Medicare’s Quality Improvement Organization in the state - [name it]
b) The SCIP program more generally
c) IHI's 100,000 Lives Campaign
d) The Hospital Quality Alliance/Hospital Compare
e) Hospitals’ public reporting of core SCIP measures as part of JCAHO accreditation
f) Other things?
5) Did the hospital association have much interaction with its members around the issue of
quality improvement during 2005-2008? Surgical care improvement specifically? (Please
explain.)
6) Are you aware of the work the QIO in the state [name] was doing with some hospitals
during 2005-2008 around surgical infection prevention?
If yes:
a) What was your impression of how things went with that work?
b) Is there anything you think the QIO could have done better?
7) Were hospitals during that time involved in any state or regional initiative that might
have affected their improvement on the SCIP measures? (If yes, please describe.)
8) Are there any payment incentives that you are aware of that would have caused hospitals
   to focus on improving on these measures? [Specifically discuss any apparently high
   participation in the Premier Hospital Quality Improvement Demonstration based on the
   list of Premier-participating hospitals by state, since these measures are incentivized,
   albeit for CABG and hip/knee surgery only.]


Barriers to Improvement
9) What if any barriers are you aware of that hospitals on the Hospital Compare list faced in
trying to improve on these measures during the 2005-2008 time period? [or hospitals in
general if Hospital Compare list is too precise, but be sure to clarify which]
a) Financial barriers—were most hospitals doing OK during this period financially?
b) Other foci—were hospitals pre-occupied with other concerns during this period? If
so, what types?
c) Lack of leadership in the hospital community around surgical infection measures—or
around QI more generally?
d) Were there other barriers to improvement?
10) To what extent do you think these barriers persist going forward?
Summary of Influences on Improvement
11) To sum up our discussion, please briefly recap which, if any, of the following played a
    positive role, and whether you can think of anything that may have enhanced the
    effectiveness of their role in this state compared with others when it came to the surgical
    infection prevention measures:
a) QIO activities
b) Hospital association activities
c) The SCIP program more generally
d) IHI's 100,000 Lives Campaign
e) Hospitals’ public reporting of core SCIP measures as part of JCAHO accreditation
f) The Hospital Quality Alliance/Hospital Compare
g) Other things?


DISCUSSION GUIDE FOR HOSPITAL ASSOCIATIONS
IN LOW-IMPROVING STATES

Through this discussion with you, we are hoping to gain insights into the trends in Hospital
Compare data specifically for two measures of surgical care improvement between July 2004
through June 2005 as the baseline period and October 2006 through September 2007 as the
follow-up period. We noticed that on the measure antibiotics one hour before incision, [state] did
not show as much improvement as most other states did. Specifically, for the __ measure, this
state showed positive improvement of only __ percentage points, where the average for the
nation was __%. The ending percentage for the state’s hospitals on this measure was also below
average. For the other measure we are reviewing—antibiotics stopped within 24 hours after
surgery—the state also showed less-than-average improvement (__ percentage points), where the
average for the nation was __%. However, on this measure we recognize that the ending
percentage was about the same as the average for other states.
Characteristics of Hospitals on the Hospital Compare List
1) The trend I just described was based on the __ percent of [state] hospitals that reported
these measures to Hospital Compare in both years. So please look at the list of hospitals
that reported Hospital Compare data on these measures in your state…what could you tell
us about the hospitals on the attached list?
   i. Organizational characteristics—generally large, urban, nonprofit...?
   ii. Are their clinical information systems fairly sophisticated?
   iii. Are there many that see quality improvement as a key part of their business
      strategy?
   iv. Are many of them using Six Sigma, Lean, or other paradigms to improve quality?
   v. Do most of them have sophisticated quality improvement departments, including a
      highly skilled quality improvement director?
   vi. Can you think of anything that may have made them more committed to improving
      on these measures than other hospitals around the country might be? For example, a
      particular champion, or some organization subsidizing their participation in the
      100,000 Lives Campaign?
   vii. Do you know if the hospitals on the Hospital Compare list tend to be those who are
      more interested in collaborating on quality relative to others in the state?
a) Do you have a sense for whether there is much of a gap between the hospitals on the
   list and other hospitals in the state in terms of:

   i. performance on these measures and/or
   ii. ability to improve

Hospitals’ Involvement in Various Related Initiatives
2) To the extent you can, please tell us how involved the state’s hospitals were in the
following:
a) Collaborative work on SCIP with Medicare’s Quality Improvement Organization in
the state - [name it]
b) The SCIP program more generally
c) IHI's 100,000 Lives Campaign
d) Hospitals’ public reporting of core SCIP measures as part of JCAHO accreditation
e) The Hospital Quality Alliance/Hospital Compare
f) Anything else that might have influenced hospitals’ work on surgical infection
prevention?
3) Are you aware of the work the QIO in the state [name] was doing with some hospitals
during 2005-2008 around surgical infection prevention?
If yes:
a) What was your impression of how things went with that work?
b) Is there anything you think the QIO could have done better?
4) Did the hospital association have much interaction with its members around the issue of
quality improvement during 2005-2008? Surgical care improvement specifically? (Please
explain.)
5) Are there any payment incentives that you are aware of that would have caused hospitals
   to focus on improving on these measures? [Specifically discuss any apparently high
   participation in the Premier Hospital Quality Improvement Demonstration based on the
   list of Premier-participating hospitals by state, since these measures are incentivized,
   albeit for CABG and hip/knee surgery only.]
Barriers to Improvement
6) What if any barriers are you aware of that hospitals faced in trying to improve on these
measures during the 2005-2008 time period?
a) Financial barriers—were most hospitals doing OK during this period financially?


b) Other foci—were hospitals pre-occupied with other concerns during this period? If
so, what types?
c) Lack of leadership in the hospital community around surgical infection measures—or
around QI more generally?
d) Unwillingness to collaborate to improve on these measures? (If there was
   unwillingness, did this extend to other measure types, or was it something unique to
   surgical infections?)
e) Others?
7) To what extent do you think these barriers persist going forward?
Awareness of Performance Trend
8) Were you aware of [state]’s hospitals’ performance on Hospital Compare measures
related to surgical care before we contacted you? More generally were you aware of
Hospital Compare trends in performance?
9) If yes, how did you learn about hospitals’ performance?
10) Do you have any ideas about why we do not see as much of a positive trend here as
elsewhere on these measures? (For Montana—did the fact that you were higher than
average at the start cause you to focus your efforts on other measures instead?)
[add specific probes based on earlier discussions with IHI, hospital improvement QIOSC,
and high-improving states’ QIOs and hospital associations]
Summary of Influences on Improvement
11) To sum up our discussion, please briefly recap which, if any, of the following played a
    positive role, and whether you can think of anything that may have limited the
    effectiveness of their role in this state compared with others when it came to the surgical
    infection prevention measures:
a) QIO activities
b) Hospital association activities
c) The SCIP program more generally
d) IHI's 100,000 Lives Campaign
e) Hospitals’ public reporting of core SCIP measures as part of JCAHO accreditation
f) The Hospital Quality Alliance/Hospital Compare
g) Other things?

APPENDIX C
METHODS FOR ANALYSIS OF PROVIDER
SATISFACTION SURVEY

Provider satisfaction data are from a nationwide survey of providers (nursing homes, home
health agencies, and hospitals), conducted by Westat in 2007, on their experiences with QIOs
during the 8th Scope of Work.1 CMS provided us the de-identified survey data as a SAS dataset,
and two reports from Westat describing the survey methodology and descriptive results (Giambo
et al. 2007; Narayanan et al. 2008).

1 The survey also included Medicare Advantage plans, but we do not analyze those results here.

A. ANALYSIS SAMPLE

The target population consisted of 100 percent of IPG providers (identified by the QIOs),
and a simple random sample (SRS) of non-IPG providers drawn from the CMS provider data
files that list all Medicare-participating providers. Since the IPG sample is a census of IPG
providers, and the non-IPG sample is a straightforward systematic sample (in which the lists of
non-IPGs were sorted by provider characteristics and the samples selected by an "every Nth"
approach after a random start), Westat did not develop any sampling weights. Westat also did
not calculate any nonresponse adjustments.2

The survey dataset we received contained no information on "status codes" that indicate
survey eligibility and disposition,3 and we were thus unable to calculate standard response rates
or to duplicate the response rate results reported by Giambo et al. (2007). For the purposes of this
report, we defined a "completed survey" record as one in which there was at least one non-
missing response to a survey question. Our survey response rates were thus calculated as the
number of completes (as defined above) divided by the number of providers.

2 Westat stated that since any nonresponse adjustments would be defined using QIO state, 8th SOW task (that is, provider type), and IPG status, and these variables would be the same for all respondents in any given stratum, the net effect would be to apply an adjustment factor of "1" to all respondents (Giambo et al. 2007). It is unclear whether Westat considered using other provider characteristics from CMS's provider enrollment files for possible nonresponse adjustments.

3 Status codes indicate information on the results of each interview attempt, such as whether the provider was ineligible for the survey upon further screening, the respondent refused the interview, a respondent could not be located, or the interview was successfully completed. Status code information is necessary to calculate response rates (American Association for Public Opinion Research 2008).
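To make the completed-survey definition concrete, it can be computed as in the sketch below; the file name, and the assumption that survey items share a common column prefix, are hypothetical.

    # Hypothetical sketch of the completed-survey rule described above.
    import pandas as pd

    df = pd.read_csv("westat_provider_survey.csv")
    survey_items = [c for c in df.columns if c.startswith("q")]  # assumed naming

    # A record is "complete" if at least one survey question is non-missing.
    df["complete"] = df[survey_items].notna().any(axis=1)
    response_rate = df["complete"].sum() / len(df)  # completes / providers
    print(f"Response rate: {response_rate:.1%}")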
As explained in Chapter I, we focus only on hospitals, nursing homes, and home health
agencies, although Westat also surveyed physician practices, Medicare Advantage health plans,
beneficiaries, and stakeholder organizations. Furthermore, among hospitals, we only analyzed
those listed under the 8th SOW task in which the QIOs helped hospitals with care for heart
attacks, heart failure, pneumonia, perioperative patients, and systems and organizational change
(Task 1c1). We did not analyze hospitals listed under the Rural Organization Safety Culture
Change task (Task 1c2). We only included providers in the 50 states and the District of
Columbia, excluding providers in Puerto Rico.
B. OUTCOME MEASURES AND ANALYSIS
We examine the individual survey questions within each of the six main survey topics
developed by Westat, which covered providers': (1) use of email and the internet to receive,
circulate, or access quality information and QIO resources; (2) knowledge of CMS programs; (3)
satisfaction with their local QIO; (4) perceptions of the value of the QIO; (5) interactions with
the QIO; and (6) sources of quality information (Table V.2).
analyzing the survey data differs from that used by Westat. Westat’s contract with CMS called
for it to compute overall satisfaction scores for each QIO, using an algorithm specified by CMS.
The algorithm combined responses from all respondents (stakeholders who worked with QIOs,
IPG providers, and non-IPG providers) and across all topic areas (provider knowledge,
satisfaction, and perceived value) into a single score. CMS used this score in its evaluation of the
QIOs’ contract performance.
In addition to examining providers' responses by their IPG or non-IPG status, we also
studied the association between QIO provision of assistance and providers' responses. We thus
used the question on whether providers reported receiving help from their QIO (the first question
listed under the "Providers' Satisfaction with Local QIO" topic in Table V.2). Although there
were four potential categories (receipt of help crossed with IPG and non-IPG status), as
discussed below only a small proportion of IPG providers did not report receiving QIO help, so
we instead formed three groups: (1) IPGs, (2) non-IPGs that reported receiving QIO help, and
(3) non-IPGs that reported no help. The other two questions on QIO help—whether or not
providers reported receiving information, and whether or not providers said they asked the QIO
for help—displayed too little variation in the responses to be useful for further grouping.
We combined response categories with few responses into the adjacent category (for
example, combining “strongly disagree” with “somewhat disagree,” and “very dissatisfied” with
“somewhat dissatisfied”). For the numeric 0 to 10 ratings of usefulness of QIO help, we
calculated average values. We tested the statistical significance of differences between group
means with simple analysis of variance tests, t-tests, and chi-square tests. For some of the topics
with many questions, we present results for a few illustrative questions; the Appendix contains
full results for all questions. We focus on national level averages, in which providers are the
units of analysis, each weighted equally, as sample sizes for many states were limited.
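For illustration, the group comparisons described above can be carried out as in the following sketch, with hypothetical column names: a one-way analysis of variance for the 0 to 10 usefulness rating across the three groups, and a chi-square test for a categorical satisfaction item.

    # Hypothetical sketch of the significance tests used for group comparisons.
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("westat_provider_survey.csv")

    # help_group takes three values: IPG, non-IPG with QIO help, non-IPG without.
    groups = [g["usefulness_0_10"].dropna()
              for _, g in df.groupby("help_group")]
    f_stat, p_anova = stats.f_oneway(*groups)

    table = pd.crosstab(df["help_group"], df["satisfaction"])
    chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
    print(f"ANOVA p = {p_anova:.3g}; chi-square p = {p_chi2:.3g}")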


APPENDIX D
SUPPLEMENTAL TABLES FOR CHAPTER II
(ANALYSES OF MEDICARE COMPARE DATA)

TABLE D.1
PREDICTORS OF CHANGE IN NURSING HOME QUALITY,
FROM IPG IMPACT ESTIMATE REGRESSIONS

                                                   (1)         (2)          (3)          (4)
                                                   Pressure    Physical                  Chronic
Variable                                           Ulcers      Restraints   Depression   Pain

IPG Penetration, State Level                       -0.034**    -0.022**     -0.011       -0.011*

County-Level Characteristics
MDs per 1,000 Population                           -0.10       -0.04†       0.02         0.06†
RNs per 1,000 Population                           0.03        -0.01        -0.04        -0.01
Per Capita Income (natural logarithm)              3.41**      1.04*        -1.46        0.22
Located in a Metropolitan Area                     -0.07       0.23         -0.58*       0.34**
Percentage of Population
  Ages 0 to 19                                     -0.038      0.009        0.218**      0.064*
  Ages 65 and Over                                 -0.0002     -0.006       -0.031       0.012
  With Four Years College                          -0.02       -0.01        0.02         -0.02
  Uninsured                                        0.04        -0.01        -0.40**      -0.02
  At or Below Poverty Level                        0.09        0.08*        0.16*        0.04
  Hispanic                                         0.03        -0.002       -0.03†       -0.01†
  Black                                            0.05**      -0.02**      -0.03*       -0.02**

Provider-Level Characteristics
Ownership Type
  For Profit, Individual or Partnership            -0.06       -0.19        -0.81**      0.08
  Government                                       -0.74†      -0.01        0.49         0.13
  Nonprofit, Corporation                           -0.86**     -0.51**      0.77*        -0.24*
  Nonprofit, Church                                -1.10**     -0.36        1.46*        -0.35*
  Nonprofit, Other                                 -0.19       0.36         1.26*        -0.24
Large Nursing Home                                 0.30†       0.04         -0.02        -0.29**
Located Within a Hospital                          0.61        -0.38*       -0.20        0.34
Resident and Family Councils Present               -0.21       0.13         -0.23        -0.31**
Baseline Level of Outcome                          -0.62**     -0.55**      -0.61**      -0.73**

Baseline Nonfocus Quality Measures
  Improvement in Ambulation                        0.03        -0.02†       0.12**       -0.002
  Improvement in Pain Interfering with Activity    -0.02       -0.002       0.13**       -0.02†
  Improvement in Transferring                      0.08**      0.02         0.004        0.08**
  Improvement in Urinary Incontinence              0.10**      0.04*        0.06*        0.04**

Change in Nonfocus Quality Measures
  Improvement in Ambulation, Change                0.03        0.003        0.13**       0.01†
  Improvement in Pain Interfering with Activity,
    Change                                         0.01        -0.003       0.12**       -0.004
  Improvement in Transferring, Change              0.15**      0.01         0.02         0.08**
  Improvement in Urinary Incontinence, Change      0.15**      0.03**       0.08**       0.05**

Number                                             6,854       8,547        8,545        8,546

Source:   Nursing Home Compare, second quarter of 2005 and first quarter of 2008 collection periods.

Note:     Results are coefficients estimated jointly through seemingly unrelated regression (from models
          in Column 3, Table III.7). Providers weighted by total number of patients. Standard errors are
          adjusted for geographic clustering. Each model also includes an IPG penetration measure.

†p<.10; *p<.05; **p<.01 (two-tailed tests).
IPG = identified participant group.
MD = medical doctor.
RN = registered nurse.

TABLE D.2
PREDICTORS OF CHANGE IN HHA ACUTE CARE HOSPITALIZATION,
FROM IPG IMPACT ESTIMATE REGRESSION

Variable                                           Acute Care Hospitalization

IPG Penetration, State Level                       -0.133**

County-Level Characteristics
MDs per 1,000 Population                           -0.17***
RNs per 1,000 Population                           -0.01
Per Capita Income (natural logarithm)              -0.12
Located in a Metropolitan Area                     -0.15
Percentage of Population
  Ages 0 to 19                                     0.115
  Ages 65 and Over                                 0.011
  With Four Years College                          0.04
  Uninsured                                        -0.11
  At or Below Poverty Level                        0.12***
  Hispanic                                         0.01
  Black                                            -0.0004

Provider-Level Characteristics
Baseline ACH Rates                                 -0.47**
Ownership Type
  Government                                       -1.06*
  Nonprofit, Other                                 -0.88**
  Nonprofit, Private                               -0.89**
  Nonprofit, Religious                             -1.16*
Date of Certification, 1990s                       0.55***
Date of Certification, 2000s                       0.52
Provides Medical Social Services                   -0.79*

Number                                             6,265

Source:   Home Health Compare, baseline data collected September 2004-August 2005 and follow-up
          data collected March 2007-February 2008.

Note:     Providers weighted equally. Figures are OLS-estimated coefficients from the specification in
          Column 2 of Table IV.6. Standard errors are adjusted for geographic clustering.

*p<.05; **p<.01; ***p<.10 (two-tailed tests).
ACH = acute care hospitalization.
HHA = home health agency.
IPG = identified participant group.
MD = medical doctor.
OLS = ordinary least squares.
RN = registered nurse.

TABLE D.3
PREDICTORS OF CHANGE IN HHA QUALITY, FROM IMPACT ESTIMATE REGRESSIONS
OF STATEWIDE EFFORTS (TABLE IV.8, COLUMN 3)

                                                   (1)             (2)              (3)
                                                   Management of   Pain Interfering
Variable                                           Oral Medications with Activity   Dyspnea

Indicator – State QIO Selected the Measure for
Statewide Efforts                                  1.25**          0.31             1.83**

County-Level Characteristics
MDs per 1,000 Population                           -0.17           -0.19            0.06
RNs per 1,000 Population                           0.06***         0.03             0.03
Per Capita Income (natural logarithm)              3.95**          2.96***          -2.04***
Located in a Metropolitan Area                     -0.24           -1.09**          -0.48
Percentage of Population
  Ages 0 to 19                                     0.267**         -0.134           -0.056
  Ages 65 and Over                                 0.167**         -0.079           0.018
  With Four Years College                          0.02            -0.08*           0.02
  Uninsured                                        0.10            -0.23***         -0.27**
  At or Below Poverty Level                        0.07            0.05             -0.15*
  Hispanic                                         -0.01           0.08*            0.05***
  Black                                            0.05*           0.10**           0.06**

Provider-Level Characteristics
Baseline Level of Outcome                          -0.68**         -0.64**          -0.68**
Ownership Type
  Government                                       0.12            1.04***          1.11***
  Nonprofit, Other                                 0.34            0.52             1.51**
  Nonprofit, Private                               -0.06           0.31             1.41**
  Nonprofit, Religious                             -0.12           -0.69            1.78**
Date of Certification, 1990s                       0.52***         0.05             -0.22
Date of Certification, 2000s                       0.39            0.54             -1.88**
Provides Medical Social Services                   0.21            -0.08            -1.12*

Unselected Outcomes, Baseline                      7.54**          6.97**           9.45**
Unselected Outcomes, Change                        10.94**         10.33**          12.25**

Number                                             5,490           5,287            5,562

Source:   Home Health Compare, baseline data collected September 2004-August 2005 and follow-up
          data collected March 2007-February 2008.

Note:     All providers weighted equally. Results estimated simultaneously using seemingly unrelated
          regression (see Table IV.8, Column 3). Standard errors are adjusted for geographic clustering.

*p<.05; **p<.01; ***p<.10 (two-tailed tests).
HHA = home health agency.
MD = medical doctor.
QIO = quality improvement organization.
RN = registered nurse.

TABLE D.4
AVERAGE LEVELS AND IMPROVEMENT ON INDIVIDUAL HOSPITAL ITEMS

Outcome                                            Baseline    End       Change

Heart Attack
ACE Inhibitor or ARB for LVSD                      81.8        89.2      7.5
  (St. Dev.)                                       (17.1)      (12.5)    (18.9)
Aspirin at Arrival                                 94.4        96.4      2.1
  (St. Dev.)                                       (6.1)       (6.6)     (7.1)
Aspirin Prescribed at Discharge                    92.3        95.3      2.9
  (St. Dev.)                                       (10.0)      (8.6)     (10.1)
Beta Blocker at Arrival                            90.1        93.3      3.2
  (St. Dev.)                                       (10.1)      (8.9)     (10.0)
Beta Blocker Prescribed at Discharge               91.5        95.7      4.1
  (St. Dev.)                                       (10.5)      (7.8)     (10.1)

Heart Failure
ACE Inhibitor or ARB for LVSD                      81.5        88.4      6.9
  (St. Dev.)                                       (13.0)      (9.1)     (13.1)
Evaluation of LVS Function                         88.6        94.4      5.8
  (St. Dev.)                                       (11.0)      (7.8)     (8.4)

Pneumonia
Pneumococcal Vaccination                           54.1        80.9      26.8
  (St. Dev.)                                       (23.3)      (15.0)    (21.2)
Oxygenation Assessment                             99.2        99.8      0.6
  (St. Dev.)                                       (2.0)       (0.8)     (1.9)
Initial Antibiotic Received within Four
Hours of Hospital Arrival                          78.3        89.6      11.2
  (St. Dev.)                                       (9.4)       (6.0)     (8.9)

SCIP
Preventative Antibiotics Received One Hour
Before Incision                                    77.0        90.4      13.4
  (St. Dev.)                                       (15.9)      (7.6)     (14.9)
Preventative Antibiotics are Stopped Within
24 Hours After Surgery                             64.4        84.0      19.7
  (St. Dev.)                                       (20.0)      (11.8)    (17.9)

Source:   Hospital Compare, baseline data collected July 2004-June 2005 and follow-up data collected
          October 2006-September 2007.

Note:     Means for each measure are calculated using the sample providers that had data available at
          both points in time. Sample size varies across measures. For the ACM items, sample sizes
          range from 2,637 to 3,558. The SCIP items have sample sizes of 1,281 and 1,259 for the
          preoperative and postoperative measures, respectively.

ACE = angiotensin-converting enzyme.
ACM = appropriate care measure.
ARB = angiotensin receptor blocker.
LVSD = left ventricular systolic dysfunction.
SCIP = Surgical Care Improvement Project.
St. Dev. = standard deviation.

TABLE D.5
CORRELATIONS BETWEEN IMPROVEMENT ON INDIVIDUAL
ACM AND SCIP ITEMS

                                          (1)    (2)    (3)    (4)    (5)    (6)    (7)    (8)    (9)    (10)   (11)   (12)
(1) ACE/ARB for LVSD (HA)                 1.00
(2) Aspirin at Arrival (HA)               0.18   1.00
(3) Aspirin at Discharge (HA)             0.22   0.31   1.00
(4) Beta Blocker at Arrival (HA)          0.19   0.36   0.22   1.00
(5) Beta Blocker at Discharge (HA)        0.26   0.29   0.43   0.47   1.00
(6) ACE/ARB for LVSD (HF)                 0.31   0.16   0.20   0.20   0.24   1.00
(7) LVEF (HF)                             0.18   0.18   0.22   0.22   0.29   0.30   1.00
(8) Pneumococcal Vaccination (P)          0.12   0.09   0.09   0.11   0.11   0.18   0.22   1.00
(9) Oxygenation Assessment (P)            0.04   0.08   0.13   0.00   0.10   0.08   0.21   0.10   1.00
(10) Timely Antibiotic (P)                0.06   0.02   0.04   0.04   0.05   0.07   0.13   0.18   0.10   1.00
(11) Antibiotics 1 Hour Before
     Incision (SCIP)                      0.07   0.09   0.15   0.09   0.13   0.08   0.24   0.19   0.14   0.13   1.00
(12) Preventative Antibiotics Stopped
     Within 24 Hours (SCIP)               0.04   0.08   -0.0004 0.07  0.01   0.06   0.09   0.12   0.02   0.01   0.17   1.00

Source:   Hospital Compare, baseline data collected July 2004-June 2005 and follow-up data collected
          October 2006-September 2007.

Note:     Providers weighted by average total number of patients across all the measures.

ACE = angiotensin-converting enzyme.
ACM = appropriate care measure.
ARB = angiotensin receptor blocker.
HA = heart attack.
HF = heart failure.
LVEF = left ventricular ejection fraction.
P = pneumonia.
SCIP = Surgical Care Improvement Project.

TABLE D.6
OLS ESTIMATES OF PREDICTORS OF CHANGE IN HOSPITAL QUALITY

                                                   (1)         (2)
Variable                                           ACM         SCIP

IPG Penetration, State Level                       0.0013

County-Level Characteristics
MDs per 1,000 Population                           0.02        -0.11
RNs per 1,000 Population                           0.02        0.09
Per Capita Income (natural logarithm)              -0.05       -3.64*
Located in a Metropolitan Area                     0.05        -0.29
Percentage of Population
  Ages 0 to 19                                     -0.017      -0.012
  Ages 65 and Over                                 -0.053      0.062
  With Four Years College                          0.004       0.12*
  Uninsured                                        -0.04       -0.38*
  At or Below Poverty Level                        -0.06       0.007
  Hispanic                                         2.22*       5.93
  Black                                            0.85        5.68**

Provider-Level Characteristics
Baseline of Outcome                                -0.65**     -0.74**
Large Hospital                                     0.81**      1.16*
Acute Care Hospital                                1.33        0.69
Ownership Type
  Nonprofit, Church                                -0.45       -0.19
  Nonprofit, Other                                 -0.64       1.28
  Nonprofit, Private                               -0.94*      1.05
  Government                                       -1.11*      -1.05

Number                                             2,353       1,196

Source:   Hospital Compare, baseline data collected July 2004-June 2005 and follow-up data collected
          October 2006-September 2007.

Note:     Providers weighted by number of patients. Results estimated using ordinary least squares.
          Standard errors are adjusted for geographic clustering.

*p<.05; **p<.01; ***p<.10 (two-tailed tests).
ACM = appropriate care measure.
IPG = identified participant group.
MD = medical doctor.
OLS = ordinary least squares.
RN = registered nurse.
SCIP = Surgical Care Improvement Project.


TABLE D.7
PREDICTORS OF CHANGE, INCLUDING IPG PENETRATION, FROM SUR ANALYSES ACM ITEMS
ACE/
ARB
(AMI)
IPG Penetration Rate, State
Level

0.003

Aspirin at
Arrival
(AMI)

-0.004

Aspirin at
Discharge
(AMI)

-0.004

Beta Blocker
at Arrival
(AMI)

Beta Blocker
at Discharge
(AMI)

ACE/
ARB
(HF)

0.003

0.0008

0.009

LVS
Evaluation
(HF)

-0.009

Pneu
Vaccine
(P)

0.016

Oxygenation (P)

Timely
Antibiotic
(P)

0.0004

-0.004

-0.0002

0.02

-0.002

0.05*

-0.05

0.38

Baseline Level of Outcome
County-Level Characteristics
MDs per 1,000 Population

0.02

0.14**

0.16**

0.11*

0.07***

0.15***

0.06

RNs per 1,000 Population

0.06

0.03*

0.05*

0.01

0.05**

0.06

0.05**

Per Capita Income (natural logarithm)

3.21

0.04


Located in a Metro Area

-0.18

Percentage of Population
Ages 0 to 19
Ages 65 and Over
With Four Years College
Uninsured
At or Below Poverty Level
Hispanic
Black

0.117
-0.101
0.08
0.16
-0.02
-0.062
-0.0453

1.43**
0.029
-0.026
0.004
-0.12**
-0.041
0.033**
0.0005

-1.22***
1.23**
-0.054
-0.062
0.04*
-0.13*
-0.07
0.0301**
0.004

-0.65
1.27**
0.026
-0.036
0.04
-0.19**
-0.12**
0.057**
0.031*

-2.41*
0.97**
-0.120*
-0.081
0.02
-0.22**
-0.13*
0.0524**
0.0243*

-2.84

-0.32

1.20**

0.52***

0.081
-0.026
0.07*
0.03
-0.23**
0.0161
0.0376*

-0.065
-0.130**
-0.02
-0.16**
0.02
0.0323**
0.010

-0.20
0.16*
-2.26
-1.24***

0.05†

0.58*

-0.003
-0.007*** -0.129***
0.009
-0.005*** -0.128**
-0.05
0.001
-0.01
-0.27*
-0.004
-0.22**
-0.20
0.001
-0.05
0.0831** -0.0002
0.062**
0.0089
-0.0032** 0.0193

Provider-Level Characteristics
Large Hospital

2.71

1.27**

2.47**

1.78**

1.93**

1.17**

0.99**

3.17**

0.07**

0.36***

Acute Care Hospital

-1.65

4.72**

4.30**

6.70**

2.13**

0.24

6.13**

1.41*

0.25**

0.78*

Ownership Type
Nonprofit, Church
Nonprofit, Other
Nonprofit, Private
Government

-0.74
-0.92
-1.82*
-1.50

0.48
0.14
0.05
-0.49

0.29
-0.33
-0.35
-0.75***

-0.21
-0.32
-0.65
-0.96

-0.60*
-1.15**
-1.00**
-1.42**

-1.78*
-3.54**
-3.15**
-6.62**

-0.02
-0.02
-0.04***
-0.13**

3,355

3,282

Number

2,613

0.12
0.16
0.15
-0.35
3,368

0.39
0.28
-0.14
-0.86***
3,271

3,394

3,531

3,517

3,524

0.84***
1.01**
0.51
0.55***
3,137

TABLE D.7 (continued)


Source:

Hospital Compare, baseline data collected July 2004-June 2005 and follow-up data collected October 2006-September 2007.

Note:

Providers weighted by number of patients. Results estimated jointly using seemingly unrelated regression. Standard errors are adjusted for geographic clustering.

*p<.05; **p<.01; ***p<.10 (two-tailed tests).
ACE = angiotensin-converting enzyme.
ACM = appropriate care measure.
AMI = acute myocardial infarction.
ARB = angiotensin receptor blocker.
HF = heart failure.
IPG = Identified participant group.
LVS = left ventricle systolic.
MD = medical doctor.
P = pneumonia.
RN = registered nurse.
SUR = seemingly unrelated regression.


TABLE D.8
PREDICTORS OF CHANGE FROM SUR ANALYSES OF SCIP ITEMS

                                                   Antibiotic-     Antibiotic-
Variable                                           Timely Start    Timely Stop

County-Level Characteristics
MDs per 1,000 Population                           -0.03           -0.09
RNs per 1,000 Population                           0.09            0.09
Per Capita Income (natural logarithm)              -3.39           -3.75***
Located in a Metropolitan Area                     0.11            -0.61
Percentage of Population
  Ages 0 to 19                                     -0.201          0.156
  Ages 65 and Over                                 -0.130          0.261
  With Four Years College                          0.04            0.20*
  Uninsured                                        -0.26           -0.54
  At or Below Poverty Level                        -0.003          -0.04
  Hispanic                                         0.045           0.074
  Black                                            0.070           0.054

Provider-Level Characteristics
Baseline of Outcome                                -0.085**        -0.75**
Large Hospital                                     0.41            2.07*
Acute Care Hospital                                0.01            2.23
Ownership Type
  Nonprofit, Church                                0.43            -0.58
  Nonprofit, Other                                 1.15            1.14
  Nonprofit, Private                               1.29            1.00
  Government                                       -1.06           -0.89

Number                                             1,218           1,198

Source:   Hospital Compare, baseline data collected July 2004-June 2005 and follow-up data collected
          October 2006-September 2007.

Note:     Providers weighted by number of patients. Results estimated simultaneously using seemingly
          unrelated regression. Standard errors are adjusted for geographic clustering.

*p<.05; **p<.01; ***p<.10 (two-tailed tests).
MD = medical doctor.
RN = registered nurse.
SCIP = Surgical Care Improvement Project.
SUR = seemingly unrelated regression.

TABLE D.9
DESCRIPTIVE STATISTICS, 8TH SOW, CONTROL VARIABLES USED IN
PROVIDER-LEVEL NURSING HOME IMPACTS ANALYSIS

                                                               Standard
Variable                                           Mean        Deviation

County-Level Characteristics
MDs per 1,000 Population                           2.61        1.90
RNs per 1,000 Population                           6.02        4.35
Per Capita Income (natural logarithm)              10.30       0.25
Located in a Metropolitan Area                     0.76        0.43
Percentage of Population
  Ages 0 to 19                                     28.15       2.76
  Ages 65 and Over                                 13.35       3.53
  With Four Years College                          22.95       9.30
  Uninsured                                        13.50       4.42
  At or Below Poverty Level                        12.19       4.89
  Hispanic                                         10.68       13.78
  Black                                            11.99       12.91

Provider-Level Characteristics
For-Profit Ownership, Individual or Partnership    0.11        0.31
Ownership Type
  For Profit                                       0.68        0.47
  Government                                       0.07        0.25
  Nonprofit, Corporation                           0.19        0.39
  Nonprofit, Church                                0.05        0.23
  Nonprofit, Other                                 0.02        0.13
Large Nursing Home                                 0.48        0.50
Located Within a Hospital                          0.04        0.19
Resident and Family Councils Present               0.47        0.50

Baseline Nonfocus Quality Measures
Improvement in Ambulation, Baseline                12.17       7.06
Improvement in Pain Interfering with Activity,
  Baseline                                         15.40       8.32
Improvement in Transferring, Baseline              4.25        5.30
Improvement in Urinary Incontinence, Baseline      8.59        5.12

Change in Nonfocus Quality Measures
Improvement in Ambulation, Change                  0.21        8.13
Improvement in Pain Interfering with Activity,
  Change                                           -0.15       9.33
Improvement in Transferring, Change                0.11        4.30
Improvement in Urinary Incontinence, Change        0.23        5.55

State-Level IPG Penetration
IPG Penetration Rates for Pressure Ulcers/
  Restraints                                       15.02       8.96
IPG Penetration Rates for Depression/Pain          14.15       8.34

Source:   Nursing Home Compare, second quarter of 2005 (baseline) and first quarter of 2008
          (follow-up).

Note:     The figures above are calculated for all providers that have baseline and follow-up values for at
          least one of the focus quality measures (and thus are able to be included in the multivariate
          analyses). Sample size is between 10,602 and 10,635 for the county-level measures, 10,705 for
          the provider-level characteristics other than quality measures, between 8,643 and 10,679 for
          the nonfocus measures, and 10,705 for IPG penetration.

IPG = identified participant group.
MD = medical doctor.
RN = registered nurse.
SOW = scope of work.

TABLE D.10
DESCRIPTIVE STATISTICS, VARIABLES IN PROVIDER-LEVEL
HHA IMPACT ANALYSES

                                                               Standard
Variable                                           Mean        Deviation

County-Level Characteristics
MDs per 1,000 Population                           2.43        1.77
RNs per 1,000 Population                           5.09        4.23
Per Capita Income (natural logarithm)              10.26       0.25
Located in a Metropolitan Area                     0.71        0.45
Percentage of Population
  Ages 0 to 19                                     28.73       3.18
  Ages 65 and Over                                 13.11       3.97
  With Four Years College                          21.88       8.73
  Uninsured                                        15.14       5.07
  At or Below Poverty Level                        13.56       5.20
  Hispanic                                         14.51       18.75
  Black                                            12.01       13.05

Provider-Level Characteristics
Ownership Type
  For Profit                                       0.61        0.49
  Government                                       0.12        0.33
  Nonprofit, Other                                 0.09        0.28
  Nonprofit, Private                               0.15        0.36
  Nonprofit, Religious                             0.06        0.24
Date of Certification, 1990s                       0.32        0.47
Date of Certification, 2000s                       0.25        0.43
Provides Medical Social Services                   0.84        0.37

Baseline Quality Measures
Average Baseline Value on the Nonselected
  Measures (a)                                     0.00        0.69
Average Change Value on the Nonselected
  Measures (a)                                     0.00        0.68

Indicator of Measure Chosen for Statewide Improvement
  Dyspnea                                          0.27        0.45
  Management of Oral Medications                   0.59        0.49
  Pain Interfering with Activities                 0.10        0.30

IPG Penetration, State Level
IPG Penetration Rate for ACH                       20.92       4.10

Source:   Home Health Compare, baseline data collected September 2004-August 2005 and follow-up
          data collected March 2007-February 2008.

Note:     The figures above are calculated for all providers that have baseline and follow-up values for
          at least one of the focus quality measures in which the particular measure is used as a control
          in the impacts analyses. Sample sizes range from 5,700 to 6,308. Providers are weighted
          equally in calculations.

(a) Measure has a mean of zero because it is the average of variables that have been standardized to have a
mean of zero (and standard deviation of one).

ACH = acute care hospitalization.
HHA = home health agency.
IPG = identified participant group.

TABLE D.11
DESCRIPTIVE STATISTICS, 8TH SOW CONTROL VARIABLES
IN HOSPITAL IMPACT ANALYSES

                                                               Standard
Variable                                           Mean        Deviation

County-Level Characteristics
MDs per 1,000 Population                           3.12        1.94
RNs per 1,000 Population                           6.82        3.89
Per Capita Income (natural logarithm)              10.33       0.23
Located in a Metropolitan Area                     0.88        0.33
Percentage of Population
  Ages 0 to 19                                     28.19       2.87
  Ages 65 and Over                                 12.76       3.51
  With Four Years College                          24.64       8.51
  Uninsured                                        13.83       4.30
  At or Below Poverty Level                        12.35       4.45
  Hispanic                                         11.85       14.91
  Black                                            13.83       13.53

Provider-Level Characteristics
Large Hospital                                     0.62        0.48
Acute Care Hospital                                0.996       0.066
Ownership Type
  For Profit                                       0.14        0.35
  Nonprofit, Church                                0.21        0.40
  Nonprofit, Other                                 0.24        0.43
  Nonprofit, Private                               0.30        0.46
  Government                                       0.12        0.33

IPG Penetration Rate (ACM), State Level            26.35       10.29

Number                                             2,353

Source:   Hospital Compare, baseline data collected July 2004-June 2005 and follow-up data collected
          October 2006-September 2007.

Note:     Providers weighted by total numbers of patients. Sample consists of providers included in the
          ACM impacts analyses (Model 2, Table 5).

ACM = appropriate care measure.
IPG = identified participant group.
MD = medical doctor.
RN = registered nurse.
SOW = scope of work.

APPENDIX E
SUPPLEMENTAL TABLES FOR CHAPTER IV
(ANALYSES OF PROVIDER SURVEY)

TABLE E.1
ADDITIONAL PROVIDER KNOWLEDGE QUESTION

                                                               Percent
Question                                           N           Answering Yes

Aware That QIOs Work with Many Different Health Care
Providers and Organizations
Nursing Homes
  IPG                                              2,267       90.8
  Non-IPG received QIO help                        1,934       88.4
  Non-IPG received no QIO help                     809         67.7
Home Health Agencies
  IPG                                              1,796       94.2
  Non-IPG received QIO help                        1,848       91.1
  Non-IPG received no QIO help                     419         63.5
Hospitals
  IPG                                              1,414       96.8
  Non-IPG received QIO help                        1,649       95.7
  Non-IPG received no QIO help                     103         78.6

Source:   Westat de-identified survey of providers, May-September 2007; dataset provided to
          MPR by CMS.

Note:     All differences statistically significant at p<0.001, chi-squared test.

DRAFT

TABLE E.2

ADDITIONAL QUESTIONS ON PROVIDERS’ SATISFACTION WITH THEIR LOCAL QIOS

Question: How Satisfied With the Way in Which Information (from the QIO) Was Presented?

                                        N        Very       Somewhat   Neither Satisfied   Somewhat or Very
                                                 Satisfied  Satisfied  nor Dissatisfied    Dissatisfied

Nursing Homes
  IPG                                   2,226    69.1       24.7        3.9                 2.4
  Non-IPG received QIO help             1,930    46.6       42.8        8.5                 2.2
  Non-IPG received no QIO help            501    20.8       48.9       26.8                 3.6

Home Health Agencies
  IPG                                   1,790    79.7       17.9        1.2                 1.2
  Non-IPG received QIO help             1,844    67.7       27.3        3.4                 1.6
  Non-IPG received no QIO help            238    25.6       46.6       21.4                 6.3

Hospitals
  IPG                                   1,413    71.6       23.2        2.9                 2.3
  Non-IPG received QIO help             1,649    61.4       31.1        4.4                 3.1
  Non-IPG received no QIO help             64    21.9       39.1       28.1                10.9

Question: How Satisfied with the Amount of Contact With the QIO?

Nursing Homes
  IPG                                   2,227    70.2       20.2        7.5                 2.1
  Non-IPG received QIO help             1,924    39.7       35.9       20.5                 4.0
  Non-IPG received no QIO help            513    15.6       30.6       46.2                 7.6

Home Health Agencies
  IPG                                   1,787    78.5       16.4        3.7                 1.5
  Non-IPG received QIO help             1,838    60.3       26.4       10.5                 2.8
  Non-IPG received no QIO help            250    20.0       25.6       38.0                16.4

Hospitals
  IPG                                   1,411    75.8       17.2        5.2                 1.8
  Non-IPG received QIO help             1,649    63.2       25.2        8.7                 3.0
  Non-IPG received no QIO help             66    19.7       30.3       37.9                12.1

Question: How Satisfied with Timeliness of QIO Responses?

Nursing Homes
  IPG                                   2,205    79.4       14.4        4.5                 1.7
  Non-IPG received QIO help             1,859    54.2       27.5       16.3                 2.1
  Non-IPG received no QIO help            459    19.6       27.7       48.8                 3.9

Home Health Agencies
  IPG                                   1,779    84.4       12.3        2.3                 1.0
  Non-IPG received QIO help             1,807    70.3       20.4        7.5                 1.7
  Non-IPG received no QIO help            231    25.1       22.9       44.6                 7.4

Hospitals
  IPG                                   1,407    77.1       18.8        1.9                 2.1
  Non-IPG received QIO help             1,640    72.4       21.1        3.4                 3.2
  Non-IPG received no QIO help             65    33.9       29.2       27.7                 9.2

Question: How Satisfied with the Professionalism, Courtesy, and Respectfulness of QIO Staff?

Nursing Homes
  IPG                                   2,213    87.9        8.4        2.5                 1.2
  Non-IPG received QIO help             1,893    74.7       17.0        7.2                 1.1
  Non-IPG received no QIO help            456    41.2       24.3       33.1                 1.3

Home Health Agencies
  IPG                                   1,783    92.5        6.0        0.8                 0.6
  Non-IPG received QIO help             1,826    84.9       11.0        3.3                 0.8
  Non-IPG received no QIO help            217    47.5       19.8       30.0                 2.8

Hospitals
  IPG                                   1,406    90.5        7.8        1.2                 0.5
  Non-IPG received QIO help             1,642    87.7        9.3        2.0                 1.0
  Non-IPG received no QIO help             62    48.4       27.4       24.2                 0.0

Question: How Often Able To Get Through To the Desired Person At The QIO?

                                        N        Always     Usually    Sometimes/Never

Nursing Homes
  IPG                                   2,077    57.8       40.2        2.0
  Non-IPG received QIO help             1,694    36.4       50.5       13.1
  Non-IPG received no QIO help            358    22.1       40.5       37.4

Home Health Agencies
  IPG                                   1,669    61.7       36.0        2.3
  Non-IPG received QIO help             1,628    51.2       41.2        7.7
  Non-IPG received no QIO help            182    25.8       34.6       39.6

Hospitals
  IPG                                   1,246    49.4       49.5        1.1
  Non-IPG received QIO help             1,524    43.0       52.4        4.5
  Non-IPG received no QIO help             52    36.5       53.9        9.6

Source:   Westat de-identified survey of providers May-September 2007; dataset provided to MPR by CMS.

Note:     All differences statistically significant at p<0.001, chi-squared test. Percentages may not sum to 100 because of rounding.


TABLE E.3

ADDITIONAL QUESTIONS ON PROVIDERS’ PERCEPTIONS OF QIOS’ VALUE

Question: Provider Used the Information Provided by the QIO

                                        N        Strongly   Somewhat   Neither Agree   Somewhat or
                                                 Agree      Agree      nor Disagree    Strongly Disagree

Nursing Homes
  IPG                                   2,215    51.6       38.8        5.7             3.8
  Non-IPG received QIO help             1,912    33.1       49.5       11.7             5.8
  Non-IPG received no QIO help            507    14.8       41.6       30.6            13.0

Home Health Agencies
  IPG                                   1,783    73.2       24.0        1.9             1.0
  Non-IPG received QIO help             1,833    56.6       35.6        5.2             2.6
  Non-IPG received no QIO help            244    23.8       42.6       24.2             9.4

Hospitals
  IPG                                   1,406    53.7       37.0        5.6             3.8
  Non-IPG received QIO help             1,642    45.4       41.7        8.2             4.7
  Non-IPG received no QIO help             64    14.1       43.8       23.4            18.8

Question: “Our Organization is Better Off Having Received Services from QIO”

Nursing Homes
  IPG                                   2,213    67.8       22.7        5.8             3.7
  Non-IPG received QIO help             1,910    50.0       34.2       11.1             4.7
  Non-IPG received no QIO help            499    23.7       37.7       29.9             8.8

Home Health Agencies
  IPG                                   1,783    81.1       15.0        2.1             1.8
  Non-IPG received QIO help             1,823    68.8       22.9        5.6             2.7
  Non-IPG received no QIO help            242    29.3       36.4       26.9             7.4

Hospitals
  IPG                                   1,404    68.7       23.8        5.6             1.9
  Non-IPG received QIO help             1,641    61.1       28.8        7.3             2.9
  Non-IPG received no QIO help             63    22.2       36.5       28.6            12.7

Source:   Westat de-identified survey of providers May-September 2007; dataset provided to MPR by CMS.

Note:     All differences statistically significant at p<0.001, chi-squared test. Percentages may not sum to 100 because of rounding.

