Guidance on Public Reporting of Healthcare-Associated Infections: Recommendations of the Healthcare Infection Control Practices Advisory Committee
Linda McKibben, MD,a Teresa Horan, MPH,b Jerome I. Tokars, MD, MPH,b Gabrielle Fowler, MPH,b Denise M. Cardo, MD,a
Michele L. Pearson, MD,c Patrick J. Brennan, MD,d and the Healthcare Infection Control Practices Advisory Committee*

*Committee members are listed in the text.
From the Office of the Director,a the Healthcare Outcomes Branch,b and the Prevention and Evaluation Branch,c Division of Healthcare Quality Promotion, National Center for Infectious Diseases, Centers for Disease Control and Prevention, United States Department of Health and Human Services, Atlanta, GA; and the Division of Infectious Diseases, University of Pennsylvania School of Medicine, Philadelphia, PA.d
Reprint requests: Michele L. Pearson, MD, Division of Healthcare Quality Promotion, Mailstop E-68, Centers for Disease Control and Prevention, 1600 Clifton Road NE, Atlanta, GA 30333. E-mail: [email protected].
doi:10.1016/j.ajic.2005.04.001

Since 2002, 4 states have enacted legislation that requires health care organizations to publicly disclose health care–associated
infection (HAI) rates. Similar legislative efforts are underway in several other states. Advocates of mandatory public reporting of
HAIs believe that making such information publicly available will enable consumers to make more informed choices about their
health care and improve overall health care quality by reducing HAIs. Further, they believe that patients have a right to know this
information. However, others have expressed concern that the reliability of public reporting systems may be compromised by
institutional variability in the definitions used for HAIs, or in the methods and resources used to identify HAIs. Presently, there is
insufficient evidence on the merits and limitations of an HAI public reporting system. Therefore, the Healthcare Infection Control
Practices Advisory Committee (HICPAC) has not recommended for or against mandatory public reporting of HAI rates. However,
HICPAC has developed this guidance document based on established principles for public health and HAI reporting systems. This
document is intended to assist policymakers, program planners, consumer advocacy organizations, and others tasked with
designing and implementing public reporting systems for HAIs. The document provides a framework for legislators, but does not
provide model legislation. HICPAC recommends that persons who design and implement such systems 1) use established public
health surveillance methods when designing and implementing mandatory HAI reporting systems; 2) create multidisciplinary
advisory panels, including persons with expertise in the prevention and control of HAIs, to monitor the planning and oversight of
HAI public reporting systems; 3) choose appropriate process and outcome measures based on facility type and phase in measures
to allow time for facilities to adapt and to permit ongoing evaluation of data validity; and 4) provide regular and confidential
feedback of performance data to healthcare providers. Specifically, HICPAC recommends that states establishing public reporting
systems for HAIs select one or more of the following process or outcome measures as appropriate for hospitals or long-term care
facilities in their jurisdictions: 1) central-line insertion practices; 2) surgical antimicrobial prophylaxis; 3) influenza vaccination
coverage among patients and healthcare personnel; 4) central line-associated bloodstream infections; and 5) surgical site infections
following selected operations. HICPAC will update these recommendations as more research and experience become available.
(Am J Infect Control 2005;33:217-26.)

Consumer demand for health care information,
including data about the performance of health care
providers, has increased steadily over the past decade.
Many state and national initiatives are underway to
mandate or induce health care organizations to publicly disclose information regarding institutional and
physician performance. Mandatory public reporting of
health care performance is intended to enable stakeholders, including consumers, to make more informed
choices on health care issues.
Public reporting of health care performance information has taken several forms. Health care performance reports (report cards and honor rolls) typically
describe the outcomes of medical care in terms of
mortality, selected complications, or medical errors
and, to a lesser extent, economic outcomes. Increasingly, process measures (ie, measurement of adherence
to recommended health care practices, such as hand
hygiene) are being used as an indicator of how well an
organization adheres to established standards of practice with the implicit assumption that good processes
lead to good health care outcomes. National health care
quality improvement initiatives, notably those of the
Joint Commission on the Accreditation of Healthcare
Organizations (JCAHO), the Centers for Medicare &
Medicaid Services (CMS), and the Hospital Quality
Alliance, use process measures in their public reporting
initiatives.
Health care–associated infections (HAIs) are infections that patients acquire during the course of receiving treatment for other conditions (see Appendix 1 for
full definition of this and other terms used in this
document). In hospitals alone, HAIs account for an
estimated 2 million infections, 90,000 deaths, and $4.5
billion in excess health care costs annually1; however,
few of the existing report cards on hospital performance use HAIs as a quality indicator. Since 2002, 4
states (Illinois, Pennsylvania, Missouri, and Florida)
have enacted legislation mandating hospitals and
health care organizations to publicly disclose HAI
rates. Similar legislative efforts are underway in several
other states.
Because of the increasing legislative and regulatory
interest in this area, the Healthcare Infection Control
Practices Advisory Committee (HICPAC) conducted a
scientific literature review to evaluate the merits and
limitations of HAI reporting systems. We found no published information on the effectiveness of public reporting systems in reducing HAIs. Therefore, HICPAC has
concluded that there is insufficient evidence at this time
to recommend for or against public reporting of HAIs.
However, to assist those who will be tasked with
designing and implementing such reporting systems,
HICPAC presents the following framework for an HAI
reporting system and recommendations for process
and outcome measures to be included in the system.
The framework and recommendations are based on
established principles for public health and HAI
surveillance. This document is intended primarily for
policymakers, program planners, consumer advocacy
organizations, and others who will be developing and
maintaining public reporting systems for HAI. The
document does not provide model legislation.
This document represents the consensus opinion of
HICPAC. HICPAC is a federal advisory committee that
was established in 1991 to provide advice and guidance
to the Department of Health and Human Services and
CDC regarding surveillance, prevention, and control
of HAIs and related events in healthcare settings
(www.cdc.gov/ncidod/hip/HICPAC/Hicpac.htm). These recommendations also have been endorsed by the
Association for Professionals in Infection Control and
Epidemiology, the Council of State and Territorial
Epidemiologists, and the Society for Healthcare Epidemiology of America. These recommendations will be
updated as new information becomes available.

ESSENTIAL ELEMENTS OF A PUBLIC
REPORTING SYSTEM FOR HAIs
As a first step, the goals, objectives, and priorities of a
public reporting system should be clearly specified and
the information to be monitored should be measurable
to ensure that the system can be held accountable by
stakeholders. The reporting system should collect and
report healthcare data that are useful not only to the
public, but also to the facility for its quality improvement efforts. This can be achieved by selection of
appropriate measures and patient populations to monitor; use of standardized case-finding methods and data
validity checks; adequate support for infrastructure,
resources, and infection control professionals; adjustment for underlying infection risk; and production of
useful and accessible reports for stakeholders, with
feedback to healthcare providers. The planning and
oversight of the system should be monitored by a
multidisciplinary group composed of public health
officials, consumers, health care providers, and health
care infection control professionals.

Identifying Appropriate Measures of Health
Care Performance
Monitoring both process and outcome measures and
assessing their correlation is a comprehensive approach to quality improvement. Standardized process
and outcome measures for national health care performance for hospitals, nursing homes, and other
settings have been endorsed through the National
Quality Forum (NQF) voluntary consensus process.2-4
NQF also has developed a model policy on the
endorsement of proprietary performance measures.5
Several other agencies and organizations, including
CDC, CMS, the Agency for Healthcare Research and Quality, JCAHO, the Leapfrog Group, and the
National Committee for Quality Assurance, also have
developed health care quality measures. Health care
performance reports should identify the sources and
endorsers of the measures and the sources of the data
used (eg, administrative or clinical).
Process measures. Process measures are desirable
for inclusion in a public reporting system because the
target adherence rate of 100% to these practices is
unambiguous. Furthermore, process measures do not
require adjustment for the patient’s underlying risk
of infection. Process measures that are selected for
inclusion in a public reporting system should be
those that measure common practices, are valid for a variety of health care settings (eg, small, rural versus large, urban hospitals), and can be clearly specified (eg, appropriate exclusion and inclusion criteria). Process measures meeting these criteria include adherence rates for central line insertion practices and
surgical antimicrobial prophylaxis and coverage rates
of influenza vaccination for health care personnel and
patients/residents (Table 1). Collection of data on one or
more of these process measures already is recommended by the NQF and required by CMS and JCAHO for
their purposes.
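
As a concrete illustration of how these process measures reduce to simple numerator/denominator percentages, the following Python sketch computes adherence rates from hypothetical chart abstractions; the field names and example records are invented for illustration and are not part of the HICPAC guidance or any required data format.

```python
# Minimal sketch: a process-measure adherence rate (target: 100%).
# Field names and example records are hypothetical, not a prescribed format.

def adherence_rate(records, criterion):
    """Percentage of eligible records meeting a process criterion."""
    eligible = len(records)
    if eligible == 0:
        return None  # no denominator; the rate is undefined
    met = sum(1 for record in records if criterion(record))
    return 100.0 * met / eligible

# Hypothetical surgical antimicrobial prophylaxis (AMP) chart abstractions.
# (The 2-hour allowance for vancomycin or a fluoroquinolone is omitted for brevity.)
surgical_patients = [
    {"minutes_before_incision": 35, "recommended_agent": True},
    {"minutes_before_incision": 95, "recommended_agent": True},
    {"minutes_before_incision": 20, "recommended_agent": False},
]

timing = adherence_rate(surgical_patients, lambda r: r["minutes_before_incision"] <= 60)
agent = adherence_rate(surgical_patients, lambda r: r["recommended_agent"])
print(f"AMP timing adherence: {timing:.1f}%; recommended-agent adherence: {agent:.1f}%")
```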
Outcome measures. Outcome measures should be
chosen for reporting based on the frequency, severity,
and preventability of the outcomes and the likelihood
that they can be detected and reported accurately.14
Outcome measures meeting these criteria include
central line–associated, laboratory-confirmed primary
bloodstream infections (CLA-LCBI) in intensive care
units (ICU) and surgical site infections (SSI) following
selected operations (Table 2). Although CLA-LCBIs and
SSIs occur at relatively low rates, they are associated
with substantial morbidity and mortality and excess
health care costs. Also, there are well-established
prevention strategies for CLA-LCBIs and SSIs.6,10 Therefore, highest priority should be given to monitoring
these two HAIs and providers’ adherence to the related
processes of care (ie, central-line insertion practices
for CLA-LCBI and surgical antimicrobial prophylaxis
for SSIs).
Use of other HAIs in public reporting systems may
be more difficult. For example, catheter-associated
urinary tract infections, though they may occur more
frequently than CLA-LCBIs or SSIs, are associated with a
lower morbidity and mortality; therefore, monitoring
these infections likely has less prevention effectiveness
relative to the burden of data collection and reporting.
On the other hand, HAIs such as ventilator-associated
pneumonia, which occur relatively infrequently but
have substantial morbidity and mortality, are difficult
to detect accurately. Including such HAIs in a reporting
system may result in invalid comparisons of infection
rates and be misleading to consumers.
Monitoring of process and outcome measures should
be phased in gradually to allow time for facilities to
adapt and to permit ongoing evaluation of data validity.

Identifying Patient Populations for Monitoring
CDC16 and other authorities17 no longer recommend collection or reporting of hospital-wide overall HAI rates because 1) HAI rates are low in many hospital locations (which makes routine inclusion of these units unhelpful), 2) collecting hospital-wide data is labor intensive and may divert resources from prevention activities, and 3) methods for hospital-wide risk adjustment have not been developed. Rather than hospital-wide rates, reporting rates of specific HAIs for specific hospital units or operation-specific rates of SSIs is recommended.16 This practice can help ensure that data collection is concentrated in populations where HAIs are more frequent and that the resulting rates are more useful for targeting prevention and for making comparisons among facilities or within facilities over time.

Case-Finding
Once the population at risk for HAIs has been
identified, standardized methods for case-finding
should be adopted. Such methods help to reduce
surveillance bias (ie, the finding of higher rates at
institutions that do a more complete job of case-finding). Incentives to find cases of HAI may be helpful.
Conversely, punitive measures for hospitals that report
high rates may encourage underreporting.
Traditional case-finding methods for HAIs include
review of medical records, laboratory reports, and
antibiotic administration records. However, these standard case-finding methods can be enhanced. For
example, substantially more SSIs are found when
administrative data sources (eg, International Classification of Diseases, 9th Revision [ICD-9] discharge codes) are used in combination with antimicrobial
receipt to flag charts for careful review.18,19 However,
the accuracy of case-finding using ICD-9 codes alone
likely varies by HAI type and by hospital. Therefore,
ICD-9 discharge codes should not be relied upon as the
sole source of case-finding for HAI monitoring systems.
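
The chart-flagging approach described above can be outlined in a few lines of Python. The sketch below is only schematic: the diagnosis-code set, field names, and threshold are placeholders for illustration and do not reproduce the validated rules used in the cited studies.

```python
# Schematic chart flagging: combine administrative discharge codes with
# antimicrobial receipt to prioritize charts for manual SSI review.
# The code list, field names, and threshold are illustrative placeholders.

FLAG_DIAGNOSIS_CODES = {"998.59"}  # placeholder set of postoperative-infection-type codes

def flag_for_review(discharge_codes, postoperative_abx_days, threshold_days=2):
    """Return True if the chart should be pulled for careful manual review."""
    has_flag_code = bool(FLAG_DIAGNOSIS_CODES.intersection(discharge_codes))
    prolonged_abx = postoperative_abx_days >= threshold_days
    # Either signal flags the chart; reviewers then confirm or rule out an SSI,
    # so administrative codes are never the sole source of case-finding.
    return has_flag_code or prolonged_abx

print(flag_for_review({"998.59"}, postoperative_abx_days=0))  # True (code signal)
print(flag_for_review({"V45.89"}, postoperative_abx_days=5))  # True (antibiotic signal)
print(flag_for_review({"V45.89"}, postoperative_abx_days=0))  # False (no signal)
```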
Traditional HAI case-finding methods were developed in an era when patients’ lengths of hospitalization
were much longer than they are today, allowing most
HAIs to be detected during the hospital stay. However,
for SSIs in particular, the current climate of short stays
and rapid transfers to other facilities makes accurate
detection difficult because as many as 50% of SSIs
do not become evident until after hospital discharge
or transfer.20 Since there is no consensus on which
postdischarge surveillance methods are the most
accurate and practical for detection of SSIs,10 the
limitations of current case-finding methods should be
recognized if SSIs are selected for inclusion in mandatory reporting systems.

Validation of Data
A method to validate data should be considered in any mandatory reporting system to ensure that HAIs are being accurately and completely reported and that rates are comparable from hospital to hospital or among all hospitals in the reporting system. The importance of validation was emphasized by a CDC study of the accuracy of reporting to the NNIS system, which found that although hospitals identified and reported most of the HAIs that occurred, the accuracy varied by infection site.15

Table 1. Recommended process measures for a mandatory public reporting system on health care–associated infections

Event: Central line insertion (CLI) practices
Measures: Two measures (expressed as a percentage)6:
  Numerators: Number of CLIs in which:
  - Maximal sterile barrier precautions were used
  - Chlorhexidine gluconate (preferred), tincture of iodine, an iodophor, or 70% alcohol was used as skin antiseptic
  Denominator: Number of CLIs
Rationale for inclusion:
  - Unambiguous target goal (100%)
  - Risk-adjustment is unnecessary
  - Proven prevention effectiveness6: Use of maximal barrier precautions during insertion and chlorhexidine skin antisepsis have been shown to be associated with an 84% and 49% reduction in central line–associated bloodstream infection rates, respectively.7,8
Potential limitations:
  - Methods for data collection not yet standardized
  - Manual data collection likely to be tedious and labor intensive, and data are not included in medical records

Event: Surgical antimicrobial prophylaxis (AMP)
Measures: Three measures (expressed as a percentage)9:
  Numerators: Number of surgical patients:
  - Who received AMP within 1 hour prior to surgical incision (or 2 hours if receiving vancomycin or a fluoroquinolone)
  - Who received AMP recommended for their surgical procedure
  - Whose prophylactic antibiotics were discontinued within 24 hours after surgery end time
  Denominator: All selected surgical patients
Rationale for inclusion:
  - Unambiguous target goal (100%)
  - Risk-adjustment is unnecessary
  - Proven prevention effectiveness10: Administering the appropriate antimicrobial agent within 1 hour before the incision has been shown to reduce SSIs; prolonged duration of surgical prophylaxis (>24 hrs) has been associated with increased risk of antimicrobial-resistant SSI
Potential limitations:
  - Manual data collection may be tedious and labor intensive, but data can be abstracted from medical records

Event: Influenza vaccination of patients and health care personnel
Measures: Two measures (each expressed as a percentage of coverage)11:
  Numerators: Number of influenza vaccinations given to eligible patients or healthcare personnel
  Denominators: Number of patients or healthcare personnel eligible for influenza vaccine
Rationale for inclusion:
  - Proven prevention effectiveness11-13: Vaccination of high-risk patients and health care personnel has been shown to be effective in preventing influenza
Potential limitations:
  - Manual data collection may be tedious and labor intensive


Resources and Infrastructure Needed for a
Reporting System
A reporting system cannot produce quality data
without adequate resources. At the institution level,
trained personnel with dedicated time are required,
eg, infection control professionals to conduct HAI
surveillance. At the system level, key infrastructure

includes instruction manuals, training materials, data collection forms, methods for data entry and submission, databases to receive and aggregate the data, appropriate quality checks, computer programs for data analysis, and standardized reports for dissemination of results. Computer resources within reporting systems must include both hardware and software and a standard user interface. In order to collect detailed data on factors such as use of invasive devices (eg, central lines), patient care location within the facility, and type of operation, extensive data dictionaries and coding schema must be developed and maintained.

Table 2. Recommended outcome measures for a mandatory public reporting system on health care–associated infections

Event: Central line–associated laboratory-confirmed primary bloodstream infection (CLA-LCBI)*
Measures:
  Numerator: Number of CLA-LCBI*
  Denominator: Number of central-line days in each population at risk, expressed per 1,000
  Populations at risk: Patients with central lines cared for in different types of intensive care units (ICUs)*
  Risk stratification: By type of ICU
  Frequency of monitoring: 12 months per year for ICUs with ≤5 beds; 6 months per year for ICUs with >5 beds
  Frequency of rate calculation: Monthly (or quarterly for small ICUs) for internal hospital quality improvement purposes
  Frequency of rate reporting: Annually, using all the data to calculate the rate
Rationale for inclusion:
  - Overall, an infrequent event but one that is associated with substantial cost, morbidity, and mortality
  - Reliable laboratory test available for identification (ie, positive blood culture)
  - Prevention guidelines exist6 and insertion processes can be monitored concurrently
  - Sensitivity*: 85%; predictive value positive (PVP)*: 75%15
Potential limitations:
  - LCBI* can be challenging to diagnose since the definition includes criteria that are difficult to interpret (eg, single-positive blood cultures from skin commensal organisms may not represent true infections). To offset this limitation, a system could include only those CLA-LCBI identified by criterion 1, which will result in smaller numerators and therefore will require longer periods of time for sufficient data accumulation for rates to become stable/meaningful.
  - Standard definition of central line* requires knowing where the tip of the line terminates, which is not always documented and can therefore lead to misclassification of lines

Event: Surgical site infection (SSI)*
Measures:
  Numerator: Number of SSI for each specific type of operation*
  Denominator: Total number of each specific type of operation, expressed per 100
  Risk stratification: Focus on high-volume operations and stratify by type of operation and National Nosocomial Infections Surveillance (NNIS) SSI risk index*
  Alternate risk adjustment: For low-volume operations, adjust for risk by using the standardized infection ratio*
Rationale for inclusion:
  - Low-frequency event but one that is associated with substantial cost, morbidity, and mortality
  - Prevention guidelines exist10 and important prevention processes can be monitored concurrently
  - Sensitivity*: 67%; PVP*: 73%15
Potential limitations:
  - Rates dependent on surveillance intensity, especially completeness of post-discharge surveillance (50% become evident after discharge and may not be detected)
  - Certain SSI definitions include a “physician diagnosis” criterion, which reduces objectivity

*See Glossary (Appendix 1).


HAI Rates and Risk Adjustment
For optimal comparison purposes, HAI rates should
be adjusted for the potential differences in risk factors.

For example, in the NNIS system, device-associated
infections are risk adjusted by calculating rates per
1,000 device-days (eg, CLA-LCBI per 1,000 central
line–days) and stratifying by unit type.21-23 For that
system, risk adjustment of SSIs is done by calculating operation-specific rates stratified by a standardized risk index.23-25 Although these methods do not incorporate all potential confounding variables, they provide an
acceptable level of risk adjustment that avoids the data
collection burden that would be required to adjust for all
variables.
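
To make the arithmetic concrete, the following Python sketch computes CLA-LCBI rates per 1,000 central-line days stratified by ICU type, in the spirit of the device-day approach described above; the unit names and counts are invented for illustration and do not come from NNIS data.

```python
# Illustrative only: device-associated infection rates per 1,000 device-days,
# stratified by unit type. Unit names and counts are hypothetical.
from collections import defaultdict

# (ICU type, CLA-LCBI count, central-line days) for several reporting periods
tallies = [
    ("medical ICU", 2, 610),
    ("medical ICU", 1, 585),
    ("surgical ICU", 3, 720),
    ("surgical ICU", 0, 650),
]

infections = defaultdict(int)
line_days = defaultdict(int)
for unit_type, n_infections, n_line_days in tallies:
    infections[unit_type] += n_infections
    line_days[unit_type] += n_line_days

for unit_type in sorted(infections):
    rate = 1000.0 * infections[unit_type] / line_days[unit_type]
    print(f"{unit_type}: {rate:.2f} CLA-LCBI per 1,000 central-line days")
```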
Risk adjustment is labor intensive because data must
be collected on the entire population at risk (the
denominator) rather than only the fraction with HAIs
(the numerator). Risk adjustment cannot correct for variability among data collectors in the accuracy of finding and reporting events. Further, current risk-adjustment methods improve but do not guarantee the
validity of inter-hospital comparisons, especially comparisons involving facilities with diverse patient populations (eg, community versus tertiary-care hospitals).
Valid event rates are facilitated by selecting events
that occur frequently enough and at-risk populations
that are large enough to produce adequate sample
sizes. Unfortunately, use of stratification (eg, calculation of rates separately in multiple categories) for risk
adjustment may lead to small numbers of HAIs in any
one category and thereby yield unstable rates, as in the case of a small hospital with low surgical volume.
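
One quick way to see why small strata produce unstable rates is to look at the width of a confidence interval around the observed rate. The sketch below uses a Wilson score interval purely for illustration; the guidance itself does not prescribe a particular interval method, and the counts are invented.

```python
# Illustration of rate instability: the same observed 2% SSI rate gives a much
# wider interval when the stratum contains only a few operations.
import math

def wilson_interval(events, n, z=1.96):
    """Approximate 95% Wilson score interval for a proportion."""
    if n == 0:
        return (0.0, 1.0)
    p = events / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return (max(0.0, center - half), min(1.0, center + half))

for events, n in [(1, 50), (20, 1000)]:  # both strata show a 2% observed SSI rate
    low, high = wilson_interval(events, n)
    print(f"{events} SSIs / {n} operations: 95% CI {100 * low:.1f}%-{100 * high:.1f}%")
```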

Producing Useful Reports and Feedback
Publicly released reports must convey scientific
meaning in a manner that is useful and interpretable
to a diverse audience. Collaboration among subject-matter experts, statisticians, and communicators is necessary in developing these reports. The reports
should provide useful information to the various users
and highlight potential limitations of both the data and
the methods used for risk adjustment. In a new
reporting system, data should be examined and validated before initial release; in addition, sufficient
sample size should be accumulated so that rates are
stable at the time of public release. Lastly, feedback of
performance data should be given to health care
providers regularly so that interventions to improve
performance can be implemented as quickly as possible. For example, feedback of SSI rates to surgeons has
been shown to be an important component of strategies to reduce SSI risk.26

ADAPTING ESTABLISHED METHODS FOR USE
IN MANDATORY REPORTING SYSTEMS
Where appropriate, developers of reporting systems
should avail themselves of established and proven
methods of collecting and reporting surveillance data.
For example, many of the methods, attributes, and
protocols of CDC’s NNIS system may be applicable for
public reporting systems. A detailed description of the
NNIS methodologies has been described elsewhere,23
and additional information on NNIS is available at
www.cdc.gov/ncidod/hip/surveill/nnis.htm.

Most reporting systems, such as NNIS, use manual
data collection methods. In most instances, information in computer databases, when available, can be
substituted for manually collected data.27,28 However,
when manual data collection is necessary, alternate
approaches include limiting reporting to well-defined
and readily identifiable events, using simpler and more
objective event definitions,29 and sampling to obtain
denominators.30 These approaches could decrease the
burden of data collection and improve the consistency
of reporting among facilities. If data collection were
simplified, expanding the number of infection types
and locations in which they are monitored may
become more feasible.
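
One of the simplifications mentioned above, sampling to obtain denominators, can be illustrated with a short sketch; the counts below are invented, and the sketch shows only the general idea of scaling a periodic count to an estimated total rather than the specific protocol described in reference 30.

```python
# Hypothetical illustration of denominator sampling: count central lines on a
# sample of days, then scale the mean daily count to estimate monthly line-days.
sampled_line_counts = [14, 12, 15, 13]  # lines present on 4 sampled days (eg, one day per week)
days_in_month = 30

mean_lines_per_day = sum(sampled_line_counts) / len(sampled_line_counts)
estimated_line_days = mean_lines_per_day * days_in_month
print(f"Estimated central-line days for the month: {estimated_line_days:.0f}")
```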

POTENTIAL CONSEQUENCES OF MANDATORY
PUBLIC REPORTING SYSTEMS
Mandatory reporting of HAIs will provide consumers
and stakeholders with additional information for
making informed health care choices. Further, reports
from private systems suggest that participation in an
organized, ongoing system for monitoring and reporting of HAIs may reduce HAI rates.31,32 This same
beneficial consequence may apply to mandatory public reporting systems. Conversely, as with voluntary
private reporting, mandatory public reporting that
does not incorporate sound surveillance principles and
reasonable goals may divert resources to reporting
infections and collecting data for risk adjustment and
away from patient care and prevention; such reporting
also could result in unintended disincentives to treat
patients at higher risk for HAI. In addition, current
standard methods for HAI surveillance were developed
for voluntary use and may need to be modified for
mandatory reporting. Lastly, publicly reported HAI
rates can mislead stakeholders if inaccurate information is disseminated. Therefore, in a mandatory public
report of HAI information, the limitations of current
methods should be clearly communicated within the
publicly released report.

RESEARCH AND EVALUATION NEEDS
Research and evaluation of existing and future
HAI reporting systems will be needed to answer questions about 1) the comparative effectiveness and efficiency of public and private reporting systems and
2) the incidence and prevention of unintended consequences. Ongoing evaluation of each system will be
needed to confirm the appropriateness of the methods
used and the validity of the results.

RECOMMENDATIONS
The Healthcare Infection Control Practices Advisory
Committee proposes four overarching recommendations

McKibben et al

regarding the mandatory public reporting of HAIs.
These recommendations are intended to guide policymakers in the creation of statewide reporting systems
for health care facilities in their jurisdictions.
1. Use established public health surveillance methods
when designing and implementing mandatory HAI
reporting systems. This process involves:
a. selection of appropriate process and outcome
measures to monitor;
b. selection of appropriate patient populations to
monitor;
c. use of standardized case-finding methods and
data validity checks;
d. provision of adequate support and resources;
e. adjustment for underlying infection risk; and
f. production of useful and accessible reports to
stakeholders.
Do not use hospital discharge diagnostic codes as the
sole data source for HAI public reporting systems.
2. Create a multidisciplinary advisory panel to monitor
the planning and oversight of the operations and
products of HAI public reporting systems. This team
should include persons with expertise in the prevention and control of HAIs.
3. Choose appropriate process and outcome measures
based on facility type, and phase in measures
gradually to allow time for facilities to adapt and
to permit ongoing evaluation of data validity. States
can select from the following measures as appropriate for hospitals or long term care facilities in
their jurisdictions.
a. Three process measures are appropriate for
hospitals and one (iii below) is appropriate for
long term care facilities participating in a mandatory HAI reporting system (Table 1).
i. Central line insertion practices (with the goal of targeting ICU-specific CLA-LCBIs) can be measured by all hospitals that have the types of ICUs selected for monitoring (eg, medical or surgical).
ii. Surgical antimicrobial prophylaxis (with the
goal of targeting SSI rates) can be measured by
all hospitals that conduct the operations selected for monitoring.
iii. Influenza vaccination coverage rates for health
care personnel and patients can be measured
by all hospitals and long term care facilities. For
example:
d Coverage rates for health care personnel can
be measured in all hospitals and long term
care facilities.
d Coverage rates for high-risk patients can be
measured in all hospitals.
d Coverage rates for all residents can be
measured in all long term care facilities.

May 2005

223

b. Two outcome measures are appropriate for
some hospitals participating in a mandatory HAI
reporting system (Table 2).
i. CLA-LCBIs.
ii. SSIs following selected operations.
Hospitals for which these measures are appropriate
are those in which the frequency of the HAI is sufficient
to achieve statistically stable rates. To foster performance improvement, the HAI rate to be reported
should be coupled with a process measure of adherence to the prevention practice known to lower the rate
(see 3ai and 3aii). For example, hospitals in states
where reporting of SSIs is mandated should monitor
and report adherence to recommended standards for
surgical prophylaxis (see 3aii).
4. Provide regular and confidential feedback of performance data to health care providers. This practice may encourage low performers to implement
targeted prevention activities and increase the acceptability of the public reporting systems within
the health care sector.
HICPAC thanks the following subject-matter experts for reviewing preliminary drafts
of this guidance document: Victoria Fraser, MD, Washington University School of
Medicine, St Louis, MO; Lisa McGiffert, Consumers Union; Richard Platt, MD,
Harvard-Pilgrim Health, Boston, MA; Robert A Weinstein, MD, John J Stroger, Jr,
Hospital of Cook County, Chicago, IL; and Richard P Wenzel, MD, Richmond, VA.
HICPAC also thanks J Shaw and Patricia Simone, MD, for exceptional editorial
guidance during the development of this document. The opinions of all the
reviewers may not be reflected in all the recommendations contained in this
document.

References
1. Weinstein RA. Nosocomial infection update. Emerg Infect Dis 1998;4:
416-20.
2. Kizer KW. Establishing health care performance standards in an era of
consumerism. JAMA 2001;286:1213-7.
3. The National Quality Forum. National voluntary consensus standards
for hospital care: an initial performance measure set. Washington, DC:
The National Quality Forum, 2003.
4. The National Quality Forum. Safe practices for better healthcare:
a consensus report. Washington, DC: The National Quality Forum,
2003.
5. The National Quality Forum. Policy on endorsement of proprietary
performance measures. Available at: www.QualityForum.org. May 14,
2003. Accessed October 14, 2004.
6. Centers for Disease Control and Prevention. Guidelines for the
prevention of intravascular catheter-related infections [Erratum to
p.29, Appendix B published in MMWR Vol. 51, No. 32, p.711]. MMWR
2002;51(No. RR-10):1-29.
7. Raad II, Hohn DC, Gilbreath BJ, Suleiman N, Hill LA, Bruso PA, et al.
Prevention of central venous catheter-related infections by using
maximal sterile barrier precautions during insertion. Infect Control
Hosp Epidemiol 1994;15:231-8.
8. Chaiyakunapruk N, Veenstra DL, Lipsky BA, Saint S. Chlorhexidine
compared with povidone-iodine solution for vascular catheter-site
care: a meta-analysis. Ann Intern Med 2002;136:792-801.
9. Bratzler DW, Houck PM, Surgical Infection Prevention Guidelines
Writers Workgroup, et al. Antimicrobial prophylaxis for surgery:
an advisory statement from the National Surgical Infection Prevention
Project. Clin Infect Dis 2004;38:1706-15.

10. Mangram AJ, Horan TC, Pearson ML, Silver LC, Jarvis WR, the
Hospital Infection Control Practices Advisory Committee. Guideline
for prevention of surgical site infection, 1999. Infect Cont Hosp
Epidemiol 1999;20:247-78.
11. Tablan OC, Anderson LJ, Besser R, Bridges C, Hajjeh R. Guidelines for
preventing healthcare-associated pneumonia, 2003. MMWR 2004;
53(No. RR-3):1-36.
12. Centers for Disease Control and Prevention. Prevention and control
of influenza. MMWR 2004;53(No. RR-06):1-40.
13. Centers for Disease Control and Prevention. Immunization of healthcare workers: recommendations of the Advisory Committee on
Immunization Practices (ACIP) and the Hospital Infection Control
Practices Advisory Committee (HICPAC). MMWR 1997;46(No.
RR-18):1-42.
14. Centers for Disease Control and Prevention. Updated guidelines
for evaluating public health surveillance systems. MMWR 2001;
50(No. RR-13):1-35.
15. Emori TG, Edwards JR, Culver DH, Sartor C, Stroud LA, Gaunt EE,
et al. Accuracy of reporting nosocomial infections in intensive-care-unit patients to the National Nosocomial Infections Surveillance
System: a pilot study [Erratum in Infect Control Hosp Epidemiol 1998;
19:479]. Infect Control Hosp Epidemiol 1998;19:308-16.
16. Centers for Disease Control and Prevention. Nosocomial infection
rates for interhospital comparison: limitations and possible solutions.
Infect Control Hosp Epidemiol 1991;12:609-21.
17. Association for Professionals in Infection Control and Epidemiology.
Release of nosocomial infection data [Position Paper]. APIC News
1998;17(2):1-5.
18. Platt R, Yokoe DS, Sands KE. Automated methods for surveillance of
surgical site infections [Review]. Emerg Infect Dis 2001;7:212-6.
19. Yokoe DS. Enhanced identification of postoperative infections among
inpatients. Emerg Infect Dis 2004;10:1924-30.
20. Weigelt JA, Dryer D, Haley RW. The necessity and efficiency of
wound surveillance after discharge. Arch Surg 1992;127:777-82.
21. Jarvis WR, Edwards JR, Culver DH, Hughes JM, Horan TC, Emori TG,
et al. Nosocomial infection rates in adult and pediatric intensive care
units in the United States. Am J Med 1991;91(Suppl 3B):185S-91S.
22. Gaynes RP, Martone WJ, Culver DH, Emori TG, Horan TC, Banerjee
SN, et al. Comparison of rates of nosocomial infections in neonatal
intensive care units in the United States. Am J Med 1991;91(Suppl 3B):
192S-6S.
23. Horan TC, Gaynes R. Surveillance of nosocomial infections. In:
Mayhall CG, editor. Hospital epidemiology and infection control.
Philadelphia: Lippincott Williams & Wilkins; 2004. p. 1659-702.
24. Gaynes RP, Culver DH, Horan TC, Edwards JR, Richards C, Tolson JS.
Surgical site infection (SSI) rates in the United States, 1992-1998: the
National Nosocomial Infections Surveillance System basic SSI risk
index. Clin Infect Dis 2001;33(Suppl 2):S69-77.
25. Culver DH, Horan TC, Gaynes RP, Martone WJ, Jarvis WR, Emori
TG. Surgical wound infection rates by wound class, operative
procedures, and patient risk index. Am J Med 1991;91(Suppl 3B):
152S-7S.
26. Haley RW, Culver DH, White JW, Morgan WM, Emori TG, Munn VP,
et al. The efficacy of infection surveillance and control programs in
preventing nosocomial infections in US hospitals. Am J Epidemiol
1985;121:182-205.
27. Trick W, Zagorski B, Tokars J, Vernon MO, Welbel SF, Wisniewski MF,
et al. Computer algorithms to detect bloodstream infections. Emerg
Infect Dis 2004;10:1612-20.
28. Samore MH, Evans RS, Lassen A, Gould P, Lloyd J, Gardner RM,
et al. Surveillance of medical device-related hazards and adverse
events in hospitalized patients [see comment]. JAMA 2004;291:
325-34.
29. Yokoe DS, Anderson J, Chambers R, Connor M, Finberg R, Hopkins
C, et al. Simplified surveillance for nosocomial bloodstream infections.
Infect Control Hosp Epidemiol 1998;19:657-60.

30. Klevens M, Tokars J, Edwards J. Simplified methods for collection of
device denominators. Abstract 144, pg. 88, Program and Abstracts
Book, 14th Annual Scientific Meeting of the Society for Healthcare
Epidemiology of America, April 18, 2004. Society for Healthcare Epidemiology of America, Alexandria, VA.
31. McCall JL, Macchiaroli S, Brown RB, Schulte MJ, Calderone S, Selbovitz
LG, et al. A method to track surgical site infections. Quality
Management in Health Care 1998;6:52-62.
32. Centers for Disease Control and Prevention. Monitoring hospital-acquired infections to promote patient safety—United States, 1990-1999 [Erratum appears in MMWR 49(09);189]. MMWR 2000;49:
149-53.
33. Last JM. A Dictionary of Epidemiology. 2nd ed. New York: Oxford
University Press; 1988.
34. Anonymous. New classification of physical status. Anesthesiology
1963;24:111.
35. Garner JS. CDC Guidelines for prevention of surgical wound
infections, 1985. Infect Control 1986;7:193-200.
36. Simmons BP. Guideline for prevention of surgical wound infections.
Infect Control 1982;3:185-96.

Appendix 1. Glossary
Central line. A vascular infusion device that terminates at or close to the heart or in one of the great
vessels. In the National Healthcare Safety Network
(NHSN), the system replacing NNIS, the following are
considered great vessels for the purpose of reporting
central line infections and counting central line days:
aorta, pulmonary artery, superior vena cava, inferior
vena cava, brachiocephalic veins, internal jugular
veins, subclavian veins, external iliac veins, and
common femoral veins.
Note. In neonates, the umbilical artery/vein is
considered a great vessel.
Note. Neither the location of the insertion site nor the
type of device may be used to determine if a line
qualifies as a central line. The device must terminate in
one of these vessels or in or near the heart to qualify as
a central line. Note: Pacemaker wires and other noninfusion devices inserted into central blood vessels or
the heart are not considered central lines.
CLA-LCBI. See laboratory-confirmed primary bloodstream infection.
Confounding. The distortion of the apparent effect of
an exposure on risk brought about by the association
with other factors that can influence the outcome.33
Risk adjustment is performed to minimize the effects
of patient co-morbidities and use of invasive devices
(the confounding factors) on the estimate of risk for a
unit or facility (the exposure).
Device-associated infection. An infection in a patient with a device (eg, ventilator or central line) that
was used within the 48-hour period before the
infection’s onset. If the time interval was longer
than 48 hours, compelling evidence must be present
to indicate that the infection was associated with use
of the device. For catheter-associated urinary tract
infection (UTI), the indwelling urinary catheter must
have been in place within the 7-day period before
positive laboratory results or signs and symptoms
meeting the criteria for UTI were evident.23
Health care–associated infection. A localized or
systemic condition resulting from an adverse reaction to the presence of an infectious agent(s) or
its toxin(s) that 1) occurs in a patient in a health care
setting (eg, a hospital or outpatient clinic), 2) was
not found to be present or incubating at the time
of admission unless the infection was related to a
previous admission to the same setting, and 3) if
the setting is a hospital, meets the criteria for a specific infection site as defined by CDC.23 (See also
Nosocomial.)
Intensive-care unit (ICU). A hospital unit that
provides intensive observation, diagnostic, and therapeutic procedures for adults and/or children who are
critically ill. An ICU excludes bone marrow transplant
units and nursing areas that provide step-down,
intermediate care or telemetry only. The type of ICU
is determined by the service designation of the
majority of patients cared for by the unit (ie, if 80%
of the patients are on a certain service [eg, general
surgery], then the ICU is designated as that type of unit
[eg, surgical ICU]). An ICU with approximately equal
numbers of medical and surgical patients is designated as a combined medical/surgical ICU.23
Laboratory-confirmed primary bloodstream infection (LCBI). A primary bloodstream infection identified by laboratory tests with or without clinical
signs or symptoms; most often associated with the
use of catheters or other invasive medical devices.
For the CDC surveillance definition of LCBIs, please
see reference 14 or www.cdc.gov/ncidod/hip/surveill/
nnis.htm.
NNIS SSI risk index. A score used to predict a surgical patient’s risk of acquiring a surgical-site infection. The risk index score, ranging from 0 to 3, is the number of risk factors present among the following: 1) a patient with an American Society of Anesthesiologists’ physical status classification score of 3, 4, or 5,34 2) an operation classified as contaminated or dirty-infected,35,36 and 3) an operation lasting over T hours, where T depends upon the operation being performed.25 Current T values can be found in the NNIS Report at www.cdc.gov/ncidod/hip/surveill/nnis.htm. (A simplified numeric sketch combining this index with the standardized infection ratio follows this glossary.)
Nosocomial. Originating or taking place in a hospital.
Outcomes. All the possible results that may stem from
exposure to a causal factor or from preventive or
therapeutic interventions33 (eg, mortality, cost, and
development of a health care–associated infection).
Predictive value positive. The proportion of infections reported by a surveillance or reporting system
that are true infections.14,15

Private reporting system. A system that provides
information about the quality of health services or systems for the purposes of improving the quality of the
services or systems. By definition, the general public
is not given access to the data; instead, the data are
typically provided to the organization or health care
workers whose performance is being assessed. The
provision of these data is intended as an intervention
to improve the performance of that entity or person.
Process measure. A measure of recommended infection control or other practices (eg, adherence with
hand hygiene recommendations).
Public reporting system. A system that provides the
public with information about the performance or
quality of health services or systems for the purpose
of improving the performance or quality of the
services or systems.
Risk adjustment. A summarizing procedure for a
statistical measure in which the effects of differences
in composition (eg, confounding factors) of the populations being compared have been minimized by
statistical methods (eg, standardization and logistic
regression).33
Sensitivity. The proportion of true infections that are
reported by a surveillance or reporting system. May
also refer to the ability of the reporting system to
detect outbreaks or unusual clusters of the adverse
event (in time or place).14,15
SSI Risk Index. See NNIS SSI Risk Index.
Standardized infection ratio. The standardized infection ratio as used in this document is an example
of indirect standardization in which the observed
number of surgical site infections (SSIs) is divided by
the expected number of SSIs. The expected number
of SSIs is calculated by using NNIS SSI risk index
category-specific data from a standard population
(eg, the NNIS system data published in the NNIS
Report) and the number of operations in each risk
index category performed by a surgeon, a surgical
subspecialty service, or a hospital. (Detailed explanation and examples can be found in Horan TC,
Culver DH. Comparing surgical site infection rates.
In: Pfeiffer JA, editor. APIC text of infection control
and epidemiology. Washington, DC: Association for
Professionals in Infection Control, 2000. p. 1-7.)
Surgical site infection (SSI). An infection of the
incision or organ/space operated on during a surgical
procedure. For the CDC surveillance definition of
an SSI, see reference 14 or www.cdc.gov/ncidod/hip/
surveill/nnis.htm.
Surveillance. The ongoing, systematic collection,
analysis, interpretation, and dissemination of data
regarding a health-related event for use in public
health action to reduce morbidity and mortality and to
improve health.14
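
The NNIS SSI risk index and standardized infection ratio entries above can be tied together with a small numeric sketch. In the Python fragment below, the category-specific expected rates are invented placeholders (real expected rates come from published NNIS data), and the operation records are hypothetical; it is meant only to show the arithmetic of indirect standardization.

```python
# Illustrative sketch: NNIS SSI risk index (0-3) and a standardized infection ratio (SIR).
# Expected rates per risk category are invented placeholders, not published NNIS values.

def nnis_risk_index(asa_score, wound_class, duration_hours, t_hours):
    """Count the risk factors present: ASA 3-5, contaminated/dirty-infected wound, duration > T."""
    return (
        int(asa_score >= 3)
        + int(wound_class in ("contaminated", "dirty-infected"))
        + int(duration_hours > t_hours)
    )

EXPECTED_SSI_RATE = {0: 1.0, 1: 2.5, 2: 5.0, 3: 8.0}  # placeholder rates per 100 operations

# Hypothetical operations: (ASA score, wound class, duration in hours, T for the procedure)
operations = [
    (2, "clean", 1.5, 2.0),
    (3, "clean", 3.0, 2.0),
    (4, "dirty-infected", 4.0, 2.0),
]

expected_ssi = sum(EXPECTED_SSI_RATE[nnis_risk_index(*op)] / 100.0 for op in operations)
observed_ssi = 1  # hypothetical observed SSI count for these operations

# In practice many more operations are needed before the SIR is stable enough to report.
print(f"Expected SSIs: {expected_ssi:.3f}; observed: {observed_ssi}; SIR: {observed_ssi / expected_ssi:.2f}")
```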

Healthcare Infection Control Practices Advisory Committee

Chair: Patrick J. Brennan, MD, University of Pennsylvania School of Medicine, Philadelphia, PA
Executive Secretary: Michele L. Pearson, MD, CDC, Atlanta, GA
Members: Vicki L. Brinsko, RN, BA, Vanderbilt University Medical Center, Nashville, TN; Raymond Y. W. Chinn, MD, Sharp
Memorial Hospital, San Diego, CA; E. Patchen Dellinger, MD, University of Washington School of Medicine, Seattle, WA; Nancy
E. Foster, BA, American Hospital Association, Washington, DC; Steven M. Gordon, MD, Cleveland Clinic Foundation, Cleveland,
OH; Lizzie J. Harrell, PhD, Duke University Medical Center, Durham, NC; Carol O’Boyle, PhD, RN, University of Minnesota,
Minneapolis, MN; Dennis M. Perrotta, PhD, CIC, Texas Department of Health, Austin, TX; Harriett M. Pitt, MS, CIC, RN, Long
Beach Memorial Medical Center, Long Beach, CA; Robert J. Sherertz, MD, Wake Forest University School of Medicine, Wake
Forest, NC; Nalini Singh, MD, MPH, Children’s National Medical Center, Washington, DC; Kurt B. Stevenson, MD, MPH, Qualis
Health, Boise, ID; Philip W. Smith, MD, University of Nebraska Medical Center, Omaha, NE.
Liaison Representatives: William Baine, MD, Agency for Healthcare Research and Quality; Joan Blanchard, RN, BSN, MSS, CNOR,
CIC., Association of periOperative Registered Nurses, Denver, CO; Georgia Dash, RN, MS, CIC, Association for Professionals of
Infection Control and Epidemiology, Inc., Washington, DC; Sandra L. Fitzler, RN, American Healthcare Association, Washington,
DC; David Henderson, MD, National Institutes of Health; Lorine Jay, RN, Health Services Resources Administration; Stephen F.
Jencks, MD, MPH, Center for Medicare and Medicaid Services, Baltimore, MD; Chiu S. Lin, PhD, Food and Drug Administration,
Rockville, MD; Mark Russi, MD, MPH, American College of Occupational and Environmental Medicine, Arlington Heights, IL; Rachel
Stricof, MPH, Advisory Committee for the Elimination of Tuberculosis, CDC, Atlanta, GA; Michael Tapper, MD, Society for Healthcare
Epidemiology of America, Inc, Washington, DC; Robert Wise, MD, Joint Commission on the Accreditation of Healthcare
Organizations, Oakbrooke, IL.

