CMS-R-305 Conducting Performance Improvement Projects

External Quality Review of Medicaid MCOs and Supporting Regulations in 42 CFR 438.360, 438.362, and 438.364 (CMS-R-305)

OMB: 0938-0786

OMB Approval No. 0938-0786

CONDUCTING PERFORMANCE IMPROVEMENT PROJECTS

A protocol for use in Conducting Medicaid External Quality
Review Activities

Department of Health and Human Services
Centers for Medicare & Medicaid Services

Final Protocol
Version 1.0
May 1, 2002
According to the Paperwork Reduction Act of 1995, no persons are required to respond to a collection of information unless it displays a valid
OMB control number. The valid OMB control number for this information collection is 0938-0786. The time required to complete this
information collection is estimated to average 1,591 hours per response for all activities, including the time to review instructions, search existing
data resources, gather the data needed, and complete and review the information collection. If you have comments concerning the accuracy of the
time estimate(s) or suggestions for improving this form, please write to: CMS, 7500 Security Boulevard, Attn: PRA Reports Clearance Officer,
Baltimore, Maryland 21244-1850.
Form CMS-R-305

CONDUCTING PERFORMANCE IMPROVEMENT PROJECTS
I. PURPOSE OF THE PROTOCOL

The purpose of health care quality performance improvement projects (PIPs) is to assess and
improve processes, and thereby outcomes, of care. For such projects to achieve real
improvements in care, and for interested parties to have confidence in the reported
improvements, PIPs must be designed, conducted, and reported in a methodologically sound
manner. This protocol specifies procedures for external quality review organizations (EQROs)1
to use in conducting PIPs for Medicaid Managed Care Organizations (MCOs) and Prepaid
Inpatient Health Plans (PIHPs). In addition to the PIPs an MCO or PIHP is required to
undertake on its own, a State may require an EQRO to conduct PIPs. It is expected that each
State will prescribe how long the EQRO has to complete the PIP. As each step of a PIP is
completed, the information should be recorded on a standardized worksheet such as the one in
Attachment A.

1 It is recognized that a State Medicaid agency may choose an organization other than an EQRO (as defined
in Federal regulation) to conduct MCO or PIHP performance improvement projects. However, for convenience, in
this protocol we use the term "external quality review organization (EQRO)" to refer to any organization that
conducts PIPs for a MCO or PIHP.
II. OVERVIEW OF THE PROTOCOL

This protocol has been derived from existing public and private sector tools and approaches to
reviewing PIPs (See Attachment B). Activities that all public and private sector tools have in
common were included in this protocol. In addition, activities found in fewer documents were
included where the activity was felt to be important to promoting stronger PIPs, but would not
result in an inappropriate burden on the MCO, PIHP or the EQRO. In particular, the protocol
relies heavily on a guidebook produced by the National Committee for Quality Assurance
(NCQA) under a contract from the Centers for Medicare & Medicaid Services (CMS), formerly
the Health Care Financing Administration (HCFA), "Health Care Quality Improvement Studies
in Managed Care Settings: A Guide for State Medicaid Agencies." This guidebook identifies key
concepts related to the conduct of quality improvement (QI) studies and details widely accepted
principles in designing, implementing and assessing QI studies.
This protocol describes ten steps to be undertaken when conducting PIPs:
1. Select the study topic(s)
2. Define the study question(s)
3. Select the study indicator(s)
4. Use a representative and generalizable study population
5. Use sound sampling techniques (if sampling is used)
6. Reliably collect data
7. Implement intervention and improvement strategies
8. Analyze data and interpret study results
9. Plan for "real" improvement
10. Achieve sustained improvement

III. PROTOCOL ACTIVITIES

Activity 1: Select the Study Topic(s)

Rationale. All PIPs should target improvement in relevant areas of clinical care and non-clinical
services. Topics selected for study must reflect the Medicaid enrollment in terms of
demographic characteristics, prevalence of disease and the potential consequences (risks) of the
disease. Information on Medicaid enrollees can be obtained from the following sources. Note
also that State Medicaid agencies may select the study topic.
Potential Sources of Information on Medicaid Enrollees:
- Data in the MCO's/PIHP's enrollment/membership files on Medicaid enrollee
  characteristics relevant to health risks or utilization of clinical and non-clinical services,
  such as age, sex, race/ethnicity/language and disability or functional status.

- Utilization, diagnostic, and outcome information on outpatient and inpatient Medicaid
  encounters, services, procedures, medications and devices, admitting and encounter
  diagnoses, adverse incidents (such as deaths, avoidable admissions, or readmissions), and
  patterns of referrals or authorization requests obtained from MCO/PIHP encounter,
  claims, or other administrative data.

- Data on the MCO's/PIHP's performance as reflected in standardized measures,
  including, when possible, local, State, or national information on performance of
  comparable organizations.

- Data from other outside organizations, such as Medicaid or Medicare fee-for-service
  data, data from other health plans, and local or national public health reports on
  conditions or risks for specified populations.

- Data from surveys, grievance and appeals processes, and disenrollments and requests to
  change providers.

- Data on appointments and provider networks (e.g., access, open and closed panels, and
  provider language spoken).

Methods of Implementation:
In general, a clinical or non-clinical issue selected for study should affect a significant portion of
the enrollees (or a specified sub-portion of enrollees) and have a potentially significant impact on
enrollee health, functional status or satisfaction. The topics should reflect the high-volume or
high-risk conditions of the population served. High risk may also be present for infrequent
conditions or services, such as when a pattern of unexpected adverse outcomes is identified
through data analysis. High risk also exists for populations with special health care needs, such
as children in foster care, adults with disabilities and the homeless. Although these individuals
may be small in number, their special health care needs place them at high risk.
Address the following considerations to ensure an appropriate study topic.
1. The topic should be identified either as specified by the State Medicaid agency or
   through data collection and analysis of comprehensive aspects of enrollee needs, care,
   and services. Consider enrollee demographic characteristics and health risks, the
   prevalence of conditions, or the need for a specific service by enrollees.

   A project topic also may be selected based on patterns of inappropriate utilization.
   However, the project must be clearly focused on identifying and correcting deficiencies
   in care or services that might have led to this pattern, such as inadequate access to
   primary care, rather than on utilization or cost issues alone. The goal of the project
   should be to improve processes and outcomes of health care. Therefore, it is acceptable
   for a project to focus on patterns of overutilization that present a clear threat to health or
   functional status.

   Topics to be studied may also be selected on the basis of Medicaid enrollee input. To the
   extent feasible, input should be obtained from enrollees who use, or are affected by, the
   specific focus areas under study (e.g., mental health or substance abuse services).

2. Study topics, over time, should address a broad spectrum of key aspects of enrollee care
   and services, including both clinical and non-clinical focus areas.

   It is important that PIP topics represent the entire spectrum of clinical and non-clinical
   areas associated with the MCO/PIHP, and also that they do not consistently exclude any
   particular subset of Medicaid enrollees; e.g., children with special health care needs.
Clinical focus areas should include, over time, prevention and care of acute and chronic
conditions, high-volume services, and high-risk services. High-volume services, as
opposed to a clinical condition, can include such services as labor and delivery, a
frequently performed surgical procedure, or different surgical or invasive procedures.
The study may also focus on high-risk procedures even if they are low in frequency; e.g.,
care received from specialized centers inside or outside of the organization’s network;
e.g., burn centers, transplant centers, cardiac surgery centers. The study may also assess
and improve the way in which the MCO/PIHP detects which of its members have special
health care needs and assess these members’ satisfaction with the care received from the
organization.
   Finally, PIPs can address non-clinical areas. For example, PIPs that address continuity or
   coordination of care can study the manner in which care is provided when a patient
   receives care from multiple providers and across multiple episodes of care. Such studies
   may be disease or condition-specific or may target continuity and coordination across
   multiple conditions. Projects in other non-clinical areas can also address, over time,
   appeals, grievances and complaints, or access to and availability of services. Access and
   availability PIPs can focus on assessing and improving the accessibility of specific
   services or services for specific conditions, including reducing disparities between
   services to minorities and services to other members. Projects related to the grievance and
   coverage determination process could aim either to improve the processes themselves or
   to address underlying issues in care or services identified through analysis of grievances
   or appeals.
Activity 2: Define the Study Question(s)

Rationale. It is important to clearly state, in writing, the question(s) the study is designed to
answer. Stating the question(s) helps maintain the focus of the PIP and sets the framework for
data collection, analysis, and interpretation.
Potential Sources of Information to Help Form the Study Question:
- State data relevant to the topic being studied
- MCO/PIHP data relevant to the topic being studied
- Relevant clinical literature

Methods of Implementation:
The study question(s) must be stated as clear, simple, answerable questions. An example of a
vague study question is:

"Does the MCO/PIHP adequately address psychological problems in patients recovering
from myocardial infarction?"

In this example, it is not clear how "adequately address" will be assessed. Furthermore,
"psychological problems" is a very broad term. A clearer study question could be:

"Does doing "x" reduce the proportion of patients with myocardial infarction who
develop severe emotional depression during hospitalization?"

Activity 3: Select the Study Indicator(s)

Rationale. A study indicator is a quantitative or qualitative characteristic (variable) reflecting a
discrete event (e.g., an older adult has/has not received a flu shot in the last 12 months), or a
status (e.g., an enrollee’s blood pressure is/is not below a specified level) that is to be measured.
Each project should have one or more quality indicators for use in tracking performance and
improvement over time. All indicators must be objective, clearly and unambiguously defined,
and based on current clinical knowledge or health services research. In addition, all indicators
must be capable of objectively measuring either enrollee outcomes such as health or functional
status, enrollee satisfaction, or valid proxies of these outcomes.
Indicators can be few and simple, many and complex, or any combination thereof, depending on
the study question(s), the complexity of existing practice guidelines for a clinical condition, and
the availability of data and resources to gather the data.
Indicator criteria are the set of rules by which the data collector or reviewer determines whether
an indicator has been met. Pilot or field testing is helpful to the development of effective
indicator criteria. Such testing allows the opportunity to add criteria that might not have been
anticipated in the design phase. In addition, criteria are often refined over time, based on results
of previous studies. However, if criteria are changed significantly, the method for calculating an
indicator will not be consistent and performance on indicators will not be comparable over time.
It is important, therefore, for the indicator criteria to be developed as fully as possible during the
design and field testing of data collection instruments.
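
To make the idea of a discrete-event indicator and its criteria concrete, the Python sketch below expresses one such indicator (a flu shot within the last 12 months for older adults) as a yes/no test applied to each enrollee record and computes the resulting rate. The record fields, age threshold, and dates are illustrative assumptions, not requirements of this protocol.

```python
from datetime import date, timedelta

def flu_shot_indicator(enrollee, as_of=date(2002, 5, 1)):
    """Indicator criterion (illustrative): the enrollee has a documented flu
    shot within the 12 months before `as_of`. Field names are assumptions;
    real criteria would be spelled out in the study design."""
    last_shot = enrollee.get("last_flu_shot")          # a date, or None if undocumented
    return last_shot is not None and (as_of - last_shot) <= timedelta(days=365)

def indicator_rate(enrollees, indicator):
    """Numerator / denominator: the share of eligible enrollees meeting the indicator."""
    eligible = [e for e in enrollees if e.get("age", 0) >= 65]   # assumed eligibility rule
    met = sum(1 for e in eligible if indicator(e))
    return met / len(eligible) if eligible else None

enrollees = [
    {"id": 1, "age": 72, "last_flu_shot": date(2001, 11, 3)},
    {"id": 2, "age": 68, "last_flu_shot": None},
    {"id": 3, "age": 80, "last_flu_shot": date(2000, 1, 15)},
]
print(indicator_rate(enrollees, flu_shot_indicator))   # 1 of 3 eligible enrollees -> 0.33...
```
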
Potential Sources of Information to Help Select Study Indicators:
- Clinical and non-clinical practice guidelines
- Administrative data
- Medical records

Methods of Implementation:
Address each of the following considerations to ensure an appropriate study indicator(s) is/are
identified.
1. Each study should have objective, clearly defined, measurable indicators.

   When indicators exist that are generally used within the public health community or the
   managed care industry (such as NCQA's Health Plan Employer Data and Information Set
   (HEDIS) or the Foundation for Accountability's (FACCT) measures) and these
   indicators are applicable to the topic, use of those indicators is preferred. However,
   indicators may be developed by the EQRO on the basis of current clinical practice
   guidelines or clinical literature derived from health services research or findings of expert
   or consensus panels.
   The following questions will assist in identifying meaningful indicators:

   - Are the indicator(s) related to identified health care guidelines pertinent to the study
     question?

   - Do the indicators measure an important aspect of care that will make a difference to the
     MCO's/PIHP's beneficiaries?

   - Are data available either through administrative data, medical records or other readily
     accessible sources?

   - Will limitations on the ability to collect the data skew the results?

   - Do these indicators require explicit or implicit criteria? Consider the specificity of the
     criteria used to determine compliance with an indicator. The greater the number of people
     involved in data collection and analysis, the greater the need for more explicit, or precise,
     data collection and indicator criteria to obtain inter-reviewer reliability. The more
     specific the criteria, the easier the data collection process will be, because staff will not
     need extensive training. An example of an explicit criterion for an immunization study
     is:

     - Documentation of refusal by a parent to have a child immunized, through nurses'
       notes and/or a signed refusal by the parent in the medical record.

     Implicit criteria may require a high degree of professional clinical judgment and,
     therefore, may be time-consuming and expensive. An example of an implicit
     criterion for an immunization study is:

     - Receipt of a childhood immunization is contraindicated.

Specific indicators do not always need to be established at the outset of a PIP. There may
be instances when a project may begin with more general collection and analysis of
baseline data on a topic, and then narrow its focus to more specific indicators for
measurement, intervention and reevaluation. The success of the project is assessed in
terms of the indicators ultimately selected.
2. The indicators should measure changes in health status, functional status, or enrollee
   satisfaction, or valid proxies of these outcomes.

The objective of a PIP should be to improve processes and outcomes of care. For the
purposes of this protocol “outcomes” are defined as measures of patient health, functional
status or satisfaction following the receipt of care or services. Indicators selected for a
PIP in a clinical focus area ideally should include at least some measure of change in
health status or functional status or process of care proxies for these outcomes.
Indicators may also include measures of satisfaction.
It is recognized, however, that relatively few standardized performance measures actually
address outcomes. Even when outcome measures are available, their utility as quality
indicators may be limited because outcomes can be significantly influenced by factors
outside of the organization’s control, such as poverty, genetics, or the environment.
Because of this, quality indicators do not always need to be outcome measures. Process
measures are acceptable as long as it can be shown that there is strong clinical evidence
that the process being measured is meaningfully associated with outcomes. To the extent
possible, this determination should be based on published guidelines that support the
association and that cite evidence from randomized clinical trials, case control studies, or
cohort studies. Although published evidence is generally required, there may be certain
areas of practice for which empirical evidence of process/outcome linkage is limited. At
a minimum, it should be demonstrated that there is a consensus among relevant
practitioners with expertise in the defined area as to the importance of a given process.
While enrollee satisfaction is an important outcome of care in clinical areas,
improvement in satisfaction should not be the sole demonstrable outcome of a project in
any of these areas. Some improvement in health or functional status should also be
measured. For projects in non-clinical areas, use of health or functional status indicators
also is generally preferred, particularly for projects addressing access to and availability
of health care services. However, there may be some non-clinical projects for which
enrollee satisfaction indicators alone are sufficient.
Activity 4: Use a Representative and Generalizable Study Population

Rationale. Once a topic has been selected, measurement and improvement efforts must be
system-wide; i.e., each project must represent the entire Medicaid enrolled population to which
the PIP study indicators apply. Once that population is identified, the MCO/PIHP must decide
whether to review data for that entire population or use a sample of that population. Sampling is
acceptable as long as the samples are representative of the identified population (see Activity 5).
Potential Sources of Information to Promote Representativeness and Generalizability of
the Study Population:
- Data on the Medicaid enrolled population that enumerate the numbers of enrollees to
  whom the study topic and indicators apply. This would include demographic information
  from MCO/PIHP enrollment files and MCO/PIHP utilization, diagnostic and outcome
  information, such as services, procedures, admitting and encounter diagnoses, adverse
  incidents (such as deaths, avoidable admissions, or readmissions), and patterns of
  referrals or authorization requests.

- Other databases, as needed; e.g., pharmacy claims data to identify patients taking a
  specific medication(s) during a specific enrollment period.

Methods of Implementation:
Address the following considerations to ensure that a representative and generalizable study
population is identified.
1. Define the study's "at risk" population.

   - All individuals to whom the identified study question(s) and indicators are relevant
     must be included in the defined population.

   - Determine whether to include the entire study population or a sample in the study.
     The decision may be determined by the resources available to analyze the
     data. If the State agency or MCO/PIHP is capable of collecting and analyzing data
     through an automated data system, it might be possible to study the whole
     population because many of the data collection and analysis steps can be
     automated. If the data need to be collected manually, sampling may be more
     realistic.

   - Determine whether the definition of the study population includes any requirements
     for the length of members' enrollment in the MCO or PIHP. The required length
     of time will vary depending on the study topic and study indicators.

   - If the entire MCO/PIHP population is to be studied, the data collection approach
     should capture all enrollees to whom the study question applies.

If a sample is to be used, go to Activity 5. If the entire population is included in the study, skip
Activity 5 and go to Activity 6.
Activity 5: Use Sound Sampling Techniques

Rationale. If a sample is to be used to select members of the study, proper sampling techniques
are necessary to provide valid and reliable (and therefore generalizable) information on the
quality of care provided. When conducting a study designed to estimate the rates at which certain
events occur, the sample size has a large impact on the level of statistical confidence in the study
estimates. Statistical confidence is a numerical statement of the probable degree of certainty or
accuracy of an estimate. In some situations, it expresses the probability that a difference could
be due to chance alone. In other applications, it expresses the probability of the accuracy of the
estimate. For example, a study may report that a disease is estimated to be present in 35% of the
population. This estimate might have a 95% level of confidence, plus or minus five percentage
points. This means that we are 95% sure that between 30 and 40 percent of the population has the
disease.

The true prevalence or incidence rate for the event in the population may not be known the
first time a topic is studied. In such situations, the most prudent course of action is to assume
that the maximum sample size is needed to establish a statistically valid baseline for the project.
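
As a rough illustration of how the frequency of the event, the confidence level, and the margin of error drive the needed sample size, the Python sketch below computes the conservative (maximum) sample size for estimating a rate, with an optional finite-population correction. The 95% z-value, the function name, and the example figures are illustrative assumptions, not protocol requirements.

```python
import math

def required_sample_size(margin_of_error, confidence_z=1.96, p=0.5, population=None):
    """Estimate the sample size needed to measure a proportion (e.g., an
    indicator rate) within +/- margin_of_error at the given confidence level.

    p=0.5 is the most conservative assumption and yields the maximum sample
    size, which is prudent when the true rate is unknown."""
    n = (confidence_z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    if population is not None:
        # Finite population correction for a small enrolled population.
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

# Example: 95% confidence, +/- 5 percentage points, 4,000 eligible enrollees (invented figures).
print(required_sample_size(0.05))                   # 385 assuming a very large population
print(required_sample_size(0.05, population=4000))  # 351 after the finite population correction
```
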
Potential Sources of Information to Support Sampling:
- Data on enrollee characteristics relevant to health risks or utilization of clinical and
  non-clinical services, including age, sex, race/ethnicity/language and functional status.

- Utilization, diagnostic and outcome information, such as services, procedures, admitting
  and encounter diagnoses, adverse incidents (such as deaths, avoidable admissions, or
  readmissions), and patterns of referrals or authorization requests.

- Other information as needed, such as pharmacy claims data to identify patients taking a
  defined number of a specific medication(s) during a specific enrollment period.

Methods of Implementation:
Address the following factors to ensure appropriate sampling techniques are used.
1. Determine the true (or estimated) frequency of occurrence of the event, the confidence
   interval to be used, and the acceptable margin of error.

2. Employ valid sampling techniques.

   There are two basic categories of statistical sampling methods -- probability sampling
   and non-probability sampling.

   Probability (or random) sampling methods leave selection of population units totally to
   chance, and not to preference on the part of the individuals conducting or otherwise
   participating in the study. Biases are removed in these methods. There are several types
   of probability (or random) sampling that can be used:

   - In simple random sampling, all members of the study population have an equal
     chance of being selected for the sample. Population members are generally
     numbered, and random numbers generated by computer are used to select units
     from the population.
   - Systematic random sampling - the basic principle is to select every nth unit in a
     list. This can be used when a sampling frame is organized in a way that does not
     bias the sample. Steps to organize and select a systematic sample (illustrated in
     the code sketch following this list) are:

     1) Construct a comprehensive sampling frame (e.g., a list of all beneficiaries).

     2) Divide the size of the sampling frame by the required sample size to produce
        a sampling interval, or skip interval (e.g., if there are 250 beneficiaries and a
        sample of 25 is needed, then 250/25 = 10).

     3) From a random number table, select a random number between 1 and 10.

     4) Count down the list to the nth name (i.e., the number identified in step 3) and
        select that name.

     5) Skip down 10 names on the list and select the next name. Repeat the process
        as many times as needed until the required sample size has been reached.

   - Stratified random sampling is used when the target population consists of
     non-overlapping sub-groups, or strata. Typically this is used when the population
     is homogeneous (same) within a stratum and heterogeneous (different) between
     strata. Stratified random sampling requires more information about the population
     and also requires a larger overall sample size than simple random sampling. Once
     strata are identified and selected, sampling must be conducted within each stratum
     using probability (or random) sampling.

   - Cluster sampling is used when a comprehensive sampling frame is NOT
     available. Units in the population are gathered or classified into groups, similar
     to stratified sampling. Unlike the stratified sampling method, the groups must be
     heterogeneous within themselves with respect to the characteristic being
     measured. This method requires prior knowledge about the population. Once
     clusters are identified, a random sample of clusters is selected.
   Non-probability sampling methods are based on choice, rather than chance; therefore,
   some bias can be expected. There are several types of non-probability sampling that can
   be used:

   - Judgment sampling involves constructing a sample by including units that are
     thought (or judged) to be representative of the population. By doing so, the
     sample is constructed to be a mini-population.

   - Convenience sampling uses units that are readily or conveniently available. For
     example, if the objective were to obtain beneficiary opinions regarding a group
     practice, patients in the office on any given day or during a specific month could
     be interviewed.

   - Quota sampling ensures that units in the sample appear in the same proportion as
     in the population. For instance, if a certain target population consisted of 55%
     female and 45% male, the quota sample would require a similar female/male
     distribution.
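
The following Python sketch illustrates the systematic random sampling steps described above; it is an illustration under assumed inputs, not a required tool, and the enrollee identifiers in the example are invented.

```python
import random

def systematic_sample(frame, sample_size, rng=random):
    """Select every k-th unit from a sampling frame after a random start.

    `frame` is the complete list of eligible enrollees (the sampling frame);
    `sample_size` is the number of units needed. Assumes the frame is not
    ordered in a way that would bias the sample."""
    if sample_size <= 0 or sample_size > len(frame):
        raise ValueError("sample_size must be between 1 and the frame size")
    interval = len(frame) // sample_size          # skip interval (step 2)
    start = rng.randint(1, interval)              # random start (step 3)
    # Steps 4-5: take the start-th unit, then every interval-th unit after it.
    return [frame[i] for i in range(start - 1, len(frame), interval)][:sample_size]

# Example: 250 beneficiaries, sample of 25 (skip interval = 10).
beneficiaries = [f"enrollee_{i:03d}" for i in range(1, 251)]
sample = systematic_sample(beneficiaries, 25)
print(len(sample), sample[:3])
```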

Activity 6: Reliably Collect Data

Rationale. Procedures used to collect data for a given PIP must ensure that the data collected on
the PIP indicators are valid and reliable. Validity is an indication of the accuracy of the
information obtained. Reliability is an indication of the repeatability or reproducibility of a
measurement. The strategy for developing a data collection plan should include:
- clear identification of the data to be collected,
- identification of the data sources and how and when the baseline and repeat indicator
  data will be collected,
- specification of who will collect the data, and
- identification of the instruments used to collect the data.

When data are to be collected from automated data systems, development of specifications for
automated retrieval of the data is necessary. When data are obtained from visual inspection of
medical records or other primary source documents, several steps need to be taken to ensure the
data are consistently extracted and recorded:
1. The key to successful manual data collection is the selection of the data collection staff.
   Appropriately qualified personnel, with conceptual and organizational skills, must be used
   to abstract the data; the specific skills required will vary with the nature of the data being
   collected and the degree of professional judgment required. For example, when data
   collection involves searching throughout the medical record to find and abstract
   information, or involves judging whether clinical criteria were met, experienced clinical
   staff, such as registered nurses, should collect the data. However, when the abstraction
   involves verifying the presence of a diagnostic test report, trained medical assistants or
   medical records clerks may be used.
2. Clear guidelines for obtaining and recording data must be established, especially if
   multiple reviewers are used to perform this activity. The qualifications of the data
   collection staff should be determined before finalizing the data collection instrument.
   The abstractor will need fewer clinical skills if the data elements within the data source
   are more clearly defined. Developing a glossary of terms for each project should be part
   of the training of abstractors to ensure consistent interpretation among and between the
   project staff.

3. The number of data collection staff used for a given project affects the reliability of
   the data. A smaller number of staff promotes inter-rater reliability; however, it may also
   increase the amount of time it takes to complete this task. Intra-rater reliability (i.e.,
   reproducibility of judgments by the same abstractor at a different time) should also be
   considered.
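
Inter-rater and intra-rater reliability can be checked quantitatively on a small pilot sample before full abstraction begins. The Python sketch below computes simple percent agreement and Cohen's kappa for two abstractors who independently coded the same yes/no item; it is an illustration only, and the ratings shown are invented.

```python
def percent_agreement(ratings_a, ratings_b):
    """Share of records on which two abstractors recorded the same value."""
    matches = sum(1 for a, b in zip(ratings_a, ratings_b) if a == b)
    return matches / len(ratings_a)

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two abstractors on a yes/no (1/0) item."""
    n = len(ratings_a)
    observed = percent_agreement(ratings_a, ratings_b)
    # Expected agreement if both abstractors coded independently at their own base rates.
    p_yes_a = sum(ratings_a) / n
    p_yes_b = sum(ratings_b) / n
    expected = p_yes_a * p_yes_b + (1 - p_yes_a) * (1 - p_yes_b)
    return (observed - expected) / (1 - expected)

# Example: the same 10 pilot records abstracted independently by two reviewers (invented data).
abstractor_1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
abstractor_2 = [1, 1, 0, 1, 1, 1, 0, 0, 1, 0]
print(percent_agreement(abstractor_1, abstractor_2))       # 0.8
print(round(cohens_kappa(abstractor_1, abstractor_2), 2))  # 0.58
```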

Potential Sources of Data:
- Administrative data; e.g., membership, enrollment, claims, encounters
- Medical records
- Tracking logs
- Results of any provider interviews
- Results of any Medicaid beneficiary interviews and surveys

Methods of Implementation:
Address the following issues to ensure sound data collection procedures.
1. The data to be collected should be clearly specified.

   Accurate measurement depends on clearly defined data elements. Data elements must be
   carefully specified with unambiguous definitions. When descriptive terms are used (e.g.,
   "high", "low", "normal"), numerical definitions are established for each term. The units
   of measure must also be specified (e.g., pounds, kilograms, etc.).

2. The sources of data should be clearly specified.

   Data sources vary considerably and depend upon the selected topic and indicators.
   Similarly, the topic and indicators will reflect not just the clinical and research
   considerations, but also the available data sources.

3. A systematic method of collecting valid and reliable data that represent the entire
   population to which the study's indicators apply should be clearly defined.

   The study may use automated or manual data collection methods depending on the
   resources available. If an automated data collection system is utilized, the degree of
   completeness of the data in the automated system is always a concern. For example:
   - Inpatient data: The data system should capture all inpatient admissions.

   - Primary care data: Data for all encounters should be available.

   - Specialty care data: Data for all encounters should be available.

   - Ancillary services data: Encounter or utilization data should be available for all
     services provided.

   The study's design and methodology should include an estimation of the degree of
   completeness of the automated data available for the PIP study indicators. (The accuracy
   of automated data is also a concern, but validation of this is beyond the scope of this
   protocol.)

   Manual data collection may be the only feasible option for many topics selected. The
   beneficiary medical record is the most frequently used data source. Other manual
   systems which might contain sources of information include clinical tracking logs,
   registries, complaint logs, and manual claims.
   When using manual data collection, the design of the PIP should reflect that:

   - Project staff and personnel have appropriate clinical knowledge and skills,
     including good conceptual, organization, and documentation skills.

   - Data collection instruments provide for reliable and accurate data collection
     over the time period to be studied. If manual data collection is to be performed,
     the data collection instrument(s) should be clear and promote inter-rater
     reliability. An important part of designing data collection instruments is
     developing instructions or guidelines for data collection staff. Instrument design
     is particularly important when staff not involved in the study design perform data
     collection. Instructions should be clearly and succinctly written and should
     provide an overview of the study, specific instructions on how to complete each
     section of the form, and general guidance on how to handle situations not covered
     by the instructions.

   - When assessing non-clinical services such as health care access, cultural
     competency, or care coordination, a study may utilize information on how the
     MCO/PIHP is structured and operates.

4. The study design should specify a data analysis plan that reflects the following
   considerations:
   - Whether qualitative or quantitative data, or both, will be collected. (A brief
     sketch of the kinds of summaries involved follows this list.)

     Qualitative data describe characteristics or attributes by which persons or
     things can be classified; for example, sex, race, poverty level, or the presence
     or absence of a specific disease. Calculation of proportions and calculation of
     rates are the two most common ways of summarizing qualitative data.

     Quantitative data are concerned with numerical variables such as height,
     weight and blood levels. The methods by which the data are analyzed and
     presented will vary by type of data. Quantitative data require, at a minimum,
     simple descriptive statistics such as measures of central tendency (i.e., mean,
     median or mode) and measures of variability (i.e., range or standard deviation).

   - Whether the data will be collected on the entire population or a sample.

   - Whether the measurements obtained from the data collection activity will be
     compared to the results of previous or similar studies. If so, the data analysis
     plan should have considered evaluating the comparability of the studies and
     identified the appropriate statistical tests to be used to compare studies.

   - Whether the MCO's/PIHP's performance on the PIP will be compared to the
     performance of another MCO/PIHP, a number of MCOs/PIHPs, or different
     provider sites. Comparing the performance of multiple entities involves greater
     statistical design and analytical considerations than those required for a study of
     a single entity, such as a MCO/PIHP.
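
As a minimal illustration of the kinds of summaries a data analysis plan might call for (the variable names and values below are invented, not prescribed by this protocol), the following Python sketch produces simple descriptive statistics for a quantitative variable and category proportions for a qualitative variable.

```python
import statistics

def summarize_quantitative(values):
    """Minimum descriptive statistics for a quantitative variable:
    central tendency and variability."""
    return {
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        "std_dev": statistics.stdev(values),
        "range": (min(values), max(values)),
    }

def summarize_qualitative(values):
    """Proportion of each category for a qualitative (categorical) variable."""
    n = len(values)
    return {category: values.count(category) / n for category in set(values)}

# Illustrative data: systolic blood pressure readings and member sex (invented).
systolic_bp = [118, 142, 131, 125, 160, 138, 121]
sex = ["F", "M", "F", "F", "M", "F", "M"]
print(summarize_quantitative(systolic_bp))
print(summarize_qualitative(sex))
```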

Activity 7: Implement Intervention and Improvement Strategies

Rationale. Real, sustained improvements in care result from a continuous cycle of measuring
and analyzing performance, and developing and implementing system-wide improvements in
care. Actual improvements in care depend far more on thorough analysis and implementation of
appropriate solutions than on any other steps in the process.

An improvement strategy is defined as an intervention designed to change behavior at an
institutional, practitioner or beneficiary level. The effectiveness of the intervention activity or
activities can be determined by measuring the MCO’s/PIHP’s change in performance, according
to predefined quality indicators. Interventions are key to an improvement project’s ability to
bring about improved health care outcomes. Appropriate interventions must be identified and/or
developed for each PIP, to assure the likelihood of effecting measurable change.
If repeat measures of QI indicate that QI actions were not successful, i.e., did not achieve
significant improvement, the problem-solving process begins again with data analysis to identify
possible causes, propose and implement solutions, and so forth. If QI actions were successful,
the new processes should be standardized and monitored.
Potential Sources of Information:
- Current project baseline data
- Previous project data (if available)
- Results of clinical and literature research
- Project evaluation results completed by evaluators

Methods of Implementation:
Address the following consideration to ensure appropriate interventions are implemented.
1. Interventions undertaken should be related to causes/barriers identified through data
   analysis and QI processes.

   It is expected that interventions associated with improvement on quality indicators will
   be system interventions, i.e., educational efforts, changes in policies, targeting of
   additional resources, or other organization-wide initiatives to improve performance.
   Interventions that might have some short-term effect, but that are unlikely to induce
   permanent change (such as a one-time reminder letter to physicians or beneficiaries),
   are insufficient.
An EQRO is not required to demonstrate conclusively (for example, through controlled
studies) that a change in an indicator is the effect of its intervention; it is sufficient to
show that an intervention occurred that might reasonably be expected to affect the results.
Nor is the EQRO required to undertake data analysis to correct for secular trends
(changes that reflect continuing growth or decline in a measure as a result of external
forces over an extended period of time). To the extent feasible, however, the EQRO
should demonstrate that data have been corrected for any major confounding variables
with an obvious impact on the outcomes. The interventions should reasonably be
determined to have resulted in measured improvement.

Activity 8: Analyze Data and Interpret Study Results

Rationale. Data analysis begins with examining the MCO’s/PIHP’s performance on the selected
clinical or non-clinical indicators. The examination should be initiated using statistical analysis
techniques defined in the data analysis plan.
Potential Sources of Data and Information:
- Baseline project indicator measurements
- Repeat project indicator measurements
- Industry benchmarks
- Analytic reports of PIP results by the MCO/PIHP

Methods of Implementation:
Address the following considerations to ensure that data analysis and interpretations are
appropriate and valid.
1. The analysis of the findings should be conducted according to the data analysis plan.

2. The results and findings should present numerical PIP data in a way that provides
   accurate, clear, and easily understood information.

3. Following the data analysis plan, the analysis should identify:

   - initial and repeat measurements of the prospectively identified indicators for the
     project;

   - the statistical significance of any differences between the initial and repeat
     measurements;

   - factors that influence the comparability of initial and repeat measurements; and

   - factors that threaten the internal or external validity of the findings.

4. The analysis of the study data should include an interpretation of the extent to which the
   PIP was successful and what follow-up activities are planned as a result.

   Interpretation and analysis of the study data should be based on continuous improvement
   philosophies and reflect an understanding that most problems result from failures of
   administrative or delivery system processes, not failures of individuals within the system.
   Interpreting the data should involve developing hypotheses about the causes of
   less-than-optimal performance and collecting data to validate those hypotheses.
Activity 9: Plan for "Real" Improvement

Rationale. When a change in performance is found, it is important to know whether the change
represents "real" change or is an artifact of a short-term event unrelated to the intervention, or
of random chance. The EQRO will need to determine the probability that the improvement is
true improvement. This can be assessed in several ways, but is most confidently done by
determining whether the observed change is statistically significant. While this protocol
does not specify a level of statistical significance to be targeted, it does recommend that EQROs
determine the extent to which any change in performance is statistically significant. States may
choose to establish their own numerical thresholds that define whether improvements are
"significant."
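
One common way to make this judgment concrete is a two-proportion z-test comparing the baseline and repeat indicator rates. The protocol does not mandate any particular test or threshold, so the Python sketch below is an illustration only, and the counts in the example are invented.

```python
import math

def two_proportion_z_test(successes_1, n_1, successes_2, n_2):
    """Test whether the change between a baseline rate and a repeat measurement
    is larger than chance alone would explain.

    Returns the z statistic and a two-sided p-value using the normal
    approximation, which is appropriate for reasonably large samples."""
    p1, p2 = successes_1 / n_1, successes_2 / n_2
    pooled = (successes_1 + successes_2) / (n_1 + n_2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_1 + 1 / n_2))
    z = (p2 - p1) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Example: an immunization rate rises from 62% (248/400) at baseline to 70% (280/400) on remeasurement.
z, p = two_proportion_z_test(248, 400, 280, 400)
print(round(z, 2), round(p, 4))   # z = 2.39, p = 0.017 -> unlikely to be chance alone
```
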
Potential Sources of Information:
- Baseline and repeat measures on quality indicators
- Tests of statistical significance calculated on baseline and repeat indicator measurements
- Benchmarks for quality specified by the State Medicaid agency or found in industry
  standards

Methods of Implementation:
Once a PIP is far enough into its improvement cycle, determine the extent to which
improvement occurred. Through repeated measurement of the quality indicators selected
for the project, meaningful change in performance relative to the performance observed during
baseline measurement must be demonstrated. The repeat measurement should use the same
methodology as the baseline measurement, except that, when baseline data were collected for the
entire population at risk, the repeat measurement may instead use a reliable sample. Performance
on the identified indicators can be measured by collecting information on all individuals,
encounters or episodes of care to which the indicator is applicable (a census) or by collecting
information on a representative subset of individuals, encounters, providers of care, etc. The
following factors should be considered in evaluating the extent to which real improvement has
occurred.
1. Determine if there is quantitative improvement in processes or outcomes of care
   according to the predetermined project indicators.

2. Determine if the improvement in performance has "face" validity; i.e., on the face of it,
   does the intervention appear to have been successful in improving performance? Does
   the improvement in performance appear to have been the result of the planned QI
   intervention, as opposed to some unrelated occurrence?

3. Determine if there is statistical evidence that any observed performance improvement
   is true improvement.

Activity 10: Achieve Sustained Improvement

Rationale. Real change results from changes in the fundamental processes of health care
delivery. Such changes should result in sustained improvements. In contrast, a spurious “one
time” improvement can result from unplanned accidental occurrences or random chance. If real
change has occurred, the project should be able to achieve sustained improvement.
Potential Sources of Information:
- Baseline and first repeat measurements on quality indicators
- Additional measurements on quality indicators made after the first repeat measurement

Methods of Implementation:
Remeasurement is required to ensure that the improvement on a project is sustained. Sustained
improvement should be demonstrated through repeated measurements over comparable time
periods.
Repeat measurements of the indicators should occur after the first measurement taken after the
intervention. It is recognized that because of random year-to-year variation, population changes,
and sampling error, performance on any given individual measure may decline in the second
measurement. However, when all of the repeat measurements for a given project are taken
together, this decline should not be statistically significant and should never be statistically
significant after two remeasurement periods.
END OF PROTOCOL

ATTACHMENT A

CONDUCTING PERFORMANCE IMPROVEMENT PROJECT
WORKSHEET
Use this or a similar worksheet as a guide while designing and conducting focused studies.
Document the completion of each step. Refer to the protocol for detailed information on each
area.

Demographic Information
MCO/PIHP Name or ID:
Study Leader Name:
Telephone Number:
Name of Focused Study:
Date of Study Period: ____/____/____ to ____/____/____

Type of Delivery System (check all that are applicable)

____ MCO        ____ PIHP
____ Staff Model    ____ Network    ____ IPA    ____ Direct IPA

_____ Number of Medicaid Enrollees in MCO or PIHP
_____ Number of Medicare Enrollees in MCO or PIHP
_____ Number of Medicaid Enrollees in Study
_____ Total Number of MCO or PIHP Enrollees in Study

Number of MCO/PIHP primary care physicians ____________
Number of MCO/PIHP specialty physicians _______________
Number of physicians in study ____________

Component/Standard (for each item, record Comments and Date Completed)

Activity 1. SELECT THE STUDY TOPIC(S)

1.1. Study topic is selected through data collection and analysis of comprehensive aspects of
     enrollee needs, care and services.
1.2. The topic(s), over time, address a broad spectrum of key aspects of enrollee care and
     services.
1.3. The topics, over time, include all enrolled populations; i.e., do not exclude certain
     enrollees such as those with special health care needs.

Date of Study Period: ____/____/____ to ____/____/____

Activity 2. DEFINE THE STUDY QUESTION(S)

2.1. The study question(s) is/are clearly stated in writing.

Activity 3. SELECT STUDY INDICATOR(S)

3.1. The study has objective, clearly defined, measurable indicators.
3.2. The indicators measure changes in health status, functional status, or enrollee
     satisfaction, or valid proxies of these outcomes.

Activity 4. USE A REPRESENTATIVE AND GENERALIZABLE STUDY POPULATION

4.1. The at-risk population is defined.
4.2. If the study includes the entire population, the data collection approach captures all
     enrollees to whom the study question applies.

Activity 5. USE SOUND SAMPLING TECHNIQUES

5.1. The sampling technique considers and specifies the true frequency of occurrence, the
     confidence interval and the margin of error.
5.2. A sufficient number of enrollees are sampled.
5.3. Valid sampling techniques are used.

Activity 6. RELIABLY COLLECT DATA

6.1. The data to be collected are clearly specified.
6.2. The sources of data are clearly specified.
6.3. The methods of collecting data are clearly defined.
6.4. The data collection instruments provide for consistent, accurate data collection.
6.5. The study design specifies a data analysis plan.
6.6. Qualified staff and personnel are used to collect the data.

Activity 7. IMPLEMENT INTERVENTION AND IMPROVEMENT STRATEGIES

7.1. Reasonable interventions are undertaken to address causes/barriers identified through
     data analysis and QI processes.

Activity 8. ANALYZE DATA AND INTERPRET STUDY RESULTS

8.1. Analysis of findings is conducted according to the data analysis plan.
8.2. Results and findings present numerical data in a way that provides accurate, clear and
     easily understood information.
8.3. The analysis identifies initial and repeat measurements, statistical significance, factors
     that influence comparability of initial and repeat measurements, and factors that threaten
     internal and external validity.
8.4. The analysis includes an interpretation of the extent to which the PIP was successful and
     follow-up activities.

Activity 9. PLAN FOR "REAL" IMPROVEMENT

9.1. The same methodology as the baseline measurement is used when measurement is
     repeated.
9.2. An analysis is conducted to determine if there are quantitative improvements in processes
     or outcomes of care.
9.3. An assessment is made to determine if improvement in performance has face validity.
9.4. An analysis is conducted to determine statistical evidence of observed improvement.

Activity 10. ACHIEVE SUSTAINED IMPROVEMENT

10.1. Repeated measurements are conducted to determine sustained improvement.

Record any additional comments pertinent to the design and/or conduct of the study:

ATTACHMENT B

ORIGIN OF THE PROTOCOL
This protocol was one of nine protocols developed during 1998-2001 from standards and
guidelines used in the public and private sectors during this time. This protocol was developed
from the following documents:
- Quality Improvement System for Managed Care (QISMC)

  QISMC was an initiative of CMS that set forth standards and guidelines pertaining to
  health care quality for Medicaid and Medicare health plans (MCOs, PIHPs, and
  Medicare+Choice plans). These standards and guidelines, in part, address MCO and PIHP
  quality assessment and improvement projects.

- Health Care Quality Improvement Studies in Managed Care Settings: A Guide for State
  Medicaid Agencies (National Committee for Quality Assurance (NCQA))

  Produced under a contract from CMS, this guidebook identifies key concepts related to the
  conduct of QI studies and details widely accepted principles of research design and
  statistical analysis necessary for designing, implementing and assessing QI studies.

- A Health Care Quality Improvement System for Medicaid Managed Care: A Guide for
  States (Health Care Financing Administration (HCFA))

  CMS's 1993 guide for health care QI provides a framework for building QI systems
  within State Medicaid managed care initiatives. This document included guidelines
  addressing quality assessment and improvement studies and related activities of MCOs
  and PIHPs. This document was the result of the Quality Assurance Reform Initiative
  (QARI).

- Framework for Improving Performance, From Principles to Practice (Joint Commission
  on Accreditation of Healthcare Organizations (JCAHO))

  This publication describes the Joint Commission's theory-based, practical methodology
  for continuously improving the core work and resulting outcomes of any health care
  organization. In this document, JCAHO defines the key characteristics and essential
  behaviors of any health care organization striving to achieve high quality patient care.

- 1990-2000 Standards for Health Care Networks (SHCN) (JCAHO)

  The JCAHO 1990-2000 SHCN provides a standards-based evaluation process to assist the
  MCO in measuring, assessing and improving its network's performance. It also helps the
  MCO focus on conducting performance improvement efforts in a multidisciplinary,
  system-wide manner. The 1990-2000 SHCN integrates information about the Joint
  Commission's health care network accreditation process.
- NCQA 1997, 1998, and 1999 Standards for Accreditation of Managed Care
  Organizations and NCQA 1999 Standards for Accreditation of Managed Behavioral
  Healthcare Organizations (MBHO)

  These documents include administrative policies and procedures for NCQA's MCO and
  MBHO accreditation programs, the 1997, 1998, and 1999 standards, and rationale
  statements for the standards.

- Peer Review Organization (PRO) 4th and 5th Scope of Work (SOW) (CMS)

  The 4th and 5th SOW documents outlined the requirements for PROs to adhere to while
  conducting health care quality and improvement activities for Medicare beneficiaries.

An in-depth comparison of these documents was performed to identify the activities and features
common to all of them, and the features unique to individual documents, while acknowledging the
different purposes of the documents. The QISMC, JCAHO and NCQA standards are written as
guides for MCOs/PIHPs to follow in developing, conducting and evaluating their quality
improvement studies. They can also be used by States or their agents (e.g., EQROs) to assess
compliance with State mandated guidelines and/or to facilitate overall plan-to-plan comparisons.
QARI was written with States as the intended audience to help them and their agents (e.g.,
EQROs) assure compliance with regulations and Medicaid program requirements, and promote
consistency in the manner in which MCOs and PIHPs carry out activities related to focused
studies.
The analysis revealed that in spite of their different purposes, all the documents identify several
common characteristics of effective focused studies. These include:
Selection of Topics: All of the reference documents address the need for focused studies to clearly
specify the topic to be addressed. They all acknowledge both clinical (e.g., specific disease or
condition such as pregnancy or asthma) and non-clinical (e.g., availability, timeliness and
accessibility of care) health service delivery issues as appropriate topics for health care QI
initiatives.
Means of Identifying Topics: Continuous data collection and analysis is stressed throughout all
documents as a means of identifying appropriate topics. It is stated that topics should be
systematically selected and prioritized to achieve the greatest practical benefit for enrollees. A
minimal set of criteria is suggested for selecting appropriate topics, including: the prevalence of a
condition among, or need for a service by, the MCO’s/PIHP’s enrollees; enrollee demographic
characteristics and health risks; the likelihood that the study topic will result in improved health
status among the enrollees; and the interest of consumers in the aspect of care or services to be
addressed.

Scope of study topics: The QISMC standards specify that performance improvement projects
should address the breadth of the MCO’s or PIHP’s services, such as whether they include
physical health and mental/substance abuse health services. They also identify specific clinical
and non-clinical focus areas that are applicable to all enrollees. The QISMC standards also
specify that the scope of the health plans’ improvement efforts are to include all enrollees.
Stating the Study Question(s): The HCQIS Guide discusses the importance of "stating the study
question" after a study topic is identified. It asserts that stating a study question helps a project
team avoid becoming sidetracked by data that are not central to the issue under study. For example,
once a focused study has identified childhood immunizations as a study topic, it might specify a
number of different study questions:
- Have all children received all scheduled doses of one vaccine in particular?

- Have all children of all ages received all recommended vaccines appropriate for their
  age?

- Have all children of a particular age (e.g., at the age of one, two, six or other years)
  received all age-appropriate immunizations?

Alternatively, more detailed information may be desired, so it may be necessary to specify
the study questions as:

- What proportion of Medicaid enrollees who have reached two years of age have received:
  - All four recommended doses of DPT vaccine?
  - All three recommended doses of the Polio vaccine?
  - One recommended dose of the MMR vaccine?
  - At least one dose of Hib in the second year of life?

Further specificity of additional study questions may be desired to provide information for
QI efforts, such as:

- In what percent of cases of lack of immunization were children not immunized for one of
  the following reasons?
  - Refusal by a parent or guardian.
  - Medical contraindications.
  - Member non-compliance with the recommended immunization regimen.

Incorporating the process of documenting a study question(s) into the project design can help
ensure a systematic method of identifying appropriate indicators and data to be collected. In this
protocol we have included "defining the study question(s)" as a key step in designing and
implementing a focused study.
Use of Quality Indicators: All reference documents address the need to specify well-defined
indicators to be monitored and evaluated throughout the study. It is emphasized that quality
indicators do not always need to be outcome measures. Process measures are also appropriate,
especially when there is strong clinical evidence that the process being measured has a
meaningful association with outcomes. There are various ways to obtain appropriate indicators,
such as using those dictated from outside sources (such as the State or CMS) or by an MCO/PIHP
developing them internally on the basis of clinical literature or findings of expert panels.
In addition to these features found uniformly in all reference documents, other significant aspects
of focused studies were identified by one or more of the reference documents. These include:
Significant improvement: NCQA’s document, “Health Care Quality Improvement Studies in
Managed Care Settings”, states that, “When presenting statistical results of any study, it is
important to fully disclose. . .the statistical significance of the estimates produced, as well as the
statistical significance of any apparent differences between units of comparison.” Building on
this, CMS’s QISMC document called for specific amounts of measurable improvement to be
demonstrated by the health plan. QISMC defines “demonstrable” improvement as either: 1)
benchmarks established by CMS (for national Medicare projects) or State agencies (for statewide
Medicaid QI projects) or by the health plans for individual (organizational) projects; or 2) a 10%
reduction in adverse outcomes. This protocol does not call for a specific level of improvement to
be achieved but, consistent with the NCQA document, calls for disclosure and review of the
statistical significance of any measured change in performance in a focused study.
Phase-in or time frame requirements: QISMC delineates specific time frame requirements for
MCOs/PIHPs to reach certain phases in a QI cycle. For example:
- By the end of the first year, an MCO/PIHP should have initiated at least two quality
  improvement projects addressing two different focus areas;

- By the end of the second review year, at least two additional projects addressing two
  different focus areas should be initiated; and

- By the end of the first year after the 2-year phase-in period, and each subsequent year, at
  least two projects are to achieve demonstrable improvement in two of the focus areas.

Evaluation Tools: NCQA’s HCQIS guidebook includes study planning and summary worksheets
to be used in the evaluation of an MCO’s/PIHP’s focused study. This feature provides a helpful
method for recording data during the evaluation process and promotes the collection of consistent
information by all evaluators. This protocol contains an example of a worksheet (Attachment A)
that can be used by EQROs when conducting focused studies.

Scoring system: NCQA accreditation provides a numerical scoring system to measure
performance against standards and to promote consistency in the process used to evaluate MCOs.
Although the scores do not dictate the final decision with respect to compliance with standards,
they do serve as a guide for NCQA evaluators to recommend non-compliance. This scoring
system also includes an opportunity for the MCO/PIHP to comment on the reviewer’s scores
before a final decision is rendered. It also promotes continuous improvement practices by
securing “customer” input into a final product (i.e., evaluation decisions). This protocol does not
include a scoring system.
END OF DOCUMENT
