CMS-R-305: Final Protocol, Validating Performance Improvement Projects

External Quality Review of Medicaid MCOs and Supporting Regulations in 42 CFR 438.360, 438.362, and 438.364

OMB Approval No. 0938-0786

VALIDATING PERFORMANCE IMPROVEMENT PROJECTS

A Protocol for Use in Conducting Medicaid External Quality Review Activities

Department of Health and Human Services
Centers for Medicare & Medicaid Services

Final Protocol
Version 1.0
May 1, 2002
According to the Paperwork Reduction Act of 1995, no persons are required to respond to a collection of information unless it displays a valid
OMB control number. The valid OMB control number for this information collection is 0938-0786. The time required to complete this
information collection is estimated to average 1,591 hours per response for all activities, including the time to review instructions, search existing
data resources, gather the data needed, and complete and review the information collection. If you have comments concerning the accuracy of the
time estimate(s) or suggestions for improving this form, please write to: CMS, 7500 Security Boulevard, Attn: PRA Reports Clearance Officer,
Baltimore, Maryland 21244-1850.
Form CMS-R-305

VALIDATING PERFORMANCE IMPROVEMENT PROJECTS
I. PURPOSE OF THE PROTOCOL

The purpose of health care quality performance improvement projects (PIPs) is to assess and
improve processes, and thereby outcomes, of care. In order for such projects to achieve real
improvements in care, and for interested parties to have confidence in the reported
improvements, PIPs must be designed, conducted and reported in a methodologically sound
manner. This protocol specifies procedures for external quality review organizations (EQROs) 1
to use in evaluating the soundness and results of PIPs implemented by Medicaid Managed Care
Organizations (MCOs) and Prepaid Inpatient Health Plans (PIHPs).
II. OVERVIEW OF THE PROTOCOL

This protocol has been derived from existing public and private sector tools and approaches to
reviewing PIPs (See Attachment A). Activities that all public and private sector tools have in
common were included in this protocol. In addition, activities found in fewer documents were
included where the activity was felt to be important to promoting stronger PIPs, but would not
result in an inappropriate burden on the MCO, PIHP or the EQRO. In particular, the protocol
relies heavily on a guidebook produced by the National Committee for Quality Assurance
(NCQA) under a contract from the Centers for Medicare & Medicaid Services (CMS), formerly
the Health Care Financing Administration (HCFA), “Health Care Quality Improvement Studies
in Managed Care Settings: A Guide for State Medicaid Agencies.” This guidebook identifies key
concepts related to the conduct of quality improvement (QI) studies and details widely accepted
principles in designing, implementing and assessing QI studies.
The protocol describes three activities that are to be undertaken in validating PIPs: 1) assessing
the MCO’s/PIHP’s methodology for conducting the PIP, 2) verifying actual PIP study findings,
and 3) evaluating overall validity and reliability of study results. Activity One, Assessing the
MCO’s /PIHP’s Methodology for Conducting the PIP, involves ten steps:
1. Review the selected study topic(s)
2. Review the study question(s)
3. Review selected study indicator(s)
4. Review the identified study population
5. Review sampling methods (if sampling was used)
6. Review the MCO’s/PIHP’s data collection procedures
7. Assess the MCO’s/PIHP’s improvement strategies

1 It is recognized that a State Medicaid agency may choose an organization other than an EQRO (as defined in Federal regulation) to validate MCO or PIHP PIPs. However, for convenience, in this protocol we use the term “external quality review organization (EQRO)” to refer to any organization that validates performance improvement projects undertaken by MCOs or PIHPs.


8. Review data analysis and interpretation of study results
9. Assess the likelihood that reported improvement is “real” improvement
10. Assess whether the MCO/PIHP has sustained its documented improvement

Activity Two, Verifying PIP Study Findings, is a resource intensive activity that may not always
be feasible. It is included here as an optional component of the protocol. At the conclusion of
Activity One, and as appropriate for Activity Two, Activity Three describes how the EQRO will
need to consider all validation findings and render a judgement about the extent to which the
State should accept the findings of the MCO’s/PIHP’s PIP as valid and reliable.
III. PROTOCOL ACTIVITIES

ACTIVITY 1: ASSESS THE STUDY METHODOLOGY

Assessing an MCO’s or PIHP’s methodology for conducting a PIP requires the EQRO to have
information on the design and implementation of the PIP. This information could be obtained
from a hardcopy or electronic written description of the PIP design and implementation that is
transmitted by the MCO/PIHP to the State and/or the EQRO. It also could be obtained through
an interview of MCO/PIHP personnel responsible for the design and conduct of the PIP, or a
combination of a written description plus interviews. Information obtained through hardcopy or
electronic submission, or interview, may also need to be supplemented by supporting
documentation obtained from the MCO/PIHP on an ad hoc basis. Whatever source(s) of
information are used, the EQRO should follow the steps below to assess the methodology of the
MCO’s/PIHP’s PIP(s). Answers to the questions in each of the steps should be recorded on a
standardized PIP Validation Worksheet such as that located in Attachment B.
It is expected that each State will prescribe how long an MCO/PIHP may take to complete the
PIPs required by the State. In implementing this protocol, the State will need to inform the
EQRO:
1) whether the EQRO is to annually validate all projects a) initiated, b) underway but not
completed, and c) completed during the reporting year, or d) some combination of
these three stages of PIPs.
2) whether the EQRO is to review all projects in the categories above, or a subset of the
PIPs in the particular categories. If the EQRO is to review only a subset, the EQRO
will need to ascertain with the State how that subset will be chosen.
Step 1: Review the Selected Study Topic(s)

Rationale. All PIPs should target improvement in relevant areas of clinical care and non-clinical
services. Topics selected for study by Medicaid MCOs and PIHPs must reflect the
MCO’s/PIHP’s Medicaid enrollment in terms of demographic characteristics, prevalence of
disease and the potential consequences (risks) of the disease. Note that sometimes the State
Medicaid agency may have selected the MCO’s/PIHP’s study topic.
Potential Sources of Supporting Information:

- Data in the MCO’s/PIHP’s Medicaid enrollment/membership files on enrollee characteristics relevant to health risks or utilization of clinical and non-clinical services, such as age, sex, race/ethnicity/language and disability or functional status.
- Utilization, diagnostic, and outcome information on Medicaid outpatient and inpatient encounters, services, procedures, medications and devices, admitting and encounter diagnoses, adverse incidents (such as deaths, avoidable admissions, or readmissions); and patterns of referrals or authorization requests obtained from MCO/PIHP encounter, claims, or other administrative data.
- Data on the MCO’s/PIHP’s performance as reflected in standardized measures, including, when possible, local, State, or national information on performance of comparable organizations.
- Data from other outside organizations, such as Medicaid or Medicare fee-for-service data, data from other health plans, and local or national public health reports on conditions or risks for specified populations.
- Data from surveys, grievance and appeals processes, and disenrollments and requests to change providers.
- Data on appointments and provider networks (e.g., access, open and closed panels, and provider language spoken).

Methods of Evaluation:
Review the MCO’s/PIHP’s project documentation and, as needed, the above data sources to
assess the extent to which the MCO/PIHP selected an appropriate study topic. In general, a clinical
or non-clinical issue selected for study should affect a significant portion of the enrollees (or a
specified sub-portion of enrollees) and have a potentially significant impact on enrollee health,
functional status or satisfaction. The topics should reflect high-volume or high-risk conditions of
the population served. High-risk conditions may occur for infrequent conditions or services, such
as when a pattern of unexpected adverse outcomes is identified through data analysis. High risk
also exists for populations with special health care needs, such as children in foster care, adults
with disabilities and the homeless. Although these individuals may be small in number, their
special health care needs place them at high risk.
Consider the answers to the following questions to ascertain the extent to which the
MCO’s/PIHP’s PIP reflected an appropriate study topic.
1. Was the topic selected by the MCO/PIHP either specified by the State Medicaid agency or identified through MCO/PIHP data collection and analysis of comprehensive aspects of enrollee needs, care, and services?


Review documentation supplied by the MCO/PIHP explaining how the study topic was
chosen. Determine the extent to which the MCO/PIHP considered enrollee demographic
characteristics and health risks, and the prevalence of the chosen topic among, or the need
for a specific service by, enrollees. Determine the extent to which the explanation is
consistent with demographic and epidemiologic information on the MCO’s/PIHP’s
enrollees or consistent with information on similar groups in the MCO’s/PIHP’s
geographic service area.
A project topic also may be suggested by patterns of inappropriate utilization. However,
the project must have been clearly focused on identifying and correcting deficiencies in
care or services that might have led to this pattern, such as inadequate access to primary
care, rather than on utilization or cost issues alone. The goal of the project should be to
improve processes and outcomes of health care. Therefore, it is acceptable for a project
to focus on patterns of overutilization that present a clear threat to health or functional
status.
Topics to be studied may also have been selected on the basis of Medicaid enrollee input.
To the extent feasible, MCOs and PIHPs are encouraged to obtain input from enrollees
who are users of, or concerned with, specific focus areas. For example, priorities in the
area of mental health or substance abuse services could be developed in consultation with
users of these services or their families.
2. Did the MCO’s/PIHP’s PIPs, over time, address a broad spectrum of key aspects of enrollee care and services? Did the MCO/PIHP select topics in clinical and non-clinical focus areas?
It is important that, when multiple years of PIP projects are viewed for an individual
MCO or PIHP, the MCO’s/PIHP’s PIP topics address the full spectrum of clinical and
nonclinical areas associated with the MCO/PIHP, and also do not consistently eliminate
any particular subset of Medicaid enrollees; e.g., children with special health care needs.
Clinical focus areas should include, over time, prevention and care of acute and chronic
conditions, high-volume services, and high-risk services. High-volume services, as
opposed to a clinical condition, can include such services as labor and delivery, a
frequently performed surgical procedure, or different surgical or invasive procedures.
The MCO/PIHP may also target high-risk procedures even if they are low in frequency,
such as care received from specialized centers inside or outside of the organization’s
network (e.g., burn centers, transplant centers, cardiac surgery centers). It could also
assess and improve the way in which it detects which of its members have special health
care needs and assess these members’ satisfaction with the care received from the
organization.
Finally, PIPs can address non-clinical areas. For example, PIPs addressing continuity or
coordination of care could address the manner in which care is provided when a patient
receives care from multiple providers and across multiple episodes of care. Such studies
may be disease or condition-specific or may target continuity and coordination across
multiple conditions. Projects in other non-clinical areas could also address, over time,
appeals, grievance and complaints; or access to and availability of services. Access and
availability PIPs could focus on assessing and improving the accessibility of specific
services or services for specific conditions, including reducing disparities between
services to minorities and service to other members. Projects related to the grievance and
coverage determination process might aim either to improve the processes themselves or
to address underlying issues in care or services identified through analysis of
grievances or appeals.
Step 2: Review the Study Question(s)

Rationale. It is important for the MCO/PIHP to clearly state, in writing, the question(s) the study
is designed to answer. Stating the question(s) helps maintain the focus of the PIP and sets the
framework for data collection, analysis, and interpretation.
Potential Sources of Supporting Information:

- QI study documentation
- Relevant clinical literature

Methods of Evaluation:
Review the MCO’s/PIHP’s project documentation to determine whether a study question(s) was
clearly defined. The problem to be studied must be stated as clear, simple, answerable
question(s). An example of a vague study question is:
“Does the MCO/PIHP adequately address psychological problems in patients recovering
from myocardial infarction?”
In this example, it is not clear how “adequately address” will be assessed. Furthermore,
“psychological problems” is a very broad term. A clearer study question could be:
“Does doing ‘x’ reduce the proportion of patients with myocardial infarction who
develop severe emotional depression during hospitalization?”
Step 3: Review the Selected Study Indicator(s)

Rationale. A study indicator is a quantitative or qualitative characteristic (variable) reflecting a
discrete event (e.g., an older adult has/has not received a flu shot in the last 12 months), or a
status (e.g., an enrollee’s blood pressure is/is not below a specified level) that is to be measured.


Each project should have one or more quality indicators for use in tracking performance and
improvement over time. All indicators must be objective, clearly and unambiguously defined,
and based on current clinical knowledge or health services research. In addition, all indicators
must be capable of objectively measuring either enrollee outcomes such as health or functional
status, enrollee satisfaction, or valid proxies of these outcomes.
Indicators can be few and simple, many and complex, or any combination thereof, depending on
the study question(s), the complexity of existing practice guidelines for a clinical condition, and
the availability of data and resources to gather the data.
Indicator criteria are the set of rules by which the data collector or reviewer determines whether
an indicator has been met. Pilot or field testing is helpful to the development of effective
indicator criteria. Such testing allows the opportunity to add criteria that might not have been
anticipated in the design phase. In addition, criteria are often refined over time, based on results
of previous studies. However, if criteria are changed significantly, the method for calculating an
indicator will not be consistent and performance on indicators will not be comparable over time.
It is important, therefore, for the indicator criteria to be developed as fully as possible during the
design and field testing of data collection instruments.
Potential Sources of Supporting Information:

- Clinical and non-clinical practice guidelines
- Administrative data
- Medical records

Methods of Evaluation:
Review the MCO’s/PIHP’s project documentation to assess whether appropriate study indicators
were used. Use the following questions to help assess study indicators.
1. Did the study use objective, clearly and unambiguously defined, measurable indicators?
When indicators exist that are generally used within the public health community or the
managed care industry (such as NCQA’s Health Plan Employer Data and Information Set
(HEDIS) or the Foundation for Accountability’s (FACCT) measures) and these indicators
are applicable to the topic, use of those indicators is preferred. However, indicators may
be developed by the MCO/PIHP on the basis of current clinical practice guidelines or
health services research. When an MCO/PIHP develops its own indicators, it must be able
to document the basis on which it adopted an indicator.
Consider the following list of key characteristics to determine if meaningful indicators
were developed.


- Was/were the indicator(s) related to identified health care guidelines pertinent to the study question?

- Was this an important aspect of care to monitor that made a difference to the MCO’s/PIHP’s beneficiaries?

- Were the data available either through administrative data, medical records or other readily accessible sources?

- Did limitations on the ability to collect the data skew the results?

- Did these indicators require explicit or implicit criteria? The MCO/PIHP must consider the specificity of the criteria used to determine compliance with an indicator. The greater the number of people involved in data collection and analysis, the greater the need for more explicit, or precise, data collection and indicator criteria to obtain inter-reviewer reliability. The more specific the criteria, the easier the data collection process will be, because staff will not need extensive training. An example of an explicit criterion for an immunization study is:

  - Documentation of refusal by a parent to have a child immunized, through nurses’ notes and/or a signed refusal by the parent in the medical record.

  Implicit criteria may require a high degree of professional clinical judgement, and therefore may be time-consuming and expensive. An example of an implicit criterion for an immunization study is:

  - Medical contraindications for receiving childhood immunizations.

Specific indicators do not always need to be established at the outset of a PIP. There may
be instances in which a project would begin with more general collection and analysis of
baseline data on a topic, and then narrow its focus to more specific indicators for
measurement, intervention and reevaluation. The success of the project is assessed in
terms of the indicators ultimately selected.
2. Did the MCO’s/PIHP’s indicators measure changes in health status, functional status, or enrollee satisfaction, or valid proxies of these outcomes?
The objective of a PIP should be to improve processes and outcomes. For the purpose of
this protocol “outcomes” are defined as measures of patient health, functional status or
satisfaction following the receipt of care or services. Indicators selected for a PIP in a
clinical focus area ideally should include at least some measure of change in health status
or functional status or process of care proxies for these outcomes. Indicators may also
include measures of satisfaction.


It is recognized, however, that relatively few standardized performance measures actually
address outcomes. Even when outcome measures are available, their utility as quality
indicators may be limited because outcomes can be significantly influenced by factors
outside of the organization’s control, such as poverty, genetics, and environment.
Because of this, quality indicators do not always need to be outcome measures. Process
measures are acceptable as long as it can be shown that there is strong clinical evidence
that the process being measured is meaningfully associated with outcomes. To the extent
possible, this determination should be based on published guidelines that support the
association and that cite evidence from randomized clinical trials, case control studies, or
cohort studies. Although published evidence is generally required, there may be certain
areas of practice for which empirical evidence of process/outcome linkage is limited. At
a minimum, it should be demonstrated that there is a consensus among relevant
practitioners with expertise in the defined area as to the importance of a given process.
While enrollee satisfaction is an important outcome of care in clinical areas,
improvement in satisfaction should not be the sole demonstrable outcome of a project in
any of these areas. Some improvement in health or functional status should also be
measured. For projects in non-clinical areas, use of health or functional status indicators
also is generally preferred, particularly for projects addressing access to and availability
of health care services. However, there may be some non-clinical projects for which
enrollee satisfaction indicators alone are sufficient.
Step 4: Review the Identified Study Population

Rationale. Once a topic has been selected, measurement and improvement efforts must be
system-wide; i.e., each project must represent the entire Medicaid enrolled population to which
the PIP study indicators apply. Once that population is identified, the MCO/PIHP must decide
whether to review data for that entire population or use a sample of that population. Sampling is
acceptable as long as the samples are representative of the identified population (see Step 5).
Potential Sources of Supporting Information:

- Data on the Medicaid enrolled population that enumerate the numbers of enrollees to which the study topic and indicators apply. This would include demographic information from MCO/PIHP enrollment files and MCO/PIHP utilization, diagnostic and outcome information, such as services, procedures, admitting and encounter diagnoses, adverse incidents (such as deaths, avoidable admissions, or readmissions), and patterns of referrals or authorization requests.
- Other databases, as needed; e.g., pharmacy claims data to identify patients taking a specific medication(s) during a specific enrollment period.

Methods of Evaluation:


Review the study description and methodology to assess whether the study clearly identified the
study population. Consider the answers to the following questions to assess the extent to which
the MCO/PIHP clearly identified the study population.
1. How did the MCO/PIHP define the study’s “at risk” population?
- Did the MCO/PIHP clearly define all individuals to whom the identified study question(s) and indicators are relevant?

- Did the MCO/PIHP include the entire study population or use a sample in the study? The organization’s decision may have been determined by the resources available to analyze the data. If the organization is capable of collecting and analyzing data through an automated data system, it might be possible to study the whole population because many of the data collection and analysis steps can be automated. If the data must be collected manually, sampling may be more realistic.

- Did the definition of the study population include any requirements for the length of the study population members’ enrollment in the MCO or PIHP? The required length of time will vary depending on the study topic and study indicators.

- If the MCO/PIHP studied the entire population, did its data collection approach truly capture all enrollees to which the study question applied?

If the MCO/PIHP used a sample, go to Step 5. If the MCO/PIHP studied the entire population,
skip Step 5 and go to Step 6.
Step 5: Review Sampling Methods

Rationale. If the MCO/PIHP used a sample to select members of the study, proper sampling
techniques are necessary to provide valid and reliable (and therefore generalizable) information
on the quality of care provided. When conducting a study designed to estimate the rates at which
certain events occur, the sample size has a large impact on the level of statistical confidence in
the study estimates. Statistical confidence is a numerical statement of the probable degree of
certainty or accuracy of an estimate. In some situations, it expresses the probability that a
difference could be due to chance alone. In other applications, it expresses the probability of the
accuracy of the estimate. For example, a study may report that a disease is estimated to be
present in 35% of the population. This estimate might have a 95% level of confidence, plus or
minus five percentage points. This means that we are 95% sure that between 30 and 40 percent of
the population has the disease.
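To make the arithmetic in this example concrete, the interval can be computed with the usual normal approximation for a proportion. The following Python sketch is illustrative only; the sample size of 350 is a hypothetical value chosen to reproduce the five-percentage-point margin in the example above:

```python
import math

def proportion_confidence_interval(p_hat, n, z=1.96):
    """Normal-approximation confidence interval for a proportion;
    z = 1.96 corresponds to a 95% confidence level."""
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - margin, p_hat + margin

# An estimated prevalence of 35% from roughly 350 enrollees yields a
# margin of error of about five percentage points (roughly 30% to 40%).
low, high = proportion_confidence_interval(0.35, 350)
```

Note that the margin shrinks as the sample size grows, which is why sample size drives the level of statistical confidence in the study estimates.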
The true prevalence or incidence rate for the event in the population may not be known the first time a topic is studied. In such situations, the most prudent course of action is to assume that a maximum sample size is needed to establish a statistically valid baseline for the project indicators.
Potential Sources of Supporting Information:

- Data on enrollee characteristics relevant to health risks or utilization of clinical and non-clinical services, including age, sex, race/ethnicity/language and functional status.
- Utilization, diagnostic and outcome information, such as services, procedures, admitting and encounter diagnoses, adverse incidents (such as deaths, avoidable admissions, or readmissions), and patterns of referrals or authorization requests.
- Other information as needed, such as pharmacy claims data to identify patients taking a defined number of a specific medication(s) during a specific enrollment period.

Methods of Evaluation:
Review the study description and methodology. Consider the answers to the following questions
in evaluating the soundness of the MCO’s/PIHP’s approach to sampling.
1. Did the methods used by the MCO/PIHP to calculate the needed sample size consider and specify the true (or estimated) frequency of occurrence of the event, the confidence interval to be used, and the acceptable margin of error?
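As an illustration (not part of the protocol), the standard sample-size formula for estimating a proportion combines exactly these three inputs. Setting the assumed frequency to 50% gives the maximum, most conservative sample size mentioned in the rationale above for topics whose true rate is unknown:

```python
import math

def required_sample_size(p, margin, z=1.96):
    """Sample size needed to estimate a proportion p to within +/- margin
    at the confidence level implied by z (1.96 for 95% confidence)."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

n_estimated = required_sample_size(0.35, 0.05)  # event frequency estimated at 35%
n_maximum = required_sample_size(0.50, 0.05)    # frequency unknown: assume 50%
```

Because p(1 - p) peaks at p = 0.5, the "frequency unknown" case always yields the largest required sample.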

2. Did the MCO/PIHP employ valid sampling techniques?
There are two basic categories of statistical sampling methods: probability sampling and non-probability sampling.

Probability (or random) sampling methods leave selection of population units entirely to chance, and not to preference on the part of the individuals conducting or otherwise participating in the study; selection bias is thereby removed. There are several types of probability (or random) sampling that can be used by the MCO/PIHP:
- In simple random sampling, all members of the study population have an equal chance of being selected for the sample. Population members are generally numbered, and random numbers generated by computer are used to select units from the population.

- Systematic random sampling: the basic principle is to select every nth unit in a list. This can be used when a sampling frame is organized in a way that does not bias the sample. Steps to organize and select a systematic sample are:

  1) Construct a comprehensive sampling frame (e.g., a list of all beneficiaries).
  2) Divide the size of the sampling frame by the required sample size to produce a sampling interval, or skip interval (e.g., if there are 250 beneficiaries and a sample of 25 is needed, then 250/25 = 10).
  3) From a random number table, select a random number between 1 and 10.
  4) Count down the list to the nth name (i.e., the number identified in step 3) and select it.
  5) Skip down 10 names on the list and select a second name. Repeat the process as many times as needed until the required sample size has been reached.

- Stratified random sampling is used when the target population consists of non-overlapping sub-groups, or strata. Typically this is used when the population is homogeneous (similar) within a stratum and heterogeneous (different) between strata. Stratified random sampling requires more information about the population and also requires a larger overall sample size than simple random sampling. Once strata are identified and selected, sampling must be conducted within each stratum using probability (or random) sampling.

- Cluster sampling is used when a comprehensive sampling frame is NOT available. Units in the population are gathered or classified into groups, similar to stratified sampling. Unlike the stratified sampling method, however, the groups must be heterogeneous within themselves with respect to the characteristic being measured. This method requires prior knowledge about the population. Once clusters are identified, a random sample of clusters is selected.
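The probability sampling methods above can be sketched in code. The following Python outline is illustrative only; the 250-member frame and the age-band strata are hypothetical examples, not data from the protocol:

```python
import random
from collections import defaultdict

def systematic_sample(frame, sample_size, seed=None):
    """Select every nth unit from a comprehensive sampling frame,
    starting at a random position within the first skip interval."""
    interval = len(frame) // sample_size           # e.g., 250 / 25 = 10
    rng = random.Random(seed)
    start = rng.randint(1, interval)               # random start between 1 and interval
    return [frame[i] for i in range(start - 1, len(frame), interval)][:sample_size]

def stratified_sample(population, stratum_of, fraction, seed=None):
    """Draw a simple random sample of the given fraction within each
    stratum; stratum_of maps each population unit to its stratum label."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for unit in population:
        strata[stratum_of(unit)].append(unit)
    sample = []
    for members in strata.values():
        sample.extend(rng.sample(members, max(1, round(fraction * len(members)))))
    return sample

# Hypothetical frame of 250 beneficiaries; a systematic sample of 25.
beneficiaries = [f"member-{i:03d}" for i in range(1, 251)]
every_tenth = systematic_sample(beneficiaries, 25, seed=1)

# Hypothetical enrollment file with two strata; a 10% sample per stratum.
enrollees = [(f"id-{i}", "child" if i % 3 else "adult") for i in range(300)]
by_stratum = stratified_sample(enrollees, lambda e: e[1], 0.10, seed=1)
```

Seeding the random number generator here only makes the sketch reproducible; a real study would draw the random start or sample afresh.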

Non-probability sampling methods are based on choice rather than chance; therefore, some bias can be expected. There are several types of non-probability sampling that can be used:

- Judgment sampling involves constructing a sample by including units that are thought (or judged) to be representative of the population. By doing so, the sample is constructed to be a mini-population.

- Convenience sampling uses units that are readily or conveniently available. For example, if the objective were to obtain beneficiary opinions regarding a group practice, patients in the office on any given day or during a specific month could be interviewed.


- Quota sampling ensures that units in the sample appear in the same proportion as in the population. For instance, if a certain target population consisted of 55% female and 45% male, the quota sample would require a similar female/male distribution.

Step 6: Review the MCO’s/PIHP’s Data Collection Procedures

Rationale. Procedures used by the MCO/PIHP to collect data for its PIP must ensure that the data
collected on the PIP indicators are valid and reliable. Validity is an indication of the accuracy of
the information obtained. Reliability is an indication of the repeatability or reproducibility of a
measurement. The MCO or PIHP should have employed a data collection plan that included:
- clear identification of the data to be collected,
- identification of the data sources and how and when the baseline and repeat indicator data will be collected,
- specification of who will collect the data, and
- identification of instruments used to collect the data.

When data were collected from automated data systems, specifications for automated retrieval
of the data should have been developed. When data were obtained from visual
inspection of medical records or other primary source documents, several steps should have been
taken to ensure the data were consistently extracted and recorded:
1. The key to successful manual data collection is the selection of the data collection staff. Appropriately qualified personnel, with conceptual and organizational skills, should have been used to abstract the data; the specific skills required will vary depending on the nature of the data collected and the degree of professional judgment required. For example, if data collection involved searching throughout the medical record to find and abstract information, or judging whether clinical criteria were met, experienced clinical staff, such as registered nurses, should have collected the data. However, if the abstraction involved verifying the presence of a diagnostic test report, trained medical assistants or medical records clerks may have been used.

2. Clear guidelines for obtaining and recording data should have been established, especially if multiple reviewers were used to perform this activity. The MCO/PIHP should have determined the necessary qualifications of the data collection staff before finalizing the data collection instrument. An abstractor needs fewer clinical skills when the data elements within the data source are clearly defined. A glossary of terms for each project should have been part of the training of abstractors to ensure consistent interpretation among the project staff.


3. The number of data collection staff used for a given project affects the reliability of the
data. A smaller number of staff promotes inter-rater reliability; however, it may also
increase the amount of time it takes to complete this task. Intra-rater reliability (i.e.,
reproducibility of judgements by the same abstractor at a different time) should have also
been considered.
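As a rough illustration of the inter-rater reliability check described above (not something the protocol itself prescribes), two abstractors can code the same validation subset of records and their percent agreement can be computed. The yes/no abstraction values below are invented.

```python
# Sketch: a simple inter-rater reliability check for two abstractors
# who reviewed the same validation subset of records.
# The "Y"/"N" abstraction results are hypothetical.

def percent_agreement(rater_a, rater_b):
    """Fraction of records on which the two abstractors agree."""
    if len(rater_a) != len(rater_b):
        raise ValueError("raters must code the same records")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

rater_a = ["Y", "Y", "N", "Y", "N", "Y", "Y", "N"]
rater_b = ["Y", "N", "N", "Y", "N", "Y", "Y", "N"]

print(round(percent_agreement(rater_a, rater_b), 3))  # 7 of 8 agree -> 0.875
```

Percent agreement is only a first-pass check; a chance-corrected statistic such as Kappa (discussed under verification of study findings) gives a stronger measure of inter-rater reliability.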

Potential Sources of Supporting Information:
-  List of sources of data used in the study
-  If medical record review, or other manual data collection was used to produce study data:
   - data recording forms
   - instructions to data collectors
-  If automated data collection was used, an algorithm showing the steps in the production of quality indicators and other relevant data collection

Methods of Evaluation:
Evaluation of the MCO’s/PIHP’s data collection procedures can be determined through two
processes: assessing the study’s approach to data collection (discussed in this step) and
conducting a verification sample of the study’s findings (discussed in ACTIVITY II). Consider
the answers to the following questions in determining the soundness of data collection
procedures.
1. Did the study design clearly specify the data to be collected?
Accurate measurement depends on clearly defined data elements. Data elements must be
carefully specified with unambiguous definitions. When descriptive terms are used (e.g.,
high, low, normal), numerical definitions are established for each term. The units of
measure must also be specified (e.g., pounds, kilograms, etc.).

2. Did the study design clearly specify the sources of data?
Data sources vary considerably and depend upon the selected topic and indicators.
Similarly, the topic and indicators will reflect not just the clinical and research
considerations, but also the available MCO/PIHP data sources. Sources can include:
beneficiary medical records, tracking logs, encounter and claims systems, provider
interviews, beneficiary interviews and surveys.

3. Did the study design specify a systematic method of collecting valid and reliable data that
represents the entire population to which the study’s indicators apply?
The MCO’s/PIHP’s PIP study may use automated or manual data collection methods
depending on the resources available. If an automated data collection system was
utilized, the degree of completeness of the data in the automated system is always a
concern. For example, for:
-  Inpatient data: Did the data system capture all inpatient admissions?
-  Primary care data: Did primary care providers submit encounter data for all encounters?
-  Specialty care data: Did specialty care providers submit encounter data for all encounters?
-  Ancillary services data: Did ancillary service providers submit encounter or utilization data for all services provided?

The study’s design and methodology should include an estimation of the degree of completeness of the automated data used for the PIP study indicators.²
Manual data collection may be the only feasible option for many MCOs/PIHPs and for
many topics selected. The beneficiary medical record is the most frequently used data
source. Other manual systems which might contain sources of information include
clinical tracking logs, registries, complaint logs, and manual claims. When evaluating
manual data collection, the following issues should be considered:
-  Did the MCO/PIHP use qualified staff to collect the data?
   Clinical knowledge and skills should have been addressed, including good conceptual skills, organizational skills, thoroughness, and strong documentation skills.

-  Did the MCO/PIHP use instruments for data collection that provide for reliable and accurate data collection over the time periods studied?
If manual data collection was performed, the data collection instrument(s)
should be clear and promote inter-rater reliability. An important part of
designing data collection instruments is developing instructions or guidelines
for data collection staff. Instrument design is particularly important when
staff not involved in the study design perform data collection. Instructions
should be clearly and succinctly written and should provide an overview of
the study, specific instructions on how to complete each section of the form
and general guidance on how to handle situations not covered by the
instructions.

² The accuracy of automated data is also a concern, but validation of this is beyond the scope of this protocol.

-  When assessing non-clinical services such as health care access, cultural competency, or care coordination, a study may utilize information on how the MCO/PIHP is structured and operates.

4. Did the study design prospectively specify a data analysis plan which reflected the following considerations?
-  Whether qualitative or quantitative data, or both, were to be collected.
Qualitative data describes characteristics or attributes by which persons or
things can be classified; for example, sex, race, poverty level, or the presence
or absence of a specific disease. Calculation of proportions and calculation of
rates are the two most common qualitative measures.
Quantitative data are concerned with numerical variables such as height,
weight and blood levels. The methods by which the data are analyzed and
presented will vary by type of data. Quantitative data require, at a minimum,
simple descriptive statistics such as measures of central tendency (i.e., mean,
median, or mode) and measures of variability (i.e., range or standard deviation).

-  Whether the data were to be collected on the entire population or a sample.

-  Whether the measurements obtained from the data collection activity were to
be compared to the results of previous or similar studies. If so, the data
analysis plan should have considered evaluating the comparability of the
studies and identified the appropriate statistical tests to be used to compare
studies.

-  Whether the PIP was to be compared to the performance of an individual
MCO/PIHP, a number of MCOs/PIHPs, or different provider sites.
Comparing the performance of multiple entities involves greater statistical
design and analytical considerations than those required for a study of a single
entity, such as an MCO/PIHP.
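As a minimal sketch of the descriptive statistics named above (measures of central tendency and variability), using Python's standard library; the indicator values are invented for illustration.

```python
import statistics

# Hypothetical quantitative indicator data (e.g., HbA1c values in percent)
# collected for a diabetes-care PIP; the values are invented.
values = [6.8, 7.1, 7.4, 8.0, 7.1, 9.2, 6.5, 7.8]

mean = statistics.mean(values)           # measure of central tendency
median = statistics.median(values)       # central tendency, robust to outliers
mode = statistics.mode(values)           # most frequently occurring value
value_range = max(values) - min(values)  # simplest measure of variability
std_dev = statistics.stdev(values)       # sample standard deviation

print(f"mean={mean:.2f} median={median:.2f} mode={mode} "
      f"range={value_range:.2f} stdev={std_dev:.2f}")
```

These are the minimum statistics the data analysis plan should have specified for quantitative data; categorical (qualitative) data would instead be summarized as proportions or rates.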


Step 7: Assess the MCO’s/PIHP’s Improvement Strategies.

Rationale. Real, sustained improvements in care result from a continuous cycle of measuring
and analyzing performance, and developing and implementing system-wide improvements in
care. Actual improvements in care depend far more on thorough analysis and implementation of
appropriate solutions than on any other steps in the process.
An improvement strategy is defined as an intervention designed to change behavior at an
institutional, practitioner or beneficiary level. The effectiveness of the intervention activity or
activities can be determined by measuring the MCO’s/PIHP’s change in performance, according
to predefined quality indicators. Interventions are key to an improvement project’s ability to
bring about improved health care outcomes. Appropriate interventions must be identified and/or
developed for each PIP, to assure the likelihood of effecting measurable change.
If repeat measures of the quality indicators indicate that QI actions were not successful, i.e., did not achieve
significant improvement, the problem-solving process begins again with data analysis to identify
possible causes, propose and implement solutions, and so forth. If QI actions were successful,
the new processes should be standardized and monitored.
Potential Sources of Supporting Information:
-  Current project baseline data
-  Previous project data (if available)
-  Results of clinical and literature research
-  Project evaluation results completed by evaluators

Methods of Evaluation:
Consider the answer to the following question to help determine the extent to which appropriate
interventions were addressed.
1. Did the MCO/PIHP undertake interventions related to causes/barriers identified through
data analysis and QI processes?
It is expected that interventions associated with improvement on quality indicators will
be system interventions, i.e., educational efforts, changes in policies, targeting of
additional resources, or other organization-wide initiatives to improve performance.
Interventions that might have some short-term effect, but that are unlikely to induce
permanent change (such as a one-time reminder letter to physicians or beneficiaries) are
insufficient.
An MCO/PIHP is not required to demonstrate conclusively (for example, through
controlled studies) that a change in an indicator is the effect of its intervention; it is
sufficient to show that an intervention occurred that might reasonably be expected to
affect the results. Nor is the MCO/PIHP required to undertake data analysis to correct for
secular trends (changes that reflect continuing growth or decline in a measure as a result
of external forces over an extended period of time). To the extent feasible, however, the
MCO/PIHP should be able to demonstrate that its data have been corrected for any major
confounding variables with an obvious impact on the outcomes. The MCO’s/PIHP’s
interventions should reasonably be determined to have resulted in measured
improvement.
Step 8: Review Data Analysis and Interpretation of Study Results.

Rationale. Review of MCO/PIHP data analysis begins with examining the MCO’s/PIHP’s
calculated plan performance on the selected clinical or non-clinical indicators. The review
examines the appropriateness of, and the MCO’s/PIHP’s adherence to, the statistical analysis
techniques defined in the data analysis plan.
Potential Sources of Supporting Information:
-  Baseline project indicator measurements
-  Repeat project indicator measurements
-  Industry benchmarks
-  Analytic reports of PIP results by the MCO/PIHP

Methods of Evaluation:
Consider the answers to each of the following to assess the extent to which MCO/PIHP PIP data
analysis and interpretation was appropriate and valid.
1. Did the MCO/PIHP conduct an analysis of the findings according to its data analysis
plan?

2. Did the MCO/PIHP present numerical PIP results and findings data in a way that
provides accurate, clear, and easily understood information?

3. Following the data analysis plan, did the analysis identify:
-  initial and repeat measurements of the prospectively identified indicators for the project?
-  the statistical significance of any differences between the initial and repeat measurements?
-  factors that influence the comparability of initial and repeat measurements?
-  factors that threaten the internal or external validity of the findings?


4. Did the MCO’s/PIHP’s analysis of the study data include an interpretation of the extent
to which its PIP was successful and what follow-up activities were planned as a result?
Interpretation and analysis of the study data should be based on continuous improvement
philosophies and reflect an understanding that most problems result from failures of
administrative or delivery system processes, not failures of individuals within the system.
Interpreting the data should involve developing hypotheses about the causes of less-than-optimal performance and collecting data to validate the hypotheses.
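One conventional way to test the statistical significance of a difference between initial and repeat measurements of a rate-based indicator is a two-proportion z-test. The protocol does not mandate any particular test; the counts below are hypothetical and are used only to illustrate the calculation.

```python
import math

def two_proportion_z_test(success_1, n_1, success_2, n_2):
    """Two-sided z-test for a difference between two proportions
    (e.g., a baseline and a repeat indicator measurement)."""
    p1, p2 = success_1 / n_1, success_2 / n_2
    pooled = (success_1 + success_2) / (n_1 + n_2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_1 + 1 / n_2))
    if se == 0:
        return 0.0, 1.0
    z = (p2 - p1) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical PIP data: 210 of 400 sampled enrollees met the indicator
# at baseline; 260 of 400 met it at remeasurement.
z, p = two_proportion_z_test(210, 400, 260, 400)
print(f"z={z:.2f}, p={p:.4f}")
```

A small p-value (for example, below a State-established threshold such as 0.05) would support the conclusion that the baseline-to-repeat difference is unlikely to be due to chance alone.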

Step 9: Assess the Likelihood that Reported Improvement is “Real” Improvement.

Rationale. When an MCO/PIHP reports a change in its performance, it is important to know
whether the reported change represents “real” change or is an artifact of a short-term event
unrelated to the intervention, or random chance. The EQRO will need to assess the probability
that reported improvement is actually true improvement. This probability can be assessed in
several ways, but is most confidently assessed by calculating the degree to which an observed change is statistically “significant.” While this protocol does not specify a level of statistical significance
that must be met, it does require that EQROs assess the extent to which any changes in
performance reported by an MCO/PIHP can be found to be statistically significant. States may
choose to establish their own numerical thresholds for finding reported improvements to be
“significant.”
Potential Sources of Supporting Information:
-  Baseline and repeat measures on quality indicators
-  Tests of statistical significance calculated on baseline and repeat indicator measurements
-  Benchmarks for quality specified by the State Medicaid agency or found in industry standards

Methods of Evaluation:
Once an MCO/PIHP is far enough into its improvement cycle, review
documentation to determine the extent to which improvement occurred. Through repeated
measurement of the quality indicators selected for the project, meaningful change in performance
relative to the performance observed during baseline measurement must be demonstrated. The
repeat measurement should use the same methodology as the baseline measurement, except that,
when baseline data was collected for the entire population at risk, the repeat may instead use a
reliable sample. Performance using the identified indicators can be measured by collecting
information on all individuals, encounters or episodes of care to which the indicator is applicable
(a census) or by collecting information on a representative subset of individuals, encounters,
providers of care, etc. The following questions should be considered in making the evaluation.


1. Was there any documented QI in processes or outcomes of care?

2. Does the reported improvement in performance have “face” validity; i.e., on the face of it, does the intervention appear to have been successful in improving performance? Does the improvement in performance appear to have been the result of the planned QI intervention as opposed to some unrelated occurrence?

3. Is there any statistical evidence that any observed performance improvement is true improvement?
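When the repeat measurement uses a reliable sample rather than a census, the sample size needed to estimate a rate-based indicator is often approximated with the standard formula n = z²·p(1−p)/e². This is an illustration, not a protocol requirement; the baseline rate and margin of error below are assumptions.

```python
import math

def sample_size_for_proportion(p, margin, z=1.96):
    """Minimum sample size to estimate a proportion p within
    +/- margin at roughly 95% confidence (z = 1.96)."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# Hypothetical: baseline indicator rate near 50% (the most conservative
# assumption, maximizing p*(1-p)), desired precision of +/- 5 points.
print(sample_size_for_proportion(0.5, 0.05))  # -> 385
```

Using p = 0.5 yields the largest (safest) sample size when the true rate is unknown; a known baseline rate further from 50% permits a smaller sample for the same precision.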

Step 10: Assess Whether the MCO/PIHP has Sustained its Documented Improvement.

Rationale. Real change results from changes in the fundamental processes of health care
delivery. Such changes should result in sustained improvements. In contrast, a spurious “one
time” improvement can result from unplanned accidental occurrences or random chance. If real
change has occurred, the MCO/PIHP should be able to document sustained improvement.
Potential Sources of Supporting Information:
-  Baseline and first repeated measurements on quality indicators
-  Additional measurements on quality indicators made after the first repeat measurement

Methods of Evaluation:
Review of the remeasurement documentation is required to assure that the improvement on a
project is sustained. Consider the answer to the following question in making a determination as
to whether the improvement was sustained.
1. Was the MCO/PIHP able to demonstrate sustained improvement through repeated
measurements over comparable time periods?
The MCO/PIHP should repeat measurements of the indicators after the first measurement
taken after the intervention. It is recognized that because of random year-to-year
variation, population changes, and sampling error, performance on any given individual
measure may decline in the second measurement. However, when all of the
MCO’s/PIHP’s repeat measurements for a given review are taken together, this decline
should not be statistically significant and should never be statistically significant after
two remeasurement periods.


ACTIVITY 2: VERIFY STUDY FINDINGS (OPTIONAL)

Rationale. In addition to reviewing the methodology and findings of a MCO’s/PIHP’s PIP, at
times States might want the EQRO to verify the actual data produced as part of a PIP to
determine if the initial and repeat measurements of the quality indicators are accurate. This
activity can be resource intensive and may not be feasible to perform for every (or even some)
PIPs that an EQRO is to validate. However, if undertaken, verification activities can provide
added confidence in reported MCO/PIHP PIP results because they provide greater evidence that
a given PIP’s findings are accurate and reliable. Therefore, this activity is included in this
protocol as an optional activity that a State may elect to have the EQRO conduct on an ad hoc
basis when the State has special concerns about data integrity.
Potential Data Sources Needed for Verification Activities:
-  Current project data and findings
-  Depending upon the source of the PIP data:
   - MCO/PIHP administrative data
   - Beneficiary interviews and surveys
   - An assessment of the MCO’s/PIHP’s Information System (IS) (see Appendix Z)

Methods of Evaluation:
The key focus in this activity is validating the processes through which data needed to produce
quality indicators was obtained, converted to information, and analyzed. How to verify quality
indicator findings depends on whether the data was obtained through review and abstraction of
medical records or produced through an MCO’s /PIHP’s automated IS:
Verification of data obtained through medical record review: Verification of quality
indicators produced through medical record review can be achieved by conducting a reabstraction of a small subset (validation sample) of the records that provided the data for the
quality indicators used in the study. Data retrieval and analysis will be conducted on a small
scale, with the validation sample following the same abstracting rules of the original study.
Statistical correlations then will be made between the validation sample and the original study
data.
A wide variety of statistical methods can be applied to assess the degree of correlation between
the study and validation measures. Two recommended methods are the Pearson correlation
coefficient for continuous data (e.g., age, income, etc) and the Kappa statistic for categorical data
(e.g., gender, race, etc.).
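A minimal sketch of the two recommended statistics in pure Python; the validation-sample values below are invented, and in practice an EQRO might use a statistical package instead.

```python
import math

def pearson_r(x, y):
    """Pearson correlation for paired continuous measurements
    (e.g., ages recorded in the study vs. the re-abstraction)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def cohens_kappa(a, b):
    """Kappa statistic for paired categorical codes, correcting
    observed agreement for agreement expected by chance."""
    n = len(a)
    categories = set(a) | set(b)
    p_observed = sum(x == y for x, y in zip(a, b)) / n
    p_expected = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical validation sample: original study values vs. re-abstracted values.
ages_study = [34, 57, 41, 68, 29, 50]
ages_revalidated = [34, 57, 43, 68, 29, 51]
gender_study = ["F", "M", "F", "F", "M", "F"]
gender_revalidated = ["F", "M", "F", "M", "M", "F"]

print(round(pearson_r(ages_study, ages_revalidated), 3))
print(round(cohens_kappa(gender_study, gender_revalidated), 3))
```

High correlation (Pearson near 1, or Kappa well above chance-level agreement) between the original abstraction and the re-abstraction supports the reliability of the study's data; low values flag a data-integrity problem.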
Verification of data obtained through MCO/PIHP automated IS: The accuracy of quality
indicators produced through an MCO’s/PIHP’s automated IS is a reflection of three phenomena:
1) the soundness of the algorithm the MCO/PIHP used to produce quality indicators from its IS;


2) the integrity (completeness and accuracy) of the MCO’s/PIHP’s IS at capturing enrollee
information; and
3) the accuracy of the information translated from source documents (e.g., an enrollee’s medical
record) into automated data in the MCO’s/PIHP’s IS.
The soundness of the algorithm the MCO/PIHP used to produce quality indicators from its IS is
to be assessed in Step 6. In order to assess the integrity of the MCO’s IS, and the accuracy of the
information translated from source documents (e.g., an enrollee’s medical record) into automated
data in the MCO’s/PIHP’s IS, the EQRO should review a copy of an assessment of the MCO’s
/PIHP’s IS and any validations of MCO/PIHP encounter data that the State has produced. These
activities are described in Appendix Z and in the protocol, Validating MCO/PIHP Encounter
Data, and would typically be conducted by the EQRO or other organization as part of another activity; e.g., validating encounter data, validating
performance measures, or assessing an MCO’s or PIHP’s compliance with standards for
MCO/PIHP IS specified by the State Medicaid agency or other organization such as a private
accrediting organization. In order to use this information to help verify the accuracy of reported
quality indicators, the EQRO should obtain a copy of a recently completed assessment of the
MCO’s/PIHP’s IS and validation of its encounter data from the MCO/PIHP, the State Medicaid
agency, or other organization identified by the MCO/PIHP. In the event that no current
evaluation of an MCO’s/PIHP’s IS or encounter data exists, the State may want to contract out
this function as a part of validating MCO/PIHP PIPs.
Assessing the MCO’s/PIHP’s algorithm together with the integrity of the MCO’s/PIHP’s IS and
encounter data should provide a strong indication of the accuracy of the MCO’s/PIHP’s reported
quality indicators.
ACTIVITY 3: EVALUATE OVERALL VALIDITY AND RELIABILITY OF PIP RESULTS

After completing Activity One and, as appropriate, Activity Two, the EQRO will need to assess
the implications of all findings on the likely validity and reliability of the MCO/PIHP PIP
findings and thereby whether or not the State Medicaid agency should have confidence in the
MCO’s/PIHP’s reported PIP findings. Because it is almost never (if ever) possible to design the “perfect” study or PIP, the EQRO will need to accept some threats to the accuracy and generalizability of the PIP as a routine fact of QI activities. Determining when an accumulation
of threats to validity and reliability and PIP design problems reach a point at which the PIP
findings are no longer credible is always a judgement call. The EQRO may want to report its
findings back to the State in the form of a short summary of the validation findings along with a
summary rating using levels such as the following:
-  High confidence in reported MCO/PIHP PIP results
-  Confidence in reported MCO/PIHP PIP results
-  Low confidence in reported MCO/PIHP PIP results

-  Reported MCO/PIHP PIP results not credible
END OF PROTOCOL


ATTACHMENT A

ORIGIN OF THE PROTOCOL
This protocol was one of nine protocols developed during 1998-2001 from standards and
guidelines used in the public and private sectors during this time. This protocol was developed
from the following documents:
-  Quality Improvement System for Managed Care (QISMC)
QISMC was an initiative of CMS that set forth standards and guidelines pertaining to
health care quality for Medicaid and Medicare health plans (MCOs, PIHPs, and
Medicare+Choice plans). These standards and guidelines, in part, address MCO and PIHP
quality assessment and improvement projects.

-  Health Care Quality Improvement Studies in Managed Care Settings: A Guide for State
Medicaid Agencies (National Committee for Quality Assurance (NCQA))
Produced under a contract from CMS, this guidebook identifies key concepts related to the
conduct of QI studies and details widely accepted principles of research design and
statistical analysis necessary for designing, implementing and assessing QI studies.

-  A Health Care Quality Improvement System for Medicaid Managed Care, A Guide for States (Health Care Financing Administration (HCFA))
CMS’s 1993 guide for health care QI provides a framework for building QI systems
within State Medicaid managed care initiatives. This document included guidelines
addressing quality assessment and improvement studies and related activities of MCOs
and PIHPs. This document was the result of the Quality Assurance Reform Initiative
(QARI).

-  Framework for Improving Performance: From Principles to Practice (Joint Commission on Accreditation of Healthcare Organizations (JCAHO))
This publication describes the Joint Commission’s theory-based, practical methodology
for continuously improving the core work and resulting outcomes of any health care
organization. In this document, JCAHO defines the key characteristics and essential
behaviors of any health care organization striving to achieve high quality patient care.

-  1990-2000 Standards for Health Care Networks (SHCN) (JCAHO)
The JCAHO 1990-2000 SHCN provides a standards-based evaluation process to assist the
MCO in measuring, assessing and improving its network’s performance. It also helps the
MCO focus on conducting performance improvement efforts in a multi disciplinary,
system-wide manner. The 1990-2000 SHCN integrates information about the Joint
Commission’s health care network accreditation process.

-  NCQA 1997, 1998, and 1999 Standards for Accreditation of Managed Care
Organizations and NCQA 1999 Standards for Accreditation of Managed Behavioral
Healthcare Organizations (MBHO)
These documents include administrative policies and procedures for NCQA’s MCO and
MBHO accreditation programs, the 1997, 1998, and 1999 standards, and rationale
statements for the standards.

-  Peer Review Organizations (PRO) 4th and 5th Scope of Work (SOW) (CMS)
The 4th and 5th SOW documents outlined the requirements for PROs to adhere to while
conducting health care quality and improvement activities for Medicare beneficiaries.

An in-depth comparison of these documents was performed to identify the activities and features
common to these protocols, and features unique to individual protocols, while acknowledging the
different purposes of the documents. The QISMC, JCAHO and NCQA standards are written as
guides for MCOs/PIHPs to follow in developing, conducting and evaluating their quality
improvement studies. They can also be used by States or their agents (e.g., EQROs) to assess
compliance with State mandated guidelines and/or to facilitate overall plan-to-plan comparisons.
QARI was written with States as the intended audience to help them and their agents (e.g.,
EQROs) assure compliance with regulations and Medicaid program requirements, and promote
consistency in the manner in which MCOs and PIHPs carry out activities related to focused
studies.
The analysis revealed that in spite of their different purposes, all the documents identify several
common characteristics of effective focused studies. These include:
Selection of Topics: All of the reference documents address the need for focused studies to clearly
specify the topic to be addressed. They all acknowledge both clinical (e.g., specific disease or
condition such as pregnancy or asthma) and non-clinical (e.g., availability, timeliness and
accessibility of care) health service delivery issues as appropriate topics for health care QI
initiatives.
Means of Identifying Topics: Continuous data collection and analysis is stressed throughout all
documents as a means of identifying appropriate topics. It is stated that topics should be
systematically selected and prioritized to achieve the greatest practical benefit for enrollees. A
minimal set of criteria is suggested for selecting appropriate topics, including: the prevalence of a
condition among, or need for a service by, the MCO’s/PIHP’s enrollees; enrollee demographic
characteristics and health risks; the likelihood that the study topic will result in improved health
status among the enrollees; and the interest of consumers in the aspect of care or services to be
addressed.
Scope of study topics: The QISMC standards specify that performance improvement projects
should address the breadth of the MCO’s or PIHP’s services, such as whether they include
physical health and mental/substance abuse health services. They also identify specific clinical
and non-clinical focus areas that are applicable to all enrollees. The QISMC standards also
specify that the scope of the health plans’ improvement efforts is to include all enrollees.
Stating the Study Question(s): The HCQIS Guide discusses the importance of “stating the study
question” after a study topic is identified. It asserts that stating a study question helps a project
team avoid becoming sidetracked by data that is not central to the issue under study. For example,
once a focused study has identified childhood immunizations as a study topic, it might specify
number of different study questions:
-  Have all children received all scheduled doses of one vaccine in particular?
-  Have all children of all ages received all recommended vaccines appropriate for their age?
-  Have all children of a particular age (e.g., at the age of one, two, six or other years) received all age-appropriate immunizations?

Alternatively, more detailed information may be desired, so it may be necessary to specify
the study questions as:
-  What proportion of Medicaid enrollees who have reached two years of age have received:
   - All four recommended doses of DPT vaccine?
   - All three recommended doses of the Polio vaccine?
   - One recommended dose of the MMR vaccine?
   - At least one dose of Hib in the second year of life?

Further specificity of additional study questions may be desired to provide information in
QI efforts, such as:
-  In what percent of cases of lack of immunization were children not immunized for one of the following reasons?
   - Refusal by a parent or guardian.
   - Medical contraindications.
   - Member non-compliance with the recommended immunization regimen.

Incorporating the process of documenting a study question(s) into the project design can help
ensure a systematic method of identifying appropriate indicators and data to be collected. In this
protocol we have included “defining the study question(s)” as a key step in designing and
implementing a focused study.

Use of Quality Indicators: All reference documents address the need to specify well-defined
indicators to be monitored and evaluated throughout the study. It is emphasized that quality
indicators do not always need to be outcome measures. Process measures are also appropriate,
especially when there is strong clinical evidence that the process being measured has a
meaningful association with outcomes. There are various ways to obtain appropriate indicators,
such as using those dictated from outside sources (such as the State or CMS) or by an MCO/PIHP
developing them internally on the basis of clinical literature or findings of expert panels.
In addition to these features found uniformly in all reference documents, other significant aspects
of focused studies were identified by one or more of the reference documents. These include:
Significant improvement: NCQA’s document, “Health Care Quality Improvement Studies in
Managed Care Settings”, states that, “When presenting statistical results of any study, it is
important to fully disclose. . .the statistical significance of the estimates produced, as well as the
statistical significance of any apparent differences between units of comparison.” Building on
this, CMS’s QISMC document called for specific amounts of measurable improvement to be
demonstrated by the health plan. QISMC defines “demonstrable” improvement as either: 1)
benchmarks established by CMS (for national Medicare projects) or State agencies (for statewide
Medicaid QI projects) or by the health plans for individual (organizational) projects; or 2) a 10%
reduction in adverse outcomes. This protocol does not call for a specific level of statistical significance to be achieved but, consistent with the NCQA document, calls for disclosure and review of the statistical significance of any measured performance change in a focused study.
Phase-in or time frame requirements: QISMC delineates specific time frame requirements for
MCOs/PIHPs to reach certain phases in a QI cycle. For example:
-  By the end of the first year, an MCO/PIHP should have initiated at least two quality improvement projects addressing two different focus areas;
-  By the end of the second review year, at least two additional projects addressing two different focus areas should be initiated.
-  By the end of the first year after the two-year phase-in period, and each subsequent year, at least two projects are to achieve demonstrable improvement in two of the focus areas.

Evaluation Tools: NCQA’s HCQIS guidebook includes study planning and summary worksheets
to be used in the evaluation of an MCO’s/PIHP’s focused study. This feature provides a helpful
method for recording data during the evaluation process and promotes the collection of consistent
information by all evaluators. This protocol contains an example of a worksheet (Attachment A)
that can be used by EQROs when conducting focused studies.
Scoring system: NCQA accreditation provides a numerical scoring system to measure
performance against standards and to promote consistency in the process used to evaluate MCOs.
Although the scores do not dictate the final decision with respect to compliance with standards,
they do serve as a guide for NCQA evaluators to recommend non-compliance. This scoring
system also includes an opportunity for the MCO/PIHP to comment on the reviewer’s scores
before a final decision is rendered. It also promotes continuous improvement practices by
securing “customer” input into a final product (i.e., evaluation decisions). This protocol does not
include a scoring system; instead, it includes an example of a summary scale that EQROs can
use to report their aggregate findings and the degree of confidence in the MCO/PIHP PIP results
suggested by the validation activities.


ATTACHMENT B

PERFORMANCE IMPROVEMENT PROJECT VALIDATION WORKSHEET
Use this or a similar worksheet as a guide when validating MCO/PIHP Performance
Improvement Projects. Answer all questions for each activity. Refer to the protocol for
detailed information on each area.
ID of evaluator: __________		Date of evaluation: ____/____/____
Demographic Information
MCO/PIHP Name or ID:
Project Leader Name:
Telephone Number:
Name of Performance Improvement Project:
Dates in Study Period:	_____/_____/_____ to _____/_____/_____

Type of Delivery System (check all that apply):
____ MCO		____ PIHP		____ Network
____ Staff Model	____ Direct IPA		____ IPA Organization

Number of Medicaid Enrollees in MCO or PIHP ________
Number of Medicare Enrollees in MCO or PIHP ________
Number of Medicaid Enrollees in Study ________
Total Number of MCO or PIHP Enrollees in Study ________
Number of MCO/PIHP primary care physicians ________
Number of MCO/PIHP specialty physicians ________
Number of physicians in study (if applicable) ________
I. ACTIVITY 1: ASSESS THE STUDY METHODOLOGY
Step 1: REVIEW THE SELECTED STUDY TOPIC(S)
Component/Standard						Y  N  N/A	Comments
1.1 Was the topic selected through data
collection and analysis of comprehensive
aspects of enrollee needs, care and
services?

1.2. Did the MCO’s/PIHP’s PIPs, over time,
address a broad spectrum of key aspects of
enrollee care and services?
1.3. Did the MCO’s/PIHP’s PIPs, over time,
include all enrolled populations; that is, did
they avoid excluding certain enrollees, such
as those with special health care needs?
Step 2: REVIEW THE STUDY QUESTION(S)
2.1. Was/were the study question(s) stated
clearly in writing?
Step 3: REVIEW SELECTED STUDY INDICATOR(S)
3.1. Did the study use objective, clearly
defined, measurable indicators?
3.2. Did the indicators measure changes in
health status, functional status, or enrollee
satisfaction, or processes of care with
strong associations with improved
outcomes?
Step 4: REVIEW THE IDENTIFIED STUDY POPULATION
4.1. Did the MCO/PIHP clearly define all
Medicaid enrollees to whom the study
question and indicators are relevant?
4.2. If the MCO/PIHP studied the entire
population, did its data collection approach
capture all enrollees to whom the study
question applied?
Step 5: REVIEW SAMPLING METHODS
5.1. Did the sampling technique consider and
specify the true (or estimated) frequency of
occurrence of the event, the confidence
interval to be used, and the margin of error
that will be acceptable?
5.2. Did the MCO/PIHP employ valid sampling
techniques that protected against bias?
Specify the type of sampling or census
used:

5.3. Did the sample contain a sufficient number
of enrollees?
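Question 5.1 names the three quantities that determine an adequate sample size. As an illustration only (the formula choice and the example figures below are assumptions, not protocol requirements), the standard sample-size formula for estimating a proportion combines those quantities as follows:

```python
import math

# Illustrative sketch, not a protocol requirement: the standard formula for
# the sample size needed to estimate a proportion, built from the three
# quantities question 5.1 asks about.  p is the true (or estimated)
# frequency of occurrence of the event, z reflects the confidence level
# (1.96 for 95%), and e is the acceptable margin of error.
def sample_size(p: float, e: float, z: float = 1.96) -> int:
    return math.ceil((z ** 2) * p * (1 - p) / (e ** 2))

# Hypothetical example: event occurs in ~30% of enrollees, 95% confidence,
# margin of error of 5 percentage points.
n = sample_size(0.30, 0.05)
```

An evaluator answering question 5.3 could compare the MCO’s/PIHP’s actual sample against a calculation of this kind, after adjusting for expected non-response and, where the enrolled population is small, a finite-population correction.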
Step 6: REVIEW DATA COLLECTION PROCEDURES
6.1. Did the study design clearly specify the
data to be collected?
6.2. Did the study design clearly specify the
sources of data?
6.3. Did the study design specify a systematic
method of collecting valid and reliable data
that represents the entire population to
which the study’s indicators apply?
6.4. Did the instruments for data collection
provide for consistent, accurate data
collection over the time periods studied?
6.5. Did the study design prospectively specify
a data analysis plan?
6.6. Were qualified staff and personnel used to
collect the data?
Step 7: ASSESS IMPROVEMENT STRATEGIES
7.1. Were reasonable interventions undertaken
to address causes/barriers identified
through data analysis and QI processes?
Step 8: REVIEW DATA ANALYSIS AND INTERPRETATION OF STUDY RESULTS
8.1. Was an analysis of the findings performed
according to the data analysis plan?
8.2. Did the MCO/PIHP present numerical PIP
results and findings accurately and clearly?
8.3. Did the analysis identify: initial and repeat
measurements, statistical significance,
factors that influence comparability of
initial and repeat measurements, and
factors that threaten internal and external
validity?
8.4. Did the analysis of study data include an
interpretation of the extent to which the PIP
was successful, and identify follow-up
activities?
Step 9: ASSESS WHETHER IMPROVEMENT IS “REAL” IMPROVEMENT
9.1. When measurement was repeated, was the
same methodology used as at baseline?
9.2. Was there any documented, quantitative
improvement in processes or outcomes of
care?
9.3. Does the reported improvement in
performance have “face” validity; i.e., does
the improvement in performance appear to
be the result of the planned quality
improvement intervention?
9.4. Is there any statistical evidence that any
observed performance improvement is true
improvement?
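One way an evaluator might examine question 9.4 is a two-proportion z-test comparing the baseline and repeat measurements. The test choice and the figures below are illustrative assumptions, not something this protocol prescribes:

```python
import math

# Illustrative sketch (assumption, not prescribed by the protocol): a
# two-proportion z-test, one way to examine whether an observed change
# between baseline and repeat measurements is statistically
# distinguishable from chance.
def two_proportion_z(successes1: int, n1: int, successes2: int, n2: int) -> float:
    p1, p2 = successes1 / n1, successes2 / n2
    pooled = (successes1 + successes2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Hypothetical data: indicator met for 60/200 enrollees at baseline
# versus 85/200 on remeasurement.
z = two_proportion_z(60, 200, 85, 200)
significant = abs(z) > 1.96  # two-sided test at the 0.05 level
```

A significant result supports, but does not by itself establish, that the improvement is “real”; questions 9.1 through 9.3 (consistent methodology, documented change, and face validity) must still be satisfied.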
Step 10: ASSESS SUSTAINED IMPROVEMENT
10.1. Was sustained improvement
demonstrated through repeated
measurements over comparable time
periods?
ACTIVITY 2: VERIFYING STUDY FINDINGS (OPTIONAL)
1. Were the initial study findings verified upon
repeat measurement?

ACTIVITY 3: EVALUATE OVERALL VALIDITY AND RELIABILITY OF STUDY RESULTS:
SUMMARY OF AGGREGATE VALIDATION FINDINGS

Check one:
____ High confidence in reported MCO/PIHP PIP results
____ Confidence in reported MCO/PIHP PIP results
____ Low confidence in reported MCO/PIHP PIP results
____ Reported MCO/PIHP PIP results not credible

END OF DOCUMENT

File Type: application/pdf
File Title: VALIDATING PERFORMANCE IMPROVEMENT PROJECTS
Author: HCFA Software Control
File Modified: 2008-12-31
File Created: 2008-12-31
