CMS-R-305 Administering or Validating Surveys

External Quality Review of Medicaid MCOs and Supporting Regulations in 42 CFR 438.360, 438.362, and 438.364 (CMS-R-305)

OMB: 0938-0786

OMB Approval No. 0938-0786

ADMINISTERING OR VALIDATING SURVEYS

Two Protocols for Use in Conducting Medicaid External
Quality Review Activities

Department of Health and Human Services
Centers for Medicare & Medicaid Services

Final Protocol
Version 1.0
May 1, 2002

According to the Paperwork Reduction Act of 1995, no persons are required to respond to a collection of information unless it displays a valid
OMB control number. The valid OMB control number for this information collection is 0938-0786. The time required to complete this
information collection is estimated to average 1,591 hours per response for all activities, including the time to review instructions, search existing
data resources, gather the data needed, and complete and review the information collection. If you have comments concerning the accuracy of the
time estimate(s) or suggestions for improving this form, please write to: CMS, 7500 Security Boulevard, Attn: PRA Reports Clearance Officer,
Baltimore, Maryland 21244-1850.
Form CMS-R-305


ADMINISTERING SURVEYS
I. PURPOSE OF THE PROTOCOL

Surveys of Medicaid beneficiaries or other entities are becoming an increasingly common
method of health care quality measurement. Surveys can provide information that is not
generally available through clinical or administrative records. Surveys also can capture
beneficiaries’, families’, caregivers’, providers’, or other relevant parties’ perceptions of access
to care and quality of health services. However, for a survey to produce valid and reliable information (upon which to make sound judgments about health care access, quality, and/or timeliness), the survey must be designed and administered in accord with certain fundamental principles of survey design and administration. Therefore, this protocol describes the activities that must be undertaken in administering surveys if those surveys are to be included as a component of the external, independent quality review required under federal law for Medicaid managed care organizations (MCOs) and prepaid inpatient health plans (PIHPs).

II. ORIGIN OF THE PROTOCOL

This protocol is based on generally accepted principles of survey design and administration.
These principles are typically discussed in textbooks and other academic and research
publications. The publications (or references) that were used in the development of this protocol
are located in Attachment A.

III. PROTOCOL OVERVIEW

This protocol applies to all types of surveys; for example, beneficiary surveys and provider
surveys. It does not recommend one particular survey instrument, sampling methodology or
approach to analyzing and reporting study results because differences in the purposes of surveys
preclude any one questionnaire, sampling methodology and analysis and reporting strategy from
meeting all needs. In a number of instances, however, the protocol provides specific information
about the Consumer Assessment of Health Plans Study (CAHPS) surveys and reporting formats,
which were developed by the Agency for Healthcare Research and Quality (AHRQ) (formerly
the Agency for Health Care Policy and Research - AHCPR) through cooperative agreements
with the RAND Corporation, Harvard University, and the Research Triangle Institute. CAHPS
surveys are frequently used by States to evaluate Medicaid beneficiaries’ experiences with
managed care.
This protocol assumes that the State Medicaid agency has made certain key decisions pertaining
to: 1) survey goals and objectives (specifically, the question(s) the survey is designed to answer);
2) the intended audience(s) for survey findings; and 3) the selection of a survey instrument. The State may make these decisions independently or in consultation with the State’s External Quality Review Organization (EQRO)1 at the discretion of the State.
The protocol specifies eight activities that must be undertaken as part of a methodologically
sound survey:
1. Identification of survey purpose(s) and objective(s)
2. Identification of intended survey audience(s)
3. Selection of the survey instrument
4. Development of the sampling plan
5. Development of the strategy for maximizing the response rate
6. Implementation of the survey
7. Preparation and analysis of the data obtained from the survey
8. Documentation of the survey process and results

A companion protocol on validating the results of a previously administered survey begins on page 29.

IV. PROTOCOL ACTIVITIES

The eight activities comprising this protocol are discussed below. EQROs should use a worksheet such as that found in Attachment B to assist in and document the implementation of these activities.
ACTIVITY 1: Identification of survey purpose(s) and objective(s).

The EQRO must have a clearly written statement of the State’s purpose(s) and objective(s) for
administering the survey. These decisions will shape the survey methodology, administration,
and analytical approach. The EQRO should have in writing the answers to the following
questions: “What does the State want to learn by administering the survey?” and “What does the
State plan to do with the survey results?” Typically, States will want to obtain survey
information to help monitor and evaluate the quality of care provided to Medicaid beneficiaries,
assist Medicaid beneficiaries to choose between MCOs/PIHPs, and/or promote quality
improvement initiatives. Although a State may have multiple purposes for a given survey, for
purposes of this protocol, one of the stated purposes of the survey must be to assess -- either alone or in combination -- the access, quality, and/or timeliness of care received by Medicaid beneficiaries enrolled in individual MCOs/PIHPs.

1 A State may choose an organization other than an EQRO as defined in Federal regulation to administer a survey. However, for convenience, in this protocol we use the term “external quality review organization” (EQRO) to refer to any organization administering a survey.
The purpose(s) and objective(s) of the survey will also determine the survey’s “unit of analysis.”
The unit of analysis refers to the type of entity about which the State wishes to obtain
information. The unit of analysis may be, for example: individual MCOs/PIHPs, the State’s
managed care initiative as a whole, or a particular group of providers. Thus, the unit of analysis
also greatly influences the design, conduct, and analysis of the survey. Although the State’s
survey may address multiple units of analysis, at least one of the units of analysis must be the
individual MCOs/PIHPs participating in the survey; i.e., the survey must be designed to provide representative and generalizable information about individual MCOs/PIHPs.
Following the determination of the survey’s purposes(s) and unit(s) of analysis, more detailed
objectives, derived from the purpose, should be stated in writing. Objectives should be concise,
explicit and measurable. For example, if the purpose of the study is to discover how satisfied
Medicaid managed care enrollees are with their health care, more specific objectives might
include:
- Determining if enrollees of individual MCOs/PIHPs are satisfied with their access to specialty care.

- Determining if enrollees of individual MCOs/PIHPs are involved in planning for their own treatment.

- Determining if enrollees of individual MCOs/PIHPs are satisfied with the quality of their interactions with their primary care provider.

ACTIVITY 2: Identification of intended survey audience(s).

The EQRO also must know the State’s intended audience(s) for the survey results. The intended
audience(s) will have implications for the design, administration and analysis of the survey.
While the primary audience is the State Medicaid agency, which will use the survey results as
part of external quality review, other audiences, identified by the State, could include:

- Medicaid beneficiaries and their families who are choosing between fee-for-service (FFS) and MCOs/PIHPs, or among Medicaid MCOs/PIHPs. Survey information increasingly is viewed as one source of information to assist the consumer in making an informed choice. This requires that the survey be designed to enable MCO/PIHP-to-MCO/PIHP comparisons.

- MCO/PIHP managers and providers, who can use the survey results to identify areas of superior health service delivery as well as areas that need improvement. When surveys will be used to provide information to MCOs/PIHPs, the State will need to develop a policy that specifies whether the MCO/PIHP will be able to receive, or have access to, individual enrollee survey results or only summary, plan-level information on their enrollees’ survey results in the aggregate. Providing MCOs/PIHPs with survey results for individual enrollees could violate enrollee confidentiality and potentially subject enrollees to adverse action by the MCO/PIHP. Even if the enrollee’s name or other identification is removed from the survey response form, there remains a concern that an MCO might still be able to identify a particular survey respondent by his or her responses. Because of this, the Medicare program does not allow its contracting MCOs to receive individual-level survey data, even when the surveys are stripped of enrollee identifiers. Medicare is, however, establishing a mechanism for MCOs to request special analyses of their own data since they cannot receive the individual-level data themselves.

- State policy makers, who can use survey findings to monitor how Medicaid beneficiaries perceive the care they receive under the State’s managed care initiative. In this case, the survey analysis would need to provide information on the State’s managed care initiative as a whole, as well as on individual MCOs/PIHPs.

It is important for the State to clearly identify to the EQRO the audiences for which it wants the EQRO to prepare reports, as this will significantly affect the format of the report(s) the EQRO prepares for the State. Sometimes the State will want the EQRO to prepare reports only for the State Medicaid agency, after which the Medicaid agency will prepare reports for other audiences. In other situations, the State may want the EQRO to prepare survey reports in multiple formats for different audiences.

ACTIVITY 3: Selection of the survey instrument.

The State will determine the survey instrument to be used. The State may do so independently or
in consultation with the EQRO, at the discretion of the State. There are three approaches to
selecting a survey instrument for use: 1) use an existing instrument; 2) develop a new
instrument; and 3) adapt an existing instrument. The State’s decision among these choices should be consistent with the survey purposes, objectives, and units of analysis, and should promote the collection of reliable and valid data.
Reliability refers to: 1) the internal consistency of a survey instrument; and 2) the reproducibility
of survey results when the survey is administered under different conditions; e.g., by different
people, or at different times. Internal consistency requires that positive statistical correlations
exist between questions within a survey. In other words, a respondent’s answers to particular
survey questions should not contradict the respondent’s answer to other survey questions.
Answers should be consistent across related questions. Reproducibility means that a tool yields
the same results in repeated trials. For example, a grocery store scale should show the same
weight each time the same bag of produce is measured. If it shows a different weight, it is
unreliable. Similarly, a survey instrument’s questions should not be answered differently
depending upon who is administering the survey or in what location or manner the survey is
administered. Survey questions also should be clear so individuals responding to the same
question in the survey will not interpret the same question differently.
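Internal consistency is typically quantified with a summary statistic; Cronbach's alpha is one widely used choice, although this protocol does not prescribe a specific measure. The sketch below, using hypothetical responses to three related satisfaction items, shows how such a statistic can be computed:

```python
from statistics import variance

def cronbachs_alpha(scores):
    """Cronbach's alpha: approaches 1.0 as answers to related items
    correlate, and falls when respondents answer inconsistently.

    scores: list of respondent rows, one value per survey item.
    """
    k = len(scores[0])                                 # number of items
    items = list(zip(*scores))                         # one column per item
    item_var_sum = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical data: five respondents, three related items on a 1-5 scale.
responses = [
    [4, 4, 5],
    [2, 3, 2],
    [5, 5, 5],
    [3, 3, 4],
    [1, 2, 1],
]
print(round(cronbachs_alpha(responses), 2))  # 0.96 -- highly consistent
```

Values near 1.0 indicate that answers to related questions do not contradict one another; low or negative values flag the kind of inconsistency described above.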
Validity refers to the degree to which a tool captures what it is intended to measure. Different
types of validity measurement include: face validity, content validity, construct validity, and criterion-related validity. Face validity is also known as logical validity and ensures that the tool
generates common agreement and/or mental images associated with a concept about which the
survey is asking. This type of validity does not involve statistical analysis, but is an opportunity
to determine if a concept is well addressed by a question or questions in a survey. This can be
accomplished through a review that may include one or more focus groups whose primary
purpose is to agree/disagree on whether the survey, on its face, is capturing the concept to be
addressed by the survey. Content validity builds on face validity by asking whether the survey
fully captures and represents the concept under study. This type of validity is typically
established by a review for relevance by individuals with expertise in the subject matter
addressed in the survey. Content validity also is not established using statistical procedures.
Rather, it is established by agreement from experts in the topic under study.
Construct and criterion validity are more sophisticated measures of validity and require more
time and resources to assess. Construct validity assesses how well the test measures the
underlying issue (e.g., access to care, satisfaction with care, satisfaction with providers) it is
supposed to measure. Criterion validity measures how well the performance on a test reflects the
performance on what the test is supposed to measure. There are two types of criterion validity:
concurrent and predictive. Concurrent validity uses a correlation coefficient to compare the
survey to a "gold standard" survey for measuring the same variable. Predictive validity measures
the survey's ability to "predict" future outcomes and also uses a correlation coefficient for
comparison. While no instrument is ever determined to be “perfect” with respect to reliability and validity, the greater the amount of reliability and validity testing, and the more positive the findings from that testing, the greater the confidence an EQRO can have in a survey’s findings.
The CAHPS survey instruments and reporting formats have undergone rigorous testing for
reliability and validity, including focus group interviewing, cognitive interviewing, and field-testing. However, if a State chooses to use another instrument, develop one of its own, or modify an existing instrument, at a minimum, it should promote reliability and validity by first
establishing face and content validity, then pre-testing the tool for reliability. Face and content
validity can be established by convening one or more focus groups that include beneficiaries and
individuals with subject matter expertise. Reliability can be assessed using the test-retest method
in which the survey is administered to the same group at two different times. A correlation
coefficient is calculated and indicates the reproducibility of results. Correlation coefficients with
r-values at or above 0.70 indicate good reliability.
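The test-retest check can be sketched as follows; the scores are hypothetical, and the 0.70 cutoff is the r-value threshold cited above:

```python
import math

def pearson_r(x, y):
    """Correlation between scores from the first and second administrations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical total scores for six respondents surveyed twice, two weeks apart.
time1 = [12, 18, 9, 15, 20, 11]
time2 = [13, 17, 10, 14, 19, 12]
r = pearson_r(time1, time2)
print(round(r, 2), "good reliability" if r >= 0.70 else "poor reliability")
```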
Option: using existing survey instruments. Because of the resources required to develop and
test new survey tools, many States will choose to use an existing survey instrument - especially
one that has undergone strong reliability and validity testing such as CAHPS. CAHPS testing
included cognitive testing during the development and evaluation phases, calculation of
reliability estimates in a sample of Medicaid enrollees and private health insurance purchasers,
and convening of focus groups to test relevance of survey concepts and items. In addition,
CAHPS surveys include a free, easy-to-use CAHPS Survey and Reporting Kit. The kit contains
a set of mail and telephone survey questionnaires including Spanish language versions, sample
reporting formats, and a handbook with step-by-step instructions for sampling, administration,
analysis, and reporting. The handbook’s instructions are easy to understand and allow for flexibility within the sample and analysis design. The handbook also includes a comprehensive statistical software package as well as the telephone number and e-mail address of the technical assistance hotline.
There are a wide variety of other existing questionnaires that can be selected as the survey
instrument. While some have undergone extensive reliability and validity testing, others have
been developed with little or no testing for reliability and validity. Unless the developers of an
existing survey can describe an instrument’s reliability and validity testing and results, an
instrument’s reliability and validity should always be suspect. Even pre-existing, validated
survey instruments may not have been validated in a Medicaid population. Selection of
instruments not validated in a Medicaid population may not yield valid or reliable results for this
population.
Another advantage of selecting an existing instrument is that use of the same questionnaire,
methods, and analysis for surveys of MCOs/PIHPs, population groups, and States, permits a
State to compare its findings against those of other studies. For example, standardization of the
survey instruments, methodology, and report formats in CAHPS provides reliable and valid
measures that can be compared across MCOs/PIHPs, between MCOs/PIHPs and fee-for-service (FFS), as well as across a wide range of respondents and States.
When using an existing survey instrument for External Quality Review, the EQRO should be
able to document the extent of reliability and validity testing of the survey instrument.
Option: developing new survey instruments. In some cases, a State may decide to use a new
survey that it or a State contractor develops. Usually a State will choose to develop a new
survey instrument when the purposes and objectives of the study require answers to questions
that are not addressed by existing instruments. A well-designed, State-specific instrument can
capture information that is of great interest and relevance to the questions under study.
However, assuring the reliability and validity of new surveys is costly and takes time. Without
such reliability and validity testing, surveys can have serious methodological shortcomings and
be vulnerable to criticism. A 1997 report by the Department of Health and Human Services
(DHHS) Inspector General, “Medicaid Managed Care: The Use of Surveys as a Beneficiary
Protection Tool” found that some State-specific surveys were designed to meet too many
objectives and provided little useful data. When developing new survey instruments and methods
for purposes of External Quality Review, the State Medicaid agency should involve an
individual with experience in survey design and methodology and assure testing of the
instrument for validity and reliability.
Option: adapting existing surveys. States also may decide to adapt an existing questionnaire
by adding or deleting items, modifying questions, or using only certain groups of questions to
respond to a specific issue under study. This approach provides States with the flexibility to add
or change the data to be collected, while providing many of the advantages of using a pre-existing questionnaire. However, this approach also can raise questions about the validity and reliability of the new or modified questions, as well as the survey overall. Validated instruments are tested “as a whole,” and modifications can change the focus and purpose of the tool.
Adapting an existing survey is easier if CAHPS is used, as the CAHPS survey instruments have been specifically designed with opportunities to customize the questionnaire by adding questions from a list of supplementary optional items, as well as State-designed questions. Most pre-existing questionnaires are not designed to accommodate such modifications and require additional testing. Modifications and additional testing are time-intensive and will likely result in increased costs. When a State undertakes to adapt an existing tool for its purposes, it should obtain the advice of an individual knowledgeable in survey design (preferably knowledgeable about the original survey) to advise on the modification and how to appropriately test for reliability and validity.

ACTIVITY 4: Development of the sampling plan.

When the State decides to collect information directly from beneficiaries, providers, or other units of analysis, it must decide how many “units” will need to be surveyed. A State could decide to take a “census;” i.e., ask every individual entity that the survey is designed to represent to answer the survey questions. For example, a State could decide to survey all Medicaid beneficiaries or all Medicaid providers. However, because contacting all these entities is expensive, surveys typically are administered to only a portion (a “sample”) of the target population. Whenever a sample, as opposed to a census, is used, it is very important to select the sample members in a way that represents as closely as possible the population about whom information is desired - both those included in the sample, and those not selected to respond to the survey.
Step 1: Identify the study population.
Sample selection begins with the identification of the population to be studied; e.g., all Medicaid
beneficiaries enrolled in a State’s Medicaid managed care initiative, or all children with special
health care needs. The “population” represents all units of analysis (sometimes referred to as
“the universe”). A good sample is a smaller version of the population to be studied and is
representative of the population in terms of certain pre-identified characteristics (e.g., race, age,
MCO/PIHP affiliation.)
Step 2: Define the sample frame.
Once the target population for the survey is identified, the population from which the sample will
be drawn must be further refined. This is the “sample frame.” The sample frame lists all
members of the study population eligible for the study and is used to select the sample.
Questions used to define individuals eligible for the survey (the sample frame) might include, for
example:
Is a minimum period of enrollment in a MCO/PIHP necessary for individuals who will be
surveyed? Must the beneficiary be continuously enrolled in a specific MCO/PIHP for a
specified period of time to be considered an enrollee for purposes of the survey?
Frequently, a minimum period of continuous enrollment, such as six months, is required
to increase the likelihood that enrollees have had contact with their specific MCO/PIHP.


Should any MCO/PIHP enrollees be excluded? Why? For example, if a study population
is to address children with special health care needs, should the survey be limited to only
children receiving SSI or Title V; should it address all children that a MCO/PIHP might
identify as having special health care needs; or should it include all children whose
family might identify them as having a special health care need?
Should only one member per household be surveyed?
Will children be surveyed?
When is open enrollment and how will it impact survey administration?
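Once the State answers these questions, the eligibility rules can be applied mechanically to the enrollment file. The sketch below assumes hypothetical enrollee records and two illustrative rules (six months of continuous enrollment, adult respondents only); the actual criteria are set by the State:

```python
from datetime import date

def months_enrolled(start, as_of):
    """Whole months of continuous enrollment between two dates."""
    return (as_of.year - start.year) * 12 + (as_of.month - start.month)

def eligible(enrollee, as_of, min_months=6):
    """Example sample-frame rules: six months of continuous enrollment
    and adult respondents only (both rules are illustrative)."""
    return (months_enrolled(enrollee["enroll_start"], as_of) >= min_months
            and enrollee["age"] >= 18)

# Hypothetical study population records.
population = [
    {"id": "A", "enroll_start": date(2001, 1, 1), "household_id": 1, "age": 34},
    {"id": "B", "enroll_start": date(2002, 3, 1), "household_id": 2, "age": 29},
    {"id": "C", "enroll_start": date(2001, 6, 1), "household_id": 1, "age": 8},
]
frame = [e for e in population if eligible(e, as_of=date(2002, 5, 1))]
# A "one member per household" rule would be applied here as a further filter.
print([e["id"] for e in frame])  # ['A']
```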
Step 3: Determine the type of sampling to be used.
There are two basic types of statistical sampling: probability sampling and non-probability
sampling. Probability sampling leaves selection of the units to be included in the survey up to
chance. It is designed to prevent any biases in the sample selected by ensuring that each segment
of the study population is represented in the sample to the extent it composes part of the
population under study. Simple random sampling is the most common form of probability
sampling used in survey research. Population members are generally assigned a number, and
random numbers generated by a computer or table of random numbers are used to select
members from the population. This sampling approach ensures that all members of the target
population have an equal chance of selection. The sample is thus assumed to be fully
representative of the population.
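A simple random draw can be sketched in a few lines; the frame of enrollee identifiers below is hypothetical, and recording the random seed lets the draw be reproduced on review:

```python
import random

def simple_random_sample(frame, n, seed):
    """Draw n members from the sample frame, each with equal probability.

    The seed is recorded so the draw can be reproduced and audited.
    """
    return random.Random(seed).sample(frame, n)

# Hypothetical sample frame of 10,000 eligible enrollee IDs.
frame = [f"ENR{i:05d}" for i in range(1, 10001)]
sample = simple_random_sample(frame, 300, seed=2002)
print(len(sample), len(set(sample)))  # 300 distinct members, no repeats
```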
Stratified random sampling also may be used. This technique calls for dividing the population
(on the basis of prior knowledge about the population) into specific, pre-identified strata or
subgroups that are homogeneous with respect to certain characteristics (e.g., ethnicity, age,
diagnosis). A random sample is then taken from each stratum or subgroup. Stratification is done
both to improve the accuracy of estimating the total population’s characteristics and to provide
information about the characteristics of interest within subgroups. However, stratified random
sampling requires more information about the population, and can also require a larger total
sample size. As a result, it is typically more expensive than simple random sampling. Stratified
sampling may also involve “weighting” the sample. In this process, a survey would select a disproportionately larger number of units of analysis from one or more of the strata to allow the survey to produce information on that particular stratum; e.g., individuals dually receiving both Medicare and Medicaid.
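Stratified selection simply repeats a random draw within each stratum. In the hypothetical sketch below, a small dual-eligible stratum is deliberately over-sampled so it can be reported on separately:

```python
import random

def stratified_sample(frame_by_stratum, n_by_stratum, seed):
    """Draw a separate simple random sample from each pre-identified stratum."""
    rng = random.Random(seed)
    return {stratum: rng.sample(members, n_by_stratum[stratum])
            for stratum, members in frame_by_stratum.items()}

# Hypothetical frame split by eligibility group; the "dual" stratum is small
# but sampled at a disproportionately high rate (weighting).
frame = {"TANF": [f"T{i:04d}" for i in range(5000)],
         "dual": [f"D{i:04d}" for i in range(400)]}
sample = stratified_sample(frame, {"TANF": 300, "dual": 150}, seed=1)
print({stratum: len(members) for stratum, members in sample.items()})
```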
Non-probability sampling methods are based on the decisions of those administering the survey,
and not on random chance. For example, a State might wish to obtain information about new
Medicaid beneficiaries by surveying all new beneficiaries eligible in a particular month. It might
wish to obtain information on average stage of pregnancy at time of the establishment of
Medicaid eligibility by examining the records of women giving birth at a particular hospital. In
such cases, the information obtained by the State will have been biased. Non-random sampling
methods such as these also do not lend themselves to statistical analysis. Because of the risk of biased results and the obstacles to statistical analysis, non-probability sampling is discouraged. However, at times it can be an appropriate and efficient way of collecting needed information.
Step 4: Determine needed sample size.
A sample only collects information from a fraction of the population. Based on data obtained
from the sample, estimates are produced about the utilization of health care services, satisfaction
with care, quality of care or other issues. However, these estimates are at risk of being
“inaccurate” to the extent that: (1) they are different from the population’s true values because
only a portion of the entire population was involved in producing the estimates, and (2) a
different sample will give different estimates. Fortunately, the risk of inaccuracy can be reduced
by carefully selecting the sample size.
Two factors influence the determination of the appropriate sample size for the survey2: (1) the level of certainty desired for the estimate to be produced; and (2) the margin of error that is defined as acceptable. Combining these two factors establishes the parameters for the estimates to be produced by the survey (e.g., a survey would target a 95% confidence level with a 5% margin of error). The greater the certainty and the smaller the margin of error
desired, the larger the sample will need to be (and the more expensive the survey will be). In
addition, over-sampling (sampling for a greater number of entities than are actually needed to
produce valid estimates) is sometimes undertaken, to compensate for an anticipated percent of
non-respondents to the survey. Because of these and other factors; e.g., whether or not the State
intends to compare MCOs/PIHPs to one another, the EQRO will need to determine the sample
size either as directed by the State, or, if the State allows the sample size to be determined by the
EQRO, in consultation with a qualified statistician. The level of certainty and margin of error
must be approved by the State.
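For a survey estimating proportions, the standard formula n = z^2 p(1 - p) / e^2 ties sample size to the chosen confidence level (via the z-score) and margin of error. Setting p = 0.5 is the conservative assumption described in the footnote, and dividing by the expected response rate implements over-sampling. A sketch, with illustrative numbers:

```python
import math

def required_sample_size(z, margin_of_error, p=0.5, expected_response_rate=1.0):
    """n = z^2 * p * (1 - p) / e^2, inflated for anticipated non-response.

    p = 0.5 maximizes p * (1 - p): the conservative assumption when the
    true prevalence of a characteristic is unknown or varies by question.
    """
    n = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    return math.ceil(n / expected_response_rate)

# 95% confidence level (z ~ 1.96) with a 5-point margin of error...
print(required_sample_size(1.96, 0.05))                               # 385
# ...over-sampled to offset an anticipated 40% response rate.
print(required_sample_size(1.96, 0.05, expected_response_rate=0.40))  # 961
```

As the prose notes, tightening either parameter (higher confidence or a smaller margin of error) grows the required sample, and with it the survey's cost.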
If the CAHPS survey is used, the EQRO can take advantage of the statistical analysis already
performed by the CAHPS developers and follow the sampling specifications given in the
CAHPS handbook. However, these will differ from the specifications provided by the National
Committee for Quality Assurance (NCQA) in the CAHPS 2.0H manual. (CAHPS 2.0H is an
adaptation of CAHPS developed by the NCQA, in conjunction with AHCPR, for use with
NCQA’s Healthplan Employer Data and Information Set - HEDIS). The CAHPS 2.0H
questionnaire is identical to the CAHPS questionnaire except for the inclusion of the HEDIS
smoking cessation counseling questions as mandatory items. This difference produces different
sampling specifications for CAHPS 2.0 and for NCQA CAHPS 2.0H. NCQA’s CAHPS 2.0H requires a larger sample size in order to obtain adequate numbers of respondents who can answer the smoking questions.
2 A third factor (the true frequency of the occurrence of a characteristic within a population) is also often used to calculate the sample size for some studies. However, since a survey studies multiple questions, and the true prevalence of responses to any of these questions may be variable across questions and change over time, the most prudent course of action is to assume the need for the maximum sample size by assuming that the presence or absence of the characteristic in question is evenly distributed across the population.

Step 5: Specify the sample selection strategy.

Once the sample size is determined, this number of entities must be selected from the sample
frame. The strategy for selecting the sample should specify the following:
How the sample will be drawn. Once the type of sampling has been determined, the
EQRO, State, or some other entity will implement the sample selection procedures to
identify the specific individuals to be surveyed. The State should first approve sample
selection parameters and criteria. If the EQRO or some other entity is drawing the
sample, the EQRO should review the specifications prior to the draw. The EQRO should review the sample selection procedures even if it does not pull the sample itself. This review should include the statistical program used to generate the sample, a scan of the sample frame files for face validity, and a test run of the statistical program on a sample from the data files.
How will the files/sample be delivered and in what format? Information technology staff
may need to be consulted on the mechanism to be used to deliver the sample frame
member files for sample selection and contact. Typical modes of delivery include file transfer protocol (FTP), e-mail, or overnight mail on disk or tape. Formats might include CD-ROM, Excel, 4 mm DAT tape, or reel tape.
Is there a data dictionary for the sample frame or sample file? A data dictionary should
be provided to facilitate immediate use of the sample frame or sample file. The data
dictionary is a listing and description of all data fields found in an electronic or hardcopy
list of sample frame or selected sample members.
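A data dictionary can be as simple as a field-by-field listing. The field names and formats below are hypothetical examples of what a sample file's dictionary might describe:

```python
# Hypothetical data dictionary for a delivered sample file: each field name
# is paired with its meaning and format so the file is usable on arrival.
data_dictionary = {
    "enrollee_id":  "Unique State-assigned identifier (string)",
    "full_name":    "Enrollee name, 'last, first' (string)",
    "mco_id":       "Identifier of the enrollee's MCO/PIHP (string)",
    "enroll_start": "First day of continuous enrollment (YYYY-MM-DD)",
    "phone":        "Contact telephone number (string, may be blank)",
    "language":     "Primary language code, e.g. 'en', 'es' (string)",
}
for field, description in sorted(data_dictionary.items()):
    print(f"{field:<13} {description}")
```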

ACTIVITY 5: Development of the strategy for maximizing the response rate.

The EQRO must employ a strategy for maximizing the response rate that addresses: 1) how respondents in the sample will be contacted; and 2) how the response rate will be maximized and calculated.
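When the response rate is later calculated, completed surveys are divided by the eligible portion of the sample, with members found ineligible (e.g., deceased or disenrolled before the reference period) removed from the denominator. A minimal sketch with hypothetical counts:

```python
def response_rate(completed, sample_size, ineligible=0):
    """Completed surveys divided by the eligible portion of the sample."""
    return completed / (sample_size - ineligible)

# Hypothetical: 1,000 sampled, 60 found ineligible, 412 completed surveys.
print(round(response_rate(412, 1000, ineligible=60), 3))  # 0.438
```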
Step 1: Specify the strategy for contacting individuals to be surveyed.
The strategy for locating and contacting individuals who are to be surveyed (target respondents)
should be developed in detail. Surveys can be administered in a face-to-face interview, by
telephone interview, or by a mailed questionnaire to be completed by the respondent and mailed
back. Because of the importance of achieving a high response rate, considerable research has
been conducted on these approaches to survey administration. Recent studies indicate the
percentage of non-respondents is increasing, making survey research more difficult. Although in
the past survey researchers believed that face-to-face interviewing produced higher response
rates, recent research challenges this view. No single method of survey administration is
believed to be superior in all circumstances. Therefore, the type of survey administration chosen
(e.g., face-to-face interviewing, telephone or mail) should reflect careful consideration of the
response rate as well as cost and time issues. Telephone and mail surveys usually are much less
expensive than in-person interviewing. In-person interviews are expensive because of the time
and cost required when using experienced, well-qualified interviewers.


Specific data needed by the EQRO to administer the survey should be identified; e.g., an
individual’s full name, address, phone number, date of birth, primary language and, if relevant,
the name of the MCO/PIHP the individual is enrolled in, and the length of time of enrollment. It
is also important to consider the completeness of the information needed to contact individuals
selected for the survey, and to what extent, if any, necessary information such as addresses and
telephone numbers can be verified through other means; e.g., the MCO’s/PIHP’s enrollee files.
Plans for maximizing the probability that respondents can be located and contacted also should
be outlined (e.g., sending names in the sample to a telephone number look-up vendor or using a
change-of-address database vendor).
The strategy also should address the steps to be undertaken if the actual rates of contacting
individuals are lower than expected. These may include mailing a reminder postcard or second
survey, or a follow-up phone call to individuals failing to respond to a mail survey; conducting
repeat calls in a telephone survey; using special mail delivery services for respondents who have
no telephones; or tracking and following up on the number of “bad number” or “undeliverable”
cases by using directory assistance, reverse directories, CD-ROM address directories, national
change-of-address database vendors, and telephone number look-up vendors. However, the data
collection strategy will also need to consider the cost implications of using these techniques. For
example, the CAHPS developers have estimated that tracking respondents through directory
assistance adds $.75 per case, tracking them through reverse directories adds $3 per case, and
tracking them through the use of telephone number look-up vendors adds $4 to $8 per case.
Step 2: Maximizing and calculating the response rate.
The response rate generally is the number of entities that respond to the survey divided by the
number of entities selected to respond to the survey. However, the exact definition is generally
more detailed and complex than this; there are a number of different ways to calculate response
rates, as well as different strategies for reporting these rates. In addition, the definition of
response rate may differ depending on mode of administration. For example, mail surveys
typically may exclude non-deliverable addresses from the denominator while telephone
interviews may exclude people with non-working telephone numbers and those who no longer
reside in the household from the denominator. The EQRO and the State must determine an
appropriate response rate methodology. Given the response rate methodology chosen, the State
will define acceptable levels of response rates.
The response rate is important because a low rate reduces the ability to make accurate
conclusions (statistical inferences) about the population under study. Those individuals who do
not respond to the survey may differ systematically from individuals who respond, thereby
biasing the results.
Medicaid beneficiaries typically have a low survey response rate. Medicaid populations tend to
have lower literacy levels than average, are more likely to speak a primary language other than
English, are highly mobile, and have high rates of inaccurate or unavailable telephone numbers
and mailing addresses. As a result, surveys of Medicaid beneficiaries tend to require extra effort
to maximize response rates. These efforts can include using an expansive contact strategy that
emphasizes tracking methods, and a mixed-mode approach to survey administration that
typically combines mail and telephone survey procedures. However, even with these additional
measures, response rates for Medicaid beneficiaries are likely to be lower than for other
populations. The CAHPS developers suggest that the target response rate for administering
CAHPS to Medicaid beneficiaries should range from 40% to 60%. They also recommend that
survey vendors focus on strategies that promote high response rates and develop a plan of
corrective actions to take if the response rate falls short of the goal.
No matter what mode of survey administration is selected, research indicates that a number of
strategies can be used to achieve better response rates. For example, if mail surveys are used, all
of the following strategies should be considered:
- Sending an advance (pre-notification) letter. When developing an in-house letter, the
  CAHPS developers suggest that a number of points should be stressed in the
  correspondence:
  - Anonymity and confidentiality
  - How the enrollee was selected for participation
  - How the results will benefit the respondent
  - Why the survey should be completed
  - How to return the survey
- Using follow-up contacts (e.g., reminder postcards, second mailing of the questionnaire,
  phone contact, or special postage mailing of a second questionnaire);
- Emphasizing survey sponsorship (e.g., to indicate State agency sponsorship, use a cover
  letter on State government letterhead signed by the Agency Director);
- Using personalized correspondence with respondents (e.g., addressing all correspondence
  to the respondent);
- Providing stamped return envelopes; and
- Using special postage for replacement questionnaire mailings (e.g., overnight mail or
  certified mail).

Translation of surveys and correspondence into languages other than English also may be
important for Medicaid populations. If not addressed, language issues may lead to lower
response rates because surveys cannot be completed if they cannot be read. Failure to provide
surveys in the respondent’s primary language may also result in excluding vulnerable segments
of the population. Unfortunately, the primary language of specific individuals in the survey
sample may be difficult to determine. One solution may be to translate a sentence in the
introductory letter and provide a phone number for the respondent to call for more information.
During the call, the respondent may request a translated survey, complete the survey over the
phone, or schedule a more convenient telephone interview time. However, survey translations
and beneficiary telephone hot lines can have significant cost implications for the project.


The survey methodology should specify the required response rate established by the State,
procedures for handling missing data, and the specific methods for calculating both raw and
adjusted response rates. For example:
Raw response rates can be calculated as:

    Number of completed surveys / Total number of targeted survey respondents

Adjusted response rates can be calculated as:

    Number of completed surveys / (Total number of targeted survey respondents minus those
    deceased or ineligible due to hospitalization, cognitive impairment, or other reasons
    specified in the methodology)
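The two formulas above can be expressed as a short function. The numbers below are purely illustrative; the State-approved response rate methodology would govern the actual definitions and exclusion categories.

```python
def response_rates(completed, targeted, ineligible=0):
    """Raw and adjusted response rates per the formulas above.

    completed:  number of completed surveys
    targeted:   total number of targeted survey respondents
    ineligible: targeted respondents who were deceased or ineligible due to
                hospitalization, cognitive impairment, or other reasons
                specified in the methodology
    """
    raw = completed / targeted
    adjusted = completed / (targeted - ineligible)
    return raw, adjusted

# Illustrative numbers only: 450 completes out of 1,000 targeted,
# of whom 100 were later found to be ineligible.
raw, adjusted = response_rates(completed=450, targeted=1000, ineligible=100)
assert raw == 0.45 and adjusted == 0.50
```

Note that the adjusted rate is always at least as high as the raw rate, which is why the methodology must specify exactly which cases may be excluded from the denominator.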

ACTIVITY 6:

Implementation of the survey.

Based on the information obtained in Activities 1, 2, and 3, and the decisions made in activities 4
and 5, the EQRO should prepare a work plan to govern the implementation of the survey. The
work plan should specify routine aspects of project management including key staff and their
responsibilities, timelines, and deliverables; the degree to which each phase of the survey
process will be documented; and a plan of action for identifying and resolving problems. It
should also address final report production including format, content, and specification of any
needed approval by the State or other entities, and number of reports to be submitted. It should
include a description of any reports to be publicly released by the EQRO, if this is part of the
EQRO’s scope of work.
Key methodological issues to be addressed in the work plan include:
- procedures to be followed in formatting, reproducing, and distributing the survey
  questionnaire;
- procedures for assuring the confidentiality of the data;
- data collection, data entry, and internal quality control methods (the EQRO should
  determine with the State procedures for handling responses that fail edit checks, the
  treatment of missing data, and the criteria for determining usable/complete surveys);
- the data analysis plan, including statistical methodology; and
- production of data files and their format and delivery, if applicable.

The work plan should be approved by the State prior to implementation.


If it is feasible, the survey vendor should provide the State with a mock-up of survey results prior
to administering the survey. This will help assure that the information to be produced by the
survey is consistent with the State’s planned use of survey results. The implementation of the
survey should be in accord with the State-approved work plan.

ACTIVITY 7: Preparation and analysis of data obtained from the survey.
Once the surveys have been completed and returned to the EQRO, preparation of the data for
analysis, and data analysis occurs. This includes data quality control procedures (e.g., cleaning
and editing), data analysis, and production of data files.
Data quality control. The EQRO should develop, implement, and document quality control
procedures for all survey administration activities. In particular, quality control should address:
recording receipt of mail surveys; telephone administration of surveys; handling of respondent
information and assistance telephone calls; coding, editing, and keying in or optical scanning of
survey data; data tabulation and analysis; and preparation of survey reports.
Data analysis. Each MCO/PIHP should be treated as the basic unit of analysis and reporting.
The analysis should include simple statistical procedures such as calculation of measures of
central tendency and frequency distributions for specific survey questions. In addition,
differences in survey results among MCOs/PIHPs may be examined using standard statistical
tests. For example, the F-test may be used to determine if the average scores of any MCO/PIHP
differ significantly from those of other MCOs/PIHPs. The F-test tests the hypothesis that all
MCO/PIHP means (averages) are equal by comparing the relative size of two variances. The
F-test will be significant if at least one MCO or PIHP mean is statistically different from any
other MCO or PIHP mean. If the F-test indicates that there are significant differences, a series
of t-tests may be conducted to identify which MCO/PIHP differs from the others. The t-test
tests the hypothesis that two specific MCO/PIHP means are significantly different from each
other. A significant t-test indicates that the two MCOs/PIHPs represent two populations with
different means.
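As an illustration of the comparison described above, the one-way ANOVA F statistic (the ratio of between-group to within-group variance) can be computed directly; judging its significance would still require comparing it against an F distribution critical value, for example with a statistical package. The MCO scores below are invented for illustration only.

```python
from statistics import mean

def one_way_f(groups):
    """One-way ANOVA F statistic: between-group variance / within-group variance."""
    k = len(groups)                      # number of MCOs/PIHPs compared
    n = sum(len(g) for g in groups)      # total number of respondents
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented satisfaction scores for three MCOs, four respondents each.
scores = [[7, 8, 6, 7], [5, 4, 6, 5], [8, 9, 7, 8]]
f_stat = one_way_f(scores)
# A large F statistic relative to the F(k-1, n-k) critical value would
# suggest at least one MCO mean differs from the others.
```

A large F here would then motivate the pairwise t-tests described above to locate which plans differ.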
In addition, an EQRO may want to consider multivariate regression and/or analysis of
covariance techniques to further identify meaningful differences between MCOs/PIHPs while
controlling for respondent characteristics. For example, a driver analysis and a quadrant analysis
may be used to gain more insight into the current survey measures and possible approaches to
future improvement. Driver analysis refers to statistical techniques that may be used to identify
the important factors or “drivers” that significantly affect some outcome measure. In this
context, the driver analysis will identify the most important factors (drivers) that significantly
affect overall scores. Individual questions can be analyzed to determine their effect on overall
scores. These identified drivers, along with their current measures, will then be subjected to a
quadrant analysis that compares the importance of the drivers against the current survey
measures on these drivers. For example, if access to specialists has been identified as a driver of
patient satisfaction, then depending on the importance of access as a driver and the MCO/PIHP
current performance on access, there could be four different combinations or quadrants: (1) high
importance and high performance; (2) high importance and low performance; (3) low importance
and low performance; and (4) low importance and high performance. High importance means
that health care access has an important effect on satisfaction, and high performance means that
health care access has been well addressed. Therefore, quadrant (1) identifies a strength; i.e.,
satisfactory performance on a crucial driver. Similarly, quadrant (2) identifies areas needing
improvement; quadrants (3) and (4) are areas of low leverage and insignificance. The most
important insight that can be gained through quadrant analysis is the identification of
improvement opportunities. MCOs/PIHPs can then make strategic decisions about where to
devote their efforts and resources in order to improve MCO/PIHP scores.
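The four quadrants can be sketched as a simple classification. The driver names, the importance and performance scores, and the 0.5 cutoff below are purely illustrative assumptions; in practice, importance would come from the driver analysis and performance from the current survey measures.

```python
def quadrant(importance, performance, cutoff=0.5):
    """Assign a driver to one of the four quadrants described above.

    importance and performance are scaled 0-1; cutoff is an illustrative
    threshold splitting 'high' from 'low'.
    """
    if importance >= cutoff and performance >= cutoff:
        return "(1) strength"
    if importance >= cutoff:
        return "(2) improvement opportunity"
    if performance < cutoff:
        return "(3) low leverage"
    return "(4) low leverage"

# Invented driver scores: (importance, current performance).
drivers = {
    "access to specialists": (0.9, 0.3),
    "office wait times": (0.8, 0.8),
    "written materials": (0.2, 0.7),
}
classified = {name: quadrant(i, p) for name, (i, p) in drivers.items()}
assert classified["access to specialists"] == "(2) improvement opportunity"
```

In this invented example, access to specialists lands in quadrant (2), marking it as the kind of improvement opportunity toward which an MCO/PIHP would direct its resources.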
Consistent with the purposes and objectives of the survey, populations within each MCO/PIHP
could also be analyzed and reported separately. For instance, the State may be interested in
whether any particular variable differs significantly across geographic locations, demographic
groups, socio-economic groups, or other identifiable subgroups (e.g., frequent and infrequent
users of health care services).
Survey results may be adjusted for external factors that influence results. For example, studies
suggest that age, education, and self-reported health status, among others, have a significant
effect on patient satisfaction.
Regardless of the statistical analyses performed, the display of survey findings is important.
Statistical graphs should accompany narrative text to aid comparison and interpretation. For
example, bar graphs and comparison charts, such as those recommended by CAHPS, convey
important information about the performance of each MCO/PIHP and indicate meaningful
differences among MCOs/PIHPs.

ACTIVITY 8:

Documentation of the survey process and results.

The EQRO should prepare and submit reports to the State that document the survey process and
results, including:
1. survey purpose and objectives;
2. technical methods of survey implementation and analysis (Activities 2 - 7);
3. data obtained;
4. conclusions drawn from the data;
5. a detailed assessment of each MCO/PIHP's strengths and weaknesses with
   respect to (as appropriate) access, quality, and/or timeliness of health care
   furnished to Medicaid enrollees; and
6. as the State determines methodologically appropriate, comparative information
   about all MCOs/PIHPs.

The final report also should include a detailed description of survey methodology problems,
lessons learned, and recommendations for improving future efforts. If the scope of work with
the State requires the EQRO to produce a report on survey results for public reporting, the
content and format of this report should conform to the mock-up specifications previously
approved by the State. Recently, many States have shown increasing interest in using survey data
for public reporting. However, fitting survey information into the design and distribution of other
beneficiary materials and agreeing on the report’s length, design, and graphics can prove
difficult. Therefore, it is important for the EQRO and the State to agree on how to present the
results before the production of the reports.


ATTACHMENT A
Works Consulted in Protocol Development
Babbie, E. “The Practice of Social Research”. 6th Edition. Chapter 10. Wadsworth Publication
Company: Belmont, CA. 1992.
Brown, J., Nederend, S., Hays, R., Short, P.F., & Farley, D. “Special Issues in Assessing Care of
Medicaid Recipients.” CAHPS Papers for Medical Care Supplement. December 1998.
Cella, D., Hernandez, L., Bonomi, A., Corona, M., Vaquero, M., Shiomoto, G., Baez, L.
“Spanish Language Translation and Initial Validation of the Functional Assessment of Cancer
Therapy Quality-of-Life Instrument.” Medical Care. Volume 36, Number 9: 1407-1418. 1998.
Department of Health and Human Services Office of Inspector General. “Medicaid Managed
Care: Use of Surveys As A Beneficiary Protection Tool.” 1996 Dec. OEI-01-95-00280.
Dillman, Don A., Sinclair, Michael D., Clark, Jon R. “Effects of Questionnaire Length,
Respondent-Friendly Design, and a Difficult Question on Response Rates for Occupant-Addressed
Census Mail Surveys”. Public Opinion Quarterly. Volume 57: 289-304. 1993.
Dillman, Don A. Mail And Telephone Surveys: The Total Design Method. New York, A
Wiley-Interscience Publication, 1978.
Donelan, Karen. “How Surveys Answer A Key Question: Are Consumers Satisfied With
Managed Care?” Managed Care, Volume 5, Issue 2: 17-24. February 1996.
Gallagher, P.M., Fowler, F.J., & Stringfellow, V. “Respondent Selection by Mail, To Obtain
Probability Samples of Enrollees in a Health Care Plan”. CAHPS™ Papers for Medical Care
Supplement. 1998 Dec. (In Publication).
Gold, Marsha & Eden, Jill. “Monitoring Health Care Access Using Population-Based Surveys:
Challenges in Today’s Environment, Information For Policymakers.” Methods Brief, August
1998.
Harris-Kojetin, L.D., Fowler, F.J., Brown, J.A., & Sweeney, S.F. “The Use of Cognitive
Testing for Developing and Evaluating CAHPS 1.0 Survey Items.” CAHPS™ Papers for Medical
Care Supplement. 1998 Dec.
Hays, R.D., Shaul, J.A., Williams, V.S.L., Lubalin, J.S., Harris-Kojetin, L., Sweeney, S.F., &
Cleary, P.D. “Psychometric Properties of the CAHPS™ 1.0 Survey Measures”. Revised paper,
98020; Medical Care. CAHPS™ Papers for Medical Care Supplement. December 1998. (In
Publication).


Kasper, Judith D. “Part III: What Can We Realistically Expect from Surveys? Constraints Of
Data and Methods; Asking About Access: Challenges for Surveys in a Changing Healthcare
Environment.” Health Services Research. July 1997.
Kassberg, Maria & Wynn, Paul. “What Falls Through The Cracks When Quality Is Measured?”
Managed Care, Volume 6, Issue 3. 22-28. March 1997.
Krowinski, W.J. & Steiber, S.R., “Measuring and Managing Patient Satisfaction.” Chapters 5 &
6. 2nd Ed., American Hospital Publishing, Inc. 1996.
Leedy, P.D. “Practical Research-Planning and Design.” 5th Edition. Chapter 2 & 8. MacMillan
Publishing Company: New York. 1993.
National Committee for Quality Assurance (NCQA), HEDIS 1999, HEDIS Protocol for
Administering CAHPS 2.0H Survey. Volume 3, 1998.
Weidmer, B., Brown, J., & Garcia, L. “Translation Issues: Translating the CAHPS™ Survey
Items into Spanish.” CAHPS Papers for Medical Care Supplement. December 1998.
Wimmer, R.D. & Dominick, J.R., “Survey Research Basics.” Mass Media Research,1997.
(http://web.utk.edu/~toddc/survey)


ATTACHMENT B
SURVEY PLANNING AND IMPLEMENTATION -- DOCUMENTATION
SURVEY ELEMENT

DOCUMENTATION

Activity 1: Survey Purpose(s) and Objective(s)
-There is a written statement of survey purpose(s)
that addresses access, timeliness, and/or
quality of care.
-Unit(s) of analysis are clearly stated and include
individual MCOs/PIHPs.
-Study objectives are clear, measurable, and in
writing.
Activity 2: Survey Audience(s)
-Audiences for survey findings are identified.
- If consumers are an intended audience, survey
strategy will allow MCO-to-MCO comparisons.
- If MCOs/PIHPs are an intended audience, the
State has a policy regarding MCO/PIHP access
to individual enrollee’s survey responses.
- If information on the State’s program as a whole
is desired, the design of the survey will produce
information on the State’s managed care
initiative as a whole.


Activity 3: Survey Instrument
-If using an existing survey, there is evidence of
its reliability.
-If using an existing survey, there is evidence of
its validity.
-If using a newly developed survey, there is
evidence that an individual with experience
in survey design and methodology was
involved in the development of the survey.
-If using a newly developed survey, there is
evidence of reliability testing.
-If using a newly developed survey, there is
evidence of validity testing.
-If using an adapted survey, an individual(s) with
expertise in survey design (preferably
expertise in the original survey instrument)
participated in the adaptation and testing of
the instrument.

-If using an adapted survey, there is evidence of
reliability testing.
-If using an adapted survey, there is evidence of
validity.


Activity 4: Sampling Plan
-Population to be studied is clearly identified.
-Sample frame clearly defined.
-Sampling strategy (simple random, stratified
random, non-probability) is appropriate to
study question.

-If random sampling is used, sampling process is
valid.
-If stratified random sampling is used, sampling
process is valid.

-If non-probability sampling is used, there is
clear and strong evidence why random
sampling is not feasible.

-Sample size is determined using reasonable
statistical parameters and is appropriate to
survey purpose and objectives.

-Sample selection processes are in place,
including: specifications for drawing the
sample and assuring validity of sample
frame. A test of the sampling specifications
is to be performed on a portion of the sample
frame file.


-Methods for delivering sample files are in place.

-Data dictionary is provided for the sample frame
file.

Activity 5: Response rate
-Strategy for locating and contacting target
respondents is in place. Specify if mail,
phone, face-to-face, or combination strategy
is used.
-The required response rate is specified by the
State.
-Strategies for maximizing response rates are in
place.

-Procedures for handling missing data and for
calculating raw and adjusted response rates
are in place.
Activity 6: Implementation of the survey


-There are procedures for formatting,
reproducing, and distributing the survey
questionnaire.

-There are procedures for assuring data
confidentiality.

-Data collection, data entry and internal quality
control methods are in place.

-The data analysis plan is in place.

-Data file production and delivery mechanisms
are in place.
Activity 7: Data Preparation and Analysis
-Quality control procedures are in place,
including: administration of surveys; receipt
of survey data; information and assistance;
coding, editing and entering data; procedures
for missing data and data that fails edits.
-Data preparation and analysis procedures are in
place.

-Final report includes narrative text accompanied
by graphical display of data.
Activity 8: Survey Documentation – Final report includes:
-Survey purpose and objectives
-Technical methods of survey implementation
and analysis
-Description of data obtained

-Conclusions drawn from data
-Detailed assessment of each MCO’s/PIHP’s
strengths and weaknesses with respect to (as
appropriate) access, quality, and/or
timeliness of health care furnished to
Medicaid enrollees.

-Comparative information about all MCOs/PIHPs
(as directed by the State).

-Problems with survey methodology.

-Lessons learned.
-Recommendations for future survey efforts.


VALIDATING SURVEYS
I. PURPOSE OF THE PROTOCOL

Surveys of Medicaid beneficiaries or other entities are becoming an increasingly common
method of health care quality measurement. Surveys can provide information that is not
generally available through clinical or administrative records. Surveys also can capture
beneficiaries’, families’, caregivers’, providers’, or other relevant parties’ perceptions of access
to care and quality of health services. However, in order for a survey to produce valid and
reliable information (upon which to make valid and reliable judgements about health care access,
quality, and/or timeliness), the design and administration of a survey must be carried out in
accord with certain fundamental principles of survey design and administration.
If a State desires its External Quality Review Organization (EQRO) to incorporate the findings
of a survey (one that is not administered by the EQRO at the direction of the State Medicaid
agency) into the required EQRO activity of “analysis and evaluation . . . of aggregated
information on timeliness, access, and quality of the health care services furnished to Medicaid
recipients by each MCO . . . ,” 1 the EQRO must determine the extent to which the survey has
produced valid and reliable information. This determination is necessary in order for the EQRO
to determine how much significance to place on a particular survey’s findings when the EQRO is
aggregating and analyzing information from multiple sources about access, quality, and/or
timeliness of care. This protocol describes the steps that need to be undertaken by an EQRO to
evaluate the methodological soundness of a particular survey, if the survey is to be included as a
component of the external, independent quality review required under federal law for Medicaid
managed care organizations (MCOs) and prepaid inpatient health plans (PIHPs). In the Centers
for Medicare & Medicaid Services’ (CMS) (formerly the Health Care Financing Administration
(HCFA)) proposed rule on external quality review of Medicaid MCOs and PIHPs, administration
or validation of provider surveys of quality of care is identified as an optional EQRO activity, to
be used at the discretion of the State. 2

II. ORIGIN OF THE PROTOCOL

This protocol is a variation of the protocol on “Administering Surveys” and is based on the same
generally accepted principles of survey design and administration that guided that protocol. This
protocol was also guided by the same academic and research publications. These are cited in
Attachment A.
1 Proposed rule on Medicaid Program; External Quality Review of Medicaid Managed Care Organizations.
Federal Register Vol. 64, No. 230, December 1, 1999.
2 Ibid.


III. PROTOCOL OVERVIEW

This protocol applies to all types of surveys; for example, beneficiary surveys and provider
surveys. It does not assume the use of one particular survey instrument, sampling methodology
or approach to analyzing or reporting study results because differences in the purposes of
surveys preclude any one questionnaire, sampling methodology and analysis and reporting
strategy from meeting all needs. In a number of instances, however, the protocol addresses the
Consumer Assessment of Health Plans Study (CAHPS) surveys and reporting formats, which
were developed by the Agency for Healthcare Research and Quality (AHRQ) (formerly the
Agency for Health Care Policy and Research - AHCPR) through cooperative agreements with
the RAND Corporation, Harvard University, and the Research Triangle Institute. CAHPS
surveys are frequently used by States to evaluate Medicaid beneficiaries’ experiences with
managed care.
In this protocol, survey validation is limited to review of survey procedures. It does not include
collecting survey data anew from the initial survey respondents to verify their responses. Time,
resource requirements, and the potential demands on survey respondents typically make this
activity infeasible.
The protocol specifies seven activities that must be undertaken to assess the methodological
soundness of a given survey:
1. Review survey purpose(s) and objective(s)
2. Review intended survey audience(s)
3. Assess the reliability and validity of the survey instrument
4. Assess the sampling plan
5. Assess the adequacy of the response rate
6. Review survey data analysis and findings/conclusions
7. Document evaluation of survey

IV. PROTOCOL ACTIVITIES

The seven activities comprising this protocol are discussed below. EQROs should use a work
sheet such as that found in Attachment B to assist in performing and documenting the validation
process for the survey under review.
ACTIVITY 1:

Review survey purpose(s) and objective(s).

In order to have a sufficient understanding of the design, administration, and analysis of the
survey, the EQRO should communicate with the entity(ies) that administered the survey to
review and understand key decisions about the survey’s purpose(s) and objective(s). These
decisions will have shaped the survey methodology, administration, and analytical approach. The
EQRO should be able to answer the following question: “What did the survey sponsor want to
learn by administering the survey?” Although a survey sponsor may have multiple purposes for a
given survey, for purposes of this protocol, one of the stated purposes of the survey must have
been to assess -- either alone or in combination -- the access, quality, and/or timeliness of care
received by Medicaid beneficiaries enrolled in individual MCOs/PIHPs.
The purpose(s) and objective(s) of the survey will also have determined the survey’s “unit of
analysis.” The unit of analysis refers to the type of entity about which the survey sponsor wishes
to obtain information. The unit of analysis may be, for example: individual MCOs/PIHPs, the
State’s managed care initiative as a whole, or a particular grouping of providers. Thus, the unit
of analysis also greatly influences the design, administration, and analysis of the survey.
Although the survey under review may have addressed multiple units of analysis, at least one of
the units of analysis must have been individual MCOs/PIHPs participating in the survey; i.e., the
survey must have been designed to provide representative and generalizable information about
individual MCOs/PIHPs.
Following from the survey’s purpose(s) and unit(s) of analysis, more detailed objectives should
have been explicitly stated in writing. Ideally, objectives should be concise, explicit and
measurable. For example, if the purpose of the study was to discover how satisfied Medicaid
managed care enrollees were with their health care, more specific objectives might include:
• Determining if enrollees of individual MCOs/PIHPs are satisfied with their access to
specialty care.
• Determining if enrollees of individual MCOs/PIHPs are involved in planning for their own
treatment.
• Determining if enrollees of individual MCOs/PIHPs are satisfied with the quality of their
interactions with their primary care provider.
The survey sponsor should provide documentation of the survey purpose(s), explicitly stated
objectives, and a description of the unit of analysis.

ACTIVITY 2:

Review intended audience(s) for survey findings.

The EQRO also should have an understanding of the intended audiences for the survey results.
Typically, the audiences may include:
- State Medicaid agencies that plan to use the survey results as part of their contracting
strategy for improving delivery of health services;
- Medicaid beneficiaries and advocates who are choosing between FFS and MCOs/PIHPs or
among Medicaid MCOs/PIHPs. Survey information increasingly is viewed as one
method for allowing the consumer to make an informed choice. This type of survey effort
would typically enable comparisons of MCOs/PIHPs;
- MCO/PIHP managers and providers who can use the survey results to identify areas of
superior health service delivery performance as well as areas that need improvement.


- State policy makers who can use survey findings to monitor how Medicaid beneficiaries
perceive the care they receive.
The intended audience for a survey can influence the selection of the survey instrument, the
wording of individual survey questions, the type of data analysis performed, and the selection
and presentation of particular survey findings. It also can inadvertently introduce bias in the
survey process. The EQRO should understand the intended use of the survey, as one part of its
overall evaluation of the reliability and validity of the survey.

ACTIVITY 3: Assess the reliability and validity of the survey instrument.

A survey may have been administered using: 1) a pre-existing instrument that was developed
prior to the survey for use by others; 2) a new instrument developed specifically for the
particular survey under review; or 3) an adaptation of a pre-existing instrument. Regardless of
which of these three options a survey sponsor employed, the EQRO will need to determine the
validity and reliability of the survey instrument as a critical step in deciding how much
confidence to place in the survey’s findings.
Reliability refers to: 1) the internal consistency of a survey instrument; and 2) the reproducibility
of survey results when the survey is administered under different conditions; e.g., by different
people, or at different times. Internal consistency requires that positive statistical correlations
exist between questions within a survey. In other words, a respondent’s answers to particular
survey questions should not contradict the respondent’s answer to other survey questions.
Answers should be consistent across related questions. Reproducibility means that a tool yields
the same results in repeated trials. For example, a grocery store scale should show the same
weight each time the same bag of produce is measured. If it shows a different weight, it is
unreliable. Similarly, a survey instrument’s questions should not be answered differently
depending upon who is administering the survey or in what location or manner the survey is
administered. Survey questions also should be clear so individuals responding to the same
question in the survey will not interpret the same question differently.
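Internal consistency is commonly summarized with a statistic such as Cronbach’s alpha, which rises as answers to related questions correlate. The protocol does not prescribe a particular statistic or software; the sketch below, using hypothetical item scores, is one illustration:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha: internal consistency of a set of related survey items.

    items: list of equal-length lists, one list of scores per question."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_variance = sum(pvariance(question) for question in items)
    total_variance = pvariance(totals)
    return (k / (k - 1)) * (1 - item_variance / total_variance)

# Hypothetical 4-point satisfaction items answered by five respondents
q1 = [4, 3, 4, 2, 3]
q2 = [4, 3, 3, 2, 3]
q3 = [3, 3, 4, 1, 3]
alpha = cronbach_alpha([q1, q2, q3])
print(f"alpha = {alpha:.2f}")  # values near 1 indicate consistent responses
```

Respondents whose answers to related questions contradict one another would drive the alpha value down.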
Validity refers to the degree to which a tool measures what it is intended to measure. Different
types of validity measurements include: face validity, content validity, construct validity, and
criterion-related validity. Face validity is also known as logical validity and ensures that the tool
generates common agreement and/or mental images associated with a concept about which the
survey is asking. This type of validity does not involve statistical analysis, but is an opportunity
to determine if a concept is well assessed by a question or questions in a survey. This can be
accomplished through a review that may include one or more focus groups whose primary
purpose is to agree/disagree on whether the survey, on its face, is capturing the concept to be
addressed by the survey. Content validity builds on face validity by asking whether the survey
fully captures and represents the concept under study. This type of validity is typically
established by a review for relevance by individuals with expertise in the subject matter
addressed in the survey. Content validity also is not established using statistical procedures.
Rather, it is established by agreement from experts in the topic under study.


Construct and criterion validity are more sophisticated measures of validity and require more
time and resources to assess. Construct validity assesses how well the test measures the
underlying issue (e.g., access to care, satisfaction with care, satisfaction with providers) it is
supposed to measure. Criterion validity measures how well the performance on a test reflects the
performance on what the test is supposed to measure. There are two types of criterion validity:
concurrent and predictive. Concurrent validity uses a correlation coefficient to compare the
survey to a "gold standard" survey for measuring the same variable. Predictive validity measures
the survey's ability to "predict" future outcomes and also uses a correlation coefficient for
comparison. While no instrument is ever determined to be “perfect” with respect to reliability
and validity, the greater amount of reliability and validity testing, and the greater the positive
findings with respect to reliability and validity that are associated with a given survey
instrument, the greater confidence an EQRO can have in a survey’s findings.
The CAHPS survey instruments and reporting formats have undergone rigorous testing for
reliability and validity, including focus group interviewing, cognitive interviewing, and field testing. If a survey sponsor chose to use another instrument, develop one of its own, or modify
an existing instrument, at a minimum, it should promote reliability and validity by first
establishing face and content validity, then pre-testing the tool for reliability. Face and content
validity can be established by convening one or more focus groups that include beneficiaries and
individuals with subject matter expertise. Reliability can be assessed using the test-retest method
in which the survey is administered to the same group at two different times. A correlation
coefficient is calculated and indicates the reproducibility of results. Correlation coefficients with
r-values at or above 0.70 indicate good reliability.
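As an illustration of the test-retest method described above, the following sketch computes the correlation coefficient between two administrations of the same survey (the scores are hypothetical; the 0.70 threshold is the one given in the text):

```python
from statistics import mean, pstdev

def pearson_r(first, second):
    """Pearson correlation between two administrations of the same survey."""
    mx, my = mean(first), mean(second)
    cov = sum((x - mx) * (y - my) for x, y in zip(first, second)) / len(first)
    return cov / (pstdev(first) * pstdev(second))

# Hypothetical overall-rating scores from the same ten respondents, surveyed twice
round_one = [8, 6, 9, 5, 7, 8, 4, 9, 6, 7]
round_two = [7, 6, 9, 5, 8, 8, 5, 9, 6, 7]
r = pearson_r(round_one, round_two)
print(f"test-retest r = {r:.2f}", "(acceptable)" if r >= 0.70 else "(weak)")
```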
Existing survey instruments. Because of the resources required to develop and test new survey
tools, many survey sponsors will choose to use an existing survey instrument - especially one
that has undergone strong reliability and validity testing such as CAHPS. CAHPS testing
included cognitive testing during the development and evaluation phases, calculation of
reliability estimates in a sample of Medicaid enrollees and private health insurance purchasers,
and convening of focus groups to test relevance of survey concepts and items. In addition,
CAHPS comes with a free, easy-to-use CAHPS Survey and Reporting Kit. The kit contains
a set of mail and telephone survey questionnaires including Spanish language versions, sample
reporting formats, and a handbook with step-by-step instructions for sampling, administration,
analysis, and reporting. The handbook’s instructions are easy to understand and allow flexibility
within the sample and analysis design. The handbook also includes a comprehensive, statistical
software package as well as the telephone number and e-mail address of the technical assistance
hotline.
There are a wide variety of other existing questionnaires that can be selected as the survey
instrument. While some have undergone extensive reliability and validity testing, others have
been developed with little or no testing for reliability and validity. Unless there is evidence of an
instrument’s reliability and validity testing and results, an instrument’s reliability and validity
should always be suspect. Even pre-existing, validated survey instruments may not have been
validated in a Medicaid population. Instruments not validated in a Medicaid population may not
yield valid or reliable results for this population.


Another advantage of existing instruments is that use of the same questionnaire, methods, and
analysis for surveys of MCOs/PIHPs, population groups, and States, can permit a particular
survey’s findings to be compared against those of other studies. For example, standardization of
survey instruments, methodology, and report formats in CAHPS provides reliable and valid
measures that can be compared across MCOs/PIHPs, across MCOs/PIHPs and fee-for-service
(FFS), as well as across a wide range of respondents and States.
When an existing survey instrument was used, the EQRO should document the extent of
reliability and validity testing of the survey instrument.
New survey instruments. In some cases, a survey sponsor may have chosen to create a new
survey that was developed specifically for a particular survey. Usually, new survey instruments
are developed and used when the purposes and objectives of the study require answers to
questions that are not addressed by existing instruments. Such survey instruments can capture
information that is of great interest and relevance to the questions under study. However,
assuring the reliability and validity of new surveys is costly and takes time. Without such
reliability and validity testing, surveys can have serious methodological shortcomings and be
vulnerable to criticism. A 1997 report by the Department of Health and Human Services (DHHS)
Inspector General, “Medicaid Managed Care: The Use of Surveys as a Beneficiary Protection
Tool” found that some State-specific surveys were designed to meet too many objectives and
provided little useful data. When external quality review is to incorporate the findings of a new
survey instrument and methodology, the EQRO should document the extent of reliability and
validity testing of the survey instrument and methodology. The results of the reliability and
validity testing are an important consideration in determining the extent to which the EQRO
incorporates the survey findings into its analysis and evaluation of information on the access,
quality and timeliness of health care services furnished to Medicaid beneficiaries enrolled in
MCOs/PIHPs.
Adaptations of existing surveys. Survey sponsors also may have decided to adapt an existing
questionnaire by adding or deleting items, modifying questions, or using only certain groups of
questions to respond to a specific issue under study. This approach provides flexibility to add or
change the data to be collected, while providing many of the advantages of using a pre-existing
questionnaire. However, this approach also can raise questions about the validity and reliability
of the new or modified questions, as well as the survey overall. Validated instruments are tested
“as a whole,” and any modifications can change the focus and purpose of the tool, and thereby
the validity and reliability of the tool as a whole and/or its individual questions. Adapting an
existing survey is easier if CAHPS is used, as the CAHPS survey instruments have been
specifically designed with opportunities to customize the questionnaire by adding questions from
a list of supplemental optional items, as well as State-designed questions. Most pre-existing
questionnaires are not designed to accommodate such modifications and require additional
testing. Modifications and additional testing are time intensive and will likely result in increased
costs.
The EQRO should review the evidence about the reliability and validity testing of all survey
instruments, regardless of whether they are existing, new, or adapted instruments. Each survey’s
reliability and validity testing should be an important element in determining to what extent the


EQRO relies upon the survey findings to inform its analysis and evaluation of access, quality and
timeliness of health care.

ACTIVITY 4: Review the sampling plan.

When a survey sponsor decides to collect information directly from beneficiaries, providers, or
another unit of analysis, it must decide how many “units” will need to be surveyed. A survey
sponsor could decide to take a “census,” i.e., to ask every individual or entity that the survey is
designed to represent to answer the survey questions. For example, a survey sponsor could
decide to survey all Medicaid beneficiaries or all Medicaid participating providers. However,
because contacting all of these entities is expensive, surveys typically are administered
to only a portion (a “sample”) of the target population. Whenever a sample, as opposed to a
census, is used, it is very important to select the sample members in a way that represents as
closely as possible the population about whom information is desired - both those included in the
sample and those not selected to respond to the survey. The EQRO should review
documentation related to the sampling plan, particularly (1) the definition of the study
population; (2) specifications for the sample frame; (3) the type of sampling used; (4) the
adequacy of the sample size; and (5) sample selection procedures. The level of detail involved in
this review will require that the EQRO use professional statisticians. The EQRO will need to
make a judgement about whether the sample selected was sufficiently representative of the study
population for the EQRO to have confidence in the survey findings.
Step 1: Identify the study population.

The EQRO should first document what population the survey was designed to study; e.g., all
Medicaid beneficiaries enrolled in managed care, or all children with special health care needs.
The “population” represents all units of analysis (sometimes referred to as “the universe”) about
which the survey questions are raised. A good sample is a small version of the population to be
studied and is representative of the population in terms of certain pre-identified characteristics
(e.g., race, age, MCO/PIHP affiliation.)
Step 2: Review the sample frame.

Once the target population for the survey is identified, the EQRO must also identify and
understand the construction of the survey’s “sample frame.” The “sample frame” is a listing of
all members of the study population who are eligible for the study. From this listing, the sample
is selected. Questions used to define individuals eligible for the survey (the sample frame) might
include, for example:
- Was a minimum period of enrollment in an MCO/PIHP necessary for individuals who were
surveyed? Did beneficiaries have to be continuously enrolled in a specific MCO/PIHP for a
specified period of time to be included in the survey? Frequently, a minimum period of
continuous enrollment, such as six months, is required to increase the likelihood that enrollees
have had contact with their specific MCO/PIHP.
- Were any MCO/PIHP enrollees excluded? Why? For example, if a study aimed to address
children with special health care needs, was it limited only to children receiving SSI or Title V,
or did it address all children that an MCO/PIHP might identify as having special health care
needs, or all children whose families might identify them as having special health care needs?
- Was only one member per household allowed to be surveyed?
- Were children allowed to be surveyed?
- When was open enrollment, and how did it affect survey administration?
Step 3: Review the type of sampling used.
There are two basic types of statistical sampling: probability sampling and non-probability
sampling. Probability sampling leaves selection of the units to be included in the survey up to
chance. It is designed to prevent any biases in the sample selected by ensuring that each segment
of the study population is represented in the sample to the extent it composes part of the
population under study. Simple random sampling is the most common form of probability
sampling used in survey research. Population members are generally assigned a number, and
random numbers generated by a computer or table of random numbers are used to select
members from the population. This sampling approach ensures that all members of the target
population have an equal chance to be selected. The sample is thus assumed to be fully
representative of the population.
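A minimal sketch of simple random sampling from a sample frame follows; the frame, its size, and the seed are hypothetical:

```python
import random

# Hypothetical sample frame: IDs of all enrollees meeting the eligibility rules
frame = [f"enrollee-{i:05d}" for i in range(1, 12001)]

rng = random.Random(2002)        # a fixed seed keeps the draw reproducible for audit
sample = rng.sample(frame, 411)  # every enrollee has an equal chance of selection

print(f"drew {len(sample)} of {len(frame)} enrollees")
```

Because `random.sample` draws without replacement, no enrollee can be selected twice.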
Stratified random sampling also may be used. This technique calls for dividing the population
(on the basis of prior knowledge about the population) into specific, pre-identified strata or
subgroups that are internally homogeneous with respect to certain characteristics (e.g.,
ethnicity, age, diagnoses). A random sample is then taken from each stratum or subgroup.
Stratification is done both to improve the accuracy of estimating the total population’s
characteristics and to provide information about the characteristics of interest within subgroups.
However, stratified random sampling requires more information about the population, and can
also require a larger total sample size. As a result, it is typically more expensive than simple
random sampling. Stratified sampling may also involve “weighting” the sample. In this process,
a survey would select a disproportionately larger number of units of analysis from one or more
of the strata to allow the survey to produce information on that particular stratum; e.g.,
individuals dually eligible for both Medicare and Medicaid.
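A proportional stratified draw can be sketched as follows; the strata and counts are hypothetical, and weighting a stratum would simply mean allocating it more than its proportional share:

```python
import random
from collections import defaultdict

def stratified_sample(population, stratum_of, total_n, rng):
    """Allocate total_n proportionally across strata, then draw a simple
    random sample within each stratum."""
    strata = defaultdict(list)
    for unit in population:
        strata[stratum_of(unit)].append(unit)
    drawn = []
    for members in strata.values():
        share = round(total_n * len(members) / len(population))
        drawn.extend(rng.sample(members, share))
    return drawn

# Hypothetical frame of (enrollee id, age group) pairs: 2,500 children, 7,500 adults
frame = [(i, "child" if i % 4 == 0 else "adult") for i in range(10000)]
sample = stratified_sample(frame, lambda unit: unit[1], 400, random.Random(7))
print(len(sample), sum(1 for _, group in sample if group == "child"))
```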
Non-probability sampling methods are based on the decisions of those administering the survey,
and not on random chance. For example, a State survey sponsor might wish to obtain
information about new Medicaid beneficiaries by surveying all new beneficiaries eligible in a
particular month. It might wish to obtain information on average stage of pregnancy at time of
the establishment of Medicaid eligibility by examining the records of women giving birth at a
particular hospital. In such cases, the information obtained by the State will have been biased.
Non-random sampling methods such as these also do not lend themselves to statistical analysis.
Because of the risk of biased results and the obstacles to statistical analysis, non-probability


sampling is discouraged. However, at times it can be an appropriate and efficient way of
collecting needed information.
Step 4: Review the adequacy of the sample size.
A sample only collects information from a fraction of the population. Based on data obtained
from the sample, estimates are produced about the utilization of health care services, satisfaction
with care, quality of care or other issues. However, these estimates are at risk of being
“inaccurate” to the extent that: (1) they are different from the population’s true values because
only a portion of the entire population was involved in producing the estimates, and (2) a
different sample will give different estimates. Fortunately, the risk of inaccuracy can be reduced
by carefully selecting the sample size.
Two factors influence the determination of the appropriate sample size for a survey [3]: (1) the
level of certainty desired for the estimate to be produced; and (2) the margin of error that is
defined as acceptable. Combining these two factors establishes the parameters for the estimates
to be produced by the survey (e.g., a survey would target the answers to its questions to be 95%
accurate with a 5% margin of error). The greater the certainty and the smaller the margin of error
desired, the larger the sample will need to be (and the more expensive the survey will be). In
addition, over-sampling (sampling for a greater number of entities than are actually needed to
produce valid estimates) is sometimes undertaken, to compensate for an anticipated percent of
non-respondents to the survey. Because of these and other factors (e.g., whether the State
intended the survey to provide estimates of individual MCO/PIHP performance or whether the
survey findings were intended to compare MCO/PIHP performance), in order to review the
adequacy of the sample size, the EQRO needs to evaluate: (1) the level of certainty desired about
the estimate to be produced; and (2) the margin of error that was defined as acceptable. The
EQRO will need a qualified statistician to review the adequacy of the sample size that was used
for the survey.
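The standard sample-size formula for estimating a proportion reflects both factors. Following the footnote, p = 0.5 gives the maximum (most conservative) size, and dividing by the expected response rate implements over-sampling. The defaults below (95% certainty, 5% margin of error) mirror the example in the text; the 50% response rate is hypothetical:

```python
import math

def required_sample_size(z=1.96, margin=0.05, p=0.5, expected_response_rate=1.0):
    """n = z^2 * p * (1 - p) / margin^2, inflated for anticipated non-response.

    z = 1.96 corresponds to 95% certainty; p = 0.5 is the conservative
    assumption recommended in the footnote."""
    completes_needed = (z ** 2) * p * (1 - p) / margin ** 2
    return math.ceil(completes_needed / expected_response_rate)

print(required_sample_size())                            # completed surveys needed
print(required_sample_size(expected_response_rate=0.5))  # units to sample, anticipating 50% response
```

Tightening the margin of error or raising the certainty level enlarges the denominator’s effect and drives the required sample, and the survey’s cost, upward.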
If the CAHPS survey is used, the EQRO can take advantage of the statistical analysis already
performed by the CAHPS’ developers. However, these will differ from the specifications
provided by the National Committee for Quality Assurance (NCQA) in the CAHPS 2.0H
manual. CAHPS 2.0H is an adaptation of CAHPS developed by the NCQA, in conjunction with
AHCPR, for use with NCQA’ Healthplan Employer Data and Information Set - HEDIS. The
CAHPS 2.0H questionnaire is identical to the CAHPS questionnaire except for the inclusion of
the HEDIS smoking cessation counseling questions. This difference produces different sampling
specifications for CAHPS 2.0 and for NCQA’s CAHPS 2.0H version. NCQA’s CAHPS 2.0H
requires a larger sample size in order to obtain adequate numbers of respondents who can answer
the smoking questions.

[3] A third factor (the true frequency of the occurrence of a characteristic within a population) is also often
used to calculate the sample size for some studies. However, since a survey studies multiple questions, and the true
prevalence of responses to any of these questions may be variable across questions, and change over time, the most
prudent course of action is to assume the need for the maximum sample size by assuming that the presence or
absence of the characteristic in question is evenly distributed across the population.


Step 5: Review the sample selection procedures.
The EQRO will review the sample selection procedures. This includes reviewing the statistical
program or other process used to generate the sample. The EQRO will need to determine the
extent to which the selection of sample members from the sample frame was conducted in a
manner that protects against bias.

ACTIVITY 5: Review the adequacy of the response rate.

The EQRO must assess the adequacy of the response rate. This includes a review of: (1) how
respondents in the sample were contacted; and (2) what the response rate was and how it was
calculated.
Step 1: Review the strategy for contacting individuals participating in the survey.
Surveys can be administered in a face-to-face interview, by telephone interview, by a mailed
questionnaire to be completed by the respondent and mailed back, or by a combination of these
approaches. Because of the importance of achieving a high response rate, considerable research
has been conducted on these approaches to survey administration. Recent studies indicate that
the percentage of non-respondents is increasing, making survey research more difficult.
Although in the past survey researchers believed that face-to-face interviewing produced higher
response rates, recent research challenges this view. No single method of survey administration
is believed to be superior in all circumstances.
Step 2: Review the response rate.
The response rate generally is the number of entities that respond to the survey divided by the
number of entities selected to respond to the survey. However, the exact definition is generally
more detailed and complex than this; there are a number of different ways to calculate response
rates, as well as different strategies for reporting these rates. In addition, the definition of
response rate may differ depending on mode of administration. For example, mail surveys
typically may exclude non-deliverable addresses from the denominator, while telephone
interviews may exclude from the denominator people with non-working telephone numbers and
those who no longer reside in the household. The response rate is important because a low rate
reduces the ability to make accurate conclusions (statistical inferences) about the population
under study. Those individuals who do not respond to the survey may differ systematically from
individuals who respond, thereby biasing the results.
Medicaid beneficiaries typically have a low survey response rate. Medicaid populations tend to
have lower literacy levels than average, are more likely to speak primary languages other than
English, are highly mobile, and have high rates of inaccurate or unavailable telephone numbers
and mailing addresses. As a result, surveys of Medicaid beneficiaries tend to require extra effort
to maximize response rates. These efforts can include using an expansive contact strategy that
emphasizes tracking methods, and a mixed-mode approach to survey administration that
typically combines mail and telephone survey procedures. However, even with these additional


measures, response rates for Medicaid beneficiaries are likely to be lower than for many other
populations. The CAHPS developers suggest that the target response rate for administering
CAHPS to Medicaid beneficiaries should range from 40% to 60%. They also recommend that
survey vendors focus on strategies that promote high response rates and develop a plan of
corrective actions to take if the response rate falls short of the goal.
The EQRO should evaluate the response rate and discuss with survey sponsors the potential
reasons for non-response, and the extent to which non-response may have introduced bias in the
survey findings. The EQRO also should examine the specific methods for calculating both raw
and adjusted response rates. For example:
Raw response rate = (number of completed surveys) divided by (total number of targeted
survey respondents)

Adjusted response rate = (number of completed surveys) divided by (total number of targeted
survey respondents, minus those deceased or ineligible due to hospitalization, cognitive
impairment, or other reasons specified in the methodology)
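The two formulas above can be sketched directly; the counts below are hypothetical:

```python
def raw_response_rate(completed, targeted):
    """Completed surveys over all targeted respondents."""
    return completed / targeted

def adjusted_response_rate(completed, targeted, ineligible):
    """Removes from the denominator respondents who were deceased or ineligible
    (hospitalization, cognitive impairment, or other reasons in the methodology)."""
    return completed / (targeted - ineligible)

# Hypothetical survey: 1,000 enrollees targeted, 430 completes, 38 found ineligible
print(f"raw response rate:      {raw_response_rate(430, 1000):.1%}")
print(f"adjusted response rate: {adjusted_response_rate(430, 1000, 38):.1%}")
```

The adjusted rate is always at least as high as the raw rate, which is why the EQRO should examine how the denominator was constructed.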

The EQRO will need to assess the response rate, whether the survey employed a reasonable
method for calculating the response rate, potential sources of non-response and bias, and the
extent to which the response rate weakens or strengthens the generalizability of the survey
findings.

ACTIVITY 6: Review survey data analysis and findings/conclusions.

The EQRO should review the approach to analyzing the survey data and the
findings/conclusions of the survey.
Data quality control. The EQRO should review how the survey sponsor handled responses that
failed edit checks, the treatment of missing data, and procedures for determination of
usable/complete surveys.
Data analysis. The EQRO should review how the survey sponsor analyzed the survey data. Each
MCO/PIHP should have been treated as the basic unit of analysis and reporting. The analysis
should include simple statistical procedures such as measures of central tendency and frequency
distributions for specific survey questions. In addition, the findings may include a statistical
analysis of the extent to which there are differences in responses to questions according to


survey subgroups (e.g., MCOs/PIHPs). Any differences should be examined using standard
statistical tests such as the F-test and the t-test. The F-test may be used to determine if the
average scores of any entity (e.g., MCO or PIHP) differ significantly from those of other
MCOs/PIHPs. The F-test tests the hypothesis that all MCO/PIHP means (averages) are equal by
comparing the relative size of two variances. The F-test will be significant if at least one
MCO/PIHP mean is statistically different from any other MCO/PIHP mean. If the F-test
indicates that there are significant differences, a series of t-tests may be conducted to identify
which MCO/PIHP is different from other MCOs/PIHPs. The t-test tests the hypothesis that two
specific MCO/PIHP means are significantly different from each other. A significant t-test
indicates that the two MCOs/PIHPs represent two populations with different means.
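The F and t statistics described above can be computed directly, as in the sketch below. The scores are hypothetical, and a complete analysis would also compare each statistic against its critical value (or p-value) for the relevant degrees of freedom:

```python
import math
from statistics import mean

def one_way_f(groups):
    """One-way ANOVA F statistic: between-group variance over within-group variance."""
    k, n = len(groups), sum(len(g) for g in groups)
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def two_sample_t(a, b):
    """Pooled two-sample t statistic for comparing two MCO/PIHP means."""
    ss = sum((x - mean(a)) ** 2 for x in a) + sum((x - mean(b)) ** 2 for x in b)
    pooled_var = ss / (len(a) + len(b) - 2)
    return (mean(a) - mean(b)) / math.sqrt(pooled_var * (1 / len(a) + 1 / len(b)))

# Hypothetical satisfaction scores (0-10) from three MCOs/PIHPs
mco_a, mco_b, mco_c = [7, 8, 6, 7, 8], [5, 6, 5, 4, 6], [7, 7, 8, 6, 7]
print(f"F = {one_way_f([mco_a, mco_b, mco_c]):.2f}")   # a large F: at least one mean differs
print(f"t (A vs B) = {two_sample_t(mco_a, mco_b):.2f}")
```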
In addition, multivariate regression and/or analysis of covariance might have been used to further
identify meaningful differences between MCOs/PIHPs while controlling for respondent
characteristics. For example, a driver analysis and a quadrant analysis may have been used to
gain more insight into the survey measures and possible approaches to future improvement.
Driver analysis refers to statistical techniques that may be used to identify the important factors
or “drivers” that significantly affect some outcome measure. In this context, the driver analysis
will identify the most important factors (drivers) that significantly affect overall scores.
Individual questions can be analyzed to determine their effect on overall scores. These identified
drivers, along with their current measures will then be subjected to a quadrant analysis that
compares the importance of the drivers against the current survey measures on these drivers. For
example, if access to specialists has been identified as a driver of patient satisfaction, then
depending on the importance of access as a driver and the current performance on access, there
could be four different combinations or quadrants: (1) high importance and high performance,
(2) high importance and low performance, (3) low importance and low performance, and (4) low
importance and high performance. High importance means that access to specialists has an
important effect on satisfaction, and high performance means that access has been appropriately
provided. Therefore, quadrant (1) identifies a strength, i.e., satisfactory performance on a crucial
driver. Similarly, quadrant (2) identifies areas needing improvement; quadrants (3) and (4) are
areas of low leverage and insignificance. The most important insight that can be gained through
quadrant analysis is the identification of improvement opportunities. MCOs/PIHPs can then
make strategic decisions about where to devote their efforts and resources in order to improve
MCO/PIHP scores.
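The four quadrants can be sketched as a simple classifier. The driver names, scores, and 0.5 cut-points below are hypothetical; in practice the cut-points might be the means or medians across drivers:

```python
def quadrant(importance, performance, imp_cut=0.5, perf_cut=0.5):
    """Place a driver in one of the four quadrants described in the protocol."""
    if importance >= imp_cut:
        return "strength" if performance >= perf_cut else "improvement opportunity"
    return "low leverage" if performance < perf_cut else "low importance, high performance"

# Hypothetical (importance, performance) scores, each rescaled to 0-1
drivers = {
    "access to specialists": (0.8, 0.4),
    "office wait time":      (0.3, 0.9),
    "getting needed care":   (0.7, 0.8),
}
for name, (imp, perf) in drivers.items():
    print(f"{name}: {quadrant(imp, perf)}")
```

In this hypothetical, "access to specialists" lands in quadrant (2), the improvement-opportunity quadrant, matching the example in the text.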
Consistent with the purposes and objectives of the survey, populations within each MCO/PIHP
could also be analyzed and reported separately. For instance, the State may be interested in
whether any particular finding differs significantly across geographic locations, demographic
groups, socioeconomic groups, or other identifiable subgroups (e.g., frequent and infrequent
users of health care services). Survey results also may have been adjusted for external factors
that influence results; e.g., studies suggest that age, education, and self-reported health status,
among others, have significant effect on patient satisfaction.
Regardless of the statistical analyses performed, the data analysis should yield clear information
about the performance of each MCO/PIHP and, as appropriate, differences among MCOs/PIHPs.


ACTIVITY 7: Document the evaluation of the survey.

Using the information obtained from activities 1 - 6, including the review of:
- survey purpose, objectives, and intended audience(s);
- reliability and validity of the survey instrument;
- adequacy of the sampling strategy and response rate; and
- data obtained and its analysis,

the EQRO should make a determination of the generalizability of the survey findings, and the
extent to which the survey findings can be relied upon to make inferences about the access,
quality, and timeliness of health care delivered to Medicaid beneficiaries enrolled in
MCOs/PIHPs. The EQRO should document these conclusions and provide written findings on:
1) the survey’s technical strengths and weaknesses; 2) the limitations/generalizability of survey
findings; 3) conclusions drawn from the survey data; 4) a detailed assessment of each
MCO/PIHP’s strengths and weaknesses with respect to (as appropriate) access, quality, and/or
timeliness of health care furnished to Medicaid enrollees; and 5) as the State determines
methodologically appropriate, comparative information about all MCOs/PIHPs.


ATTACHMENT A
Works Consulted in Protocol Development
Babbie, E. “The Practice of Social Research.” 6th Edition. Chapter 10. Wadsworth Publication
Company: Belmont, CA. 1992.
Brown, J., Nederend, S., Hays, R., Short, P.F., & Farley, D. “Special Issues in Assessing Care of
Medicaid Recipients.” CAHPS Papers for Medical Care Supplement. December 1998.
Cella, D., Hernandez, L., Bonomi, A., Corona, M., Vaquero, M., Shiomoto, G., Baez, L.
“Spanish Language Translation and Initial Validation of the Functional Assessment of Cancer
Therapy Quality-of-Life Instrument.” Medical Care. Volume 36, Number 9: 1407-1418. 1998.
Department of Health and Human Services Office of Inspector General. “Medicaid Managed
Care: Use of Surveys As A Beneficiary Protection Tool.” 1996 Dec. OEI-01-95-00280.
Dillman, Don A., Sinclair, Michael D., Clark, Jon R. “Effects of Questionnaire Length,
Respondent-Friendly Design, and a Difficult Question on Response Rates for Occupant-Addressed
Census Mail Surveys.” Public Opinion Quarterly. Volume 57: 289-304. 1993.
Dillman, Don A. Mail And Telephone Surveys: The Total Design Method. New York: A
Wiley-Interscience Publication, 1978.
Donelan, Karen. “How Surveys Answer A Key Question: Are Consumers Satisfied With
Managed Care?” Managed Care, Volume 5, Issue 2: 17-24. February 1996.
Gallagher, P.M., Fowler, F.J., & Stringfellow, V. “Respondent Selection by Mail, To Obtain
Probability Samples of Enrollees in a Health Care Plan.” CAHPS Papers for Medical Care
Supplement. December 1998. (In Publication).
Gold, Marsha & Eden, Jill. “Monitoring Health Care Access Using Population-Based Surveys:
Challenges in Today’s Environment, Information For Policymakers.” Methods Brief, August
1998.
Harris-Kojetin, L.D., Fowler, F.J., Brown, J.A., & Sweeney, S.F. “The Use of Cognitive
Testing for Developing and Evaluating CAHPS 1.0 Survey Items.” CAHPSTM Papers for
Medical Care Supplement. 1998 Dec.
Hays, R.D., Shaul, J.A., Williams, V.S.L., Lubalin, J.S., Harris-Kojetin,L., Sweeney, S.F., &
Cleary, P.D. “Psychometric Properties of the CAHPSTM 1.0 Survey Measures”. Revised paper,
98020; Medical Care. CAHPSTM Papers for Medical Care Supplement December 1998. (In
Publication).

39

Kasper, Judith D. “Part III: What Can We Realistically Expect from Surveys? Constraints Of
Data and Methods; Asking About Access: Challenges for Surveys in a Changing Healthcare
Environment.” Health Services Research. July 1997.
Kassberg, Maria & Wynn, Paul. What Falls Through The Cracks When Quality Is Measured?”
Managed Care, Volume 6, Issue 3. 22-28. March 1997.
Krowinski, W.J. & Steiber, S.R., “Measuring and Managing Patient Satisfaction.” Chapters 5 &
6. 2nd Ed., American Hospital Publishing, Inc. 1996.
Leedy, P.D. “Practical Research-Planning and Design.” 5th Edition. Chapter 2 & 8. MacMillan
Publishing Company: New York. 1993.
National Committee for Quality Assurance (NCQA), HEDIS 1999, HEDIS Protocol for
Administering CAHPS 2.0H Survey. Volume 3, 1998.
Weidmer, B., Brown, J., & Garcia, L. “Translation Issues: Translating the CAHPSTM Survey
Items into Spanish.” CAHPS Papers for Medical Care Supplement. December 1998.
Wimmer, R.D. & Dominick, J.R., “Survey Research Basics.” Mass Media Research,1997.
(http://web.utk.edu/~toddc/survey)


ATTACHMENT B
SURVEY VALIDATION WORKSHEET
SURVEY ELEMENT | DOCUMENTATION

Activity 1: Survey Purpose(s) and Objective(s)
-There is a written statement of survey
purpose(s) that addresses access,
timeliness, and/or quality of care.
-Unit(s) of analysis is clearly stated and
includes individual MCOs/PIHPs.

-Study objectives are clear, measurable, and
in writing.
Activity 2: Survey Audience(s)
-Audiences for survey findings are
identified.

Activity 3: Survey Instrument
-If an existing survey instrument was used,
there is evidence of its reliability.
-If an existing survey instrument was used,
there is evidence of its validity.


-If a newly developed survey was used, there
is evidence of reliability testing.

-If a newly developed survey was used, there
is evidence of validity testing.

-If using an adapted survey, there is
evidence of reliability testing.

-If using an adapted survey, there is
evidence of validity testing.
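For newly developed or adapted instruments, one widely used reliability check is internal consistency (Cronbach’s alpha). The sketch below is purely illustrative and not part of the protocol; the 0.70 threshold mentioned in the comment is a common rule of thumb, not a CMS requirement.

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for a set of survey items.

    `items` is a list of columns, one per item, each a list of
    respondent scores. Values of alpha at or above roughly 0.70
    are often treated as acceptable internal-consistency
    reliability (a rule of thumb, not a regulatory standard).
    """
    k = len(items)
    # Variance of each item across respondents.
    item_vars = [statistics.pvariance(col) for col in items]
    # Variance of each respondent's total score across all items.
    totals = [sum(scores) for scores in zip(*items)]
    total_var = statistics.pvariance(totals)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

Perfectly correlated items yield an alpha of 1.0; uncorrelated items drive it toward zero.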

Activity 4: Sampling Plan
-Population to be studied was clearly
identified.
-Sample frame was clearly defined and
appropriate.
-Sampling strategy (simple random,
stratified random, non-probability) was
appropriate to study question.
-If random sampling is used, sampling
process was valid.

-If stratified random sampling is used,
sampling process was valid.


-If non-probability sampling is used, there is
clear and strong evidence why random
sampling was not feasible.
-Sample size was determined using
reasonable statistical parameters, as
appropriate to survey purpose and
objectives.
-Sample selection processes were sound.
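One conventional set of “reasonable statistical parameters” for estimating a proportion is the normal approximation with an optional finite-population correction. The sketch below is illustrative only; the confidence level, expected proportion, and margin of error shown are example assumptions, not values prescribed by this protocol.

```python
import math

def sample_size(p=0.5, margin=0.05, z=1.96, population=None):
    """Sample size for estimating a proportion via the normal
    approximation. Defaults assume a 95% confidence level
    (z = 1.96), a worst-case proportion of 0.5, and a +/-5%
    margin of error -- illustrative choices only."""
    n = (z ** 2) * p * (1 - p) / margin ** 2
    if population is not None:
        # Finite population correction, relevant when the
        # MCO/PIHP enrollee population is small.
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)
```

With the defaults this yields 385 completed surveys; a plan with only 5,000 eligible enrollees would need 357 after the correction. Oversampling to offset expected non-response would be layered on top of this figure.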

Activity 5: Response rate
-Specify if mail, phone, face-to-face, or
combination strategy was used to
contact targeted survey respondents.
-Specifications for calculating raw and
adjusted response rates are clear and
appropriate.

-The response rate, potential sources of nonresponse and bias, and implications of
the response rate for the
generalizability of survey findings are
assessed.
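The protocol leaves the exact raw and adjusted response-rate specifications to the State and the EQRO. One common convention, sketched below for illustration, divides completes by all sampled members for the raw rate and removes known ineligibles (e.g., disenrolled members) from the denominator for the adjusted rate.

```python
def response_rates(completes, total_sampled, ineligible=0):
    """Raw and adjusted response rates under one common
    convention (an illustrative assumption, not the protocol's
    required specification).

    raw      = completes / total sampled
    adjusted = completes / (total sampled - known ineligibles)
    """
    raw = completes / total_sampled
    adjusted = completes / (total_sampled - ineligible)
    return raw, adjusted
```

For example, 300 completes from 1,000 sampled members with 100 known ineligibles gives a 30% raw rate and a 33.3% adjusted rate; the gap between the two is itself a signal worth reporting when assessing non-response bias.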


Activity 6: Review of Data Preparation and Analysis
-Quality control procedures were in place
for: administration of the survey;
receipt of survey data; respondent
information and assistance; coding,
editing and entering data; and
procedures for missing data and data
that fails edits.
-Methodologically appropriate data
preparation and analysis procedures
were used.
-Final report provided understandable and
relevant data.
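The coding, editing, and missing-data procedures listed above can be made concrete with simple per-record edit checks. The field names and response ranges below are hypothetical, standing in for the actual survey’s codebook:

```python
# Hypothetical field specifications; real edit rules would come
# from the survey instrument's own codebook.
EDITS = {
    "overall_rating": lambda v: v in range(0, 11),    # 0-10 scale
    "got_care_quickly": lambda v: v in {1, 2, 3, 4},  # 4-point scale
}

def check_record(record):
    """Return the fields in a survey record that are missing
    or fail their edit rule, in codebook order."""
    failures = []
    for field, passes_edit in EDITS.items():
        value = record.get(field)
        if value is None or not passes_edit(value):
            failures.append(field)
    return failures
```

Records with a non-empty failure list would be routed to the project’s documented procedures for missing data and failed edits rather than silently entering analysis.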
Activity 7: Documentation of Validation of Survey. The EQRO documents its findings regarding:
-Assessment of the technical methods of
survey implementation and analysis,
and the survey’s technical strengths and
weaknesses
-The limitations/generalizability of survey
findings

-Conclusions drawn from the survey data.

-A detailed assessment of each
MCO’s/PIHP’s strengths and
weaknesses with respect to (as
appropriate) access, quality, and/or
timeliness of health care furnished to
Medicaid enrollees.
-Comparative information about all
MCOs/PIHPs (as directed by the State).

END OF DOCUMENT


