SUPPORTING STATEMENT
Part A

Study of Factors Influencing Consumer Choices Among Health Plans and Clinicians

Version: December 18, 2009

Agency for Healthcare Research and Quality (AHRQ)


Table of Contents
A. Justification
1. Circumstances that Make the Collection of Information Necessary
2. Purpose and Use of Information
3. Use of Improved Information Technology
4. Efforts to Identify Duplication
5. Involvement of Small Entities
6. Consequences if Information Collected Less Frequently
7. Special Circumstances
8. Federal Register Notice and Outside Consultations
9. Payments/Gifts to Respondents
10. Assurance of Confidentiality
11. Questions of a Sensitive Nature
12. Estimates of Annualized Burden Hours and Costs
13. Estimates of Annualized Respondent Capital and Maintenance Costs
14. Estimates of Annualized Cost to the Government
15. Changes in Hour Burden
16. Time Schedule, Publication and Analysis Plans
17. Exemption for Display of Expiration Date
List of Attachments


A. Justification
1. Circumstances that Make the Collection of Information Necessary
The Healthcare Research and Quality Act of 1999 (see Attachment A) states that the
mission of the Agency for Healthcare Research and Quality (AHRQ) is to enhance the
quality, appropriateness, and effectiveness of health services, and access to such services,
through the establishment of a broad base of scientific research and through the
promotion of improvements in clinical and health systems practices, including the
prevention of diseases and other health conditions. AHRQ shall promote health care
quality improvement by conducting and supporting:
1. research that develops and presents scientific evidence regarding all aspects of
health care; and
2. the synthesis and dissemination of available scientific evidence for use by
patients, consumers, practitioners, providers, purchasers, policy makers, and
educators; and
3. initiatives to advance private and public efforts to improve health care quality.
Also, AHRQ shall conduct and support research and evaluations, and support
demonstration projects, with respect to (A) the delivery of health care in inner-city areas,
and in rural areas (including frontier areas); and (B) health care for priority populations,
which shall include (1) low-income groups, (2) minority groups, (3) women, (4) children,
(5) the elderly, and (6) individuals with special health care needs, including individuals
with disabilities and individuals who need chronic care or end-of-life health care.
The Consumer Assessment of Healthcare Providers and Systems (CAHPS®) program is a
multi-year initiative of AHRQ. AHRQ first launched the program in October 1995 in
response to concerns about the lack of good information about the quality of health plans
from the enrollees' perspective. Numerous public and private organizations collected
information on enrollee and patient satisfaction, but the surveys varied from sponsor to
sponsor and often changed from year to year. The CAHPS® program was designed to:
• Make it possible to compare survey results across sponsors and over time; and
• Generate tools and resources that sponsors can use to produce understandable and usable comparative information for consumers.

Performance reports on health plans and individual providers have become increasingly
available in recent years, but there is little evidence regarding how consumers understand
and use different types of performance information to make choices.
This study will use an experimental design to determine factors that influence consumer
understanding and use of performance information to select among health plans and
clinicians. It will include two parallel experiments, one designed to assess factors
influencing choice of health plans and one designed to assess factors influencing choice
of individual doctors. Respondents will be randomly assigned to one of six arms that vary
according to the type and complexity of performance information and the size of the
choice set (number of plans or doctors) included in the Web report.
For the clinician choice experiment, study participants will see a web page labeled
“Performance Overview” that presents performance information for a set of primary care
doctors in a way that allows them to compare doctor ratings. Performance is summarized
by assigning one to five stars to show how each doctor compares with others in the same
geographic area. Participants can click on hyperlinks or a tab to see more detailed results.
The experimental arms differ in two respects: the type and amount of performance
information presented and the number of doctors listed.
The goals of the experiment are to assess the process of consumer choice and the extent
to which CAHPS-type measures are consulted, and to examine how consumers respond
to different types of information about doctor quality, including quantitative patient
experience measures, anecdotal reports from individual patients, and clinical performance
indicators. The post-test questionnaire will elicit participants’ understanding and
impressions of the material they saw on the Web site and inquire about how they made
their choice. Therefore, the post-test questions will differ across experimental arms.
The six arms of the clinician choice experiment are summarized below (see Attachments
B, C and E to K):
(1) Baseline/Control Arm: Participants see only “Service Quality” for each of 12 doctors in this arm. This includes a summary measure on the Performance Overview page and more detailed measures corresponding to CAHPS composites and an overall doctor rating on the single drill-down page. (n = 125)

(2) Experimental Arm #1: Augmented Quantified Performance Measures: In this arm participants will also see “Service Quality” for 12 doctors. In addition, they will see a summary clinical performance measure labeled “Treatment Quality.” A second drill-down page shows that this is based on clinical indicators for prevention and screening, care for asthma, care for diabetes, and care for heart disease. (n = 125)

(3) Experimental Arm #2: CAHPS plus Anecdotes: In this arm, participants will be presented with “Service Quality” for 12 doctors. In addition, for each doctor, they will see a tab labeled “Patient Reviews.” By clicking on this tab, they can drill down to four to six patient comments describing patients’ experiences with each doctor. Participants in this arm will not see clinical performance scores. (n = 125)

(4) Experimental Arm #3: Augmented Quantified Performance Measures Plus Anecdotes: In this arm participants will be presented with all three types of information for 12 doctors: “Service Quality,” “Treatment Quality,” and “Patient Reviews,” and therefore have a total of three drill-down pages from which they can acquire more detailed information. (n = 125)

(5) Experimental Arm #4: CAHPS plus Anecdotes and Larger Choice Set: In this arm participants will be presented with “Service Quality” and “Patient Reviews” for 24 doctors. (n = 125)

(6) Experimental Arm #5: Maximum Cognitive Load: Large Choice Set and Three Measures of Performance: In this arm, participants are presented with all three types of information for 24 doctors: “Service Quality,” “Treatment Quality,” and “Patient Reviews.” (n = 125)

The basic design of the health plan choice experiment is similar to that used for the
clinician choice experiment. The key difference in the choice set is that, as is true in real-world choices, health plan choice is made more complex in the experiment by a larger number of performance measures than are available to inform clinician choice. Even the simplest CAHPS-only arm has twice as many component measures for health plans as for clinicians, and the HEDIS (Healthcare Effectiveness Data and Information Set) scores also have double the number of component measures. Reports from consumers include both
anecdotes and a count of aggregate complaints that have been filed against the plan.
Potentially offsetting the cognitive burdens caused by additional measures, health plan
choices typically involve fewer options than do clinician choices; in this choice
experiment participants will face choice sets involving either 4 or 8 health plans.
A second substantial difference exists between the health plan and clinician choice experiments: the former explicitly assesses the ways in which emotionality affects how consumers make use of information. It will do so in two ways. First, the counts of complaints mentioned above as an additional measure of plan performance represent a quantitative score with a stronger emotional valence than either the CAHPS or HEDIS measures. Second, two of the experimental arms will “prime” respondents to think about health outcomes in a more emotionally laden manner, to see whether this alters the way in which they process this information and, in particular, the role of information with higher emotional valence (anecdotes and complaints) in the most information-dense choice sets.

Because we anticipate that the introduction of emotional priming will increase the variance of consumer choices (some respondents will respond more powerfully to the emotional priming than others), we have increased the size of each experimental arm from 125 to 150 subjects. The six arms of the plan choice experiment are summarized below (see Attachments D and L to S):
(1) Baseline/Control Arm: Participants see only “Service Quality” for each of 4 plans in this arm. This includes a summary measure on the Performance Overview page and more detailed measures corresponding to the composites for two CAHPS domains (customer service and accessibility of care) and corresponding plan ratings on the two drill-down pages. (n = 150)

(2) Experimental Arm #1: Augmented Quantified Performance Measures: In this arm participants will also see “Service Quality” for 4 plans. In addition, they will see two summary clinical performance measures labeled “Treatment Quality” (HEDIS, the Healthcare Effectiveness Data and Information Set, measures), one for preventive care and the other for treatment of chronic conditions. The drill-down page for prevention will show preventive care scores for regular physical exams and screening for three common medical conditions. The drill-down page for treatment will include summary measures for heart problems, asthma, diabetes, and arthritis. All told, respondents in this arm will have four drill-down pages of detailed performance measures. (n = 150)

(3) Experimental Arm #2: Augmented Quantified Performance Measures Plus Consumer Reports: In this arm participants will be presented with CAHPS and HEDIS scores (four aggregate measures and a total of 16 detailed measures on the four drill-down pages) as well as two types of consumer reports: “Enrollee Complaint Rates” and “Specific Enrollee Comments.” The actual wording of specific enrollee comments will be accessed through a fifth drill-down page. This information will be presented for 4 health plans. (n = 150)

(4) Experimental Arm #3: Augmented Quantified Performance Measures and Consumer Reports Plus Emotional Priming: In this arm participants will be presented with the same measures as in Experimental Arm #2 (a total of five drill-down pages) for 4 health plans. These respondents will be exposed to an emotional priming exercise (see below) to heighten their emotional reactivity to health risks before being asked to evaluate their health plan options. (n = 150)

(5) Experimental Arm #4: Maximum Cognitive Load: Large Choice Set and Full Set of Performance Measures: In this arm, participants will be presented with all types of information: “Service Quality,” “Treatment Quality” (both prevention and treatment), “Patient Complaint Rates,” and “Patient Reviews” for a total of 8 health plans, doubling the information processing load from Experimental Arm #2. (n = 150)

(6) Experimental Arm #5: Maximum Cognitive Load Plus Emotional Priming: In this arm participants will be presented with the same measures as in Experimental Arm #4 (a total of five drill-down pages) for 8 health plans. These respondents will also be exposed to an emotional priming exercise (see below) to heighten their emotional reactivity to health risks before being asked to evaluate their health plan options. (n = 150)

Emotional Priming Protocol (Presented on Knowledge Networks site)


Now we’d like you to imagine that something has gone badly wrong with your medical
care. You haven’t been feeling right for months, but each time you go to see some
doctors, none of them can tell you what’s wrong. You can’t figure out whether your
doctors just aren’t very good, whether your health insurer won’t pay for some test that’s
needed, or if you just have a really complicated medical condition that’s hard to
understand. Whatever the cause, this has been going on for a while and you’ve run up
some hefty bills on a growing list of tests and treatments, none of which seem to help
much. It seems like this might go on for a long time.
If this were you, what would you be thinking? What sorts of emotions would you feel?
As you write about this, try to do so in a way that would help a reader envision what it
might be like to actually experience these sorts of problems.

Attachment W describes the methodology we will use to distribute CAHPS and HEDIS scores
in the experimental arms for both the clinician and health plan studies, and how anecdotes
will be assigned to clinicians and health plans based on their CAHPS scores. Attachment X
explains the process we followed in constructing the anecdotes for both studies. Attachment
Y provides a list of outcome and process variables for both studies.
2. Purpose and Use of Information
The results of this study will be used to develop recommendations for helping consumers
to better understand and more effectively use complex information to select health plans
and providers, with the aim of making performance information less burdensome and
more accessible, useful, and transparent to the public. In particular, the study findings
will inform the design and content of the growing number of Web-based reports on health
plan and provider performance. By adding to the evidence base on the types and
combination of information that are most salient and useful to consumers in choosing
among health plan and provider options, the study will make a significant contribution to
improving current reporting initiatives. In addition, the simulated Web-based reports will
be made available as examples for other report developers to use. This study is being
conducted pursuant to AHRQ’s statutory mandate to promote health care quality
improvement by conducting and supporting research that develops and presents scientific
evidence regarding all aspects of health care, 42 U.S.C. 299(b)(1), and to conduct
research on health care and on systems for the delivery of such health care, 42 U.S.C.
299a.
3. Use of Improved Information Technology
Participants will complete the experiment through a secure online connection from their
homes. Survey data are collected by a web-based survey system (internally referred to
as “Dimensions”). This application runs on top of a secured Windows environment that
has been hardened through various network and host-based security techniques.
Participants take online surveys by using a web browser to access a unique, secured
URL that is both emailed to them and made available through a secured web portal. The
URL provides access to a highly available, load-balanced farm of web servers that hosts
the online survey. The survey URL can be exposed via either standard HTTP or
SSL/TLS-encrypted HTTPS, depending on the client requirements. Throughout the
interview process, questionnaire data are copied to a secured, centralized database for
data processing.
4. Efforts to Identify Duplication
Work carried out under this clearance will be designed to reflect specific customer
population needs for which the work is being conducted and will not duplicate any other
work being done by AHRQ or other Federal agencies.
5. Involvement of Small Entities
Respondents are consumers of health care services offered by clinicians, practitioners,
and health plans. The study was designed to minimize burden on all respondents and will
not have a significant impact on small businesses or other small entities.
6. Consequences if Information Collected Less Frequently
This is a one-time data collection.
7. Special Circumstances
This request is consistent with the general information collection guidelines of 5 CFR
1320.5(d)(2). No special circumstances apply.

8. Federal Register Notice and Outside Consultations
8.a. Federal Register Notice
As required by 5 CFR 1320.8(d), a notice was published in the Federal Register on
September 3, 2008, for 60 days (see Attachment T). Two comments were received and
are contained in Attachment U. AHRQ's response to these comments is in Attachment
V.

9. Payments/Gifts to Respondents
No payments or gifts will be given to respondents.


10. Assurance of Confidentiality
Individuals and organizations will be assured of the confidentiality of their replies under
Section 934(c) of the Public Health Service Act, 42 USC 299c-3(c). They will be told the
purposes for which the information is collected and that, in accordance with this statute,
any identifiable information about them will not be used or disclosed for any other
purpose.
Individuals and organizations contacted will be further assured of the confidentiality of
their replies under 42 U.S.C. 1306, 20 CFR Parts 401 and 422, 5 U.S.C. 552a (Privacy Act
of 1974), and OMB Circular No. A-130. In instances where respondent identity is
needed, the information collection will fully comply in all respects with the Privacy Act.
11. Questions of a Sensitive Nature
There are no questions of a sensitive nature on this survey.
12. Estimates of Annualized Burden Hours and Costs
Exhibit 1 shows the estimated annualized burden hours for the respondents' time to
participate in this experiment. The entire experiment (including the design phase) will
not exceed two years. All participants will complete the pre-test, which is estimated to
require 5 minutes. As explained above, the experimental website varies by experimental
arm; however, based on preliminary testing, each participant will require about 10
minutes to review the information on the site. Exhibit 1 provides an average time
required to complete the post-test questionnaires. The total burden hours are estimated to
be 709 hours.
Exhibit 2 shows the respondents' cost burden for their time to participate in this
experiment. The total cost burden is estimated to be $13,887.
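As a convenience for reviewers, the short Python sketch below reproduces the row-level arithmetic behind Exhibits 1 and 2. It is illustrative only; it is not part of the study materials or the data collection system, and the task labels and variable names are ours. Each row's burden equals the number of responses multiplied by the hours per response, rounded to the nearest whole hour, and each row's cost multiplies the rounded burden hours by the $19.56 average hourly wage.

    # Illustrative check of the burden and cost arithmetic in Exhibits 1 and 2.
    # Not part of the study materials; task labels and names are ours.

    HOURLY_WAGE = 19.56  # average hourly wage rate used in Exhibit 2

    # (task, number of responses, minutes per response), taken from Exhibit 1
    rows = [
        # Clinician choice experiment
        ("Clinician pre-exposure questionnaire", 750, 5),
        ("Clinician experimental website",       750, 10),
        ("Clinician baseline/control post-test", 125, 7),
        ("Clinician arm #1 post-test",           125, 8),
        ("Clinician arm #2 post-test",           125, 8),
        ("Clinician arm #3 post-test",           125, 12),
        ("Clinician arm #4 post-test",           125, 12),
        ("Clinician arm #5 post-test",           125, 14),
        # Health plan choice experiment
        ("Health plan pre-exposure questionnaire", 900, 5),
        ("Health plan experimental website",       900, 10),
        ("Health plan baseline/control post-test", 150, 7),
        ("Health plan arm #1 post-test",           150, 8),
        ("Health plan arm #2 post-test",           150, 12),
        ("Health plan arm #3 post-test",           150, 12),
        ("Health plan arm #4 post-test",           150, 14),
        ("Health plan arm #5 post-test",           150, 14),
    ]

    def round_half_up(x):
        """Round to the nearest integer, with halves rounding up (e.g., 62.5 -> 63)."""
        return int(x + 0.5)

    total_hours = 0
    for task, n_responses, minutes in rows:
        hours = round_half_up(n_responses * minutes / 60)  # e.g., 750 * 5/60 = 62.5 -> 63
        cost = round_half_up(hours * HOURLY_WAGE)          # e.g., 63 * $19.56 -> $1,232
        total_hours += hours
        print(f"{task}: {hours} hours, ${cost:,}")

    print(f"Total annualized burden: {total_hours} hours")  # prints 709, matching Exhibit 1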
Exhibit 1. Estimated annualized burden hours

Experimental Group                    Number of    Responses per   Hours per    Total burden
                                      responses    respondent      response     hours
Clinician Choice Experiment:
  Pre-exposure questionnaire             750             1            5/60           63
  Experimental Website                   750             1           10/60          125
  Baseline/Control Arm Post-test         125             1            7/60           15
  Experimental Arm #1 Post-test          125             1            8/60           17
  Experimental Arm #2 Post-test          125             1            8/60           17
  Experimental Arm #3 Post-test          125             1           12/60           25
  Experimental Arm #4 Post-test          125             1           12/60           25
  Experimental Arm #5 Post-test          125             1           14/60           29
Health Plan Choice Experiment:
  Pre-exposure questionnaire             900             1            5/60           75
  Experimental Website                   900             1           10/60          150
  Baseline/Control Arm Post-test         150             1            7/60           18
  Experimental Arm #1 Post-test          150             1            8/60           20
  Experimental Arm #2 Post-test          150             1           12/60           30
  Experimental Arm #3 Post-test          150             1           12/60           30
  Experimental Arm #4 Post-test          150             1           14/60           35
  Experimental Arm #5 Post-test          150             1           14/60           35
Total                                   4950            na             na            709

Exhibit 2. Estimated annualized cost burden

Experimental Group                    Number of     Total burden   Average hourly   Total cost
                                      respondents   hours          wage rate*       burden
Clinician Choice Experiment:
  Pre-exposure questionnaire             750              63           $19.56         $1,232
  Experimental Website                   750             125           $19.56         $2,445
  Baseline/Control Arm Post-test         125              15           $19.56           $293
  Experimental Arm #1 Post-test          125              17           $19.56           $333
  Experimental Arm #2 Post-test          125              17           $19.56           $333
  Experimental Arm #3 Post-test          125              25           $19.56           $489
  Experimental Arm #4 Post-test          125              25           $19.56           $489
  Experimental Arm #5 Post-test          125              29           $19.56           $567
Health Plan Choice Experiment:
  Pre-exposure questionnaire             900              75           $19.56         $1,467
  Experimental Website                   900             150           $19.56         $2,934
  Baseline/Control Arm Post-test         150              18           $19.56           $352
  Experimental Arm #1 Post-test          150              20           $19.56           $391
  Experimental Arm #2 Post-test          150              30           $19.56           $587
  Experimental Arm #3 Post-test          150              30           $19.56           $587
  Experimental Arm #4 Post-test          150              35           $19.56           $685
  Experimental Arm #5 Post-test          150              35           $19.56           $685
Total                                   4950             709               na         $13,887

*Based upon the mean of the average wages, “National Compensation Survey: Occupational Wages in the United States, May 2007,” U.S. Department of Labor, Bureau of Labor Statistics.

13. Estimates of Annualized Respondent Capital and Maintenance Costs
Capital and maintenance costs include the purchase of equipment, computers or computer
software or services, or storage facilities for records, as a result of complying with this
data collection. There are no direct costs to respondents other than their time to
participate in the study.

14. Estimates of Annualized Cost to the Government
Exhibit 3 shows the total and annualized cost for developing and conducting both the
health plan and clinician choice components of this study, including the cost of designing
the experiments, developing the simulated Web-based reports, conducting usability
testing of the Web-reports, pilot testing the experiment, collecting the data, analyzing the
data, preparing reports and papers for journal submission, and the cost for AHRQ staff to
oversee the project. The total and annual costs are identical since data collection will not
exceed one year. The total cost is estimated to be $844,000.
Exhibit 3. Total and Annualized Costs

Cost Components                                  Total Cost    Annual Cost
Experimental design                                $168,900       $168,900
Development of simulated Web-based reports         $157,900       $157,900
Pilot testing                                       $56,000        $56,000
Usability testing of Web-based reports              $56,300        $56,300
Data collection via Knowledge Networks             $126,000       $126,000
Data analysis                                       $56,300        $56,300
Preparation of reports and journal papers          $112,600       $112,600
AHRQ project management                            $110,000       $110,000
Total                                              $844,000       $844,000

15. Changes in Hour Burden
This is a new collection of information.

16. Time Schedule, Publication and Analysis Plans
The results of this study will be used to develop recommendations for helping consumers
to better understand and more effectively use complex information to select health plans
and providers, with the aim of making performance information less burdensome and
more accessible, useful, and transparent to the public. The simulated Web-based reports
will be made available as examples for other report developers to use.
The forecasted timeline is as follows:
Recruit sample – 30 days from the date of OMB Clearance
Obtain experimental data – 40 days from the recruitment completion date
Analyze data – 25 days from the experimental data collection completion date
Publication summarizing the results – 250 days from the analysis completion date

17. Exemption for Display of Expiration Date
AHRQ does not seek this exemption.
List of Attachments:
Attachment A – The Healthcare Research and Quality Act of 1999
Attachment B – Clinician Choice Experiment Overview & Example Screenshots
Attachment B2 – Health Plan Experiment Overview & Example Screenshots
Attachment C – Clinician Choice Experiment Invitation
Attachment D – Health Plan Experiment Invitation
Attachment E – Clinician Experiment – Pre-Test Questionnaire
Attachment F – Clinician Experiment – Post-Test Baseline
Attachment G – Clinician Experiment – Post-Test Experimental Arm 1
Attachment H – Clinician Experiment – Post-Test Experimental Arm 2
Attachment I – Clinician Experiment – Post-Test Experimental Arm 3
Attachment J – Clinician Experiment – Post-Test Experimental Arm 4
Attachment K – Clinician Experiment – Post-Test Experimental Arm 5
Attachment L – Health Plan Experiment – Post-Test Baseline
Attachment M – Health Plan Experiment – Post-Test Experimental Arm 1
Attachment N – Health Plan Experiment – Post-Test Experimental Arm 2
Attachment O – Health Plan Experiment – Post-Test Experimental Arm 3
Attachment P – Health Plan Experiment – Post-Test Experimental Arm 4
Attachment R – Health Plan Experiment – Post-Test Experimental Arm 5
Attachment S – Health Plan Experiment – Pre-Test Questionnaire
Attachment T – Federal Register Notice
Attachment U – Public Comments from the ANA and HPNA
Attachment V – Response to Public Comments from the ANA and HPNA
Attachment W – Distribution of CAHPS and HEDIS Scores and Assignment of
Anecdotes
Attachment X – Construction of Anecdotes
Attachment Y – Outcome and Process Variables
