Patient Navigator Demonstration Program Evaluation
HRSA Memo
OMB: 0915-0328
DATE: June 30, 2009
TO: Karen Matsuoka
FROM: Amanda Cash
SUBJECT: Patient Navigator Demonstration Program (PNDP) – Response to Comments

Background
This program is authorized under the Patient Navigator Outreach and Chronic Disease Prevention Act of 2005, P.L. 109-18, Section 340A of the Public Health Service Act (42 U.S.C. 256a). The Patient Navigator Outreach and Chronic Disease Prevention Demonstration Program (PNDP) authorizes funds for the development and operation of demonstration projects to recruit, assign, train, and employ patient navigators who have direct knowledge of the communities they serve to facilitate access to health care for individuals who are at risk for or who have cancer or other chronic diseases, including conducting outreach to health disparity populations. Patient navigator services were pioneered by the Patient Navigation Program for improved early diagnosis and timely access to cancer care at Harlem Hospital. The outcomes of this project were the first to show the critical role patient navigation services can play in overcoming health care access barriers for underserved patients. This landmark model opened the door to the Patient Navigator Outreach and Chronic Disease Prevention Act being signed into law.
The PNDP is a pilot demonstration project for patient navigation. Its purpose is to develop programs that improve the quality of care for health disparity populations using recently developed models of patient navigation. Navigation programs for the treatment of cancer are relatively well developed, and established programs are currently undergoing rigorous evaluation by NIH using confirmatory study designs involving specific hypotheses as well as pre/post and control group comparisons.
Under the PNDP, however, the navigation model is being expanded to new areas of clinical care: preventive care and chronic diseases other than cancer. PNDP programs are in development, so there is no expectation that the interventions will be stable over the course of the project. Because the programs are brand new, and grantees are encouraged to improve (and change) program policies and procedures as experience is gained, the focus of the evaluation is exploratory and descriptive. The evaluation will describe whether there are indications that the programs are successful in meeting quality improvement goals across grantees and sites. While some comparisons to benchmarks and baselines will be made, these comparisons will be made within the context of qualitative data that explain how the programs were implemented, including the persons served, the challenges encountered, and the lessons learned. The evaluation will provide information about which aspects of navigation are most resource-intensive and should provide guidance for the development of future patient navigator programs.
Because the focus of the PNDP is the development of program infrastructure and resources, and the evaluation is exploratory and descriptive, every attempt has been made to minimize the burden of data collection. This means that most, if not all, of the data are collected as part of the clinical or administrative procedures already in practice at the sites. However, the sophistication of information technology and administrative oversight varies considerably among the sites. In sites where data collection procedures are new, implementation should provide valuable, direct information to the programs for their own monitoring of project goals and quality. The evaluation has not required implementation of standard measures across all sites because (1) some measures would duplicate existing procedures and create additional burden; (2) some measures do not meet clinical or administrative needs; and (3) standardization is difficult because the sites differ considerably in the type and stage of disease or condition that is the focus of the program. For example, some sites are focusing on cancer prevention and treatment, while others focus on diabetes screening, prevention, and treatment.
OMB Comment: I think this study requires a part B but none is included.
HRSA Response: This is not a research or confirmatory study, and does not involve sampling.
The PNDP is a pilot demonstration project and its evaluation will examine how the explicit
duties of a navigator (provided in the statute) are translated at the community level. There was no
statistical sampling used to determine how communities would receive funding; there was a
competitive grant process through which six sites were chosen based on funding appropriated for
the program.
OMB Comment: In order to assess the utility of this information collection, it would be
important for us to know how the evaluation will proceed. For example, will the grantees be
doing pre/post analyses?
HRSA Response: This pilot demonstration project is not a research study of a stable intervention. Some sites will compare limited information from before they had a patient navigator with the same information once the navigator is in practice, to see, for example, whether there are fewer broken appointments; much of the information, however, does not lend itself to pre/post analyses. HRSA will describe data collected on navigator activities and characteristics, and on some patient health indicators as patients are navigated. The program will compile the data in a descriptive way to outline what the project achieved.
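
To make the kind of limited pre/post comparison described above concrete, the following is a minimal Python sketch. The counts and the function name broken_appointment_rate are hypothetical illustrations, not data or procedures from any grantee.

def broken_appointment_rate(broken: int, scheduled: int) -> float:
    """Fraction of scheduled appointments that were broken (missed)."""
    return broken / scheduled if scheduled else 0.0

# Hypothetical counts abstracted from one site's administrative records.
pre_rate = broken_appointment_rate(broken=140, scheduled=500)   # before the navigator
post_rate = broken_appointment_rate(broken=90, scheduled=480)   # with the navigator

print(f"Before navigator: {pre_rate:.1%} broken appointments")
print(f"With navigator:   {post_rate:.1%} broken appointments")
print(f"Change:           {post_rate - pre_rate:+.1%}")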
OMB Comment: Will there be randomized control groups? If not, how do we know whether
any improvements we see are attributable to the intervention?
HRSA Response: There will be no randomized control groups, as the programs being evaluated are pilot demonstration programs. The navigator programs are still in development, and the evaluation will assess whether funded grantees were able to achieve the outcomes proposed in their grant applications. Findings from the PNDP evaluation may be applied to create stable programs in the future, which might then be appropriate for a randomized control study. The legislation permitted grantees flexibility in that each application could outline how the grantee would measure and evaluate program outcomes. Each grantee developed benchmarks based on its community's needs and the populations served.


OMB Comment: How will the patients be sampled?
HRSA Response: Patients will not be sampled for this project in the traditional sense of the word. Grantees will enroll patients in the navigator program based on criteria outlined in the application process. The grantees were funded through a competitive grant process, and all patients identified as meeting certain clinical criteria (which differ across sites) will be asked to enroll in the program. The sites are using a combination of socio-demographic and clinical criteria to identify patients at risk for adverse health outcomes. Each site, in its grant application, discussed the populations and needs of the communities it serves. One site may have a greater number of patients with diabetes, for example, and its focus will be to enroll patients with diabetes in the PNDP, while another site may focus on the elderly.
Patients enrolling in the navigation program will be followed over time. Participation is completely voluntary, and any patient may decide to terminate involvement with the project at any time for any reason. While sites will track enrollment and drop-out rates, obtaining follow-up information on patients who meet clinical criteria but do not enroll, or who drop out, is difficult at many sites given the available data collection structures. Describing the challenges and solutions involved in obtaining relevant outcome data will be one of the important findings to come out of the evaluation.
OMB Comment: How will non-response be handled?
HRSA Response: Non-response for this project might be better described as non-participation,
since the aim here is to enroll patients in a program. We will report information on rates of
program non-participation, refusals, and drop-outs and will look for common factors across sites
that could help to explain patient non-participation.
OMB Comment: It is also not clear what the “matrix” is.
HRSA Response: The matrix is a framework the contractor and the program office are using to
guide evaluation of the program. The matrix includes a set of questions that are linked to the data
collected across the sites. An updated version of the matrix is attached with revisions that
provide greater clarity.
OMB Comment: And what do the italics and underlined text represent? What is the 3rd column, for example, and how were its entries developed? (e.g., is “8% participate in clinical trials” a baseline or a benchmark? How was the figure 8% picked? And is it practical to expect that PNs will be able to get 75% of their patients insured? What if >25% of their patients are not eligible for health care coverage or cannot afford what is available to them?)
HRSA Response: The italics and underlined text relate to the logic model and are not
significant; they have been removed to prevent further confusion. The third column represents
benchmarks and baseline indicators for the six sites conducting the program.
The benchmarks were developed from a review of the available literature, from grantee proposals, and from the contractor’s significant experience in this area. The contractor conducting this evaluation carried out the initial Patient Navigator Program for NIH. The purpose of the evaluation is to determine whether all six sites can meet the benchmarks, and what factors, if any, prevent the sites from meeting them. In the case of insurance/health care coverage, most of the sites have found ways to obtain at least limited coverage for their patients through Federal, State, County, or private programs. The evaluation will examine what common factors determine success in meeting this benchmark across sites, and what challenges prevent successful attainment of the benchmark.
OMB Comment: It also seems a bit odd to assess the PNs against a benchmark: why not compare against a control site? How do we know that a particular site wouldn’t have been able to reach the benchmark without the intervention? Also, for each benchmark, please provide information on what the numerators and denominators are.
HRSA Response: Comparisons with control sites are not part of this study because the programs are in development and the design is exploratory.
The benchmarks had not been reached prior to the development of the PNDP; improving performance on them is a key goal of the navigation program at each site. In other words, navigation is the quality improvement activity that targets many of the benchmarks. The evaluation cannot establish whether a site could have achieved a benchmark without navigation. The benchmarks in the matrix came from clinical experts at the sites, a review of the literature, and the contractor’s previous experience with a patient navigator program at NIH.
The numerator and denominator are site-dependent. Some sites are seeking to improve mammogram rates in their clinic population of eligible women; other sites are seeking to improve the percentage of diagnosed diabetics whose HbA1c is in a healthy range.
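
As an illustration of how a site-specific benchmark reduces to a numerator and a denominator, here is a minimal Python sketch using the mammogram example; the record fields shown are hypothetical, and the actual eligibility criteria differ across sites.

from dataclasses import dataclass

@dataclass
class PatientRecord:
    # Hypothetical fields; each site defines its own eligibility criteria.
    eligible_for_mammogram: bool
    mammogram_completed: bool

def mammogram_rate(records: list) -> float:
    """Benchmark rate: completed mammograms (numerator)
    among eligible women in the clinic population (denominator)."""
    eligible = [r for r in records if r.eligible_for_mammogram]
    if not eligible:
        return 0.0
    return sum(r.mammogram_completed for r in eligible) / len(eligible)

# Hypothetical clinic population: three eligible women, two screened.
records = [
    PatientRecord(eligible_for_mammogram=True, mammogram_completed=True),
    PatientRecord(eligible_for_mammogram=True, mammogram_completed=False),
    PatientRecord(eligible_for_mammogram=True, mammogram_completed=True),
    PatientRecord(eligible_for_mammogram=False, mammogram_completed=False),
]
print(f"Mammogram rate: {mammogram_rate(records):.0%}")  # -> 67%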
The diversity of communities and the variety of settings in which similar programs have been managed successfully at the local level suggest that a “one size fits all” standardized approach does not work for this program. A hallmark of the program is its guiding principle of a significant degree of local control over the development and implementation of the PNDP grant. Each grantee is developing its own protocol for the patient navigator to fit the needs of its community and to utilize and build on the available resources. Because no standardized competency training or evaluation exists, each site has developed its own standards for evaluating its patient navigator’s performance. Each grantee site is enhancing or building on existing programs in its PNDP.
OMB Comment: Are there ICs missing? For example, the matrix talks about a “patient
interview by navigator.” I can’t seem to find this in ROCIS.
HRSA Response: There are no information collection requests missing. Information about the patient is collected as part of the site’s routine clinical practice, through such means as intake forms and administrative records, and will be abstracted from these existing records. The wording of the matrix regarding some of the data sources was somewhat unclear and has been revised. The intake forms and other patient records are unique to the clinical practice of each program.

OMB Comment: Who collects the data in the “common data elements data dictionary,” and when? Are these data collected every time a patient sees a PN? Why are health care coverage questions asked twice (once in table 1 and another time in table 8)? How do these data elements fit together to arrive at the measures specified in the matrix (e.g., how do you measure the number of broken appointments)? How were the conditions and comorbidities selected? For example, why is hypertension not listed as a possible comorbidity, and why is mental illness the only comorbidity? What is the “comorbidity interference degree”?
HRSA Response: Data will be collected through multiple procedures at the sites. Most of the data will be collected by the patient navigator in the course of providing navigation services. Data sources tapped during these activities include the patient, health care providers, medical records, and other administrative data sources. In some sites, some information will be collected from central administrative databases. Many of the data elements are collected only once (e.g., patient intake, PN demographics, utilization data). One data table (the patient tracking log) is completed by the navigator for each PN contact with, or on behalf of, any particular patient.
Health care coverage is asked about multiple times because, for many patients in these populations, it is a fluid variable that changes over time. Procedures for collecting the data elements differ at each site, although the data elements themselves are standardized. For example, broken appointments will be tracked as part of the navigator’s clinical duties through patient self-report, review of medical or administrative records, or communication between PNs and health care providers. Some sites have administrative systems that are able to track missed appointments.
The conditions being navigated were identified by each site in its grant application based on its own assessment of its specific population. The comorbidity list is based on a standardized set of comorbidities from the Charlson Comorbidity Scale [1], modified to lessen the burden for the navigator and to include comorbidities known to complicate disease management, some of which were added by the clinical experts at the sites. The Charlson Comorbidity Index was originally designed as a measure of the risk of 1-year mortality attributable to comorbidity in a longitudinal study of general hospitalized patients. It was then validated for the same outcome in a cohort of breast cancer patients. It was subsequently adapted so that International Classification of Diseases, Ninth Revision (ICD-9), codes could be used to calculate the Charlson Comorbidity Index with existing administrative data [2].
Hypertension and depression are both comorbidities we are tracking. The degree of interference
related to comorbidities is operationalized as a count of the number of comorbid conditions.
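
For illustration only, the operationalization described above (interference degree as a simple count of comorbid conditions) might be computed as in the following minimal Python sketch; the checklist shown is a hypothetical subset, not the sites’ actual modified Charlson list.

# Hypothetical subset of a modified Charlson-style checklist; the actual
# list used by the sites is not reproduced here.
COMORBIDITY_CHECKLIST = {
    "diabetes",
    "hypertension",
    "depression",
    "congestive_heart_failure",
    "chronic_pulmonary_disease",
}

def interference_degree(patient_conditions: set) -> int:
    """Comorbidity interference degree, operationalized as a count of
    checklist conditions present (not the weighted Charlson index score)."""
    return len(COMORBIDITY_CHECKLIST & patient_conditions)

print(interference_degree({"diabetes", "depression"}))  # -> 2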
OMB Comment: According to the supporting statement, this study appears like it will go on for 1.25 years (i.e., you’re collecting 5 quarters of data). However, the logic model says that it will take at least 2 years to get some of the outcomes you are testing for (e.g., decreased time from screening to diagnosis, participation in clinical trials). This seems to mean that you will not be able to obtain valid results on those measures unless the study were extended to 2 years. Please explain.
HRSA Response: The logic model provides a broad picture of the program and its goals, and is a generalized statement of the timeframe; a timeline is attached for clarification. The projects themselves are funded for two years. Data collection will take place for 1.25 years and is scheduled to begin in August or September of this year. HRSA is required by the legislation to conduct an evaluation, and the fact that the grantees will not be funded past the end of September 2010 will not preclude measuring some outcomes. The evaluation will report on what outcome data are available, and will explain why and to what degree the information may be generalizable to future programs. Short-term outcomes, such as decreased time from screening to diagnosis and clinical trials participation, will be measured.
OMB Comment: Finally, it seems that the grants have been awarded and the entities picked; however, there is no information on those sites and how they were picked.
HRSA Response: This is correct. The funding mechanism for this program was a competitive grant opportunity for two years of support. The program was authorized in FY 2005 with appropriations for FY 2006 through FY 2010, and ends September 30, 2010. Federal funds became available beginning in FY 2008, thereby limiting the PNDP to a two-year grant period, with the period of support for approved and funded projects beginning September 30, 2008 and ending August 31, 2010. FY 2008 funding was $2.948 million, awarded to six patient navigator grantees.
Eligible applications were peer reviewed by an objective review committee, and the six awardees are from across the country (CA, FL, GA, NY, SC, and TX), representing a diverse range of eligible organizations: academic health center, Federally Qualified Health Center (FQHC), hospital district, free clinic, and community non-profit organization. Per the legislation, eligible applicants included public or nonprofit private health centers (including an FQHC as defined in section 1861(aa)(4) of the Social Security Act), health facilities operated by or pursuant to a contract with the Indian Health Service, hospitals, cancer centers, rural health clinics, and academic health centers. Nonprofit entities that enter into partnerships with, or coordinate referrals with, such centers, clinics, facilities, or hospitals to provide patient navigator services were also eligible. All awardees applied for and received a funding preference by indicating how they would use patient navigator services to overcome barriers to improve health care outcomes in their communities. Barriers cited by awardees related to residing in a medically underserved area (MUA) or health professional shortage area (HPSA), geographic isolation, transportation, poverty (below 200% of the Federal poverty level), limited English proficiency, cultural barriers, lack of health insurance, and epidemic levels of chronic disease.
OMB Comment: These seem like huge questions to me, which will probably take the
program/contractor several weeks to answer. If it will take more than 2 weeks, I would prefer
HRSA to withdraw this ICR, work on it, and resubmit it.
HRSA Response: We believe that the questions you have raised were a result of unclear and perhaps inconsistent wording in some of the materials provided with this ICR, and that the revised materials will resolve the major issues noted in the questions. Due to the short timeframe of this project and the legislative requirement to conduct the evaluation, HRSA requests that the ICR not be withdrawn. We apologize for the confusion with this ICR and would be glad to discuss any outstanding questions at your discretion.

References
[1] Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. Journal of Chronic Diseases 1987;40:373-383.
[2] Deyo RA, Cherkin DC, Ciol MA. Adapting a clinical comorbidity index for use with ICD-9-CM administrative databases. J Clin Epidemiol 1992;45:613-619.
Patient Navigator Demonstration Program Timeline

Date              Description
2005
  June 7          Program authorized by Congress
2008
  FY 08           Funds appropriated to PNDP
  Sept. 1         Grantees awarded funds
2009
  July 15         Grantee Quarterly Report (Quarter 2 2009) due
  August          Target OMB approval
  October 15      Grantee Quarterly Report (Quarter 3 2009) due
  October         First data collected through Sept 2009 to HRSA (assumes OMB approval in September)
2010
  Jan 15          Grantee Quarterly Report (Quarter 4 2009) due
  April 15        Grantee Quarterly Report (Quarter 1 2010) due
  July 15         Grantee Quarterly Report (Quarter 2 2010) due
  August 15       Grantee Final Report due
  August 31       End of grantee funding pursuant to authorizing legislation
  Sep 1           Second Congressional Report draft to HRSA
  Sept 28         End of contract / Final Congressional Report to HRSA
2011
  March 30        Final report due to Congress
