Health Care Systems for Tracking Colorectal Cancer Screening Tests

OMB: 0935-0146


DRAFT #2 – October 14, 2008

Questions from OMB


Please put this study within the context of the National Cancer Institute's (NCI) various projects regarding colorectal cancer screening, such as the Prostate, Lung, Colorectal and Ovarian (PLCO) Cancer Screening Trial. How does this study build on the results seen in those studies? How does this study complement NCI's data collection efforts (e.g., their sponsorship of a cancer screening module for the Health Interview Survey)?

This project is designed to fit within the context of, and complement, other work being done within DHHS including at the NCI, at CDC, and through the US Preventive Services Task Force, among others.


The PLCO Cancer Screening Trial is a large-scale randomized clinical trial sponsored by NCI to determine whether certain cancer screening tests reduce the number of deaths from prostate, lung, colorectal, and ovarian cancer. For colorectal cancer (CRC), the trial is testing the effectiveness of flexible sigmoidoscopy for early detection of disease. The trial is taking place at 10 screening centers across the U.S. and has enrolled just under 155,000 healthy men and women between the ages of 55 and 74. Enrollment occurred between 1993, when the trial opened, and 2001. Screening was completed in late 2006, with follow up continuing for up to 10 years (through 2016) to determine the benefits and harms of screening. Data collection instruments included (1) a baseline questionnaire completed at the time of enrollment on demographics, personal and family history, lifestyle habits, and history of screening; (2) annual study update questionnaires to identify occurrence of and mortality from the cancers screened for; (3) dietary questionnaires to look at relationships between diet and cancer; and (4) a risk factor questionnaire mailed in 2006. Results of this trial to date have primarily reported the number and rate of detected polyps. Results regarding the ability of screening to reduce morbidity and mortality will not be available for several more years.


This project, as distinct from the NCI's PLCO trial, seeks to assess whether, to what extent, and how easily elements of an integrated health system redesign intervention – components of which were previously shown to improve screening rates in a large urban academic practice and to improve rates of diagnostic follow up for positive screens in practices affiliated with a large, for-profit managed care organization – can be transferred to a network of community-based practices and achieve similar rate improvements. It is intended to study the effectiveness of a delivery process for screening rather than the effectiveness of the screening modality itself. Thus, the two studies' purposes are distinct and do not significantly overlap, and neither do their required data collections.


NCI's data collection efforts (e.g., a cancer screening module for the HIS) seek to ascertain prevailing screening and diagnostic follow up rates through national surveys. Our study seeks to use such rates as a background or baseline against which to compare the rates we achieve through our system redesign intervention. Further, our project was conceived as a next step following such NCI data collection, which found that positive CRC screenings are not being appropriately followed up in many cases. Although shown in clinical trials to be effective in detecting early disease, such screening is not effective in community-based clinical practice outside of such trials if positive screens are not followed up. Our study seeks to assess whether a system redesign intervention intended to address tracking and follow up of CRC screening can be applied in a community setting.


The supporting statement says the CDC is very interested in this ICR. Has it been vetted with them?

Although this project is being conducted under contract with the AHRQ ACTION program, funding for it comes from CDC. As such, in addition to an AHRQ Task Order Officer (Cynthia Palmer), the project is being supervised by two technical advisors from CDC (Dr. Lisa Richardson and Dr. Brooke Steele). The technical advisors review all project materials and approve all project status reports. They have vetted this ICR and will join AHRQ on the conference call with OMB on Thursday, 16 October.


We see your statement in the introduction to Part B of the Supporting Statement that "The study is using a purposeful sample, as opposed to a random sample, and study personnel are aware that the results will not be scientifically generalizable, but rather provide significant and important 'lessons learned' for other delivery systems interested in implementing this intervention." However, this statement seems somewhat at odds with the emphasis on the sample design (interventions vs controls) and the goals (determining effectiveness under different settings). Given this confusion, we have a variety of questions about the basis for the sample design.

The purpose of this project is primarily to identify insights and lessons learned regarding adopting a system redesign intervention to improve CRC screening rates, tracking of screening tests, and rates of diagnostic follow up of positive screens, in order to encourage and guide adoption decisions by potential future adopters. Both as a means of demonstrating and documenting that this intervention is, in fact, capable of achieving its intended improvements, and as a tool to encourage its adoption, the project needs to be able to confirm that the intervention achieves its intended outcomes. Thus, the project uses both a four-cell quasi-experimental (Campbell and Stanley nonequivalent control group) design to assess the outcome of the intervention and less statistically rigorous, more qualitative approaches to gaining insights into the intervention implementation process to help would-be adopters. Both approaches are discussed in the Supporting Statement.


On page 9 you say this CRC screening program has previously been demonstrated to be effective. Please add a description of that study. What is the basis of the 'effectiveness' conclusion? Is this a numeric increase in the percentage of eligible patients that get screened? Will the prior level of effectiveness be the standard against which you determine 'effectiveness' in the current study?

The intervention is based on two prior studies conducted by project staff at Thomas Jefferson University. Those studies are described in the following publications (pdf versions of these articles are attached): (a) Myers RE, Sifri R, Hyslop T, et al. A Randomized Controlled Trial of the Impact of Targeted and Tailored Interventions on Colorectal Cancer Screening. Cancer, 110(9):2083-2091 (Nov 2007); (b) Myers RE, Turner B, Weinberg D, et al. Complete Diagnostic Evaluation in Colorectal Cancer Screening: Research Design and Baseline Findings. Preventive Medicine, 33:249-260 (2001); and (c) Myers RE, Turner B, Weinberg D, et al. Impact of a Physician-Oriented Intervention on Follow-Up in Colorectal Cancer Screening. Preventive Medicine, 38:375-381 (2004). Based on the findings reported in these publications, the project set effectiveness goals for the intervention of at least a 40% screening rate for the intervention group compared with the prevailing baseline rate for the control group (the previously reported rates were 46% and 33%, respectively, or about a 40% relative increase in rate) and a 65% diagnostic follow up rate (the previously reported rates were 63% for the intervention group and 54% for the control group, or about a 17% increase in rate).

Note that the project's goal of a 40% screening rate is lower than the 46% achieved in the original study. The original study was conducted in a large, university-affiliated urban practice setting with a baseline screening rate of 33%. Information available to project staff strongly suggests that the baseline rate in the network of community-based practices from which study participants will be selected is below 30%. Since the baseline starting point will be lower, the goal for the intervention is set correspondingly lower. The original study achieved a 40% relative increase in screening rate, from 33% to 46%. Applying that same 40% increase to a baseline of 28%-29% yields a target screening rate of 40%. If the baseline rate found by this project turns out to be higher, the target improvement will be adjusted upward accordingly.
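
Written out for clarity (using the rates cited above, and taking 28.5% as the midpoint of the expected 28%-29% baseline), the arithmetic behind the screening target is:

\[ \frac{46\% - 33\%}{33\%} \approx 0.39 \quad \text{(about a 40\% relative increase in the original study)} \]

\[ 28.5\% \times 1.40 \approx 40\% \quad \text{(the target screening rate for this project)} \]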


Please explain the power of this study to address the goals articulated in the Supporting Statement. What is the basis for selecting 20 intervention sites and 5 controls, and what is the power associated with this design (what size differences will you be able to detect)? Were your power calculations based on the number of patients, the number of providers, or the number of sites?

The project is sized to be able to detect the expected intervention effect sizes described above (i.e., a 40% increase in screening rate from 28%-29% to 40% and a 20% increase in follow up rate from 54% to 65%). The project's focus on gaining insights and lessons learned regarding the process of adopting and implementing the intervention guided the decision to have many more intervention practice sites (20) than control sites (5). Power was estimated based on the average LVPHO primary care practice size (measured as number of patients), the expected percentage of those patients who will meet the study's inclusion/exclusion criteria, and the expected percentages of screenings and of positive screens. At a 20 vs 5 split of intervention vs control practices, there should be sufficient power (80%-90%) to detect the expected intervention effects at an alpha level of 0.05.
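
As an illustration only, and not a reproduction of the project's actual power calculation (which, as described above, also accounts for practice size and eligibility rates), a conventional two-sample comparison of proportions using the rates cited above \((p_0 = 0.285,\ p_1 = 0.40,\ \alpha = 0.05,\ \text{power} = 0.80)\) implies a requirement of roughly

\[ n = \frac{\left( z_{1-\alpha/2}\sqrt{2\bar{p}\bar{q}} + z_{1-\beta}\sqrt{p_1 q_1 + p_0 q_0} \right)^2}{(p_1 - p_0)^2} = \frac{\left( 1.96\sqrt{2(0.3425)(0.6575)} + 0.84\sqrt{(0.40)(0.60) + (0.285)(0.715)} \right)^2}{(0.115)^2} \approx 266 \]

patients per group, where \(\bar{p} = (p_0 + p_1)/2\) and \(q = 1 - p\). Clustering of patients within practices would inflate this figure by a design effect of \(1 + (m - 1)\rho\), for average cluster size \(m\) and intraclass correlation \(\rho\).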


On page 9 you say that the goal is to determine if this existing program is effective in other settings. How was LVH selected as the site for this demo? How is it like/unlike the site where the screening intervention has already been shown to be effective? How is it like/unlike other clinic/hospital settings? For example, the policy LVH has of incentivizing its physicians to undertake quality improvement activities seems very unusual and may lead to more widespread implementation than in most other settings.

The LVPHO network of primary care practices was selected based on a number of considerations. It is a network consisting of a good mix of types of primary care practices, it is a practice-based research network (PBRN), and it is affiliated with an integrated delivery network (IDN). AHRQ supports PBRNs, IDNs and other care delivery networks such as ACTION (Accelerating Change and Transformation in Organizations and Networks), the network within which this project is being conducted. Agency support for these networks is provided in response to 1999 legislation (Public Law 106-129). In reauthorizing and renaming the Agency, this legislation directed AHRQ to employ research strategies and mechanisms that link research directly with clinical practice in geographically diverse locations throughout the country, including the use of "provider-based research networks." PBRNs, IDNs and other care delivery settings within ACTION are tasked to answer clinical and organizational questions about how to improve health care delivery. They are frequently intended to study the effectiveness of delivery processes and organization rather than the effectiveness of the evidence-based practice(s) themselves. AHRQ provides funding for more than 50 PBRNs and 15 ACTION partnerships nation-wide.

The LVPHO network of primary care practices differs significantly from the sites where components of the intervention were previously tested. The original site for the test of the screening intervention was a large, urban, university-based practice. The original site for the test of the follow up intervention was a geographically dispersed group of practices that each provided care for patients insured by a large for-profit Health Maintenance Organization. The LVPHO site is a network of smaller, less urban, more geographically compact community-based practices serving the Lehigh Valley and joining together with the Lehigh Valley Hospital to offer an insurance product to local employers. It is like other PBRNs, IDN-affiliated networks, and PHO-affiliated networks, each of which is a very common model. For example, a recently published study (Green LA and Hickner J, A Short History of Primary Care Practice-based Research Networks, Journal of the American Board of Family Medicine, 19:1-10 (2006)) identified 111 PBRNs nationwide, 87 of which participated in a descriptive study revealing that they contained a total of 2,724 practices in 44 states and Puerto Rico, with 12,954 physicians caring for 14.7 million patients. We will note, when we report our results, that physicians were compensated for quality improvement activities.


At the bottom of page 4/top of page 5 of Part A of the Support Statement, you list 5 attributes that you will use to select the 25 hospitals. This seems like more stratification variables than a sample size of 25 can support. Will you have the power to attribute differences based on size, EMR system use, and practice type in the intervention group? What about the controls?

The project will recruit 25 primary care practices (not hospitals) from among the 111 such practices in the LVPHO using the attached practice recruitment matrix as a guide (see Attachment A). Note that this is a recruitment process rather than a sampling process. Practices must consent to participate. The intent is to recruit practices in such a way as to achieve a mix of practices across each of the matrix’s dimensions rather than to achieve a stratified sample with sufficient statistical power to detect differences. Screening rates and rates of diagnostic follow up of positive screens will be compared between patients of intervention practices as a group vs patients of control practices as a group, and there is expected to be sufficient power for those comparisons. Statistically rigorous comparisons will not be made to attribute differences in rates based on stratification variables. Qualitative analysis will be performed to gain insights into how, how well, and how easily different types of practices incorporated the system redesign intervention and changed their screening behavior.


How will LVH determine which practices are the controls and which are the intervention sites?

A purposive sample of 25 practices that meet the project's target mix of ownership, size, specialty, and geographic location criteria will be recruited. Twenty will be randomly assigned to the intervention group and five to the control group.


It seems as though your conclusions would be anecdotal at best. Will such anecdotal data allow you to meet the goals of the study?

* What are the distributional and representativeness assumptions underlying the Campbell and Stanley design?

* What sample design assumptions would be necessary for drawing conclusions from these data about the business case and the cost-effectiveness of this intervention?

* The plans to submit manuscripts to the high-level peer reviewed journals listed on page 11 of Supporting Statement B suggest that the conclusions might be used as empirical support for policy decisions.

The aspects of this project referred to in this set of questions appear to assume that the goals of the study are to conduct a statistically rigorous clinical trial requiring strict consideration of factors fostering internal and external validity. This is not the case. The goals of this project are to assess whether, to what extent, and how easily a health system redesign intervention previously shown to improve screening rates and rates of diagnostic follow up for positive screens can be transferred to another clinical setting and achieve similar rate improvements. The project is not designed to determine whether early screening for CRC is effective or which screening modality is most effective; other studies have established that screening for CRC can lower mortality if positive screens are appropriately followed up. Rather, the project is designed to ascertain whether expected intervention results found in one setting can be achieved in a different setting and, if so, what setting attributes appear to have contributed to effectiveness, so that other would-be adopters of the intervention can judge whether it is likely to be effective in their setting or how to re-engineer their setting to increase the likelihood of effectiveness. Publication in peer reviewed journals does not signal an intention to use the results of this project as empirical support for policy decisions. Such publication is intended to inform potential adopters of the intervention (especially other PBRNs and IDNs) of the results of this implementation attempt as a guide to their decision about adopting it.


If patients are opting out of the study through the SEA form (rather than opting in), how will you separate out the individuals who simply didn't respond to the SEA form from those who failed to follow their providers' recommendations to get the screening?

This is a population-based study assessing an intervention’s ability to achieve expected screening and follow up rates. The denominator for those rates will include those who opt out. Thus, there is no need to separate out those who fail to complete the SEA form from those who fail to get screened. Screening rates will be calculated as the number of eligible patients who get screened divided by the number of eligible patients regardless of whether or not they opt out.
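
In formula form, restating the calculation just described:

\[ \text{screening rate} = \frac{\text{number of eligible patients who get screened}}{\text{total number of eligible patients, including those who opt out via the SEA form}} \]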


Who is actually sending the mailings and the follow-ups (e.g., Attachment D: if a patient is recommended for screening but the patient doesn't follow through)? Is it the practice or the study personnel? We understand that the mailings will be signed by the practice and have every appearance of being sent by the practice, but is it actually the study personnel who are in charge of getting those mailings sent out?

The project personnel rather than practice staff will be conducting all of the mailings on behalf of the participating practices. A major feature of the intervention being implemented by this project is to facilitate population-based screening and follow up diagnostic evaluations of positive screens in cooperation with a network of practices. As such, project personnel will centrally screen patients from all participating intervention practices for eligibility (through both the electronic records review and the SEA form), send out all mailings, track who does and does not get screened, monitor screening results, provide screening result feedback to clinicians, track follow up, and the like.


Who is actually screening the patients for screening eligibility? Is it the study personnel or the practices?

As indicated in answer to the previous question, project personnel will conduct both the electronic records review to preliminarily screen patients for eligibility and then review the SEA forms to confirm eligibility.


Who is doing the chart audits? Is it study personnel or the practice staff?

Project personnel will be conducting the chart audits.


How will you track patients who opt to get their screenings from providers outside the LVH? Is HIPAA authorization required to obtain patient data from outside clinics/hospitals?

Screenings offered through this project will be of two types: stool tests or colonoscopy. Stool test kits supplied by a cooperating private clinical laboratory will be mailed to all patients as part of the intervention being studied. These kits include use and return instructions and a pre-addressed return envelope for patients to use to send the tests to this lab for processing. The laboratory will provide LVPHO personnel with information regarding who did and did not return the kits and with the test results. We do not expect patients to use an outside lab for their stool tests. In addition to the stool test kits, the names and office contact information of the colonoscopists to whom each participating practice refers will also be provided to patients of that practice to use if they prefer colonoscopy rather than stool testing. (Patients will also be informed that they can use any other colonoscopist who is not on the list supplied by their practice if they choose.) Claims for screening colonoscopy for patients who are insured through an LVPHO insurance product (an expected majority of the patients who will be included in the study) will be available for electronic review by study personnel affiliated with the PHO, regardless of which provider they use for the colonoscopy. LVPHO personnel associated with this project will also monitor electronic medical records and conduct chart audits to track colonoscopy screenings and diagnostic follow up for patients not insured through the PHO. HIPAA compliance was carefully considered in the design of this project and was fully reviewed by the IRBs at both LVH and Thomas Jefferson University. Both IRBs concluded that there were no compliance violations.


How will the act of a provider giving a recommendation for screening be flagged in the patients' records? Is there any situation where a recommendation is made (e.g. orally) but is not documented?

The health system redesign intervention being studied by this project is a population-based screening program. The recommendation/invitation for screening will come through a letter mailed to patients and signed by providers in participating practices. As such, the recommendation/invitation does not come directly from the provider and is not noted in the patient record by the provider. Some providers may recommend CRC screening to some of their patients on their own, outside of this intervention. Such recommendations may be made by providers in intervention or control practices and are part of the environmental context of the population-based intervention of interest. The project will seek to assess the effect of the population-based system redesign intervention in an environment in which some providers will recommend screening to their patients and some will not (or some providers will recommend screening to some of their patients and not to others). However, the project will not track such provider-level recommendations.


How will the individuals for each set of focus groups be chosen?

The project will conduct focus groups with two types of populations: (a) practice providers and staff and (b) patients. The intended population for each of the focus groups with practices is all of the providers and clinical and non-clinical staff of that practice. There will be no selection process; all providers and staff will be invited. This will be true for both pre- and post-intervention focus groups at intervention practices and pre-intervention focus groups at control practices. Participants for the patient focus groups will be selected at random from those meeting the selection criteria for each focus group (e.g., screeners vs non-screeners), split by geographic residence into those living nearer to one focus group site than the other.


Seven types of activities are listed on pages 7-9 of Part A. What is the difference between the practice focus groups (item 3 on page 7) and the patient focus groups (item 7 on page 9)? Are the practice focus groups the providers (doctors and nurses)?

Practice focus groups will be conducted with providers and staff at the participating practices. The intent of these focus groups is to gain insights into baseline screening knowledge, attitudes, and behavior at intervention and control practices and to compare them pre- and post-intervention for the intervention practices to assess the impact of the intervention on those practices. Patient focus groups will be conducted with patients of intervention and control practices (in the absence of practice providers and staff) to gain insights into patient knowledge, attitudes, and behaviors associated with screening and (for patients of intervention practices) how the intervention affected them.


How was it determined that 50 chart audits will be sufficient? What level of accuracy are you seeking?

Chart audits will only be performed to determine whether a complete diagnostic evaluation was performed as follow up to a positive stool test screen for CRC for those cases of positive screens for which the electronic record is incomplete or inconclusive. The project conservatively estimated an average of as many as 50 such instances per practice. This average results from a mix of EMR-present and EMR-absent practices. The presence of an EMR is expected to greatly reduce the number of incomplete or inconclusive records.


It seems like it would be especially useful to include patients diagnosed with cancer during the intervention in the focus groups. Without these folks you will get a very skewed view - folks who were negative all along, having to go through hoops and scares, aren't necessarily the best cheerleaders for early screening. However, it does make sense to have two separate focus groups - those for whom cancer was found and those for whom it wasn't - so as not to make those for whom it was found feel bad. If the reason for excluding them is concern that they would perceive the focus groups to be self-help sessions, couldn't this be clarified for them?

The IRB reviewing the project's research protocol advised against including patients diagnosed with cancer in the focus groups. Even with clarification of the purpose of the group, the IRB was concerned that the harms might outweigh the benefits of participation and that clinicians might need to be present at the groups to provide clinical advice or psychological support, which is outside the scope of the intended focus groups. AHRQ and CDC agree with the IRB.


Why are the patient focus groups not practice specific? Wouldn't it be helpful to know patient experiences of the implementation of the screening program on a practice-by-practice basis?

This project is not attributing results on a practice-by-practice basis. Screening rates and rates of follow up to positive screens will be compared between intervention practices as a group and control practices as a group. The patient focus groups are only intended to illuminate how patients feel about CRC screening and (for intervention practice patients) how they feel about the intervention – regardless of the type of practice at which it was implemented – and how the intervention may have affected their experience in getting screened and followed up compared with how control patients feel. Thus, it would not greatly add to the study to conduct patient focus groups by practice.


Does the PHSA allow AHRQ to withstand a FOIA request?

Under the PHSA, AHRQ would not be permitted to release information identifying persons (patients) or establishments (providers or provider groups) in the event of a FOIA request.


Please make sure that the race/ethnicity questions conform to OMB standards (e.g. there should be no "other" categories and no "more than one race" categories).

Project staff have revised the SEA form questions on race/ethnicity to conform to OMB standards. The form no longer contains the response category of “other” and the remaining response categories use conforming wording for the five race categories (American Indian or Alaska Native, Asian, Black or African American, Native Hawaiian or Other Pacific Islander, and White).


Is there actually a burden on the physicians, since the LVH is compensating them for their time (i.e., the LVH is considering it part of the physicians' job)?

The LVPHO considers participation in studies related to improving health care and health outcomes to be a value. As such, it compensates physicians in affiliated practices for participating in studies sponsored by the PHO. It does this as a quality improvement initiative. Thus, it could be argued that there is no burden because compensation is given. However, the compensation is not actually payment for providing data but rather payment for activity related to improving quality. Thus, it could also be argued that providing the data is a burden. It is a judgment call. We took a conservative approach and counted it as a burden so as not to underestimate the overall burden. The trade-off is that we may have overestimated the burden.


The consent form seems to convey the impression that the participant should be more concerned about risk than one would expect, given that participants are not providing consent for the screening itself, only for taking part in the focus group and answering a questionnaire. Both the long paragraph under Research Study and the text under Injury/Disclaimer suggest that the participant should expect some level of physical risk. If your IRB does not want you to remove these two long paragraphs, we suggest shortening them to a single sentence.

The language used in the patient focus group consent form is that which is required by the IRB at LVH (the governing IRB for this study) as the standard necessary for informed consent.


Attachment A. PRACTICE RECRUITMENT MATRIX

1. Ownership and/or Affiliation

   a. MATLV: 6-10

   b. LVPG: 6-10

   c. PBS: 5-7

   d. Hospital clinics: all 3

   e. Independents: 3-6

2. Specialty (General Internal Medicine vs. Family Medicine)

   a. General Internal Medicine: 9-15

   b. Family Medicine: 12-17

3. Practice Size

   a. 4 or more practitioners: 11-17

   b. 3 or fewer practitioners: 8-13

4. Geography

   a. Urban: 7-12

   b. Suburban: 10-15

   c. Semi-rural/Rural: 6-10



