1. Universe and Sampling Method
The colorectal cancer screening intervention will be conducted in two large managed care organizations, Henry Ford Health System (HFHS) and Lovelace Health Systems (LHS), selected for diversity of the patient population, including socio-economic status and urbanicity. After matching (or blocking) on baseline CRC screening rate, primary care clinics from LHS and HFHS will be randomly assigned to the following study arms:
Clinic-focused intervention alone
Combined clinic- and patient-focused intervention
No intervention control group
An Augmented Youden Squares design will be used to randomize clinics to the study conditions.49 This design allows study units to be matched in block sizes smaller than the number of study conditions, which is particularly useful when substantial variation in the blocking variables makes it difficult to block sufficient numbers of study units. As applied in this study, we will match study clinics in blocks of three, thereby providing structure for control of clinic variation; with seven rows in blocks of three, the design assigns 21 clinics to the three study conditions. Blocking will be done on baseline CRC screening rate or other control measures, with statistical adjustment for the blocking measures during the analysis. The Youden Squares design has been applied for the past half century in randomized controlled trials with humans and animals and in program evaluation.50-52
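To make the block randomization concrete, the sketch below illustrates the blocking-in-threes idea in Python: clinics are ranked on baseline screening rate, grouped into seven blocks of three, and the three arms are randomly ordered within each block. It is a simplified illustration rather than the full Augmented Youden Squares construction, and the clinic identifiers and baseline rates are hypothetical.

```python
import random

# Hypothetical input: 21 clinic IDs with illustrative baseline CRC
# screening rates (not actual study data).
rng = random.Random(42)
clinics = {f"clinic_{i:02d}": round(rng.uniform(0.20, 0.40), 3)
           for i in range(1, 22)}

ARMS = ["clinic-focused", "combined", "control"]

def block_randomize(baseline_rates, seed=7):
    """Rank clinics by baseline screening rate, form 7 blocks of 3 similar
    clinics, and randomly order the 3 study arms within each block."""
    r = random.Random(seed)
    ranked = sorted(baseline_rates, key=baseline_rates.get)
    blocks = [ranked[i:i + 3] for i in range(0, len(ranked), 3)]
    assignment = {}
    for block in blocks:
        arms = ARMS[:]
        r.shuffle(arms)          # random arm order within the block
        assignment.update(zip(block, arms))
    return assignment

for clinic, arm in sorted(block_randomize(clinics).items()):
    print(clinic, arm)
```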
HFHS has 21 primary care clinics and LHS has 10 primary care clinics available to be selected and assigned to the study conditions. A conservative estimate (see Section B.2) indicated that a total of 19.3 clinics would provide sufficient power; thus, 20 clinics will be sufficient to evaluate a single-arm intervention effect against the control condition. We plan to include 21 clinics so that 7 can be assigned to each study arm and to preserve sufficient power in the unlikely event that a clinic is lost to the study (we have also already conservatively discounted the numbers of patient surveys per clinic).
Site visits by CDC and Battelle investigators to HFHS and LHS found strong support for participation in the study. The research directors at HFHS and LHS obtained support and approval for the study from the medical directors at each MCO, along with assurance of MCO participation. Additionally, meetings with clinicians and clinic support staff at multiple clinics demonstrated strong interest in CRC screening and commitment to participate in a randomized controlled study. Clinicians and support staff also expressed strong interest in participating in the clinic-focused training sessions and in obtaining continuing education (CE) credit.
The intervention evaluation will use a pre-post design with a control group, with survey data collected from primary care clinicians and support staff at each clinic, and from patients who receive their health care from each clinic. Table B.1A1 summarizes the universe and samples for the clinician, clinic support staff, and patient surveys. Below we describe the sampling methods and sample sizes for clinicians, clinic support staff, and patients.
Table B.1A1. Universe and samples for the clinician, clinic support staff, and patient surveys

| Type of Study Participant | Number of Entities in Universe | Pre-Intervention Survey Sample Size (response rate) | Post-Intervention Survey Cohort Size (response rate) | Additional Post-Intervention Survey Sample Size (response rate) ² |
|---|---|---|---|---|
| Clinicians in each of 21 clinics (mean) | 7.4 | n/a | 6.7 (90%) ¹ | n/a |
| Total clinicians | 155 | n/a | 140 (90%) ¹ | n/a |
| Clinic support staff in each of 21 clinics (mean) | 7.4 | n/a | 6.7 (90%) ¹ | n/a |
| Total clinic support staff | 155 | n/a | 140 (90%) ¹ | n/a |
| Eligible patients in each of 21 clinics (mean) | 1,578 ³ | 225 (70%) | 66.1 (70%) | 158.9 (70%) |
| Total eligible patients | 33,138 | 4,725 (70%) | 1,388 (70%) | 3,337 (70%) |

¹ Up to 10% attrition and replacement of clinicians and support staff is expected.
² An additional post-intervention sample is planned for patients.
³ Eligible patient populations range from 800 to 3,000, with a mean of 1,578 eligible patients per clinic.
Clinician and Clinic Support Staff Survey Sample
All clinicians and clinic support staff in the 21 study clinics will be asked to complete a post-intervention survey one year after the implementation of the intervention.
The universe of clinicians and clinic support staff will be included, rather than a sample, for several reasons. First, patient survey and MCO electronic data will be used to compute CRC screening rates for each clinician. Second, using the universe rather than a sample facilitates MCO-clinician communication and coordination (e.g., documents can be sent to the clinic instead of to individually participating clinicians). Third, if a sample were used, clinicians would question why some receive the survey and some do not, particularly in the intervention clinics where all clinicians attend the provider training sessions. Fourth, the numbers of clinicians and clinic support staff are relatively small, so there would be little savings in cost or burden from drawing only a sample.
Approximately 155 clinicians practice in the 21 study clinics, and all of them will be asked to complete the post-intervention surveys. At least 140 (90%) are expected to complete the post-intervention survey (mean of 6.7 per clinic).
Patient Survey Sample
Patient eligibility for participation in the pre- and post-intervention surveys is as follows:
1. Patient pre-intervention (baseline) survey: Active patients age 50-80 in each of the study clinics, identified based on having visited the clinic for any reason within the previous year.
2. Patient post-intervention (follow-up) survey: Average-risk patients aged 50-80 years, seen by primary care physicians for a non-acute ambulatory care visit during the 12 months after intervention implementation, and due for CRC screening. “Due for CRC screening” will follow NCQA guidelines and will be defined as having no record of any of the following (a sketch of this eligibility check appears after the list):
FOBT in the past year,
flexible sigmoidoscopy in the past 5 years,
colonoscopy in the past 10 years, and
double contrast barium enema in the past 5 years
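As a minimal sketch, the following function expresses this “due for CRC screening” rule; the test names and record layout are illustrative assumptions, not the MCOs' actual electronic record schema.

```python
from datetime import date, timedelta

# Look-back windows from the NCQA-based definition above.
WINDOWS_YEARS = {
    "fobt": 1,
    "flexible_sigmoidoscopy": 5,
    "colonoscopy": 10,
    "double_contrast_barium_enema": 5,
}

def due_for_crc_screening(history, today):
    """Return True if the patient has no record of any recommended CRC
    screening test within its look-back window. `history` maps test name
    to the date of the most recent test; tests never received are simply
    absent. Field names here are illustrative."""
    for test, years in WINDOWS_YEARS.items():
        last = history.get(test)
        cutoff = today - timedelta(days=round(365.25 * years))
        if last is not None and last >= cutoff:
            return False  # a qualifying test is on record, so not due
    return True

# Example: an FOBT three years ago and no endoscopy -> due for screening.
print(due_for_crc_screening({"fobt": date(2002, 3, 15)},
                            today=date(2005, 6, 1)))  # True
```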
The pre-intervention patient survey will be administered to a sample of eligible patients approximately two months prior to implementation of the intervention. The post-intervention survey will be administered to samples of eligible patients on a quarterly basis during the year after initiation of the intervention.
A patient cohort with pre-intervention and post-intervention data from each respondent is not feasible for the following reasons. A respondent size of at least 150 patients per clinic is required to detect a CRC screening rate difference between study arms of approximately 8.5% (see “Sample Size and Power” in Section B.2). The pre-intervention patient survey will be conducted with a random sample of active patients (defined as having visited the clinic within the previous year) aged 50 to 80. About 70% of these baseline survey patients are expected to remain enrolled in the MCOs during the intervention year, and 60% of these are expected to visit the clinic for a non-acute ambulatory care visit during the year following intervention initiation. After also accounting for a 70% response rate to each survey, only about 21% of the baseline sample (0.7 × 0.7 × 0.6 × 0.7 ≈ 0.206) would yield cohort respondents, so a sample of approximately 730 patients per clinic would be required at baseline to achieve a cohort of at least 150 respondents per clinic at follow-up. This sample size would be prohibitively large.
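The arithmetic behind the 730 figure can be checked directly; the snippet below simply multiplies the attrition rates stated above.

```python
# Back-of-envelope check of the ~730-per-clinic figure (rates from the text).
baseline_response = 0.70   # respond to the pre-intervention survey
still_enrolled    = 0.70   # remain enrolled during the intervention year
non_acute_visit   = 0.60   # have a non-acute ambulatory visit that year
followup_response = 0.70   # respond to the post-intervention survey

retained = baseline_response * still_enrolled * non_acute_visit * followup_response
print(round(retained, 3))     # ~0.206 of the baseline sample
print(round(150 / retained))  # ~729 -> ~730 patients needed per clinic
```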
Clearly, a patient cohort with all patients completing both pre- and post-intervention surveys is not feasible. Therefore, our power analysis for the primary outcome (patient self-reported CRC screening behavior) and secondary outcomes (discussing CRC screening and scheduling CRC screening) conservatively assumes independent patient samples for the pre- and post-intervention patient surveys. The power analysis determined that a respondent size of 150 patients per clinic for each survey will be sufficient. Assuming 70% response, samples of 225 patients per clinic will yield an average of 157.5 respondents per clinic and will provide sufficient power for detecting the dichotomous primary and secondary outcomes.
For analysis of the intermediate patient survey measures (e.g., attitudes, beliefs, opinions, perceived support), a sub-sample of approximately 40 patients per clinic with both pre- and post-intervention survey data (i.e., cohort) will be sufficient (See Section B.2). Since it is expected that some patients who participate in the pre-intervention survey will visit the clinic for a non-acute ambulatory care visit during the intervention year and will therefore be eligible to participate in the post-intervention survey, such a sub-sample can be obtained. In order to obtain this sub-sample, while still surveying at least 150 patients on the post-intervention survey (needed for the primary and secondary behavioral outcomes), we propose to use an “open cohort” sampling design.
This “open cohort” patient survey design will be carried out as follows. A sample of 4,725 active patients due for CRC screening (225 in each clinic) will be selected for the pre-intervention patient survey, with 3,308 (70%) respondents (mean of 157.5 per clinic). Based on the above rates of continuous enrollment (70%) and non-acute ambulatory care visits (60%), it is expected that 1,388 respondents to the pre-intervention survey (mean of 66.1 per clinic) will have a non-acute ambulatory care visit during the following year. All of these patients will be selected for the post-intervention survey. In order to survey a total of 4,725 patients with the post-intervention survey (i.e. the same as for the pre-intervention survey), an additional sample of 3,337 patients (mean of 158.9 per clinic) due for CRC screening, who visit for non-acute ambulatory care during the year after implementation of the intervention (and are not part of the cohort), will be selected for the post-intervention patient survey. Thus, a total of 4,725 patients (225 per clinic) will be selected to receive the post-intervention survey, of whom 1,388 (mean of 66.1 per clinic) will have also completed the pre-intervention survey. Based on a response rate of 70%, it is expected that this will result in 3,308 post-intervention survey respondents (mean of 157.5 per clinic); 972 respondents (mean of 46.3 per clinic) will have also completed the pre-intervention survey, and 2,335 respondents (mean of 111.2 per clinic) will have completed only the post-intervention survey.
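The following short script reproduces this open-cohort arithmetic from the stated sampling and response rates; small discrepancies from the text reflect per-clinic rounding.

```python
# Reproduces the open-cohort sample flow described above (21 clinics).
CLINICS, SAMPLE_PER_CLINIC, RESP = 21, 225, 0.70

sampled_pre = CLINICS * SAMPLE_PER_CLINIC        # 4,725 pre-intervention sample
pre_respondents = sampled_pre * RESP             # ~3,308

# Cohort: pre-intervention respondents still enrolled (70%) who have a
# non-acute ambulatory visit (60%) during the intervention year.
cohort_sent = pre_respondents * 0.70 * 0.60      # ~1,389 (1,388 in the text)
additional_sent = sampled_pre - cohort_sent      # ~3,336 (3,337 in the text)

cohort_respondents = cohort_sent * RESP          # ~972
additional_respondents = additional_sent * RESP  # ~2,335
total_post = cohort_respondents + additional_respondents  # ~3,308

for label, n in [("pre-intervention respondents", pre_respondents),
                 ("cohort sent post-survey", cohort_sent),
                 ("additional sent post-survey", additional_sent),
                 ("cohort respondents", cohort_respondents),
                 ("post-only respondents", additional_respondents),
                 ("total post respondents", total_post)]:
    print(f"{label}: {n:.0f} total, {n / CLINICS:.1f} per clinic")
```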
In sum, the average of 157.5 patient post-intervention survey respondents per clinic will be of sufficient size to detect meaningful differences in the primary and secondary outcomes of self-reported CRC screening behaviors. The average cohort sub-sample of 46.3 respondents per clinic (participating in pre- and post-intervention surveys) will be sufficient to assess the intervention effect on intermediate outcomes (e.g., attitudes, beliefs, opinions). Table B.1A1 shows the study participant universe and sample sizes, and Table B.1C presents the total number of respondents expected for each survey sample.
Table B.1C. Expected numbers of respondents for each survey sample

| Type of Study Participant | Pre-Intervention Survey Respondents | Post-Intervention Survey Cohort Respondents | Additional Post-Intervention Survey Sample Respondents ¹ |
|---|---|---|---|
| Clinicians in each of 21 clinics (mean) | n/a | 6.7 | n/a |
| Total clinicians | n/a | 140 | n/a |
| Clinic support staff in each of 21 clinics (mean) | n/a | 6.7 | n/a |
| Total clinic support staff | n/a | 140 | n/a |
| Patients in each of 21 clinics (mean) | 157.5 | 46.3 | 111.2 |
| Total patients | 3,308 | 972 | 2,335 |

¹ An additional post-intervention sample is planned for patients.
2. Procedures for the Collection of Information
This section describes (1) an overview of the intervention implementation and evaluation survey methods, (2) determination of sample size and the power that is expected for statistical tests of hypotheses, (3) survey collection procedures for clinicians and clinic support staff, and (4) survey collection procedures for patients.
Overview of Intervention Implementation and Evaluation Surveys
1. Pre-Intervention Surveys
The pre-intervention surveys will be carried out in order to obtain baseline measures of primary and secondary behavioral outcomes and intermediate outcomes among patients in the 21 primary care clinics to be included in this study. Additionally, electronic record data from each MCO will be obtained and used to compute baseline CRC screening rates and to validate clinician and patient self-report of CRC screening (primary outcome). Next, the 21 primary care clinics will be matched (blocked) on baseline CRC screening rate and assigned to the study conditions as described in Section B.2.
2. Intervention Implementation
The study will next implement the clinic-only or the combined patient- and clinic-focused interventions depending upon study arm randomization. The patient-focused intervention includes the mailing of CRC education packets to patients, with a letter signed by the primary care physician or MCO Medical Director. The clinic-focused intervention includes provider training for clinicians and clinic support staff, focusing on both person-level skills building and systems-level office changes (e.g., reminder systems). The intervention will be carried out for one year.
CRC education packets, based upon previously developed CDC CRC patient education materials, have been adapted for each MCO site. The patient CRC intervention packet is presented in Attachment 8. These CRC packets will be mailed to average-risk patients aged 50-80 years who are due for CRC screening (based on electronic records) and who schedule a non-acute ambulatory care visit in clinics assigned to the patient-focused intervention. An explanatory and motivating letter signed by the patient’s primary care physician or the MCO Medical Director will be included with each packet. The packets will be mailed approximately 1 week before the medical appointment.
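A minimal sketch of the mailing-trigger logic is shown below, assuming a simple mapping of patient IDs to scheduled appointment dates; all identifiers and data structures are illustrative, not the MCOs' actual scheduling systems.

```python
from datetime import date, timedelta

def packets_to_mail(scheduled_visits, due_patient_ids, today):
    """Select patients to receive a CRC education packet: patients due for
    screening (per electronic records) with a non-acute ambulatory visit
    scheduled about one week out. `scheduled_visits` maps patient_id ->
    appointment date; names here are illustrative."""
    mail_window_end = today + timedelta(days=7)
    return [pid for pid, visit_date in scheduled_visits.items()
            if pid in due_patient_ids and today < visit_date <= mail_window_end]

# Example run with made-up appointments and eligibility.
visits = {"p1": date(2005, 6, 8), "p2": date(2005, 6, 20), "p3": date(2005, 6, 9)}
print(packets_to_mail(visits, {"p1", "p2"}, today=date(2005, 6, 1)))  # ['p1']
```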
The clinics randomized to the combined intervention will be informed of the salient aspects of their study arm assignment. Clinicians and clinic staff in these clinics will be informed of the mailings so that they will be prepared to take incoming calls from patients who have received the patient education materials and to answer patient questions when patients come in for their appointments. As part of this preparation, clinicians and clinic staff will be given copies of the patient intervention materials. In addition, CDC has developed FOBT kit instructions that are easier to understand and printed in a larger font than those the manufacturers include with FOBT kits. These instructions have been further adapted by the MCOs for their settings, and the improved instruction sheets will be provided to the patient-focused intervention clinics to be handed out with FOBT kits.
Clinics that are randomized to the clinic-focused intervention and the combined intervention will be informed of salient details regarding their respective study arm assignments. The clinicians and clinic staff in each of these clinics will then be scheduled for training sessions. The provider trainings, specifically targeting clinicians and clinical staff, will be conducted in two separate sessions of approximately 3 hours total duration. The two sessions will be scheduled to accommodate the clinic schedule, but will be held with no more than 30 days between them.
Attachment 7 presents a summary of the curriculum training modules and their objectives, and the curriculum presentations. The curriculum and materials used for the provider training (e.g., training curriculum and handouts) have been reviewed by clinicians and clinic staff at each participating MCO, and their comments and recommendations were incorporated in finalizing the curriculum. Clinicians and clinic staff will receive the complete curriculum with presentations and resources (handouts) in hard copy form. Modules will be presented by peers at all clinics assigned to the clinic-focused intervention, and participants will receive CE credit. The patient education materials (used in the patient-focused intervention packets) will be given to the clinicians and clinic staff in the training sessions, as potential tools they can use to initiate conversations and discuss CRC screening with patients. The improved FOBT instruction sheets will also be provided for clinicians to hand out with FOBT kits to patients.
3. Post-Intervention Surveys
The post-intervention clinician and clinic staff surveys will be conducted one year after initiation of the intervention. The post-intervention patient survey will be administered to patients who are due for CRC screening (based on electronic records) and visit the study clinics for a non-acute ambulatory care visit during the one-year period after intervention initiation. Patient surveys will be conducted quarterly in order to ensure that patients do not complete the survey more than three to four months after their medical visit. They will also be scheduled so that patients do not receive the survey earlier than one month after their medical visit to allow sufficient time to schedule an endoscopic appointment.
Sample Size and Power
The following describes the calculation of sample size requirements for this evaluation study using the Dunlap-Kennedy Rule.53 The primary assumption is that interest lies in detecting a 0.085 change in CRC screening (by any of the recommended modalities) from a nominal 0.30 baseline, using an α = 0.05 two-tailed comparison between an experimental condition and the control condition with (1−β) power of at least 0.80.1
The number of primary care clinics required was based on the following calculations with the assumption that random samples of N=150 patients are surveyed at each clinic in estimation of the Overall Screening Rates (conservative as N>157 are expected per clinic).2
1. No-intervention single-clinic variance for the 0.30 rate of CRC screening with a medical visit (after exclusions) ≈ (0.3)(1−0.3)/150; the variance of the difference from the baseline-year (0.30) rate would be ≈ (0.3)(1−0.3)/75 ≈ 0.00280.
2. Intervention single-clinic variance for the (0.30 + 0.085 = 0.385) rate of CRC screening with a medical visit (after exclusions) ≈ (0.385)(1−0.385)/150; the variance of the difference from the baseline-year (0.30) rate would be ≈ (0.385)(1−0.385)/150 + (0.3)(1−0.3)/150 ≈ 0.00298.
Thus, the variance of an average of K no-intervention clinics would be 0.00280/K, and the variance of an average of K independent intervention clinics would be 0.00298/K. Hence, for the difference of K-sample means, Z = (0.085)/SQRT[(0.00298 + 0.00280)/K] = 2.0, which implies K ≈ 3.22. Applying the Dunlap-Kennedy Rule,53 the required number would be double this, so K ≈ 6.44 clinics per condition would be required on average, for a total of 19.3 clinics. Thus, 20 clinics will be sufficient to evaluate a single-arm intervention effect against the control condition. We plan to include 21 clinics so that 7 can be assigned to each study arm and to preserve sufficient power in the unlikely event that a clinic is lost to the study (we have also already conservatively discounted the numbers of patient surveys per clinic).
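The clinic-number calculation above can be reproduced in a few lines; all values follow directly from the text (small differences, e.g., 19.2 versus 19.3, reflect rounding).

```python
# Clinic-level calculation from the text: detect a 0.085 difference with a
# two-tailed z of 2.0, with N = 150 patients surveyed per clinic.
n, p0, p_int, delta = 150, 0.30, 0.385, 0.085

var_control = 2 * p0 * (1 - p0) / n                           # ~0.00280
var_intervention = (p_int * (1 - p_int) + p0 * (1 - p0)) / n  # ~0.00298

# Solve Z = delta / sqrt((var_c + var_i) / K) = 2.0 for K:
K = (2.0 / delta) ** 2 * (var_control + var_intervention)
print(round(K, 2))          # ~3.2 clinics
print(round(2 * K, 2))      # Dunlap-Kennedy doubling: ~6.4 per condition
print(round(3 * 2 * K, 1))  # ~19.2 total (19.3 in the text after rounding)
```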
Below we describe the determination of sample-size requirements for numbers of patients, clinicians, and staff in each clinic in order to detect differences of interest with adequate levels of statistical power. As will be seen, this is considered separately for three main outcome categories as the data source and analytic design differ: 1) Dichotomous patient primary and secondary behavioral outcome measures; 2) Patient intermediate outcomes measures; and 3) Clinician and clinic support staff primary and secondary behavioral outcome measures, and intermediate outcome measures.
Patient Primary and Secondary Behavioral Dichotomous Outcome Measures
Post-intervention patient survey data – on whether the patient received each screening test – will be used for these analyses, after validation against MCO electronic records. Dichotomous secondary outcomes measured in the patient survey (e.g., discussion of CRC screening, appointment scheduled for colonoscopy) will also be part of these analyses. The sample size requirements calculation assumed a cross-sectional design comparing post-intervention data of patients in any two study arms. Notably, HFHS and LHS provided unpublished data indicating the following current annual screening rates:
0.12 for patients screened with FOBT in the past year;
0.05 for patients screened with flexible sigmoidoscopy in the past year (the lower flexible sigmoidoscopy rate reflects a recommended screening interval of 3-5 years);
0.30 for any of the recommended screening modalities in the past year.
The sample size requirement for FOBT screening was calculated to detect a difference between 0.12 in the control arm and 0.18 in any of the intervention arms: an increase in screening of 0.06. For flexible sigmoidoscopy, the sample size was calculated to detect a difference between 0.05 in the control arm and 0.085 in any of the intervention arms: an increase in screening of 0.035. Here we provide the assessment of the sample size required for the flexible sigmoidoscopy effect since it is the more conservative test. The following heuristic formula26 was used to calculate respondent size per clinic (N) to detect the above 0.035 flexible sigmoidoscopy difference for a 2-tailed test, with alpha = 0.05 and power ≥ 80%:
(N/2) = 4[P1(1 − P1) + P2(1 − P2)] / [(P2 − P1)² K] [Eq. 1]
where P2 and P1 are the respective target (0.085) and current (0.05) annual flexible sigmoidoscopy screening rates. Using this heuristic method with K ≈ 6.75, the number of post-intervention survey respondents required per clinic is estimated at N = 122 for the 0.035 difference between target and current rates (P2 − P1). However, as seen in our sensitivity analyses, the sample size is somewhat sensitive to changes in the assumed rates (e.g., N = 159 for a difference of 0.030), which argues for maintaining our target of N = 150 patients per clinic. This number is also more than adequate for FOBT. Note that we intend to capture the baseline screening rates at each of the individual clinics and explore their utility as covariates in the respective analyses; this is expected to increase power well beyond the nominal 0.80, although the extent remains to be determined. Adjusting for the response rate, the required post-intervention patient survey sample size per clinic is thus 225 (total = 4,725 patients).
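For reference, the sketch below evaluates Eq. 1 with the values given in the text, reproducing both the N = 122 estimate and the N = 159 sensitivity figure.

```python
def n_per_clinic(p1, p2, k):
    """Eq. 1 solved for N: respondents per clinic for a two-tailed test
    with alpha = 0.05 and power >= 80% (heuristic)."""
    return 2 * 4 * (p1 * (1 - p1) + p2 * (1 - p2)) / ((p2 - p1) ** 2 * k)

K = 6.75
print(n_per_clinic(0.05, 0.085, K))  # ~121.2 -> N = 122 in the text
print(n_per_clinic(0.05, 0.080, K))  # ~159.5 -> N = 159 in the text
```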
Patient Intermediate Outcome Measures
A pre- and post-intervention cohort design will compare differential changes in patient intermediate outcomes (e.g., attitudes, beliefs, opinions). Difference scores between pre- and post-intervention survey measures will be computed for each patient, and – for purposes of our power calculations – it is assumed the analyses will determine whether change in patient intermediate outcome measures is greater in each of the intervention arms compared with the control arm. The two intervention study arms will also be compared to each other.
This design has far greater power than post-intervention comparisons alone, because the adjustments for the pre-intervention scores will substantially reduce within group variance. Most of the opinion and attitude measures on the patient survey use 5-point Likert scales, and a ½-point change in the mean score is typically considered to be “meaningful.” To provide a basis for our power analysis calculations, results of individual Likert-item performance in a previous study of sigmoidoscopy attitudes and opinions in 2,728 patients were examined.24 This provided for estimation of the largest expected item variance (σ2), and worst-case reliability – which was conservatively halved as an estimate of the correlation across repeated administrations (ρ).
Results from this previous study also suggested the evaluation of four subgroups in each clinic (2 gender levels by level of a second individual difference variable). Applying the Dunlap Kennedy Rule53 to a difference score where variances are equal and the measures are correlated (ρ), the following heuristic formula (Eq. 2) was then iteratively used (starting with an initial value of t.05 = 2.0) to calculate a converging succession of N estimates of the patient cohort respondent size per clinic to detect a ½ point change in an opinion measure for a 2-tailed test, with alpha = 0.05, and power = 80%:
(N/2)^½ = t.05 [2σ²(1 − ρ)]^½ / [(µ1 − µ2) K^½] [Eq. 2]
where µ1 and µ2 are the respective base and changed (base + ½) opinion and attitude values, K = 6.75 is the average number of clinics in each arm of the study, ρ is the correlation between the repeated administrations, and t.05 is the critical t value for the derived degrees of freedom. Our power calculations, aimed at detecting a change of ½ Likert-scale point in a ¼-clinic subgroup with alpha = 0.05 and power > 0.80 for our worst-case item, indicate a respondent size of N = 40 per clinic (rounding up slightly). Notably, we anticipate combining individual items into composite 5-point scales, which would generally be expected to have higher reliabilities than their component items.48 Hence, N = 40 would certainly be expected to provide sufficient (>0.80) power for a ½-point change with alpha = 0.05. Adjusting for the response rate, the required post-intervention patient cohort survey sample size per clinic is 57. Therefore, the minimal estimate of 66 patient pre-intervention survey participants per clinic – who are expected to visit for a non-acute ambulatory care visit and be asked to participate in the post-intervention survey – should be more than sufficient.
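The iterative use of Eq. 2 can be sketched as follows. Because the worst-case σ² and ρ values from the prior study are not reproduced in this document, the values below (and the degrees-of-freedom convention) are placeholders for illustration only.

```python
from scipy.stats import t as t_dist

def cohort_n(sigma2, rho, delta, k, alpha=0.05, max_iter=25, tol=0.5):
    """Iterate Eq. 2 as described above: start from t.05 = 2.0, solve for N,
    refresh the critical t from the implied degrees of freedom, and repeat
    until N converges. The df convention (N - 1) is an assumption here."""
    t_crit, n = 2.0, None
    for _ in range(max_iter):
        # Eq. 2 squared: N = 2 * t^2 * 2*sigma2*(1 - rho) / (delta^2 * k)
        n_new = 2 * t_crit ** 2 * 2 * sigma2 * (1 - rho) / (delta ** 2 * k)
        t_crit = t_dist.ppf(1 - alpha / 2, max(round(n_new) - 1, 2))
        if n is not None and abs(n_new - n) < tol:
            return n_new
        n = n_new
    return n

# sigma2 and rho are placeholders; the study's worst-case values come from
# prior survey data that this document does not reproduce.
print(round(cohort_n(sigma2=1.6, rho=0.3, delta=0.5, k=6.75)))
```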
Clinician and Clinic Support Staff Outcome Measures (Primary and Secondary Behavioral, and Intermediate)
A pre- and post-intervention cohort design will be used to compare study arm change in the clinician primary behavioral outcome of CRC screening rate, which will be obtained pre- and post-intervention from electronic encounter and claims data. Post-intervention survey measures will be used to assess the effect of the intervention on secondary behavioral outcomes (e.g., CRC screening discussion/recommendation) and intermediate outcomes (e.g., clinician beliefs and attitudes). The power analysis focused on the post-intervention survey comparison of study arms on secondary behavioral and intermediate outcomes, since this design is less powerful than the pre- and post-intervention cohort design that will be used to assess change in clinician screening rates. For power calculation purposes, it is assumed that analyses will determine whether the mean outcome scores differ among the pairs of study arms. To support the power calculations, results of individual Likert-item performance in a previous study of sigmoidoscopy attitudes and opinions in 60 physicians were examined.18 This, much as above, provided estimates of the largest expected item variance (σ²) and worst-case reliability (ρ). With this information, variants of Equations 1 and 2 above were used to calculate the minimum number of clinician respondents per clinic needed to detect a ½-point difference in an opinion measure between two study arms, with alpha = 0.05 and power = 80%. A respondent size of N = 6 clinicians per clinic was found to be sufficient. Power is further improved if we assume: 1) clinic randomization to study conditions within blocks (based on pre-intervention screening rates), and 2) use of clinician post-intervention screening rates, with pre-intervention rates partialed out, to control for any secular trend in the control group. Because the number of clinic support staff per clinic is expected to equal the number of clinicians per clinic, there should be sufficient numbers of clinic support staff for adequate power in analyses of their response data (assuming performance akin to that of the clinicians).
Table B.2 shows the proposed respondent sizes at 80% statistical power, along with the sizes required at alternative levels of statistical power. The first line of the table presents this information for the analysis of patient primary and secondary screening behaviors. The other two lines provide similar information for (1) assessing differences in patient intermediate outcomes (beliefs and attitudes), and (2) assessing differences in clinician and clinic support staff primary and secondary screening behavior outcomes and intermediate (e.g., beliefs, attitudes) outcomes.
Table B.2. Respondents required per clinic at alternative levels of statistical power

| Outcome Analysis | 70% Power | 80% Power | 90% Power | 95% Power |
|---|---|---|---|---|
| Patient primary and secondary behavioral outcomes | 119 | 150 | 201 | 249 |
| Patient intermediate outcomes | 32 | 40 | 54 | 67 |
| Clinician and clinic support staff behavioral and intermediate outcomes | 5 | 6 | 11 | 14 |
Survey Collection Procedures for Clinicians and Clinic Support Staff
About four months prior to implementation of the intervention, all clinicians and clinic support staff in the 21 clinics randomized to the three study conditions will be contacted by the research offices of their respective MCOs. The intervention will be described, including the surveys to evaluate its effect. These clinicians and clinic support staff will be told that they will be asked to participate in the post-intervention clinician and clinic support staff surveys. In addition, clinicians and clinic staff will be informed that a sample of their patients will be surveyed as part of the evaluation of the intervention.
One year after initiation of the intervention, all clinicians and clinic support staff in the participating clinics will be sent a post-intervention survey. The mailing will include a cover letter from their respective MCO research office explaining the purpose of the survey and providing informed consent information, along with an incentive of $50 for clinicians and $25 for clinic support staff. A reminder postcard, a second survey mailing, and a second reminder postcard will be sent to clinicians and clinic support staff.60 It is expected that virtually all clinicians and clinic support staff will participate in the post-intervention surveys because of strong MCO management support and clinician and support staff interest in improving CRC screening. Table B.1C presents the expected number of clinician and clinic support staff respondents to the post-intervention survey. The post-intervention surveys for clinicians and clinic support staff are presented in Attachment 4, and the survey cover letters are presented in Attachment 5.
Survey Collection Procedures for Patients
About four months prior to implementation of the intervention, the pre-intervention patient survey will begin. All active patients aged 50 to 80 in each of the 21 study clinics will be identified based on having visited the clinic for any reason within the previous year. A sample of 225 patients from each clinic will be randomly selected from these active patients, totaling 4,725 patients. These patients will be mailed the pre-intervention survey with a cover letter from their respective MCO research office. The cover letter will explain the purpose of the survey and indicate that their primary care clinician supports it. An informed consent statement will also be included, explaining their rights as human subjects and that return of the survey provides their consent. An incentive of $10 will be included with the survey. A reminder postcard will be sent to all patients 2 weeks after the initial survey. A second copy of the survey and cover letter will be sent to non-responders approximately one month after the initial survey. Finally, a third survey will be sent to any remaining non-responders one month after the second mailing. Thus, all survey mailings to patients will be completed within approximately a two-month period. It is conservatively expected that at least 70% of patients will complete the survey, resulting in an average of 157 pre-intervention survey respondents in each clinic and a total of 3,308 respondents. The intervention will begin approximately two months after the final patient survey mailing.
The post-intervention patient survey will be administered to a sample of patients who have a non-acute ambulatory care visit during the one-year period after implementation of the intervention, and who are due for CRC screening at the time of their clinic visit (determined by electronic record data). Rather than send the patient surveys to patients on an ongoing basis after their clinic visits, we will conduct the post-intervention patient surveys quarterly (i.e., at four time points after implementation of the intervention). This will simplify the process of administering the survey, and at the same time ensure that the surveys are completed by patients within a relatively short time period after their non-acute ambulatory care visit (i.e., at least 1 month after their medical visit but within a maximum of 4 months).
At each quarterly survey point, clinic visit records will be used to determine whether any of the pre-intervention survey participants had a non-acute ambulatory care visit. All of these patients will be sent the post-intervention survey. It is expected that approximately 66 of the expected 157 pre-intervention survey participants in each clinic will have a non-acute ambulatory care visit and will be sent the post-intervention survey during the follow-up year (see calculations in section B.1). Thus, an average of 16 pre-intervention patient survey participants in each clinic will be sent the post-intervention survey each quarter. These patients will be sent a post-intervention survey with a cover letter from the MCO research division. The cover letter will thank the patient for previously completing the pre-intervention survey. It will indicate that the patient was selected to be sent the survey because they recently visited for a medical visit, and will explain the purpose of the survey. Patients will be given an incentive of $10. Follow-up reminder postcard and survey mailings to non-respondents will be sent using the same procedures as for the pre-intervention patient survey.60 It is expected that a total of 1,388 patients (66 per clinic) will make up this cohort of patients who will be sent a post-intervention survey. Assuming a conservative 70% response rate for these patients, an average of 46 patients per clinic are expected to respond, totaling 972 patients who complete both pre- and post-intervention surveys (see section B.1).
As described in section B.1, an additional post-intervention survey sample of 3,337 patients (average of 158 patients per clinic) will be randomly selected from all patients age 50 to 80 who are not selected for the pre-intervention survey, are seen for a non-acute ambulatory care visit during the 1-year intervention period, and are due for CRC screening at the time of the visit. In order to achieve this on a quarterly survey schedule, 834 patients (average of 39 per clinic) meeting these criteria and seen for a medical visit during the previous three months will be selected at each quarterly survey time point. These patients will be sent a post-intervention survey with a cover letter from the MCO research division. The cover letter will indicate that the patient was selected because they recently visited for a medical visit. The cover letter will explain the purpose of the survey and indicate that the patient’s primary care clinician supports the survey. Follow-up reminder postcards and survey mailings to non-respondents will be sent using the same procedures as for the pre-intervention patient survey. Assuming a 70% response rate, it is expected that a total of 2,335 additional patients who are not part of the cohort (average of 111 per clinic) will complete the post-intervention survey during the year following intervention initiation. Combining these with the patient cohort will result in a total of 3,308 completed post-intervention patient surveys (average of 157 per clinic). Table B.1C shows the expected numbers of patient respondents to the pre- and post-intervention surveys among the cohort, and the additional patient post-intervention sample. The pre- and post-intervention patient surveys are presented in Attachment 4, and the survey cover letters are presented in Attachment 5.
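A minimal sketch of one quarterly survey wave, assuming simple sets of patient IDs, is shown below; the function and field names are illustrative.

```python
import random

def quarterly_wave(visits, cohort_ids, due_ids, n_additional=834, seed=0):
    """Build one quarterly post-intervention survey wave as described above:
    every cohort member (pre-intervention respondent) seen this quarter is
    surveyed, plus a random top-up of other due-for-screening patients seen
    this quarter. `visits` is the set of patient IDs with a non-acute
    ambulatory visit in the quarter; identifiers are illustrative."""
    rng = random.Random(seed)
    cohort_wave = sorted(visits & cohort_ids)
    pool = sorted((visits & due_ids) - cohort_ids)
    additional_wave = rng.sample(pool, min(n_additional, len(pool)))
    return cohort_wave, additional_wave

# Example with made-up IDs: two cohort members plus a top-up of one patient.
print(quarterly_wave(visits={"p1", "p2", "p3", "p4"},
                     cohort_ids={"p1", "p2"},
                     due_ids={"p3", "p4"},
                     n_additional=1))
```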
Methods to Maximize Response Rates and Deal with Nonresponse
Clinicians who spend most of their time on direct patient care are a particularly difficult group to survey. These clinicians are inundated with mail, faxes, and telephone calls from patients, pharmaceutical companies, sales representatives, researchers, and colleagues. Most clinicians’ offices have administrative personnel assigned to sort through these various incoming messages and only pass on to the physician those most in need of their direct attention. Consequently, surveys of practicing physicians generally result in lower response rates than surveys of other groups of respondents, including other professionals. Like clinicians, patients often receive requests to participate in research studies that require the completion of a questionnaire. Although these requests are often made on behalf of their health care provider, patients may feel overburdened and therefore reluctant to participate. Nevertheless, reviews of survey methods clearly point to a number of procedures that improve response rates among physicians and patients. The proposed plan for data collection incorporates these proven methods.
In the past, collecting data by mail has been shown to be the best approach for a variety of groups, and this is particularly true for clinicians. Other alternatives, including face-to-face interviews and computer-assisted telephone interviews, have their own strengths and weaknesses. Personal face-to-face interviewing has generally yielded the highest response rates (between 70-90%) but is also the most expensive type of data collection and takes the greatest amount of time to complete; the costs of using this method for this survey would be prohibitive. Telephone surveys have traditionally had response rates comparable to face-to-face interviews (between 70-90%) while costing substantially less to conduct. However, telephone interviews must be kept shorter: it is more difficult to keep a respondent's attention on the telephone than in a face-to-face interview, and methods researchers recommend that telephone interviews be kept to 20 minutes for an optimal response rate. Response rates for telephone interviews have traditionally been high because telephone norms in our society generally do not condone hanging up on a caller.28 However, there is evidence that telephone norms and practices are changing. Survey researchers also find that they are spending more time screening for valid telephone numbers because of the growth of new telephone numbers for pagers, modems, and faxes. In addition, many individuals have caller identification, telephone answering services, or voice mail, allowing them to screen out unwanted calls. With new telephone norms and multiple unusable numbers, telephone data collection is becoming less efficient and more costly. In particular, the cost and effort of contacting clinicians and scheduling a personal or telephone interview would be very high.
Mailed surveys are the least expensive form of data collection, but researchers have usually had to contend with lower response rates than telephone survey and in-person interview methods. One limitation of mailed surveys is that single mailings with no follow-up may yield response rates approximately 20-40 percentage points lower than one mailing with additional contacts.29 An advantage is that for the same response time burden, one can ask more questions with a self-administered mail survey than in a telephone interview, thus allowing self-administered questionnaires to be longer than telephone interviews, although not as long as in-person interviews. Research has shown that self-administered mail surveys can be longer if the topic is of high interest or importance to respondents.
To overcome the low response rates typically encountered with mail surveys, Dillman proposed a mailed survey methodology that is based on social exchange theory.28 His method, called the Total Design Method (TDM), has been shown to increase response rates among mail survey respondents to as high as 77%, comparable to telephone and in-person response rates.29 The Total Design Method described by Dillman in 1978, now called the Tailored Design Method, consists of a number of suggested steps to improve survey response rates. The basis for TDM is that researchers can encourage higher response rates through the use of social exchange theory by rewarding respondents through non-monetary or monetary means, reducing perceived costs to respondents by reducing effort, and establishing trust through treating the respondent as a partner in the process. Dillman recommended that in operationalizing these factors based on social exchange theory, researchers must pay attention to the details of contact with respondents, wording of letters, incentives related to completion, length of questionnaires, mailings, and follow-up with study participants.29
Multiple methods studies, reviews and meta-analyses have been conducted to determine which factors lead to an increase in response rates in mail surveys. Generally, studies show that preliminary notification, multiple follow-ups with respondents, monetary and non-monetary incentives, use of first class stamped envelopes and appropriate salutations have positive effects on response rates among physicians and patients.26-28,39,42,54,55 Other variables, such as sponsorship or endorsement, use of personalization techniques in mailings, and length of questionnaires, have shown inconsistent effects on response rates.27,28,56 Yammarino et al. and Fox et al. conducted meta-analyses of the published survey methods literature, comparing all the factors in these studies.57,58 Studies reviewed using experimental or quasi-experimental study designs manipulated a wide variety of factors [17 by Yammarino et al.; 9 by Fox et al.] thought to be related to survey response rates. The meta-analyses conducted were multi-factorial allowing all variables of interest to be compared with each other for effects on response rates. These researchers concluded that preliminary notification, follow-up, return envelope and postage, and monetary incentives were effective in increasing response rates. Yammarino et al. showed that these factors increased response rates by 28.5%, 30.6%, 18.4% and 2.4% respectively.57 Fox et al. found that sponsorship of surveys by organizations increased response rates, but this was not found in the Yammarino et al. meta-analysis.57,58
Previous surveys of physicians have examined the effect of endorsements, reminders, type of survey, and incentives on response rate. Overall, these studies had response rates that ranged from 11% to 92% with a mean physician response rate of 52%. The higher response rates were obtained with special populations such as graduates of certain programs or members of specific organizations. In general, most of these studies did not follow the procedures recommended by Dillman27,28 in his Total Design Method, or procedures shown by survey methods researchers to be effective in increasing response rates.
As with physicians, response rates to mail surveys among the general population can often be maximized through the utilization of Dillman’s techniques such as pre-notification, multiple follow-ups with respondents and incentives. For example, two studies assessing the effect of pre-notification found that the use of a pre-notification letter increased overall response rates by 3 - 6 percentage points compared to no pre-notification.59,60
This study will use Dillman’s techniques to maximize response rates to the clinician, clinic support staff, and patient surveys. In addition, this study will take advantage of a close working relationship with the participating MCOs to increase response rates. Specifically, as described in section B.2., the research personnel from the respective MCO research offices will personally meet with clinicians and clinic support staff to describe the intervention and evaluation and solicit their participation. Patient response rates will benefit from endorsement by their clinician or clinic.
The methods proposed for this study will also include multiple follow-up procedures by mail after the initial survey has been sent, inclusion of stamped return envelopes, and monetary incentives to participate, based on Dillman’s Tailored Design Method28 and a thorough review of survey methods described above. This plan represents the best approach to balancing the need to control costs with the desire to achieve high response rates. The methods proposed for this study have been highly successful in achieving 70% to 81% response to national surveys of clinicians,35,61 80% response to a Washington State survey of primary care clinicians, and 71% response to a survey of patients obtaining care in primary care clinics in Washington State.43
For the patient surveys, a reminder postcard will follow the initial mailing by two weeks and will be sent to all sampled patients. This postcard will thank those who have completed the survey and ask those who have not yet responded to do so promptly. A phone number will be included in case an initial mailing did not reach the intended patient. A second mailing, similar to the first but with a different cover letter, will then be sent by first-class mail to all non-respondents within four weeks of the first mailing. All patient non-respondents will be sent a third survey mailing.
To encourage participation, the survey introductions provide estimates of the time needed to complete the entire survey. In addition, the surveys are designed in sections for ease of navigation and completion.
One year after intervention implementation, all clinicians and clinic support staff will be asked to complete the post-intervention survey. As an incentive, $50 will be included in the mailing to clinicians and $25 will be included in the mailing to clinic support staff. Follow-up procedures (similar to those used for the patients for the pre-intervention) will be employed. Using these methods we expect to obtain at least 80-90% response to the clinician and clinic support staff surveys.
Data collection for patients will utilize an open cohort follow-up design. Those patients who participated in the pre-intervention survey and who visit their clinics for a non-acute ambulatory care visit during the intervention period will be sent the post-intervention survey. In addition, a sample of patients who visit the clinic for a non-acute ambulatory care visit but who did not participate in the baseline survey, will be surveyed to increase the post-intervention survey sample size (see Section B.1. for a complete discussion of sample size). Again, a $10 incentive will be included in the survey packet for patients asked to complete a post-intervention survey. Using this methodology, it is expected that a 70% response rate will be obtained for the pre-intervention and post-intervention patient surveys.
Tests of Procedures or Methods to be Undertaken
The study instruments include patient, clinician, and clinical staff surveys. These surveys were designed to be administered pre- and post-implementation of the intervention (for patients) and post-intervention only (for clinicians and clinic support staff). The surveys were developed through a five-step process: 1) a survey outline draft was compiled based on desired evaluation outcomes; 2) the literature was reviewed and existing questionnaire items were compiled; 3) patient and clinician surveys were drafted by choosing, adapting, and designing appropriate questions; 4) an internal Battelle survey review was conducted; and 5) an external survey review was conducted by members of the target audience.
The external survey review was carried out in two steps. Nine practicing physicians, nine patients, and four clinic support staff participated in a preliminary review of the survey instruments in the fall of 2001. They completed the survey in order to measure respondent burden and provided comments and advice about the format, appropriateness, and relevance of the survey questions. Their recommendations were incorporated into revisions of the instruments. The revised instruments were then further reviewed by the respective target audiences at each MCO during a Materials Review Phase. A total of six clinicians, seven clinic support staff, and seven patients at LHS and HFHS carefully reviewed the respective survey instruments and provided recommendations through personal or telephone interviews between April and August 2004. Their comments were incorporated to finalize the survey instruments.
Main sections of the final instruments are listed below, with complete survey instruments included in Attachment 4. The instruments were designed so that many constructs are measured on five-point Likert-type scales, a response format that most people are very familiar with.
1. Clinician Survey
Part I: Clinician Characteristics (demographics)
Part II: Preventive Services Opinions (prioritization of CRC tests relative to others)
Part III: CRC Screening Training and Experience
Part IV: CRC Screening Practices
Part V: Beliefs about CRC Screening Tests
Part VI: Facilitators and Barriers to CRC Screening
Part VII: Factors that Affect CRC Screening (e.g., social support, environmental resources)
Part VIII: Satisfaction with CRC Screening Training and Resources
2. Clinical Staff Survey
Part I: Clinical Staff Characteristics (demographics)
Medical Appointment Responsibilities
Part II: Preventive Services Opinions (prioritization of CRC tests relative to others)
Part III: CRC Screening Training and Experience
Part IV: CRC Screening Practices
Part V: Beliefs about CRC Screening Tests
Part VI: Facilitators and Barriers to CRC Screening
Part VII: Factors that Affect CRC Screening (e.g., social support, environmental resources)
Part VIII: Satisfaction with CRC Screening Training and Resources
3. Patient Survey
Part I: Patient Characteristics
Part II: Personal Cancer Experience and Family History of Colon Cancer
Part III: Experience with Tests and Screenings (including CRC screening)
Part IV: CRC Screening Experience (satisfaction)
Part V: Colon Cancer Knowledge
Part VI: Your Opinions (e.g., beliefs about CRC screening tests, perceived facilitators and barriers)
Part VII: Social Support (perceived support for each CRC screening test)
Part VIII: Motivation to Talk about Colon Cancer or Get Screened
Part IX: Exposure and Reaction/Satisfaction with Intervention
B. Data Collection Procedures
All data collection procedures, question formats, and response scales to be used in this study have been previously tested by Battelle in three important studies involving surveys of clinicians and patients. One 23-page national survey of physicians and another 21-page survey of primary care clinicians obtained 70% and 81% response rates, respectively.26,35,61 Another 43-page survey of Washington State primary care clinicians obtained 80% response, and a survey of patients receiving care from those clinicians obtained 71% response.43 These procedures, used to design questionnaires relevant to practicing clinicians and patients, and to obtain high response rates, have been described in conference presentations including an invited symposium on methods to maximize clinician survey response.62
C. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data
Battelle Centers for Public Health Research and Evaluation (CPHRE) staff worked with staff from CDC to design the study protocol and data collection instruments. Daniel Montaño, Ph.D. (206-528-3105; [email protected]) and Danuta Kasprzyk, Ph.D. (206-528-3106; [email protected]) led the Battelle effort to design this protocol and data collection instruments. William Phillips, M.D., MPH (206-528-3126; [email protected]) assisted in the design of the survey instruments. Diane Burkom, M.A. (410-377-5660; [email protected]) assisted in the design of the data collection procedures. Alvah Bittner, Ph.D. (206-528-3263; [email protected]) assisted in the design of the sampling plan. Steve Leadbetter, M.S. (770-488-4143; [email protected]), a statistician in CDC’s DCPC, assisted in review of the sampling and analysis plans.
Battelle will work with HFHS and LHS to carry out the intervention, and to collect and analyze the data for CDC. The overall data collection and analysis effort will be directed by Drs. Montaño and Kasprzyk. Battelle will work closely with Dr. Elston-Lafata (Research Director at HFHS: (313) 874-5454; [email protected]) and Dr. Gunter (Research Director at LHS: (505) 262-7857; [email protected]) to carry out the data collection. Drs. Montaño and Kasprzyk will analyze the data with assistance from Dr. Bittner ((206) 528-3263; [email protected]) and in collaboration with Drs. Elston-Lafata and Gunter. Drs. Montaño and Kasprzyk will be responsible for writing the Final Report.
Judith W. Lee, Ph.D., DCPC, CDC, is the technical monitor who will approve and receive all contract deliverables, and collaborate with data analysis (770-488-4864; [email protected]).
Bibliography
American Cancer Society. Cancer facts and figures, 2007. Atlanta, Georgia: American Cancer Society, 2007.
Winawer SJ, Fletcher RH, Miller L, Godlee F, Stolar MH, Mulrow CD, Woolf SH, Glick SN, Ganiats TG, Bond JH, Rosen L, Zapka JG, Olsen SJ, Giardiello FM, Sisk JE, Van Antwerp R, Brown-Davis C, Marciniak DA, Mayer RJ. Colorectal cancer screening: Clinical guidelines and rationale. Gastroenterology, 112:594-642, 1997.
Wingo PA, Tong T, and Bolden S. Cancer statistics. CA Cancer J Clin, 45: 8-30, 1995.
US Preventive Services Task Force. Guide to clinical preventive services: Report of the US Preventive Services Task Force (2nd edition). Baltimore (MD): William & Wilkins, 1996.
Frazier AL, Colditz GA, Fuchs CS, Kuntz KM. Cost-effectiveness of screening for colorectal cancer in the general population. JAMA, 284:1954-61, 2000.
Smith RA, Cokkinides V, von Eschenbach AC, Levin B, Cohen C, Runowicz CD, Sener S, Saslow D, Eyre HJ. American Cancer Society guidelines for the early detection of cancer. CA Cancer J Clin, 52(1):8-160, 2002.
Selby JV, Friedman GD, Quesenberry CP Jr, et al. A case-control study of sigmoidoscopy and mortality from colorectal cancer. N Engl J Med, 326:653-7, 1992.
Centers for Disease Control and Prevention. Colorectal cancer test use among persons aged >50 years-United States. MMWR 52:193-196, 2001.
Bejes, C. and Marvel, M.K. Attempting the improbable: Offering colorectal cancer screening to all appropriate patients. Fam Pract Res J, 12: 83-90, 1992.
Montaño DE, Kasprzyk D, Phillips WR, John L. Evaluation of physicians’ knowledge, attitudes, and practices related to screening for colorectal cancer. Final Report to the American Cancer Society and the Centers for Disease Control and Prevention, 1998.
Sladden MJ, Ward JE. Australian general practitioners’ views and use of colorectal cancer screening tests. Med J Aust, 170: 110-3, 1999.
Richards C, Klabunde C, O’Malley M. Physicians’ recommendations for colon cancer screening in women. Too much of a good thing? Am J Prev Med, 15: 246-9, 1998.
Williams RB, Boles M, Johnson RE. A patient-initiated system for preventive health care. A randomized trial in community-based primary care practices. Arch Fam Med, 7: 338-345, 1998.
Mahajan RJ, Marshall JB. Prevalence of open-access gastrointestinal endoscopy in the United States. Gastrointest Endosc, 46: 21-6, 1997.
Saad JA, Pirie P, Sprafka JM. Relationship between flexible sigmoidoscopy training during residency and subsequent sigmoidoscopy performance in practice. Fam Med, 26:250-253, 1994.
Cooper GS, Fortinsky RH, Hapke R, Landefeld CS. Factors associated with the use of flexible sigmoidoscopy as a screening test for the detection of colorectal carcinoma by primary care physicians. Cancer, 82:1476-81, 1998.
Polednak AP. Screening for colorectal cancer by primary-care physicians in Long Island (New York) and Connecticut. Cancer Detection and Prevention, 13: 301-9, 1989.
Montaño DE, Phillips, WR, Kasprzyk, D. Explaining Physician Rates of Providing Flexible Sigmoidoscopy. Cancer Epidemiology Biomarkers and Prevention, 9:665-669, July, 2000.
Paskett ED, Rushing J, D’Agostino R, Tatum C, Velez R. Cancer screening behaviors of low-income women: the impact of race. Women’s Health, 3:203-26, 1997.
Swan J, Breen N, Coates RJ, Rimer BK, Lee, NC. Progress in cancer screening practices in the United States. Cancer, 97:1528-40, 2003.
Brenes GA, Paskett ED. Predictors of stage of adoption for colorectal cancer screening. Prev Med, 31(4):410-16, 2000.
Wardle J, Sutton S, Williamson S, Taylor T, McCaffery K, Cuzick J, et al. Psychosocial influences on older adults’ interest in participating in bowel cancer screening. Prev Med, 31(4):323-34, 2000.
Montaño DE, Thompson B, Taylor VM, Mahloch, J. Understanding mammography intention and utilization among women in an inner city public hospital clinic. Prev Med, 26: 817-24, 1997.
Montaño DE, Selby JV, Somkin C, Bhat A, Nadel M. Acceptance of flexible sigmoidoscopy screening for colorectal cancer. Cancer Detect Prev, 28: 43-51, 2004.
Nadel MR, Shapiro JA, Klabunde CN, Seeff LC, Uhler R, Smith RA, Ransohoff DF. A national survey of primary care physicians' methods for screening for fecal occult blood. Ann Intern Med,18;142(2):86-94, 2005.
Kasprzyk D, Montano DE, St. Lawrence J, Phillips WR. The effects of variations in mode of delivery and monetary incentive on physicians’ responses to a mailed survey assessing STD practice and patterns. Evaluation and Health Professions, 24(1):3-17, 2001.
Dillman DA. Mail and Telephone Surveys. New York, NY: John Wiley & Sons, 1978.
Dillman DA. Mail and Internet Surveys: The Tailored Design. New York, NY: John Wiley & Sons, 2000.
Everett SA, Price JH, Bedell AW, Telljohann SK. The effect of a monetary incentive in increasing the return rate of a survey to family physicians. Evaluation & the Health Professions, 20(2):207-214, 1997.
Tambor ES, Chase GA, Faden RR, Geller G, Hofman KJ, Holtzman NA. Improving response rates through incentive and follow-up: The effect on a survey of physicians' knowledge of genetics. American Journal of Public Health, 83(11):1599-1603, 1993.
Berk ML, Edwards WS, & Gay NL. The use of a prepaid incentive to convert nonresponders on a survey of physicians. Evaluation & the Health Professions, 16(2):239-245, 1993.
Gunn WJ & Rhodes IN. Physician response rates to a telephone survey: Effects of monetary incentive level. Public Opinion Quarterly, 45:109-115, 1981.
Weber SJ, Wycoff ML, Adamson DR. The Impact of Two Clinical Trials on Physician Knowledge and Practice. Arlington, VA: Market Facts, Inc., 1982.
Irwin KL, Anderson L, Stiffman M, et al. Leading barriers to STD care in two managed care organizations: Final results of a survey of primary care clinicians. 2002 National STD Prevention Meeting, March 4-7, San Diego, CA Abstract P96.
Montaño DE, Kasprzyk D, Carlin L, Freeman C, Christian J. HPV clinician survey: Knowledge, attitudes, and practices about genital HPV infection and related conditions. Final Report to CDC, 2005.
Berry SH & Kanouse DE. Physician response to a mailed survey: An experiment in timing of payment. Public Opinion Quarterly, 51:102-116, 1987.
Guglielmo W. Physicians' Earnings: Our exclusive survey. Medical Economics, 80:71, 2003.
Dietz VJ, Baughman AL, Dini EF, Stevenson JM, Pierce BK, & Hersey JC. Vaccination practices, policies, and management factors associated with high vaccination coverage levels in Georgia public clinics. Georgia Immunization Program Evaluation Team. Archives of Pediatrics and Adolescent Medicine, 154(2):184-9, 2000.
Collins RL, Ellickson PL, Hays RD, McCaffrey DF. Effects of incentive size and timing on response rates to a follow-up wave of a longitudinal mailed survey. Evaluation Review, 24(4):347-63, 2000.
James JM, & Bolstein R. Large monetary incentives and their effect on mail survey response rates. Public Opinion Quarterly, 56:442-453, 1992.
Lesser V, Dillman DA, Lorenz FO, Carlson J, & Brown TL. The influence of financial incentives on mail questionnaire response rates. Paper presented at the meeting of the Rural Sociological Society, Portland, OR, 1999.
Shaw MJ, Beebe TJ, Jensen HL, & Adlis SA. The use of monetary incentives in a community survey: impact on response rates, data quality, and cost. Health Services Research, 35(6):1339-1346, 2001.
Montaño DE, Kasprzyk D, Phillips WR. Primary Care Providers’ Role in HIV/STD Prevention. Final Report to the National Institute of Mental Health. Grant No. 5 R01 MH52997-04, 2003.
Cabana MD, Rand CS, Power NR, Wu AW, Wilson MH, Abboud PC, Rubn HR. Why don’t physicians follow clinical practice guidelines? A framework for improvement. Journal of the American Medical Association, 282(15):1458-1465, 1999.
Dictionary of Occupational Titles, US Department of Labor, 2001.
Scheffe’ H. The Analysis of Variance. New York: NY: John Wiley & Sons, 1959.
Winer BJ, Brown DR, Michels KM. Statistical Principles in Experimental Design, 3rd ed. San Francisco, CA: McGraw-Hill, Inc, 1991.
Cronbach LJ, & Furby L. How to measure change-or should we? Psychological Bulletin, 74:68-80, 1970.
Kotz S, Johnson NL, Read CB. Youden square. In: Encyclopedia of Statistical Sciences, Vol 9 (pp. 663-664). New York, NY: John Wiley & Sons, 1988.
Esnault VLM, Ekhas A, Delcroix C et al. Diruaetic and enhanced sodium restriction results in imroved antiproteinuric response to RAS blocking agents. J Am Soc Nephrol, 16:474-481, 2005.
Dolphin A, Jenner P, Marsden CD et al.. Pharmacological evidence for cerebral dopamine receptor blockade by metoclopramide in rodents. Psychopharmacology, 41:133-138, 1975.
Day, S. Dictionary for Clinical Trials. New York: John Wiley & Sons, 1999.
Dunlap WP and Kennedy RS. Testing for Statistical Power. Ergonomics in Design, 6-7:31, 1995.
Baron G, De Wals P, Milord F. Cost-effectiveness of a lottery for increasing physician’ responses to a mail survey. Evaluation and the Health Professions, 24(1):47-52, 2001.
McLaren B, Shelley J. Response rates of Victorian general practitioners to a mailed survey on miscarriage: randomised trial of a prize and two forms of introduction to the research. Australian and New Zealand Journal of Public Health, 24(4):360-364, 2000.
Maheux B, Legault C, Lambert J. Increasing response rates in physicians’ mail surveys: An experimental study. Am J Pub Health, 79:638-639, 1989.
Yammarino FJ, Skinner SJ, Childers TL. Understanding mail survey response behavior: A meta-analysis. Public Opinion Quarterly, 55:613-619, 1991
Fox RJ, Crask MR, Kim J. Mail survey response rate: A meta-analysis of selected techniques for inducing response. Public Opinion Quarterly, 52:467-491, 1988.
Dillman DA, Clark JR, & Sinclair MA. How prenotice letters, stamped return envelopes, and reminder postcards affect mailback response rates for census questionnaires. Survey Methodology, 21:1-7, 1995.
Dillman JJ, & Dillman DA. The influence of questionnaire cover design on response to mail surveys. Paper presented at the International Conference on Measurement and Process Quality, Bristol, England, 1995.
St. Lawrence JS, Montaño DE, Kasprzyk D, Phillips WR, Armstrong KA, Leichliter J. STD Screening, Testing, Case Reporting, and Clinical and Partner Notification Practices: A National Survey of US Physicians. American Journal of Public Health, 92(11):1784-1788, 2002.
Kasprzyk D, Montaño DE, Phillips WR, and Armstrong K. System for Successfully Surveying Health Care Providers. Invited symposium at the American Public Health Association meeting, November 2000, Boston, MA.
1 This would correspond to using a directional (one-tailed) test with α = 0.025 and corresponding power (see the note following these footnotes).
2 See Table 8.
3 HFHS and LHS provided unpublished data that supported the 0.30 rate.
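Note on footnote 1: the equivalence follows from the normal approximation used in standard power calculations; the sketch below is illustrative, with generic symbols (δ for the true intervention effect and SE for its standard error) that are not drawn from the study's own power analysis. A two-sided test at α = 0.05 rejects in the anticipated direction when the test statistic Z satisfies Z ≥ z_{1-α/2} = z_{0.975} = 1.96, which is the same critical value used by a one-tailed test at α = 0.025 (since z_{1-0.025} = z_{0.975}). Power under either formulation is therefore Φ(δ/SE - 1.96), where Φ is the standard normal cumulative distribution function, so the two tests yield identical power against an effect in the expected direction.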