EHR Adoption in Ambulatory Physician Care Practices

OMB: 0990-0312


B. COLLECTION OF INFORMATION EMPLOYING STATISTICAL METHODS

  1. Respondent Universe

1.1 Target Population, Data Sources, and Estimated Response Rates. The target population is practicing physicians (excluding those in the practices of anesthesiology, radiology, and pathology) in either solo or group practices. The proposed sampling frame is based on the American Medical Association’s Physician Masterfile. The Masterfile includes physicians providing office-based patient care in multiple specialties and is the source of sampling for NAMCS. The primary motivation for our definition of the target population and choice of sampling frame is the desire to make the proposed study design as comparable to NAMCS as possible while still achieving the specific goals of the study.

Adoption of electronic health records for physicians in group practices is most often not an individual decision. Therefore, collecting data from individual physicians on this topic is not necessarily the most efficient or even logical approach for studying adoption in groups. However, as noted in Health Information Technology in the United States: The Information Base for Progress (cited above), “There currently is no obvious sample of physician group practices in the United States.” In addition, there is not an obvious sampling frame with the desired completeness available (p. 7:69). Therefore, our target population and sampling frame are chosen both for comparability to NAMCS, and to meet policy objectives in the absence of another high-quality frame.

Although both surveys draw from the same sampling frame, we will not deliberately contact physicians for both surveys; any overlap would be coincidental.

1.2 Response Rates. Response rates in physician surveys have been falling steadily, to approximately 50 percent, with variations among specialties (Asch et al. 1997).8 As a consequence, we will include design features known to improve response rates.9 Our design features are intended both to minimize burden and to maximize response rates. With the features described below, it is hoped that the EHR Survey will achieve a response rate of 60 percent of the whole sample. The target final sample size is 3,000, which is 60 percent of the starting sample of 5,000 physicians. This assumes a 1:1 match of the physician and practice manager portions of the survey. We will select a stratified random sample of physicians licensed to practice in any one of the 50 United States. We will collect information on non-respondents from the AMA Masterfile in order to analyze non-respondent characteristics and ensure that they do not systematically differ from respondents.

  2. Procedures for the Collection of Information

2.1 Mail Survey Procedures. A four-wave mailing is proposed. Office managers will be reached through the sampled physician, who will pass part of the questionnaire to the practice manager. During follow-up, attempts will be made to acquire contact information for practice managers that will permit direct contact, because the list from which we are sampling does not contain practice manager information.

After OMB clearance is received, the first wave, an advance letter, will be mailed to physicians approximately three business days prior to shipment of the survey instrument. The advance letter will be addressed to the physicians by name and will contain the OMB approval number. It will discuss the purpose of the survey, the voluntary nature of the respondent’s participation, the rights of research subjects, and an estimate of the time commitment. It will provide the address of a password-protected Web page for the Internet version of the questionnaire. It will contain a password for the sampled physician. Finally, the letter will contain contact information and signatures of the ONC and/or GWU-MGH senior project personnel.

8. Asch, DA, Jedrziewski, MK, and Christakis, NA (1997). “Response Rates to Mail Surveys Published in Medical Journals.” Journal of Clinical Epidemiology, 50(10): 1129-1136.

9. Hogan, SO and Loft, JD (2005). “Current Issues in Response Rates in Physician Surveys.” Paper presented to the Midwest Association for Public Opinion Research, Chicago, IL, November 2005.

The second-wave mailing will contain a $20 honorarium in the form of a check, the survey questionnaire, a supporting enclosure, and a cover letter. This cover letter will include the recommended deadline date for returning the completed survey. The package materials again will describe the purpose of the research, the rights of human subjects, an estimate of time burden, and instructions for completing and returning the completed questionnaire.

The third wave will be a postcard reminding respondents who have yet to return the survey to complete the forms, and thanking those who have returned them. The fourth wave will be sent to non-responders and will include a copy of the instrument and enclosures.

This survey differs from NAMCS in that it will rely more heavily on self-administered survey methods. NAMCS is also largely self-administered; however, in NAMCS an interviewer visits the physicians’ offices to provide instruction on using the forms. Data collection procedures for this project do not require such a high level of training for respondents. We expect little difference between the EHR Survey and NAMCS that can be attributed to modality alone. Beebe et al.’s (2006) experiment with varying modes in a physician survey provides evidence that responses to key items are similar regardless of modality.10 Similarly, McMahon et al. (2003) explored differences using multiple modes and concluded that only one of 12 key questions differed at a statistically significant level. One might expect that those who respond by Internet are familiar with computer use and that, therefore, reported EHR adoption might be highest among them.

2.2 Methodology for Stratification and Selection. Stratification will be used to ensure the resulting sample will have sufficient sample size for the required subdomain analyses. Subdomains of interest include the four Census Regions and vulnerable populations (Medicaid recipients, racial and ethnic minorities, and un- or under-insured populations). Since there is no known data source to allow for identification of physicians who serve vulnerable populations, we will use Census data to achieve the desired over-sampling. The stratification and selection of the sampling units will proceed in the following manner:

  1. Primary stage units (PSUs) will be formed using zip codes, as defined by the U.S. Postal Service (USPS).

  2. PSUs (zip codes) will be stratified by Census Region and into three to four strata according to the racial composition of residents and their socioeconomic status (e.g., median income, median housing value). Data from commercial vendors, such as Claritas, will be used to create these strata.

  3. Sample size will be allocated to the resulting strata according to the desired sample size and the expected hit rate of physicians who treat a large number of patients of interest. Groups of patients of interest include minority racial groups (African Americans and Hispanics) and under- or uninsured persons.

  4. The within-stratum and within-PSU sample sizes will be optimized so that the overall design effect due to over-sampling is minimized.

  5. Physician records from the AMA Masterfile will be linked to the PSUs (zip codes), and a simple random sample of physicians will be selected within each PSU according to the pre-determined sample sizes (a schematic sketch of this two-stage selection appears below).

10. Beebe, TJ, et al. (2006, in press). “Mixing Web and Mail Methods in a Survey of Physicians.” Health Services Research. See also Kasprzyk, D., et al. (2001). “The Effects of Variations in Mode of Delivery and Monetary Incentive on Physicians' Responses to a Mailed Survey Assessing STD Practice Patterns.” Evaluation & The Health Professions, 24(1): 3-17.
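To make this two-stage selection concrete, the following minimal sketch implements the steps above under assumed inputs: the PSU record layout, the region/SES stratifiers, the per-stratum PSU allocation, and the zip-linked physician frame are illustrative stand-ins, not the project's production sampling code.

```python
import random
from collections import defaultdict

# Hypothetical sketch of the two-stage design: zip-code PSUs are stratified
# by Census Region and SES class, a fixed number of PSUs is drawn per
# stratum, then physicians are drawn by simple random sampling within each
# sampled PSU. Field names and allocation inputs are assumptions.

def select_physician_sample(psus, frame_by_zip, psus_per_stratum, docs_per_psu, seed=1):
    """psus: list of dicts like {"zip": "60601", "region": "Midwest", "ses_class": 2}.
    frame_by_zip: AMA Masterfile records keyed by linked zip code."""
    rng = random.Random(seed)

    # Stratify PSUs (zip codes) by region and SES/racial-composition class.
    strata = defaultdict(list)
    for psu in psus:
        strata[(psu["region"], psu["ses_class"])].append(psu)

    sample = []
    for key, members in strata.items():
        # Stage 1: sample PSUs within the stratum per the allocation.
        n_psus = min(psus_per_stratum.get(key, 0), len(members))
        for psu in rng.sample(members, n_psus):
            # Stage 2: simple random sample of physicians linked to this PSU.
            physicians = frame_by_zip.get(psu["zip"], [])
            take = min(docs_per_psu, len(physicians))
            sample.extend(rng.sample(physicians, take))
    return sample
```

Keeping the per-PSU take (docs_per_psu) small while drawing many PSUs is consistent with the design goal, discussed below, of limiting the impact of clustering on precision.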

Though we do not have direct information regarding residents’ insurance status within a PSU, the SES variables used to stratify the PSUs are highly correlated with insurance status and can be used as a proxy.

It is estimated that approximately 50 percent of the AMA Masterfile records contain the physician’s practice address. If the physician’s record contains the address of the group practice to which s/he belongs, the practice address will be used instead of the physician’s home address for linking to a PSU. For the remaining records, the home address will be used, on the assumption that physicians usually live in the areas where they practice.

The design described above deviates from the NAMCS sample design in several respects. First, NAMCS draws its PSUs as a subsample of the National Health Interview Survey (NHIS) PSUs, which are typically MSAs, counties, or groups of counties. Our proposed PSUs (zip codes) cover much smaller geographic areas. Given the size of the NHIS PSUs, they would not be very effective in identifying providers of health care to vulnerable populations; we therefore propose to use zip codes to identify areas where we expect a higher probability of selecting providers of interest. Due to the smaller size of the proposed PSUs, we expect higher intra-cluster correlation at the PSU level than that experienced in NAMCS. However, the average sample size allocated to each PSU will be small, and we will be able to draw considerably more PSUs into the sample. The expected overall impact of the clustering on the precision of the results and the statistical power of tests is small.

The second difference is that NAMCS is a visit-based survey as well as a physician-based survey (patient visits are selected during a third stage of sampling). The sample allocation of our design may differ from that of NAMCS for this reason.

2.3 Estimation Procedures. The estimation procedure will follow the standard procedures used in analyzing complex surveys, by taking into consideration design features, such as clustering and over-sampling. Analysis weights will be created based on the initial sampling probability and subsequent adjustments such as non-response and post-stratification adjustments. All analyses will be conducted as weighted analyses using SUDAAN, with the appropriate specification of design parameters.
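As a schematic illustration of how the final analysis weights enter estimation (the actual analyses will use SUDAAN, which also handles design-based variance estimation), a weighted point estimate of a proportion can be sketched as follows; the function and data are hypothetical:

```python
def weighted_proportion(indicators, weights):
    """Design-weighted estimate of a proportion (e.g., the share of physicians
    reporting EHR adoption). indicators are 0/1 per respondent; weights are the
    analysis weights after non-response and post-stratification adjustment.
    Standard errors would additionally require the stratum/PSU design
    variables, as handled by survey software such as SUDAAN."""
    assert len(indicators) == len(weights)
    return sum(i * w for i, w in zip(indicators, weights)) / sum(weights)

# Example: three respondents with unequal analysis weights.
print(weighted_proportion([1, 0, 1], [250.0, 400.0, 350.0]))  # 0.6
```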

2.4 Degree of Accuracy. We plan to design the survey around the commonly used margin of error, that is, the radius of a Normal-distribution-based 95% confidence interval, for estimates of the overall physician population. The nominal sample size n (in terms of completed interviews) needed for a margin of error e is determined by the following formula:

n = (z_(α/2))² σ² / e²

where z_(α/2) is the Normal percentile at α/2 = 0.025 (approximately 1.96), σ² is the population variance, and e is the assumed margin of error. For a binary variable whose true value is 0.5 (so that σ² = 0.25), a margin of error of 0.03 requires n = 1.96² × 0.25 / 0.03² ≈ 1,067 under simple random sampling. Exhibit B-2.4 displays the nominal sample sizes for some possible design effects, which multiply n.

Exhibit B-2.4. Nominal Sample Size Requirements for Assumed Design Effect and Margin of Error

                              Margin of Error
Design Effect      0.03      0.04      0.05      0.06      0.07
1.0               1,067       600       384       267       196
1.2               1,281       720       461       320       235
1.5               1,601       900       576       400       294
1.7               1,814     1,020       653       454       333
2.0               2,134     1,201       768       534       392

The proposed sample size of 3,000 completed interviews will provide overall estimates with a margin of error below ±3%, the precision deemed appropriate by the Expert Consensus Panel for this project, and ample opportunity for subgroup analysis. For example, under an assumed design effect of 1.2, this sample size will allow us to achieve a margin of error of 4% for estimates at the Census Region level. Although design effects for estimates of vulnerable-population sub-domains are likely to be larger, the sample size can still provide sufficient precision for them if at least a third of all surveyed physicians serve a large number of targeted patients.

2.5 Unusual Problems Requiring Specialized Sampling. None other than those discussed under Sections 2.2 and 2.4.

2.6 Use of Periodic Data Cycles. There will be none.

3. Procedures for Maximizing Response Rates

To assure the validity of the responses to the survey, we will strive to achieve a response rate of 60 percent. Our methods to reduce respondent burden work in conjunction with those methods we use to maximize response rates. To summarize, we plan a mixed-mode approach (self-administered mail, Internet-based, and telephone follow-up of non-responders). Additionally, a $20 incentive is being included in the second mail wave, rather than being paid after completion of the survey. Each of these features is associated with improved response rates (for example, see Delnevo et al. 2004,11 and Kasprzyk et al. 200112). At approximately the time of the third-wave mailing, prompting calls will be placed to physicians and, when identification is available, to practice managers. No more than four calls will be placed to either physicians or managers. Calls will cease when a sample member has completed a survey, has expressed his or her refusal to participate, or is determined to be ineligible. Some research suggests that the most computer-savvy physicians will not respond to surveys unless allowed to do so by Internet (Olson et al., 2000),13 so this option will be included. The phone is not the preferred mode, but it is helpful in prompting late-responding physicians to turn in mailed questionnaires.

11. Delnevo, CD, et al. (2004). “Physician Response Rates to a Mail Survey by Specialty and Timing of Incentive.” American Journal of Preventive Medicine, 26: 234-236.

12. Kasprzyk, D., et al. (2001). “The Effects of Variations in Mode of Delivery and Monetary Incentive on Physicians' Responses to a Mailed Survey Assessing STD Practice Patterns.” Evaluation & The Health Professions, 24(1): 3-17.

13. Olson, L, Srinath, KP, Burich, MC, and Klabunde, C (2000). “Use of a Web Site Questionnaire as One Method of Participation in a Physician Survey.” Paper presented to the 1999 meeting of the American Association for Public Opinion Research.

Survey weights will be constructed to adjust for non-response. These methods are described below.

3.1 Dealing with Non-Response. Non-response is a serious problem in physician surveys (Asch et al. 1997).14 Our Internet and phone center data-collection plans, outlined below, detail the follow-up intended to contact those who do not respond to the initial wave of survey mailing. Our final effort to deal with non-response is post-stratification statistical weighting, described in Section 2.3.
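As an illustration of the weighting-class style of adjustment such weighting typically involves, the sketch below inflates respondents' base weights so that respondents carry the full weighted total of their adjustment cell. The record layout and choice of cells are assumptions for exposition; the study's actual adjustment (and its post-stratification step) would use the real design variables.

```python
from collections import defaultdict

# Hypothetical weighting-class non-response adjustment. Each record carries a
# base weight (inverse of its selection probability) and a response flag.

def adjust_weights_for_nonresponse(records, cell_key):
    """Within each adjustment cell, scale respondents' base weights so the
    respondents represent the cell's full weighted total."""
    cells = defaultdict(list)
    for rec in records:
        cells[cell_key(rec)].append(rec)

    adjusted = []
    for members in cells.values():
        total_weight = sum(r["base_weight"] for r in members)
        resp_weight = sum(r["base_weight"] for r in members if r["responded"])
        if resp_weight == 0:
            continue  # in practice, an empty cell would be collapsed with a neighbor
        factor = total_weight / resp_weight
        adjusted.extend(
            {**r, "weight": r["base_weight"] * factor}
            for r in members if r["responded"]
        )
    return adjusted

# Example cells: specialty crossed with Census Region from the AMA Masterfile.
# weights = adjust_weights_for_nonresponse(recs, lambda r: (r["specialty"], r["region"]))
```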

3.2 Phone Center Procedures. RTI will select phone center staff experienced in interviewing professionals. These experienced interviewers will place calls to late responding physicians in order to encourage participation. Prior to placing calls, these interviewers will undergo four hours of project-specific training. During this training they will be introduced to the EHR Study program, the survey instrument, Web address, and responses to frequently asked questions. The training will also cover human subjects research ethical principles. A variety of training techniques, emphasizing simulation of actual calling situations, will be used.

A script will be provided to the interviewing staff. Further, a computer-assisted telephone interviewing (CATI) system will allow interviewers to conduct phone interviews with respondents wishing to do so. This method will pose the same question set as that which is mailed and available on the Internet.

We anticipate making up to three telephone attempts to reach those sampled physicians who have not yet responded. We will place up to three calls to each practice manager if reliable contact information can be acquired. We will leave messages with office staff or on voice-mail systems in order to urge participation. We will offer the option of interviewing by phone, for both in-bound and out-bound calls, as well as prompting through the telephone effort.

3.3 Web-Based Interviewing. Some research (Olson et al., 2000) indicates that about one in five physicians who responded to a survey via the Web said they would not have participated in any other form. In that research, conducted in 1999, about 7% of the responses were received via the Web. Several of RTI’s recent surveys (2005, 2006) received between 10% and 15% of responses via the Web. These results indicate that failing to offer a Web-based option could lead to an under-representation of HIT-savvy, younger, U.S.-educated male physicians, and therefore perhaps to a distorted picture of the facilitators of EHR adoption.

4. Tests of Procedures or Methods to Be Undertaken

Three evaluation methods were employed to measure usability and ensure that data collection systems are fully functional. The first method was a Questionnaire Appraisal System (QAS), a systematic review process for identifying common sources of error in surveys. Second, cognitive interviews replicated the respondent-survey interaction to identify any difficulties respondents might have with the questionnaire content and wording; this process also confirms that the questions are directed at the persons most capable of responding knowledgeably. The final method was pilot testing, designed to simulate the actual survey process, identify any problems that might occur outside the laboratory setting, and fine-tune the methodology before administering the survey.

4.1 Questionnaire Appraisal System. RTI’s QAS is a structured, standardized instrument review methodology that assists a survey design expert in evaluating questions relative to the tasks they require of respondents, specifically with regard to how respondents understand and respond to survey questions. A survey methodologist with extensive experience and training in designing and evaluating surveys conducted the QAS and made suggestions for improving data quality and reducing respondent burden. The QAS allowed the reviewer to evaluate the structure and effectiveness of the questionnaire form itself. It is essentially a coding system (i.e., an item taxonomy) that describes the cognitive demands of the questionnaire and documents the question features that are likely to lead to response error. These potential errors include errors related to comprehension, task definition, information retrieval, judgment, and response generation (Willis and Lessler, 1999).15 This appraisal was used as a starting point for identifying particular instructions, questions, or response categories that might be problematic and could compromise the quality of the data. The QAS was then used to guide revisions of the questionnaire to improve questions for follow-up pretesting (e.g., cognitive testing and pilot testing) and survey administration.

14. Asch, DA, Jedrziewski, MK, and Christakis, NA (1997). “Response Rates to Mail Surveys Published in Medical Journals.” Journal of Clinical Epidemiology, 50(10): 1129-1136.

4.2 Cognitive Testing of Data Collection Instruments. To improve efficiency and accuracy and to reduce respondent burden, we conducted cognitive pre-testing with physicians and practice managers, nine in each category. Physicians and practice managers were tested using only those questionnaires intended for their particular group. This testing allowed us to assess the questionnaire, as well as the supporting documents sent to sampled physicians as part of the data collection effort, and to look for evidence that respondents understood the questions as intended. We verified respondents’ interpretations of the response options, noted where respondents followed instructions incorrectly, and measured the time necessary to complete the form along with the adequacy of its formatting, word choice, and response categories.

4.3 Formatting and Testing of the Internet and Mailed Survey. The Web-based and mail instruments are designed for efficiency, clarity, and comprehension. The Web-based version was tested by computer programmers and research staff. This testing verified the presence of all required questions, the display of appropriate follow-up questions, and the functionality of the response categories and response ranges of the questionnaire.

The performance of the electronic systems was tested prior to deploying the questionnaire. RTI ensured that the data collection systems function as expected and that range checks and skip patterns apply properly in the CATI and Web-enabled versions of the questionnaire, and that data entry programs for mailed-in forms function similarly well. Tracking systems ensure that the mail-receipt and outgoing-mail systems can identify items sent and received before notification letters are sent.

4.4 Pilot Testing. After cognitive testing, we pilot tested the survey with a replicate sample of up to 30 sample members. This allowed us to verify that our procedures hold up under the stress of actual data collection.

This pretest followed the same protocols and procedures that will be utilized in the main study, with only minor differences. First, we used research staff, rather than a call center, for follow-up calls; this helped us evaluate problems and address them immediately. Second, we proceeded through three waves of mailing rather than four, because by that point the principal lessons of the pretest had been learned. Those principal lessons were gauging response rates and the flow of information (Mangione, 1995), identifying design flaws not revealed in cognitive testing (Mangione, 1995), identifying areas in our training that might require modification, ensuring that our data management software operates as intended under the stress of actual data collection activities, and examining data to detect systematic item non-response or extremely low variance in responses to various items.

15. Willis, GB and Lessler, JT (1999). Question Appraisal System BRFSS-QAS: A Guide for Systematically Evaluating Survey Question Wording. Report prepared for CDC/NCCDPHP, Division of Adult and Community Health, Behavioral Surveillance Branch.


4.5 Estimated Burden that Will Be Added to NAMCS. Only a small number of the total EHR Survey questions would be absorbed into NAMCS in 2008: five to six questions related to adoption of EHRs. These are shown in Exhibit A-12.2. They expand current question 21a in the 2007 NAMCS. New text and items are marked in bold. This would be expected to increase NAMCS administration time by two to four minutes. This additional segment of the NAMCS survey would not require record keeping.

Exhibit B-4.5. EHR Items Planned for NAMCS

Does your practice have a computerized system for each of the following? For those features that you have, indicate the extent to which you use them.

Response columns: Availability (Yes / No / Don’t know); Use (I do not use / I use some of the time / I use most or all of the time / Not applicable to my practice or specialty).

a) Patient demographics
b) Patient problem lists
c) Orders for prescriptions?
d) If yes, are there warnings of drug interactions or contraindications provided?
e) If yes, are prescriptions sent electronically to the pharmacy?
f) Orders for laboratory tests?
g) If yes, are orders sent electronically?
h) Orders for radiology tests?
i) If yes, are orders sent electronically?
j) Viewing lab results?
k) If yes, are out-of-range levels highlighted?
l) Viewing imaging results?
m) If yes, are electronic images returned?
n) Clinical notes?
o) If yes, do they include medical history and follow-up notes?
p) Electronic lists of what medications each patient takes
q) Reminders for guideline-based interventions and/or screening tests?
r) Public health reporting?
s) If yes, are notifiable diseases sent electronically?

5. Individuals Involved in Statistical Design, Data Collection, and/or Data Analysis

Exhibit B-5.1. Consultants

Name                       Phone Number    Institutional Affiliation

Federal Government
Cheryl Austein-Casnoff     301-443-0288    DHHS-HRSA
Karen M. Bell, MD          202-619-0257    DHHS-ONCHIT
Catharine Burt             301-458-4126    DHHS-CDC
Kelly Cronin               202-260-5992    DHHS-CMS
Gail Graham                202-273-9220    VA-VHA
Jim Kretz                  240-276-1755    DHHS-CMS, SAMHSA
Jayne Orthwein             301-975-3176    DOC-NIST
Celia Quivers              202-762-6104    Navy-Bureau of Medicine and Surgery
Edward Sondik              301-699-2164    DHHS-NCHS
Jon White                  301-427-1171    DHHS-AHRQ
Janet Woodcock             301-827-3310    DHHS-FDA

Universities
Andrew Bindman, MD         415-206-6095    University of California-San Francisco *
Paul Cleary                203-785-2867    Yale School of Public Health *
Mark Pauly                 215-898-2838    University of Pennsylvania-Wharton School *
Bruce Siegal, MD           202-530-2399    George Washington University-Department of Health Policy *

Associations
Carmella Bocchino          202-778-3278    American Health Insurance Plans *
Francois DeBrantes         203-270-2906    Bridges to Excellence *
Terry Hammons, MD          303-397-7862    Medical Group Management Association *
Bernard L. Hengesbaugh     312-464-5360    American Medical Association *
Kevin Kearns               305-599-1015    Health Choice Network *
Mark Leavitt               503-647-7568    Certification Commission for Healthcare Information Technology *
John R. Lumpkin, MD        609-627-5724    Robert Wood Johnson Foundation *
Sally C. Morton            919-316-3423    Research Triangle Institute (RTI International) *
Craig A. Hill              919-541-6327    Research Triangle Institute (RTI International)
Michael W. Painter, MD     609-627-7659    Robert Wood Johnson Foundation *
Mary A. Pittman            312-422-2622    Health Research & Educational Trust *
Paul Tang, MD              650-254-5200    Palo Alto Medical Foundation *
Chantal Worzala            202-626-2319    American Hospital Association *

Project Team
David Blumenthal, MD       617-726-5212    Massachusetts General Hospital
Sara Rosenbaum             202-530-2343    George Washington University
Catherine Desroches        617-724-6958    Massachusetts General Hospital
Lee Repasch                202-530-2338    George Washington University
Melissa Goldstein          202-416-0780    George Washington University
Karen Donelan              617-726-0681    Massachusetts General Hospital
Timothy Ferris, MD         617-724-4648    Massachusetts General Hospital *
Alexandra Shields          617-724-1048    Massachusetts General Hospital
Eric Campbell              617-726-5213    Massachusetts General Hospital
Sowmya Rao                 617-726-6055    Massachusetts General Hospital
Doug Levy                  617-643-0657    Massachusetts General Hospital
Sarah Johnson              617-726-7886    Massachusetts General Hospital
Renee Betancourt           617-724-1044    Massachusetts General Hospital
Ashish Jha, MD             617-432-5551    Harvard University School of Public Health *
David Bates, MD            617-732-5650    Brigham and Women's Hospital *
John D. Loft               312-456-5241    Research Triangle Institute (RTI International)
Jun Liu                    919-541-5902    Research Triangle Institute (RTI International)
Sean O. Hogan              312-456-5265    Research Triangle Institute (RTI International)

* Member, Expert Consensus Panel






0


File Typeapplication/msword
Authorsxp1
Last Modified Bysxp1
File Modified2007-05-14
File Created2007-05-14

© 2024 OMB.report | Privacy Policy