Patient Perspective of Delivery of Health Care Through the Use of an Electronic Health Record Survey

OMB: 0990-0361


B. COLLECTION OF INFORMATION EMPLOYING STATISTICAL METHODS

1. Respondent Universe and Sampling Methods

a. Respondent Universe

Patient Survey

The respondent universe includes all patients who visit a primary care practice in the states of New York, Minnesota, Oregon, and North Carolina. These states were selected purposively, one from each of the four U.S. Census regions. Specifically, the states were selected to ensure the sample includes: (1) a sufficient number of practices in each of three study groups (early EHR adopters, recent EHR adopters, and non-adopters) and (2) a diversity of primary sampling units (PSUs) in terms of urbanization.

Table B.1 shows, by state and region, the prevalence of EHR use, the number of primary care medical practices in the state, the percentage of the state that is urbanized, the number of counties with an urbanized population greater than 50 percent of the total population, and the number of counties with an urbanized population less than 50 percent. Table B.2 gives the same information for the four states from which practices and patients will be drawn.

The list of practices will be drawn from a database provided by SK&A Information Services, an organization that maintains comprehensive databases of practices and physicians across the United States. SK&A maintains a database of 56,600 medical group practices (some of which are multi-site; there are 34,317 “headquarters” practices) and over 140,000 dual and solo practices. This database combines information from government agencies, professional associations, white and yellow page directories, trade publications, merger and acquisition announcements, state licensing information, company and corporate directories, and internet websites. SK&A verifies the information by calling every practice site every six months, so its data should be accurate and current.

While there is no “official list” of physician practices in the United States from which to draw a sample (Blumenthal et al. 2006), we compared the numbers of physicians and practices in SK&A’s database to other sources to assess its completeness. For example, the Medical Group Management Association (MGMA) assembled a database from multiple sources in 2005, including the MGMA membership list, commercial databases, and several professional associations, including the American Medical Association. The MGMA database contained 34,490 group practices (Gans et al. 2005),1 similar to the number of group-practice headquarters in SK&A’s list (34,317). Similarly, the SK&A database appears to cover solo and group practices comprehensively, listing slightly more than the 132,300 practices estimated by Hing and Burt (2008) in their analysis of the National Ambulatory Medical Care Survey.

For our purposes, SK&A will be able to provide the contact information, practice size (defined by the number of physicians), patient volume, and EHR use for each primary care practice in our selected states.

Table B.1. EHR Prevalence and Urbanization, by State and Region

Region | State | Number of Primary Care Practices | Percentage of Practices Using EHRs | Percentage of State Urbanized | Number of Urbanized Counties* | Number of Non-Urbanized Counties*
Northeast | Connecticut | 706 | 14.7% | 83.6% | 5 | 3
Northeast | Maine | 314 | 36.6% | 24.6% | 1 | 15
Northeast | Massachusetts | 1,059 | 30.3% | 88.8% | 9 | 5
Northeast | New Hampshire | 227 | 30.8% | 44.7% | 3 | 7
Northeast | New Jersey | 2,289 | 16.7% | 92.2% | 17 | 4
Northeast | New York | 4,307 | 20.1% | 81.7% | 24 | 38
Northeast | Pennsylvania | 3,128 | 19.0% | 66.9% | 22 | 45
Northeast | Rhode Island | 235 | 22.6% | 88.5% | 4 | 1
Northeast | Vermont | 129 | 25.6% | 17.3% | 1 | 13
South | Alabama | 891 | 21.0% | 43.7% | 12 | 55
South | Arkansas | 549 | 24.8% | 32.2% | 8 | 67
South | Delaware | 211 | 21.3% | 67.8% | 2 | 1
South | Florida | 4,415 | 19.1% | 84.3% | 32 | 35
South | Georgia | 1,573 | 26.4% | 61.2% | 30 | 129
South | Kentucky | 770 | 21.3% | 38.8% | 12 | 108
South | Louisiana | 791 | 21.5% | 56.7% | 13 | 51
South | Maryland | 1,218 | 18.6% | 80.2% | 11 | 13
South | Mississippi | 514 | 23.2% | 23.9% | 6 | 76
South | North Carolina | 1,543 | 29.6% | 46.7% | 20 | 80
South | Oklahoma | 739 | 19.5% | 43.0% | 5 | 72
South | South Carolina | 765 | 23.8% | 46.7% | 12 | 34
South | Tennessee | 1,206 | 21.9% | 52.1% | 15 | 80
South | Texas | 4,192 | 31.2% | 71.0% | 35 | 219
South | Virginia | 1,295 | 26.3% | 66.6% | 40 | 95
South | West Virginia | 496 | 22.4% | 28.3% | 9 | 46
Midwest | Illinois | 2,381 | 21.4% | 78.4% | 21 | 81
Midwest | Indiana | 1,221 | 21.4% | 56.1% | 21 | 71
Midwest | Iowa | 483 | 27.1% | 38.1% | 9 | 90
Midwest | Kansas | 469 | 25.8% | 44.9% | 5 | 100
Midwest | Michigan | 2,114 | 19.2% | 66.2% | 18 | 65
Midwest | Minnesota | 370 | 35.9% | 55.1% | 10 | 77
Midwest | Missouri | 1,058 | 26.2% | 55.2% | 12 | 103
Midwest | Nebraska | 339 | 23.3% | 47.0% | 4 | 89
Midwest | North Dakota | 70 | 24.3% | 35.9% | 4 | 49
Midwest | Ohio | 2,224 | 21.1% | 64.4% | 23 | 62
Midwest | South Dakota | 139 | 25.9% | 25.8% | 2 | 64
Midwest | Wisconsin | 534 | 27.9% | 53.0% | 16 | 56
West | Alaska | 891 | 21.0% | 44.3% | 2 | 25
West | Arizona | 989 | 29.8% | 76.2% | 3 | 12
West | California | 6,332 | 18.3% | 88.4% | 29 | 29
West | Colorado | 760 | 34.1% | 74.7% | 11 | 52
West | Hawaii | 276 | 13.8% | 69.0% | 1 | 4
West | Idaho | 272 | 27.6% | 46.7% | 6 | 38
West | Montana | 141 | 29.1% | 26.0% | 3 | 53
West | Nevada | 358 | 24.6% | 83.9% | 3 | 14
West | New Mexico | 304 | 24.3% | 47.4% | 4 | 29
West | Oregon | 540 | 34.1% | 57.8% | 7 | 29
West | Utah | 253 | 41.9% | 78.3% | 6 | 23
West | Washington | 847 | 30.3% | 73.0% | 14 | 25
West | Wyoming | 94 | 27.7% | 25.5% | 2 | 21

*Urbanized counties are counties where the urbanized population is over 50 percent of the total population. Non-urbanized counties include the remainder of counties.

Table B.2. EHR Prevalence and Urbanization in Selected States

Region | State | Number of Primary Care Practices | Percentage of Practices with EHRs | Percentage of State Urbanized | Number of Urbanized Counties* | Number of Non-Urbanized Counties*
Northeast | New York | 4,307 | 20.1% | 81.7% | 24 | 38
South | North Carolina | 1,543 | 29.6% | 46.7% | 20 | 80
Midwest | Minnesota | 370 | 35.9% | 55.1% | 10 | 77
West | Oregon | 540 | 34.1% | 57.8% | 7 | 29

* Urbanized counties are counties where the urbanized population is over 50 percent of the total population. Non-urbanized counties include the remainder of counties.

Patient Focus Groups

To gather a more in-depth perspective on EHR use from patients, we will conduct four focus groups, each with 10 patients whose primary care doctor uses an EHR system as part of their care. The focus group participants will not be statistically representative of any group. Selection will be purposive: we will solicit patients at practices using EHRs who have been going to the practice for at least a year, consistent with the selection of respondents for the patient survey. We will also attempt to recruit a demographically diverse set of participants for each group by age, gender, and race. Focus group participants will be recruited from two nearby medical practices (one early adopter and one late adopter) in each of the two states (New York and Minnesota) where we will also recruit participants and hand out patient surveys. Our goal is to recruit 16 patients for each focus group to ensure that 10 show up.

2. Procedures for the Collection of Information

a. Sampling Methods

A three-stage sample will be used to select patients for the patient survey, with geographically based PSUs at the first stage, practices at the second stage, and patients at the third stage.

Stage 1

PSU formation

In the first stage, PSUs will be formed using counties or groups of counties. Census data will be used to determine which counties have urbanized populations above or below 50 percent, as shown in Table B.2. We will obtain estimates of EHR prevalence for every county in each of the four states from SK&A Information Services. These will be combined, as necessary, to form PSUs that have practices in each of the three EHR categories given above.

Primary (Explicit) Stratification

Explicit primary strata will be defined by classifying PSUs into urbanized and non-urbanized PSUs. (Urbanized PSUs are defined as areas where the urbanized population is greater than 50 percent of the total, and non-urbanized PSUs are defined as areas where the urbanized population is less than 50 percent of the total). In addition, states will form explicit strata.

First Stage Sample Selection

Eight PSUs will be selected: two within the non-urbanized strata (0.5 per state) and six within the urbanized strata (1.5 per state). The PSU samples will be selected first within the urbanized stratum. The number selected within each state will be determined by stochastically rounding the 1.5 allocation to 1 or 2, so that the total number selected adds up to 6. The selection of PSUs will use statistical control on EHR prevalence, with the EHR-use categories acting as implicit strata. We propose to use a random sequential selection algorithm (Chromy 1979). This algorithm will essentially result in a proportional allocation across the EHR groups within the urbanized PSUs, with two urbanized PSUs selected in two states and one urbanized PSU selected in each of the other two states. The states for which only one urbanized PSU was selected will be allocated the non-urbanized PSUs. As with the urbanized PSUs, the selection of the non-urbanized PSUs will use statistical control on EHR prevalence, with the EHR-use categories acting as implicit strata.
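To illustrate the allocation and first-stage selection described above, the following sketch (in Python, with hypothetical field names and toy data) shows the stochastic rounding of the 1.5-per-state urbanized allocation and a simplified systematic selection after sorting on the EHR-prevalence category. It is an illustration only; the study itself would use Chromy’s (1979) sequential algorithm rather than the plain systematic selection shown here.

```python
# Illustrative sketch only: a simplified stand-in for the allocation and
# PSU-selection steps described above. Hypothetical names throughout.
import random

STATES = ["NY", "MN", "OR", "NC"]

def allocate_urbanized_psus(states, total=6):
    """Stochastically round a 1.5-per-state allocation so the total is 6:
    two randomly chosen states get 2 urbanized PSUs, the other two get 1."""
    two_psu_states = set(random.sample(states, k=2))
    return {s: (2 if s in two_psu_states else 1) for s in states}

def select_psus(psu_frame, n_select):
    """Systematic selection after sorting on the implicit stratifier
    (EHR-prevalence category), a simplification of sequential selection."""
    frame = sorted(psu_frame, key=lambda p: p["ehr_category"])
    interval = len(frame) / n_select
    start = random.uniform(0, interval)
    return [frame[int(start + i * interval)] for i in range(n_select)]

# Example with a toy urbanized-PSU frame for one state (made-up data).
random.seed(1)
allocation = allocate_urbanized_psus(STATES)
toy_frame = [{"psu": f"PSU{i}", "ehr_category": i % 3} for i in range(12)]
print(allocation)
print(select_psus(toy_frame, allocation["NY"]))
```

In this sketch, the states receiving only one urbanized PSU would then be the ones allocated the two non-urbanized PSUs, as described above.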

Stage 2

Secondary (Explicit) Stratification

Practices will be selected in the second stage of selection. Once the PSUs have been selected, we will use practice-level EHR-use information within the PSUs to classify practices into one of the two EHR categories (EHR adopters and non-adopters). The sample of practices will be selected within explicit strata based on the two EHR-use categories and two practice size categories.

Second Stage Sample Selection

We are targeting the participation of 84 practices, 21 in each state. Based on our previous experience with surveys of physicians and practices, we assume 50 percent of practices will agree to participate (see Section 3, Methods to Maximize Response Rates). Therefore, we anticipate contacting 168 practices (42 in each state and 21 in each PSU). Practices will be selected using statistical control on practice size and EHR use, with these measures acting as implicit strata; this will ensure that the selected practices are diverse on both measures. Practices will be selected with equal probability, regardless of practice size. In case the participation rate differs from 50 percent, we will draw a larger sample and release the sample of practices in waves to ensure a final sample of 21 practices in each state. For the purposes of variance estimation, we are assuming that practices will be selected with replacement. Because of the expected low response rate, we will conduct a nonresponse bias analysis (see Section B.3).
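The sketch below (hypothetical field names, toy data) illustrates this second-stage approach: an equal-probability systematic selection of practices within one PSU after sorting on the implicit strata (EHR use and practice-size category), with an oversample beyond the 21 practices to be contacted so that reserve waves can be released if participation falls short of the assumed 50 percent.

```python
# Illustrative sketch only: equal-probability systematic selection of
# practices within a PSU, sorted on the implicit strata, then split into
# an initial release wave and a reserve wave.
import random

def select_practices(practice_frame, n_select):
    """Sort on implicit strata, then take an equal-probability systematic sample."""
    frame = sorted(practice_frame, key=lambda p: (p["ehr_use"], p["size_category"]))
    interval = len(frame) / n_select
    start = random.uniform(0, interval)
    return [frame[int(start + i * interval)] for i in range(n_select)]

def release_in_waves(selected, wave_size):
    """Partition the selected practices into ordered release waves."""
    return [selected[i:i + wave_size] for i in range(0, len(selected), wave_size)]

# Toy example: oversample 30 practices from a made-up PSU frame of 90,
# release an initial wave of 21, and hold the remainder in reserve.
random.seed(2)
toy_frame = [{"practice": f"P{i}", "ehr_use": i % 2, "size_category": (i // 2) % 2}
             for i in range(90)]
sample = select_practices(toy_frame, 30)
waves = release_in_waves(sample, 21)
print(len(sample), [len(w) for w in waves])
```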

Stage 3

The third stage will select patients within the participating practices to interview for the patient survey. For each practice, depending on patient flow, one or more time periods will be selected in a nonprobabilistic manner, where time periods are defined for each practice based upon its typical patient volume. (Information about patient volume is available from SK&A; we also will receive information about patient volume during different time periods in the practice screening survey.) Approximately one week before each practice’s selected time period, interviewers will ask the practice to estimate how many patients it will see during that time period. Interviewers will then go to the practice during the selected time period and approach patients as they enter the waiting area to participate in the patient survey. If possible, we will add a random component to the patient selection (for example, approaching every other patient in the waiting area). Otherwise, interviewers will approach and attempt to screen and recruit all patients entering the waiting room within the selected time period. Patients who are under age 18 or who have not been with the practice for at least a year will be screened out as ineligible prior to their appointments. The target number of patient respondents per practice is 20; if the number of respondents is not sufficiently close to 20 after the initial visit to the practice, we will add smaller time blocks on subsequent days and interview all the patients in those smaller time blocks. Patient volume information will be used to calculate the weights applied to these responses. To estimate the degree of nonresponse, interviewers will record the number of patients who refuse to participate in the survey. To assist in the calculation of nonresponse adjustments, interviewers will record whatever information they can about nonrespondents (estimated age above or below 65, race/ethnicity, and gender).
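As one illustration of how patient volume could enter the patient-level weights, the sketch below computes a simple inverse-probability-style base weight. It is a plausible form under stated assumptions, not the study’s final weighting specification, and all inputs are hypothetical.

```python
# A minimal sketch of one plausible patient-stage base weight, assuming the
# within-practice factor is the ratio of eligible patients seen during the
# selected time period to completed interviews, multiplied by the practice-
# and PSU-level inverse selection probabilities. Illustration only.

def patient_base_weight(eligible_in_period: int,
                        completed_interviews: int,
                        practice_selection_prob: float,
                        psu_selection_prob: float) -> float:
    """Inverse-probability-style base weight for one responding patient."""
    within_practice = eligible_in_period / completed_interviews
    return within_practice / (practice_selection_prob * psu_selection_prob)

# Example: 55 eligible patients seen in the time block, 20 interviews
# completed, practice selected with probability 0.25 within a PSU selected
# with probability 0.10 (all numbers made up).
print(round(patient_base_weight(55, 20, 0.25, 0.10), 1))
```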

Patient Focus Groups

The focus group participants will not be statistically representative of any group. The focus group selection will be purposive, with solicitation of participants at practices using EHRs who have been going to that practice for at least a year, in order to be consistent with the selection of respondents for the patient survey. We will also attempt to recruit a demographically diverse set of participants for each group by age, gender, and race.

b. Estimation Procedure

Plans for the statistical analyses of the data are presented in Part A. Statistical sampling software that accommodates the sampling design will be used to provide standard error estimates.
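For illustration only, the sketch below (with made-up data) shows the kind of design-based standard error such software computes for a weighted proportion: Taylor linearization with first-stage units treated as sampled with replacement within strata. In practice this would be done with survey software (for example, SUDAAN, Stata’s svy commands, or the R survey package) rather than hand-coded.

```python
# A minimal sketch of a design-based standard error for a weighted
# proportion, treating first-stage units as sampled with replacement
# within strata. Toy data only.
from collections import defaultdict
import math

# Each record: (stratum, psu, weight, binary outcome).
records = [
    ("urban", "psu1", 12.0, 1), ("urban", "psu1", 12.0, 0),
    ("urban", "psu2", 15.0, 1), ("urban", "psu2", 15.0, 1),
    ("rural", "psu3", 30.0, 0), ("rural", "psu3", 30.0, 1),
    ("rural", "psu4", 25.0, 1), ("rural", "psu4", 25.0, 0),
]

n_hat = sum(w for _, _, w, _ in records)                 # estimated population size
p_hat = sum(w * y for _, _, w, y in records) / n_hat     # weighted proportion

# Sum the linearized values u = w * (y - p_hat) / n_hat within each PSU.
psu_totals = defaultdict(float)
for stratum, psu, w, y in records:
    psu_totals[(stratum, psu)] += w * (y - p_hat) / n_hat

# Group PSU totals by stratum.
strata = defaultdict(list)
for (stratum, _), z in psu_totals.items():
    strata[stratum].append(z)

# Between-PSU variance within each stratum, with-replacement formula.
variance = 0.0
for z_values in strata.values():
    n_h = len(z_values)
    z_bar = sum(z_values) / n_h
    variance += n_h / (n_h - 1) * sum((z - z_bar) ** 2 for z in z_values)

print(f"p = {p_hat:.3f}, SE = {math.sqrt(variance):.3f}")
```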

c. Degree of Accuracy Needed for the Purpose Described in the Justification

We calculated minimum detectable differences (MDDs) for comparing binary outcome responses between pairs of the three study groups (for example, patients of early-adopting practices versus patients of non-adopting practices). For these calculations, we assume 20 patients in each of 84 practices, or 1,680 patients, will respond to the survey. Tables B.3 and B.4 show MDDs for binary outcome responses compared between two of the following three study groups: patients from practices that (1) do not use an EHR, (2) adopted an EHR recently, or (3) adopted an EHR early. Table B.3 shows the MDDs for outcome measures with proportions equal to 0.5, 0.35, and 0.25, assuming equal proportions of practices in each of the three study groups. Table B.4 shows the same information, except that it assumes 40 percent of practices fall into groups 1 and 3 and 20 percent fall into group 2.

As shown in Table B.3, for variables with a mean of .5, we have 80 percent power to detect a difference of .122 (at the 5 percent level for a two-tailed test) between the outcomes of patients of early-adopting practices and the outcomes of patients of non-adopting practices (assuming all study groups are of equal size). As shown in Table B.4, if the study groups are of unequal size, we will be able to detect differences of .112 or greater when comparing the two larger groups to each other, but the difference would have to be at least .137 when comparing one of the larger groups to the smallest group.

Table B.3. MDDs for Comparing Binary Proportions with Varying Values, Assuming Equal Size Practice EHR-Use Categories

Proportion | No EHR Use vs. Early Adopter | No EHR Use vs. Recent EHR Adopter | Recent EHR Adopter vs. Early Adopter
P = 0.25 | 0.106 | 0.106 | 0.106
P = 0.35 | 0.117 | 0.117 | 0.117
P = 0.5 | 0.122 | 0.122 | 0.122

Notes: Assumes there are 84 practices with 20 patients per practice, and that 33 percent of practices fall into each study group. MDDs assume 80 percent power, a two-tailed test at the 5 percent level, and an intracluster correlation coefficient of .06.

Table B.4. MDDs for Comparing Binary Proportions with Varying Values, Assuming Unequal Practice EHR-Use Categories

Proportion | No EHR Use vs. Early Adopter | No EHR Use vs. Recent EHR Adopter | Recent EHR Adopter vs. Early Adopter
P = 0.25 | 0.097 | 0.119 | 0.119
P = 0.35 | 0.107 | 0.131 | 0.131
P = 0.5 | 0.112 | 0.137 | 0.137

Notes: Assumes there are 84 practices with 20 patients per practice, and that 40 percent of practices are non-adopters, 40 percent are early adopters, and 20 percent are recent adopters. MDDs assume 80 percent power, a two-tailed test at the 5 percent level, and an intracluster correlation coefficient of .06.
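The MDDs in Tables B.3 and B.4 can be reproduced under the stated assumptions: a design effect of 1 + (m - 1) x rho with m = 20 patients per practice and rho = .06, and a two-sample test of proportions with 80 percent power at the two-tailed 5 percent level (z values of 1.96 and 0.84). The sketch below is a reconstruction under those assumptions rather than the study’s own code; its output matches the tabled values.

```python
# Reconstruction of the MDD calculations under the assumptions stated in
# the table notes: design effect 1 + (m - 1) * rho, m = 20, rho = 0.06,
# 80 percent power, two-tailed 5 percent significance level.
from math import sqrt

Z_ALPHA, Z_BETA = 1.96, 0.84          # two-tailed 5% level, 80% power
M, RHO = 20, 0.06                     # cluster size and intracluster correlation
DEFF = 1 + (M - 1) * RHO              # design effect = 2.14
TOTAL_PATIENTS = 84 * M               # 1,680 respondents

def mdd(p, share1, share2):
    """Minimum detectable difference between two study groups for a binary
    outcome with proportion p, given each group's share of practices."""
    n1 = TOTAL_PATIENTS * share1 / DEFF   # effective sample size, group 1
    n2 = TOTAL_PATIENTS * share2 / DEFF   # effective sample size, group 2
    return (Z_ALPHA + Z_BETA) * sqrt(p * (1 - p) * (1 / n1 + 1 / n2))

for p in (0.25, 0.35, 0.5):
    equal = mdd(p, 1/3, 1/3)              # Table B.3: equal-size groups
    large_vs_large = mdd(p, 0.4, 0.4)     # Table B.4: 40% vs. 40%
    large_vs_small = mdd(p, 0.4, 0.2)     # Table B.4: 40% vs. 20%
    print(f"p={p}: {equal:.3f} {large_vs_large:.3f} {large_vs_small:.3f}")
```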

d. Unusual problems requiring specialized sampling procedures

No specialized sampling procedures will be used to accommodate unusual problems.

e. Use of periodic (less frequent than annual) data collection cycles to reduce burden.

Periodic data collection is not required since this is a one-time data collection.

Implementing Physician Practice Survey

Mathematica’s goal is to recruit 84 practices (21 practices in each of the four states selected for the study). The physician practice screener will be fielded approximately 13 months after the start of the project (in October 2010). Mathematica will use a telephone-administered paper-and-pencil survey as the data collection strategy for the physician practice screener. The physician practice screener is included in Appendix C. The screener collects data on the following topics:

  • Introduction. This section introduces the study sponsor, contractor, and study goals. It explains what participation in the study involves and asks if the practice is willing to participate in the study.

  • Practice Location. This section asks how many full-time and part-time physicians work at the current practice location.

  • Use of Electronic Medical Records. This section asks about the practice’s experience adopting and using electronic medical records.

Mathematica will mail an advance letter to all physician practices selected for recruitment using official ONC letterhead and envelopes. The advance letter will include a toll-free number giving practices the option to call with questions or to make an appointment to complete the survey. In addition, practices will be sent a sheet of frequently asked questions about the study.

The initial mailing to practices will occur in October 2010. One week after the initial mailing, Mathematica will begin telephone contact to conduct screening interviews with sampled practices. This effort will continue for 8 weeks—from mid-October through mid-December 2010. Mathematica will train survey staff experienced in interviewing physician practice managers to conduct the estimated 15-minute interview. About midway through the recruitment period, Mathematica will mail a second letter appealing to practices that have not completed a survey or scheduled an appointment. Mathematica expects that 50 percent of the practices contacted will agree to participate in the study and will complete a survey (see Appendix D for the physician practice advance letter, fact sheet, and second appeal letter).

Implementing Patient Survey

A self-administered survey will be the primary data collection mode for the patient survey. The survey will start 14 months after the beginning of the project (in November 2010), and data collection will continue through January 2011. Respondents will be approached in the physician practice waiting room by a trained Mathematica data collector. The data collector will introduce himself or herself and describe the study, its sponsor, and its goals; solicit the patient’s participation; and screen to determine whether the patient is age 18 or older and has been a patient at the practice for more than a year. The data collector will hand the patient an introductory letter (printed on ONC letterhead and signed by the ONC Privacy Officer) describing the survey, along with a fact sheet of commonly asked questions and their answers, to review while waiting to be seen by the provider. Patients who agree to participate will be given the questionnaire and asked to complete it after their visit with the provider. The patient questionnaire is included as Appendix E to this submission; a copy of the patient letter and fact sheet is included in Appendix F.

Mathematica expects that patients will be able to complete the survey in 15 minutes or less. The questionnaire and all accompanying survey materials will be available in both English and Spanish. The following topics will be covered by the patient survey:

  • Section A: Health Status. This section collects self-reported health status and obtains information about medical diagnoses and knowledge of health conditions.

  • Section B: Today's Visit. This section collects information on the level of satisfaction with various aspects of medical care received, frequency of health care visits, and the procedures followed and advice obtained during physician visits. It also includes items specific to health care providers’ use of computers during the patient visit.

  • Section C: Comparing Today’s Visit to Visits a Year or More Ago. This section collects information on the change in level of satisfaction with various aspects of medical care received between today’s visit and a visit a year or more ago.

  • Section D: Care Coordination. This section collects information about patients’ perception of their provider’s knowledge of their health information and transfer of health information between care providers.

  • Section E: Background Information. This section collects information on patients’ age, gender, race, level of education, primary language spoken, marital status, employment status, and income.

Mathematica’s goal is to complete surveys with 1,680 eligible patients (20 in each of the 84 practices), for a 70 percent response rate. The patient survey will be administered over a 12-week period. We expect to collect each practice’s 20 patient surveys in a one-day visit to the practice. The data collector will go to each practice on the day and at the time determined by the Mathematica statistician and will solicit all patients in the waiting room to participate in the study until 20 questionnaires have been completed.

Patients for Focus Groups

We will conduct four focus groups during December 2010 and January 2011, two in New York and two in Minnesota, with patients whose providers are using EHRs (two focus groups with early adopters and two with late adopters). Each focus group will last 90 minutes. The goal of the groups is to gather in-depth, qualitative data regarding patients’ perceptions of EHRs and their understandings of how EHRs affect the provision of health care. Specifically, we will ask how the providers’ use of EHRs may have affected the (1) quality of the patient-physician encounter, (2) physicians’ and patients’ access to information, and (3) coordination of care.

Table B.5 shows the data collection schedule for the surveys and focus groups.

Table B.5. Data Collection Schedule

Data Collection Activity | Start Date | End Date
Practice Screener | October 2010 | December 2010
Patient Survey | November 2010 | January 2011
Focus Group Discussions | December 2010 | January 2011


3. Methods to Maximize Response Rates and Deal with Nonresponse

Physician Practice Survey

After the sample of physician practices is drawn, Mathematica will mail an advance letter to all sampled practices. The letter will be printed on ONC letterhead, personally addressed, and signed by the ONC Privacy Officer. It will include the email address and telephone number of Karen Bogen, Mathematica’s survey director for the study, whom practices can call for assistance. Accompanying the advance letter will be a list of the screening questions, and a fact sheet with answers to commonly asked questions. A few days after the advance letter is mailed, Mathematica staff will begin calling all sampled practices to recruit them into the study and screen them to determine which of the three study groups they will be assigned to (early adopters, late adopters, or non-adopters). We assume that 100 percent of the sampled practices will be eligible for enrollment in the study, and that we will successfully recruit 50 percent of the practices (84 practices in total, 21 practices in each state). We anticipate that physician practices will be motivated to participate in the study and complete the telephone screener survey due to the study sponsor, the salient subject matter, and the minimal burden placed on them to participate.

We expect that most if not all practices will require multiple telephone attempts by Mathematica staff in order to recruit and screen them into the study. The target person for the practice screener survey is the practice manager or office administrator. Some practice managers will need to talk with the lead physician or other senior administrator in the practice before agreeing to participate in the study.

During our recruiting calls, staff will describe the overall goals of the study, its policy relevance, and the data collection process; request their participation; and answer any questions they may have. We will provide assurances that individual survey responses will be kept confidential to the extent allowable by law, practice-level findings will be aggregated across practices with similar characteristics, and results for a single practice will not be identified or released. We are not collecting the names of survey participants. Practices will be offered $100 for participating in the study in order to ensure an adequate and timely response to our recruitment efforts.

Once practices have agreed to participate in the study, we will immediately administer the screener questions and assign them to one of three study groups based on their responses (early adopters, late adopters, and non-adopters). We will mail the incentive payment to recruited practices along with a letter thanking them for their participation and describing the next steps in the study process they can expect (see Appendix G for a copy of the practice thank you letter).

Patient Survey

Our goal is to collect questionnaires from 20 patients at each recruited practice over an eight-week field period, for a total of 1,680 patient interviews. Mathematica will hire and train up to 16 local field staff, four data collectors per state, to collect the patient surveys. We will send one or two field staff to each practice, depending upon the size of the practice and the number of patients seen, on the days and times determined by the Mathematica statistician. Field staff will approach all patients who arrive in the waiting room during the selected time period. The field staff will describe the purpose of the survey and study, screen for eligibility, explain the voluntary nature of the study, and explain that each eligible patient will receive a $10 incentive for completing a paper survey after his or her visit with the provider. (See Appendix H for the questionnaire and focus group recruitment script.) Field staff will assure patients that their individual survey responses will be kept confidential and that we will not collect their names or share responses with the medical practice. They will give the patients an introductory letter on ONC letterhead and a fact sheet to read before deciding whether to participate. Patients who agree to participate will be given a paper questionnaire to complete once their visit with the provider has concluded. The questionnaire, letter, and fact sheet will be available in English and Spanish. Upon completing the questionnaire, the patient will hand it back to the field staff person, who will place the questionnaire in an envelope and seal it. Each survey participant will receive a $10 gift card upon completing the survey and will sign a form indicating that he or she received payment.

Field staff will record the number of patients approached, the number who consent to participate, and the number who are ineligible (patients who have been going to the practice for less than a year or are under age 18). Field staff will also record the number of patients who refuse to participate or otherwise do not complete a survey, noting their gender, approximate age (above or below 65 years of age), and race/ethnicity (see Appendix I for a copy of a tally sheet). This information will be used to conduct a nonresponse analysis and to prepare nonresponse adjustments to the weights that will be applied to the data file for analysis.

Patient Focus Groups

Our goal is to conduct four focus groups, two in New York and two in Minnesota, with up to 48 patients total (12 per group) whose providers are using EHRs (practices in the early adopter and late adopter study groups). Field staff will recruit focus group participants from survey nonrespondents on the same day they recruit patients for the survey. Patients who agree to participate in the survey will not be asked to participate in a focus group; patients who decline the survey will be asked if they are interested in participating in a discussion group. The purpose and process of the discussion group will be explained, along with assurances of confidentiality. Focus group attendees will receive a $40 gift card to partially reimburse them for their time and travel expenses. Patients who agree to participate in a discussion group will be asked three demographic questions and asked to provide their telephone number, email address, home address, and contact preference. Interested patients will be contacted twice prior to the discussion group: (1) one week before the focus group, they will be sent a letter with details on when and where the discussion will take place and directions to the location; and (2) a day or two before the focus group, they will be contacted to remind them of it and to confirm that they still plan to attend. See Appendix J for a copy of the focus group fact sheet and contact information postcard.

Two Mathematica staff will conduct the focus groups; one will moderate the group and one will take notes. See Appendix K for a copy of the focus group moderator’s guide. The focus groups will be audio-recorded to assist with review and analysis.

Nonresponse Bias

Nonresponse is possible at the second and third stages of selection. For the second stage, nonresponse weights will be calculated using information about the practices from the sampling frame as covariates in logistic regression models, with a binary indicator of whether the practice participated as the dependent variable. By choosing covariates that are related both to the outcome variables of interest and to the propensity to respond, nonresponse bias will be reduced. At the third stage, no sampling frame is available to provide information about nonresponding patients; we will therefore rely on the information interviewers record about nonrespondents’ approximate age, race/ethnicity, gender, and other visible attributes.
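The sketch below (with hypothetical covariates and made-up data) illustrates the propensity-model adjustment described above: a logistic regression of a response indicator on frame covariates, with responding practices’ weights inflated by the inverse of the predicted response propensity. The study’s actual model specification and software are not shown here.

```python
# Illustrative propensity-model nonresponse adjustment with hypothetical
# frame covariates (practice size and EHR use). Toy data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy practice frame: covariates known for respondents and nonrespondents alike.
n = 168
practice_size = rng.integers(1, 11, size=n)          # number of physicians
ehr_user = rng.integers(0, 2, size=n)                # 1 = EHR adopter
responded = rng.binomial(1, 0.4 + 0.15 * ehr_user)   # made-up response pattern
base_weight = np.full(n, 10.0)                       # placeholder design weights

X = np.column_stack([practice_size, ehr_user])
model = LogisticRegression().fit(X, responded)
propensity = model.predict_proba(X)[:, 1]            # estimated response propensity

# Nonresponse-adjusted weights are carried only by responding practices.
adjusted_weight = np.where(responded == 1, base_weight / propensity, 0.0)
print(round(adjusted_weight[responded == 1].mean(), 2))
```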

Although the use of nonresponse weights will reduce nonresponse bias, it will not be possible to remove it entirely. As part of a nonresponse bias analysis, we will compare responding and nonresponding practices on information available from the sampling frame. We will also compare frame values with weighted values from responding practices, using weights both adjusted and unadjusted for nonresponse. We will do the same for responding and nonresponding patients to the degree possible, limited by the lack of available data on nonresponding patients. The comparison between sample values using adjusted and unadjusted weights will allow us to (1) see the potential bias when nonrespondents are removed and no nonresponse adjustment is made and (2) assess the potential of the nonresponse adjustment to remove, or to introduce, bias. For practices, these comparisons will include sociodemographic characteristics of the practice locations as well as other practice characteristics (for example, practice size and patient volume). Although using these variables in the nonresponse weight adjustment models will alleviate nonresponse bias, the risk of bias remains elevated if response rates differ across the subpopulations defined by the levels of these variables.

4. Tests of Procedures or Methods to be Undertaken

Mathematica conducted a small pretest to assess the clarity of questions, identify possible modifications to question content and sequence, and estimate respondent burden for the medical practice and patient survey instruments. A convenience sample of patients from a single medical practice was used for the pretest, which mirrored the data collection strategy planned for the main survey to the extent possible. Nine patients completed the self-administered questionnaire and were then debriefed for about 15 to 20 minutes about the questions. The average completion time in the pretest was 13.5 minutes, and no new questions have been added since then. After the pretest, the wording of a number of questions was revised modestly to address inconsistencies in interpretation across respondents.

5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

The following people have contributed to the study design and to the design of the survey instruments, discussion guides, and focus group protocols:

  • Ms. Betsy Ranslow, Project Officer, Office of the National Coordinator for Health Information Technology, (202) 205-4387

  • Ms. Martha Kovac, a Mathematica associate director of survey research and study project director, (609) 275-2331

  • Ms. Stacy Dale, a Mathematica senior health researcher and study principal investigator, (609) 936-2726

  • Dr. Karen Bogen, a Mathematica senior survey researcher and study survey director, (617) 674-8355

  • Dr. Lorenzo Moreno, a Mathematica senior health researcher and study quality assurance reviewer, (609) 936-2776

  • Dr. Eric Grau, a Mathematica senior statistician, (609) 945-3330

  • Ms. Barbara Lepidus Carlson, a Mathematica senior statistician, (617) 674-8372

  • Dr. Ann Bagchi, a Mathematica senior health researcher and study task leader for focus groups, (609) 716-4554

  • Ms. Premini Sabaratnam, a Mathematica survey specialist and study project manager, (617) 674-8359

  • Dr. Jelena Zurovac, a Mathematica health researcher, (609) 275-2383

REFERENCES

Baron, R., E.L. Fabens, M. Schiffman, and E. Wolf. “Electronic Health Records Just Around the Corner? Or Over the Cliff?” Annals of Internal Medicine, vol. 143, 2005, pp. 222-226.

Blumenthal, David. “Stimulating the Adoption of Health Information Technology.” New England Journal of Medicine, vol. 360, no. 15, March 25, 2009, pp. 1477-1479.

Blumenthal, D., C. DesRoches, K. Donelan, et al. “Health Information Technology in the United States: The Information Base for Progress.” Robert Wood Johnson Foundation, 2006.

Chromy, J.R. “Sequential Sample Selection Methods.” Proceedings of the American Statistical Association, Survey Research Methods Section. 1979, pp. 401-406.

Gans, David, John Kralewski, Terry Hammons, and Bryan Dowd. “Medical Groups’ Adoption of Electronic Health Records and Information Systems.” Health Affairs, vol. 24, no. 5, 2005, pp. 1323-1333.

Hing, E., and C.W. Burt. “Characteristics of Office-Based Physicians and Their Practices: United States, 2005-2006.” Vital and Health Statistics, series 13, no. 166, April 2008.

Irani, Jihad S., Jennifer L. Middleton, Ruta Marfatia, Evelyn T. Omana, and Frank D’Amico. “The Use of Electronic Health Records in the Exam Room and Patient Satisfaction: A Systematic Review.” Journal of the American Board of Family Medicine, vol. 22, 2009, pp. 553–562.

U.S. Congress. American Recovery and Reinvestment Act of 2009. Public Law 111-5, February 17, 2009.

1 We considered using the MGMA database, but it contained data only on practices with 3 or more physicians, and it is not updated regularly. In contrast, SK&A contains data on solo and dual practices and is updated every six months.
