
SUPPORTING STATEMENT FOR THE


SURVEY OF STD PROVIDER POLICIES AND PRACTICES

(OMB No. 0920-XXXX)



PART B: SPECIFIC INSTRUCTIONS – COLLECTIONS EMPLOYING

STATISTICAL METHODS

NEW SUBMISSION



February 3, 2017



Submitted by:

Division of Sexually Transmitted Disease Prevention

Centers for Disease Control and Prevention

Department of Health and Human Services




Refer questions to:


Jami Leichliter, PhD

Team Lead, Policy Science

Division of STD Prevention

Centers for Disease Control and Prevention


1600 Clifton Rd NE, MS E-02


Atlanta, Georgia 30329

(404) 639-1821

FAX (404) 639-8622

Email: [email protected]









List of Attachments

  1. Authorizing Legislation

  2. 60-Day FRN

  3. Mailed Survey

  4. Web Screen Shots

  5. Mail Survey Invite

  6. Mail Survey Reminder 1

  7. Mail Survey Reminder 2

  8. Web Survey Invite

  9. Web Survey Reminder

  10. Thank You Letter $40 Token

  11. IRB Approval

B.1 Respondent Universe and Sampling Methods

The target population for this survey will be U.S. physicians working in public and private settings across several specialties. Specifically, a sample of U.S. physicians will be selected from the American Medical Association (AMA) Master File, restricted to the target specialties and stratified along two dimensions. First, we will classify physicians by specialty, including internal medicine, general or family practice, obstetrics and gynecology, emergency medicine, and pediatrics. Second, we will classify physicians by whether they have a public or private practice, using primary practice location or a similar variable available in the AMA Master File.

Exhibit 1 illustrates the sampling frame we will create based on physician specialty and type of practice. The exhibit also shows population totals for the five physician specialties targeted for the study and a tentative proportional allocation to primary strata. It is worth noting that the population size is much larger for the first two primary strata (specialties) and substantially smaller for two of the specialties (Ob/Gyn and emergency medicine). As argued below, an equal allocation to strata would be inefficient for overall estimation. (This is also true for the public and private sub-strata of very unequal sizes.)

Exhibit 1. Sampling Frame for STD Provider Survey


Physician Specialty | Population Total | Proportional Allocation
Internal medicine | 178,497 | 1,857
General/family | 127,662 | 1,328
Obstetrics and gynecology | 48,761 | 507
Emergency | 47,065 | 490
Pediatrics | 78,662 | 818
Total | 480,647 | 5,000



The Division of STD Prevention (DSTDP) at CDC has two analytical objectives: 1) computing national estimates with maximum precision, and 2) computing subgroup estimates that meet required precision levels. To meet the first objective, the allocation will remain close to proportional across strata defined by specialty and across substrata defined by setting (public/private), which maximizes the precision of overall estimates. To meet the second objective, we will implement a disproportional allocation that oversamples the smaller primary strata so that estimates for key subgroups defined by specialty and by public or private practice type achieve the required precision. Exhibit 1 presents a proportional allocation to primary strata. We can finalize the allocation only after we purchase an updated file of eligible physicians.
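The proportional allocation in Exhibit 1 follows directly from the frame counts. The sketch below reproduces it; it is an illustration only, since the final, partly disproportional allocation will be set after the updated AMA file is purchased.

```python
# Proportional allocation of the n = 5,000 selections across the five
# specialty strata, using the Exhibit 1 population totals.
population = {
    "Internal medicine": 178_497,
    "General/family": 127_662,
    "Obstetrics and gynecology": 48_761,
    "Emergency": 47_065,
    "Pediatrics": 78_662,
}
n_total = 5_000
N = sum(population.values())  # 480,647 eligible physicians

allocation = {specialty: round(n_total * count / N)
              for specialty, count in population.items()}
print(allocation)  # {'Internal medicine': 1857, 'General/family': 1328, ..., 'Pediatrics': 818}
```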

We will select a stratified random sample of 5,000 physicians in expectation of a 70% response rate, for a target of 3,500 completed surveys. We acknowledge that this response rate does not meet the OMB standard of an 80% response rate; however, that target is particularly difficult to achieve with physicians. Historically, the average response rate for a physician survey has been only 54% to 58%.1,2,3 In a recently published review of health care provider surveys in the United States, the most common response rates fell between 60% and 79%.4 A response rate of 80% or more occurred in only 15% of the reviewed physician surveys, and the review noted a modest downward trend in response rates from 1998 to 2008. Factors that tend to improve response include government sponsorship,4,5 the use of telephone, paper/mail, and mixed-mode formats,4,6 the use of mail for initial contact,4 personalized communications,5,6 and the use of an incentive.4,5,6 Our methods to maximize response rates, outlined in Section B.3, include multiple mailed surveys and follow-ups asking non-responders to complete a web survey; we will not use email. The use of these methods is supported by the literature.

To minimize the variance-inflating impact of unequal weighting, our proposed design will include only modest oversampling. The design will generate design effects near 1.0 for most estimates, where the design effect is defined as the variance under the actual design divided by the variance under a simple random sample of the same size. Therefore, key subgroup estimates defined by specialty and practice type will have standard errors of 2.5% or less; in other words, 95% confidence intervals will be within plus or minus 5% for all key subgroup estimates. Most meaningful subgroup comparisons will be possible with statistical power of 80% or more.
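As a check on these precision and power claims, the sketch below computes the standard error of a proportion under a given design effect and the approximate power of a two-sided comparison between two subgroups of roughly 400 completes each. The worst-case proportion (p = 0.5), the design effect of 1.0, and the 10-percentage-point difference are illustrative assumptions, not survey targets.

```python
from math import sqrt
from scipy.stats import norm

def se_proportion(p, n, deff=1.0):
    """Standard error of an estimated proportion, inflated by the design effect."""
    return sqrt(deff * p * (1 - p) / n)

def power_two_proportions(p1, p2, n1, n2, alpha=0.05):
    """Approximate power of a two-sided z-test comparing two independent proportions."""
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return norm.cdf(abs(p1 - p2) / se - norm.ppf(1 - alpha / 2))

# Worst-case precision for a subgroup of ~400 completes with deff ~ 1.0:
se = se_proportion(0.5, 400)
print(se, 1.96 * se)                                # ~0.025 SE, ~0.049 CI half-width
# Power to detect a 10-point difference (45% vs. 55%) between two such subgroups:
print(power_two_proportions(0.45, 0.55, 400, 400))  # ~0.81
```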

B.2 Procedures for the Collection of Information

B.2.A. Sample Sizes and Expected Precision

The target sample size of 3,500 completed surveys and the sample allocation to strata were developed to ensure the required precision levels for key subgroup estimates defined by specialty, on the one hand, and by provider type, on the other. Specifically, the expected sample sizes of 400 or more completed surveys in each of the five specialties will yield sampling errors within 2.5%, so 95% confidence intervals will be within +/- 5%. Similar arguments apply to the two provider types, public and private practices, with even better precision for these two domains.

B.2.B Sample Selection

The sample will be selected with stratified random sampling from the frame of eligible physicians in the AMA Master File. The sample sizes will be allocated to strata using the approaches discussed in Section B.1.
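A minimal sketch of the stratified selection step, assuming a frame table with hypothetical column names ("specialty" and "setting") and a per-stratum allocation in the spirit of Exhibit 1:

```python
import pandas as pd

# frame: one row per eligible physician from the AMA Master File.
# allocation maps each (specialty, setting) stratum to its sample size n_h.
def select_stratified_sample(frame: pd.DataFrame, allocation: dict, seed: int = 2017) -> pd.DataFrame:
    samples = []
    for (specialty, setting), n_h in allocation.items():
        stratum = frame[(frame["specialty"] == specialty) & (frame["setting"] == setting)]
        # Simple random sample without replacement within the stratum.
        samples.append(stratum.sample(n=n_h, random_state=seed))
    return pd.concat(samples)
```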

B.2.C Respondent Selection

We will obtain the name and address of the physician from the AMA Master File.

B.2.D Estimation and Weighting

We will compute survey weights for each participating physician to generate unbiased population estimates. The weighting process will start with the computation of sampling weights, which reflect the probabilities of selection that vary across design strata. The sampling weights will then be adjusted for nonresponse; we anticipate using simple weighting adjustment cells based on the design strata. Weight adjustments such as post-stratification will ensure that survey weights sum to known population totals for each post-stratum cell.
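A minimal sketch of these weighting steps, assuming hypothetical column names ("stratum", "responded"); here the nonresponse adjustment cells and post-strata both coincide with the design strata.

```python
import pandas as pd

def compute_weights(sample: pd.DataFrame, frame_totals: dict) -> pd.DataFrame:
    """Sampling weights (N_h / n_h), nonresponse adjustment within design strata,
    and post-stratification to known frame totals."""
    df = sample.copy()

    # 1. Sampling weight: inverse of the stratum selection probability.
    n_h = df.groupby("stratum")["stratum"].transform("size")
    df["wt"] = df["stratum"].map(frame_totals) / n_h

    # 2. Nonresponse adjustment: respondents in a stratum absorb the weight
    #    of that stratum's nonrespondents.
    selected_wt = df.groupby("stratum")["wt"].sum()
    respondent_wt = df[df["responded"]].groupby("stratum")["wt"].sum()
    df = df[df["responded"]].copy()
    df["wt"] *= df["stratum"].map(selected_wt / respondent_wt)

    # 3. Post-stratification: scale weights to sum to the known population totals
    #    (a no-op here because the post-strata coincide with the design strata).
    ps_factor = pd.Series(frame_totals, dtype=float) / df.groupby("stratum")["wt"].sum()
    df["wt"] *= df["stratum"].map(ps_factor)
    return df
```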

We will prepare and deliver a final survey data file that includes the final adjusted weights as well as the design variables (strata). These variables will allow data users to compute accurate point estimates and variance estimates using standard survey analysis software (e.g., SAS survey procedures, Stata, or SPSS). Most of these packages use Taylor series linearization, or related methods, to compute variance estimates that account for complex sampling designs and unequal weights.
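For readers who want to see what those packages compute, the sketch below implements the standard Taylor-linearized variance of a weighted mean (a ratio estimator) for a stratified design with respondents treated as PSUs; the column names are assumptions.

```python
import numpy as np
import pandas as pd

def taylor_var_weighted_mean(df: pd.DataFrame, y="y", w="wt", stratum="stratum"):
    """Weighted mean and its linearized standard error under a stratified design."""
    W = df[w].sum()
    R = (df[w] * df[y]).sum() / W     # weighted mean (a proportion if y is 0/1)
    z = df[w] * (df[y] - R) / W       # linearized (score) values
    var = 0.0
    for _, g in df.assign(_z=z).groupby(stratum):
        n_h = len(g)
        if n_h > 1:
            var += n_h / (n_h - 1) * ((g["_z"] - g["_z"].mean()) ** 2).sum()
    return R, np.sqrt(var)            # estimate and its standard error
```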

B.2.E Data Collection Cycle

Clearance is being sought for a single cross-sectional survey that will be administered over a 4-month period.

B.2.F Data Quality Control Procedures

To ensure the integrity of the web program, every survey instrument undergoes a structured quality control (QC) process during beta testing. ICF first creates the instrument in Microsoft (MS) Word; this document contains all questionnaire content, interviewer notes, and programming logic. ICF programmers then transform the final approved questionnaire into web scripts. A separate individual from ICF's data processing team creates a custom "skip-check" algorithm, which checks the data against the conditions specified in the MS Word version of the questionnaire. If a discrepancy is found between the Word document and the web program, the ICF project manager and web programmer are notified, and the issue is not considered resolved until the Word questionnaire and the web program match, either by updating the document or by revising the web script. If there is any question as to which is correct, ICF will consult with DSTDP for clarification. Before fielding, the skip-check program is executed against randomly generated data and any reported errors are assessed. Once fielding begins, the skip-check program is executed nightly against actual response data so that errors are detected immediately.
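A minimal sketch of what such a skip-check might look like; the rule and question names are hypothetical and would in practice be derived from the programming logic documented in the Word questionnaire.

```python
# Hypothetical skip rules: (gate question, answers that route into the follow-up, follow-up question).
SKIP_RULES = [
    ("q10_sees_std_patients", {"Yes"}, "q11_tests_offered"),
]

def skip_check(record: dict, rules=SKIP_RULES) -> list:
    """Return a list of skip-pattern violations for one survey record."""
    errors = []
    for gate, routing_values, follow_up in rules:
        routed_in = record.get(gate) in routing_values
        answered = record.get(follow_up) not in (None, "")
        if routed_in and not answered:
            errors.append(f"{follow_up} is blank although {gate}={record.get(gate)!r} requires it")
        if not routed_in and answered:
            errors.append(f"{follow_up} is answered although the skip applies ({gate}={record.get(gate)!r})")
    return errors

# Example: a record that answered the follow-up despite skipping past it.
print(skip_check({"q10_sees_std_patients": "No", "q11_tests_offered": "Chlamydia"}))
```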

ICF’s steps, outlined below, are repeated until DSTDP approves the final web questionnaire.

  1. The director of survey programming reviews the survey logic to make sure it is consistent and easy to manage in the web software. Discrepancies and problems are resolved with project management staff.

  2. An independent programmer creates a check program to test survey data, making sure they conform to the skip patterns.

  3. The programming team generates random data for all possible combinations of responses; these data are then reviewed to ensure program accuracy.

  4. Project management staff review the survey visually to ensure all wording is correct and the screen layout is easy to read, and then proceed with scenario testing and mock interviews.

  5. DSTDP staff remotely review an electronic test version of the final survey.

As noted in the last step, after the questionnaire has been programmed, ICF will provide CDC with a standalone web version to test all screens, edits, skip patterns, and logic checks. DSTDP will be able to evaluate the questionnaire under conditions that approximate live data collection. Full fielding will not commence without official acceptance of the randomly generated test data and the test account.

During data processing, the ICF project management team will review open-ended and “other/specify” responses in the first few weeks of data collection, and then periodically throughout fielding, to identify potential coding or training issues. Prior to delivering the dataset, ICF will clean the data (correcting grammatical and typographical errors) and, when applicable, back-code open-ended responses. After converting and cleaning the data, ICF will produce frequency tabulations of every question and variable to detect missing data or errors in skip patterns, similar to the checks performed during questionnaire programming. ICF will also perform a variety of other checks using custom SAS programs. For each question, responses outside of the expected range are flagged. Checks are also performed across questions to evaluate consistency. In most cases, inconsistencies are the result of minor errors in the web program that affect how the data are stored in the data file; these can almost always be resolved by inspecting the individual record, and they are also fixed in the program so the error does not recur if the survey is fielded again in the future. A cleaned, unweighted data file, including variable and value labels, will be provided to CDC via a secure FTP site. All data files will be submitted with the corresponding format or layout files.
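A minimal sketch of the routine cleaning checks described above, written here in Python rather than SAS purely for illustration; the variable names and the valid range are hypothetical.

```python
import pandas as pd

def run_cleaning_checks(df: pd.DataFrame) -> pd.DataFrame:
    """Frequency tabulations, range checks, and cross-question consistency checks."""
    # Frequency tabulation of every variable (reveals unexpected codes and missingness).
    for col in df.columns:
        print(df[col].value_counts(dropna=False), "\n")

    flags = pd.DataFrame(index=df.index)
    # Range check: values outside the expected range are flagged.
    flags["q5_out_of_range"] = ~df["q5_patients_per_week"].between(0, 500)
    # Consistency check: a follow-up answered although the earlier answer skips past it.
    flags["q11_inconsistent"] = (df["q10_sees_std_patients"] == "No") & df["q11_tests_offered"].notna()
    return flags
```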

B.3 Methods to Maximize Response Rates and Deal with Nonresponse

Surveys of providers are known to be challenging for several reasons.1,2,3,4 First, physicians are a highly specialized group who are frequently solicited to participate in surveys. Second, their demanding work schedules negatively affect survey participation rates. Third, communication with physicians is typically filtered by receptionists, administrative assistants, or other “gatekeepers,” making it difficult to contact the physician directly.

To address these barriers, we will use several strategies. First, we will survey physicians using their preferred survey mode to increase the likelihood of participation. Several studies have shown that mail surveys appear to be physicians’ preferred survey mode9,5,6 and that a multi-mode design, consisting of an initial mailing of a self-administered questionnaire followed by a web survey offered to non-respondents, improves overall response rates.8,10 For these reasons, we propose a multimode design for the STD Provider Survey, starting with a mailed questionnaire (see Attachment 3-Mailed Survey) and invitation letter (see Attachment 5-Mail Survey Invite), as illustrated in our contact strategy in Exhibit 2. Providers who do not return a completed survey will be sent up to two additional survey packets (see Attachments 6-Mail Survey Reminder 1 and 7-Mail Survey Reminder 2). Remaining non-respondents will then be sent two follow-up mail reminders containing a URL and a Quick Response (QR) code (see Attachments 8-Web Survey Invite and 9-Web Survey Reminder) for those who wish to complete the web survey. All those who complete either the mail or web survey will be sent a thank you letter with a $40 token of appreciation (see Attachment 10-Thank You Letter $40 Token).

Web data collection is 100% electronic. ICF will design the website to facilitate the interview process for the respondent and reduce burden. These features include:

• Basing the visual layout of the questions on the heuristics people follow when interpreting visual cues;

• Making the survey easily navigable from page to page;

• Incorporating user assistance tools, such as help screens for certain items (e.g., the respondent could click a link to get a definition that would come up if needed);

• Inserting placeholders so that respondents can pause and leave the system and then re-enter (at the point of departure) without losing the responses previously entered; and

• Programming in consistency checks.

ICF tested the website by using several different devices (e.g., laptops, smartphones, and tablets) and operating platforms to ensure that the survey functions properly and is easily navigated in the many ways respondents will access the survey. ICF will compare the mail and web responses for systematic differences in response rate, responses, missing data, and respondent breakoff.
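A minimal sketch of one such mode comparison, testing whether item nonresponse differs between mail and web respondents; the "mode" column name and its values are assumptions.

```python
import pandas as pd
from scipy.stats import chi2_contingency

def missingness_by_mode(df: pd.DataFrame, item: str) -> float:
    """Chi-square test of whether missingness on `item` differs by survey mode."""
    table = pd.crosstab(df["mode"], df[item].isna())   # mode values assumed "mail"/"web"
    chi2, p_value, _, _ = chi2_contingency(table)
    return p_value
```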

Exhibit 2. Data Collection Protocol for the STD Provider Survey

Contact | Description | Recipients | Timing
Mail Survey 1 (Attachment 5) | Personalized cover letter with support from professional organizations, and paper questionnaire | All sampled physicians | Day 1
Mail Survey 2 (Attachment 6) | Cover letter and paper questionnaire | All non-responding physicians | Day 14
Mail Survey 3 (Attachment 7) | Cover letter and paper questionnaire | All non-responding physicians | Day 28
Invitation for web survey 1 (Attachment 8) | 1-page invitation to complete a web survey; letter will contain the URL and a QR code | All non-responding physicians | Day 42
Invitation for web survey 2 (Attachment 9) | 1-page reminder to complete the web survey; letter will contain the URL and a QR code | All non-responding physicians | Day 56
Thank you letter (Attachment 10) | $40 token of appreciation after completion of mail or web survey | All physicians completing mail or web survey | Upon completion


A second strategy we will use is to have the survey supported by professional organizations, as this positively impacts response rates.7,8 Specifically, to enhance physician buy-in and increase the proportion of physicians who complete the survey, each mail survey will include a cover letter with support from four professional organizations: the American College of Physicians (ACP), the American Academy of Family Physicians (AAFP), the American Congress of Obstetricians and Gynecologists (ACOG), and the American College of Emergency Physicians (ACEP).

Our third strategy will be to offer a $40 post-completion token of appreciation to all physicians who complete the mail or web survey (see Attachment 10-Thank you Letter $40 Token).

A fourth strategy to increase response rates and minimize nonresponse is to implement an Interactive Voice Response (IVR) help line and provide email helpdesk support. The IVR system will include options for talking to a project manager, learning about participant privacy, and other topics. The help line will be staffed from 9:00 a.m. to 9:00 p.m., 7 days a week. Email requests will be processed within 24 business hours.

B.3.A Expected Response Rate

As noted, we expect to achieve a response rate of 70%. Again, we acknowledge that this response rate does not meet the OMB standard of 80%. We have outlined our methods to maximize response rates above. Below, we describe our plan to analyze the survey data for non-response and representativeness, and to develop weighting adjustments to increase the representativeness of the sample.

B.3.B Assessing Nonresponse and Analysis of Non-Response Bias

Survey nonresponse bias occurs when respondents are substantively different from nonrespondents. Response rates are often used as a measure of data quality because they are thought to reflect the degree of nonresponse bias in the data, but this connection is tenuous.7,8 Instead, response rates are a measure of the risk of nonresponse bias: high response rates reflect a low risk of nonresponse bias, while low response rates increase that risk. In the absence of a high response rate, a nonresponse analysis helps to support confidence in the accuracy of the survey data.

To mitigate the risk of non-response bias, we will develop weighting adjustments to increase the sample representativeness relative to the population. We will evaluate the representativeness by comparing the sample to benchmarks such as the AMA Master File.
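A minimal sketch of the representativeness check, comparing the unweighted respondent distribution across design strata with the population distribution from the AMA Master File; the "stratum" column name is an assumption.

```python
import pandas as pd

def representativeness(respondents: pd.DataFrame, frame_totals: dict) -> pd.DataFrame:
    """Compare respondent shares to frame shares by stratum; large gaps signal
    an elevated risk of nonresponse bias for that stratum."""
    resp_share = respondents["stratum"].value_counts(normalize=True)
    frame = pd.Series(frame_totals, dtype=float)
    frame_share = frame / frame.sum()
    out = pd.DataFrame({"respondent_share": resp_share, "frame_share": frame_share})
    out["difference"] = out["respondent_share"] - out["frame_share"]
    return out.sort_values("difference")
```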

B.4 Test of Procedures or Methods to be Undertaken

To test the clarity of wording, understanding, ease of recall, and perceived burden of the survey, DSTDP contracted with ICF to conduct a cognitive interviewing study of some of the new survey items. The aim of a cognitive interviewing study is to investigate how well survey questions perform when asked of respondents, that is, whether respondents understand the questions as intended and whether they can provide accurate answers based on that intent. As a qualitative method, the primary benefit of cognitive interviewing is that it provides rich, contextual insight into the ways in which respondents 1) interpret a question, 2) consider and weigh relevant aspects of their lives, and, finally, 3) formulate a response based on that consideration. As such, cognitive interviewing provides an in-depth understanding of the ways in which a question operates, the kind of phenomena it captures, and how it ultimately serves a survey’s scientific goals. Findings of a cognitive interviewing project typically lead to recommendations for improving a survey question. Alternatively, results can be used in post-survey analysis to assist in data interpretation.

ICF’s subcontractor, Insight Policy Research, conducted in-depth, semi-structured interviews with a sample of eight respondents. The interview structure consisted of respondents first answering each question that we tested, and then answering a series of follow-up probes that revealed what respondents were thinking and the rationale for their response. Through this semi-structured design, various types of question-response problems, such as interpretive errors or recall accuracy, can be uncovered—problems that often go unnoticed in traditional survey interviews. By asking respondents to provide both textual verification and the process by which they formulate their answer, elusive errors can be revealed.

Because cognitive interviewing is a qualitative method, sample selection for the project was purposive. Respondents were not selected through a random process but rather for specific characteristics such as gender or practice specialty. Analysis of cognitive interviews does not produce findings that are generalizable in a statistical sense; rather, it provides an explicit understanding of response processes, including patterns of interpretation.

As is normally the case for analyses of qualitative data, the general process for analyzing cognitive interview data involves synthesis and reduction—beginning with a large amount of textual data and ending with conclusions that are meaningful and serve the ultimate purpose of the study. For analysis of cognitive interviews, reduction and synthesis can be conceptualized within five incremental steps—conducting interviews, producing summaries, comparing across respondents, comparing across subgroups of respondents, and reaching conclusions. With each incremental step, a data reduction product is created.9 The steps consist of: 1) Conducting interviews to produce interview text; 2) Synthesizing interview text to produce detailed summaries; 3) Comparing summaries across respondents; 4) Comparing identified themes across subgroups; and 5) Making conclusions. Although these steps are described separately and in a linear fashion, in practice they are iterative; varying levels of analysis typically occur throughout the qualitative research process.


As each step is completed, data are reduced such that meaningful content is systematically extracted to produce a summary that describes each question’s performance. In describing a question’s performance, it is possible to understand the ways in which a question was interpreted by various groups of respondents, the processes that respondents utilized to formulate a response, as well as any difficulties that respondents might have experienced when attempting to answer the question.

After cognitive testing was completed, we conducted a pilot test with the revised survey instruments. A total of 8 physicians participated in the pilot test between July 28, 2016 and September 13, 2016. Of these, 3 completed the web survey and 5 completed the paper-and-pencil mailed survey.

B.5 Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

The following individuals have reviewed technical and statistical aspects of procedures that will be used to pilot and implement the Survey of STD Provider Policies and Practices:

Melissa Cidade, PhD

Survey Methodologist

ICF


Ronaldo Iachan, PhD

Senior Sampling Statistician

ICF


Shelley N. Osborn, PhD

Senior Project Manager

ICF


ICF, headquartered in Fairfax, Virginia, will collect all survey data. Shelley N. Osborn, PhD and Ronaldo Iachan, PhD will be the primary parties to review and approve data analysis of the pilot data.


Scientists from CDC/DSTDP will lead the analysis of data from the survey. The lead scientists from CDC are:


Jami Leichliter, PhD

Team Lead (Lead Health Scientist)

CDC

1600 Clifton Rd NE, MS E-02

Atlanta, GA 30329

(404) 639-1821


Kendra Cuffe, MPH

Health Scientist

CDC

1600 Clifton Rd NE, MS E-02

Atlanta, GA 30329

(404) 639-1847



1 Martin BC. Don’t Survey Physicians! Chicago, IL: Center for Health Services Research and Development, American Medical Association; 1974.

2 Asch DA, Jedrziewski MK, Christakis NA. Response rates to mail surveys published in medical journals. J Clin Epidemiol. 1997;50(10):1129-1136.

3 Cook JV, Dickinson HO, Eccles MP. Response rates in postal surveys of healthcare professionals between 1996 and 2005: an observational study. BMC Health Serv Res. 2009;9:160.

4 McLeod CC, Klabunde CN, Willis GB, Stark D. Health care provider surveys in the United States, 2000-2010: a review.

5 VanGeest JB, Johnson TP, Welch VL. Methodologies for improving response rates in surveys of physicians: a systematic review. Eval Health Prof. 2007;30(4):303-321.

6 Martins Y, Lederman RI, Lowenstein CL, Joffe S, Neville BA, Hastings BT, Abel GA. Increasing response rates from physicians in oncology research: a structured literature review and data from a recent physician survey. Br J Cancer. 2012;106(6):1021-1026.

1 VanGeest JB, Johnson TP, Welch VL. Methodologies for improving response rates in surveys of physicians: a systematic review. Eval Health Prof. 2007;30(4):303-321.

2 Pit SW, Vo T, Pyakurel S. The effectiveness of recruitment strategies on general practitioner's survey response rates – a systematic review. BMC Med Res Methodol. 2014;14:76.

3 Burns KE, Duffett M, Kho ME, et al. A guide for the design and conduct of self-administered surveys of clinicians. CMAJ. 2008;179(3):245-252.

4 Beebe TJ, Locke GR, 3rd, Barnes SA, Davern ME, Anderson KJ. Mixing web and mail methods in a survey of physicians. Health Serv Res. 2007;42(3 Pt 1):1219-1234.

5 Raziano DB, Jayadevappa R, Valenzula D, Weiner M, Lavizzo-Mourey R. E-mail versus conventional postal mail survey of geriatric chiefs. Gerontologist. 2001;41(6):799-804.

6 Martins Y, Lederman RI, Lowenstein CL, et al. Increasing response rates from physicians in oncology research: a structured literature review and data from a recent physician survey. Br J Cancer. 2012;106(6):1021-1026.

7 Curtin R, Presser S, Singer E. The effects of response rate changes on the index of consumer sentiment. Public Opin Q. 2000;64(4):413-428.

8 Groves RM. Nonresponse rates and nonresponse bias in household surveys. Public Opin Q. 2006;70(5):646-675.

9 Miller K, et al. Cognitive Interviewing Methodology. John Wiley & Sons; 2014.
