
HIV Knowledge, Beliefs, Attitudes, and Practices of Providers in the Southeast

(K-BAP Study)







Supporting Statement B





OMB# 0920-New







June 13, 2016





CONTACT:

Kirk D. Henny, PhD
Behavioral Scientist, Epidemiology Research Team

Epidemiology Branch

Division of HIV/AIDS Prevention

Centers for Disease Control and Prevention
1600 Clifton Road, NE, Mailstop E-45
Atlanta, GA 30329
Phone: 404-639-5383
Fax: 404-639-1950
E-mail: [email protected]



Table of Contents



  1. Respondent Universe and Sampling Methods

  2. Procedures for the Collection of Information

  3. Methods to Maximize Response Rates and Deal with Nonresponse

  4. Tests of Procedures or Methods to be Undertaken

  5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data




Exhibit B1.1 Total Number of Actively Practicing Physicians, Nurse Practitioners (NPs), and Physician Assistants (PAs) Practicing within each MSA

Exhibit B1.2 Total number of Physicians, Nurse Practitioners (NPs), and Physician Assistants (PAs) to be Sampled within each MSA.

1. Respondent Universe and Sampling Methods

The sample of medical providers for the proposed study will be drawn from the Healthcare Data Solutions (HDS) ProviderPRO and MidLevelPRO databases. Drawing on public state licensing data and other public resources, these commercial databases provide a near-census of all actively practicing physicians, nurse practitioners, and physician assistants in the US, including data on provider specialty, practice address, practice phone number, and provider email address. Licensed but inactive providers are regularly removed from the databases, and approximately 95% of current entries are up to date. As a sampling frame, HDS will provide Altarum with a census/universe file of demographic information for the eligible providers in the six MSAs (stripped of names and contact details). Using this file as the sampling frame, Altarum will conduct stratified sampling as described below, and then acquire the full data for the sampled providers, approximately 10% of the eligible provider population.

The sample will be drawn with random stratification by provider type (physician, nurse practitioner, and physician assistant) and MSA (Baton Rouge, LA; New Orleans-Metairie-Kenner, LA; Baltimore-Columbia-Towson, MD; Atlanta-Sandy Springs-Roswell, GA; Miami-Fort Lauderdale-West Palm Beach, FL; and Washington, DC-VA-MD-WV). The universe of eligible providers from the HDS databases is shown in Exhibit B1.1. This includes all licensed and actively practicing providers in each MSA. Altogether, this totals 43,407 active providers.

Exhibit B1.1 Total Number of Actively Practicing Physicians, Nurse Practitioners (NPs), and Physician Assistants (PAs) Practicing within each MSA


              Atlanta   Baltimore   Baton Rouge   Miami   New Orleans   Washington DC    TOTAL
Physicians       5011        3864           628    6255          1608            8599    25965
NPs              3167        1715           410    2404           859            3818    12373
PAs              1219         828           149     894           135            1844     5069
Total            9397        6407          1187    9553          2602           14261    43407



Based on our results from the HIV Medical Monitoring Project (MMP), a similar provider survey conducted by Altarum, we expect the response rate for this survey to be approximately 42%. The MMP survey achieved a 64% response rate using similar questions, the same token of appreciation, and similar methods, but with a sample of HIV specialists. Because our sample will be composed of primary care providers rather than HIV specialists, we assume that providers will have an attenuated level of interest in the survey, and we have therefore revised our estimate from 64% down to 42%, a relative decrease of one third (33%). The fielding methodology and token of appreciation structure used to achieve this response rate in both MMP and the present study are described in section A3.

To achieve our goals for statistical power as described in section B1.2 below, and assuming a response rate of 42%, we estimate a need to sample 723 providers within each MSA and 241 within each MSA-by-provider-type combination. The sample will be divided evenly between strata combinations (MSA and provider type), with the exception of physician assistants in Baton Rouge and New Orleans, where the total population of PAs is lower than 241. To preserve a count of 723 for each of these MSAs, the remaining share of the sample will be divided evenly between physicians and nurse practitioners. A four-stage weighting process, described in section B3, will be used to ensure that this sampling method provides accurate population estimates for each stratum while adjusting sample proportions to maintain appropriate statistical power. Exact cell counts of sampled providers are shown in Exhibit B1.2.





Exhibit B1.2. Total number of Physicians, Nurse Practitioners (NPs), and Physician Assistants (PAs) to be sampled within each MSA.


              Atlanta   Baltimore   Baton Rouge   Miami   New Orleans   Washington DC
Physicians        241         241           287     241           294             241
NPs               241         241           287     241           294             241
PAs               241         241           149     241           135             241
Total             723         723           723     723           723             723
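The allocation rule described above (241 per cell, capped at the PA population in Baton Rouge and New Orleans, with the shortfall split evenly between physicians and NPs) can be sketched as follows. The PA population counts come from Exhibit B1.1, and the output reproduces the cells of Exhibit B1.2; this is an illustrative sketch, not the study's production sampling code.

```python
# PA populations by MSA, from Exhibit B1.1. All physician and NP cells
# exceed the 241-per-cell target, so only PAs can fall short.
PA_POPULATION = {"Atlanta": 1219, "Baltimore": 828, "Baton Rouge": 149,
                 "Miami": 894, "New Orleans": 135, "Washington DC": 1844}

TARGET_PER_CELL = 241   # 723 per MSA divided across 3 provider types
TARGET_PER_MSA = 723

def allocate(msa):
    """Return the sample counts (physicians, NPs, PAs) for one MSA."""
    pas = min(TARGET_PER_CELL, PA_POPULATION[msa])
    # Split any PA shortfall evenly between physicians and NPs.
    remainder = TARGET_PER_MSA - pas
    physicians = remainder // 2
    nps = remainder - physicians
    return physicians, nps, pas
```

For Baton Rouge this yields (287, 287, 149) and for New Orleans (294, 294, 135), matching Exhibit B1.2; every MSA sums to 723, for a total sample of 4,338.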



Our six-month follow-up survey will include only core questions from the baseline survey and will not be stratified by MSA or provider type. Instead, we will compare providers who completed continuing education (CE) courses with those who did not, and assess whether completing CEs affected provider knowledge, beliefs, attitudes, and practices surrounding HIV care. The sample will be limited to providers who completed the baseline survey, an estimated 1827 cases. We anticipate that approximately half of our original respondents will complete the six-month follow-up survey. Of those, we expect approximately half will have completed CE courses in HIV and half will not, giving a follow-up survey sample of 914 cases evenly split between the two outcomes (457 completing CE courses and 457 not completing them).



2. Procedures for the Collection of Information

To ensure we have enough survey responses to make statistical comparisons of response data, but not so many as to place undue burden on the study population, we have calculated our sample sizes based on several criteria and assumptions:

  1. For all statistical hypotheses tests performed, we would like to have 95% confidence that we will not reject the null hypothesis when it is true. We have chosen a two-tailed test so that we can compare proportions no matter the direction of change (α = 0.05).

  2. We would like to have the statistical power to reject the null hypothesis when it is false 80% of the time (1 − β = 0.80, i.e., β = 0.20).

  3. At its most conservative, the chance that a particular group of survey respondents will choose one survey response over another for a particular question is 50/50. Since no survey like this has been done before, we adopt this conservative assumption (p1 = 0.50).

  4. For comparing proportions of survey item responses between any two provider types, we would like to be able to detect significant differences of at least 8% (p2 = 0.58); between any two MSAs, we would like to be able to detect significant differences of at least 12% (p2 = 0.62).

Our two-tailed sample size equation is the following:

n = [Z(1−α/2)·√(2·p̄·q̄) + Z(1−β)·√(p1·q1 + p2·q2)]² / (p1 − p2)²

Where

α = 0.05

β = 0.2

Z(1−α/2) = 1.96

Z(1−β) = 0.84

p1 = 0.50

q1 = 1 − p1 = 0.50

p2 = 0.58 between provider types, 0.62 between MSAs

q2 = 1 − p2 = 0.42 between provider types, 0.38 between MSAs

p̄ = (p1 + p2)/2 = 0.54 between provider types, 0.56 between MSAs

q̄ = 1 − p̄ = 0.46 between provider types, 0.44 between MSAs

Thus, to compare the differences in proportions between two mutually exclusive provider types with these parameters, we will need N = 609 responses for each provider type, or 1827 responses in total. To compare the differences in proportions between two MSAs, we will need at least N = 268 responses from each of the six MSAs.
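As a sketch, the calculation above can be reproduced in Python. Using the slightly more precise 80th-percentile quantile Z(1−β) = 0.8416 (rather than the rounded 0.84) yields exactly the reported N = 609 and N = 268:

```python
import math

def sample_size(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Per-group n for a two-tailed comparison of two proportions.

    z_alpha is the 97.5th percentile of the standard normal (alpha = 0.05,
    two-tailed); z_beta the 80th percentile (power = 0.80).
    """
    q1, q2 = 1 - p1, 1 - p2
    p_bar = (p1 + p2) / 2
    q_bar = 1 - p_bar
    numerator = (z_alpha * math.sqrt(2 * p_bar * q_bar)
                 + z_beta * math.sqrt(p1 * q1 + p2 * q2)) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

print(sample_size(0.50, 0.58))  # provider types: 609 per group
print(sample_size(0.50, 0.62))  # MSAs: 268 per group
```

With the rounded z-value of 0.84, the first result comes out to 608; the reported 609 corresponds to the unrounded quantile.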


As described above in section B.1, our overall estimated response rate is 42%, with no differing estimated response rates by strata. Dividing the desired number of completes per cell by the expected response rate, the total sample needed to achieve 1827 responses is 4,338.


With this number of completes, we will be able to detect at least an 8% difference between provider types and a 12% difference between MSAs. The sample of 4,338 will be assembled into six replicates (one for each MSA), each representative of its MSA sample as a whole, so that we contact no more than the minimum necessary to achieve 1827 completed cases.


With an estimated 914 completions of the follow-up survey, and 457 within each group (completing CE courses or not), following the formula above, we would be able to detect differences of 9.2% between those who completed CE courses and those who did not with statistical power of 0.8.
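The follow-up detectable difference can be checked by inverting the same sample size formula: scanning for the smallest difference whose required per-group n fits within 457 respondents. This sketch reuses the two-proportion formula from this section with the unrounded quantile Z(1−β) = 0.8416 (an assumption on our part; the document rounds to 0.84):

```python
import math

def required_n(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Unrounded per-group n for a two-tailed comparison of two proportions."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return numerator / (p1 - p2) ** 2

# Smallest detectable difference from a 50% baseline with 457 per group,
# scanned in steps of 0.01 percentage points.
diff = next(k / 10000 for k in range(500, 2000)
            if required_n(0.50, 0.50 + k / 10000) <= 457)
print(round(100 * diff, 1))  # prints 9.2
```

The scan lands at roughly a 9.2 percentage-point difference, consistent with the figure reported above.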


This is a one-time study in which the survey is administered twice, with the six-month follow-up survey including only core questions from the baseline survey. The follow-up administration is the minimum needed to determine the impact of CE courses on improving HIV prevention and care. A pre-post survey of this design cannot be conducted less frequently.

3. Methods to Maximize Response Rates and Deal with Nonresponse

The proposed study was designed to maximize response rates, using a method developed for the CDC HIV MMP survey: multiple modes of survey administration in sequential order, including postal, email, and phone reminders to non-responders. This method has been shown to improve response rates in several prior surveys.29 Additionally, the inclusion of a $20 cash token of appreciation with the survey invitation has been shown to improve response rates in the MMP survey and in other surveys of physicians.30 Our survey includes no sensitive questions and no questions about individual patients. Question wording has been drawn from validated surveys where possible, such as MMP. Lastly, non-response will be adjusted for in the weighting process for completed surveys. The study team will conduct a four-stage weighting process:



1. Base weights. Base weights are the initial weights assigned to a given potential respondent in the sample. They are calculated as the inverse of the probability of selection for a given individual within the population, by stratum, and essentially represent the number of people in the population that a sampled person initially represents. Given a random draw of individuals, the weighted sample is representative of the population as a whole, with base weights summing to stratum and population totals.



2. Propensity-score-adjusted non-response weights. Although the base weight adjusts for varying probabilities of selection, all studies experience differential non-response across strata. To minimize potential bias in results, this differential response requires a post-field non-response adjustment to bring the final collected sample back in line with the original population. Altarum will use the generally accepted statistical practice of logistic regression to estimate propensity scores for respondents, controlling for factors known for both respondents and non-respondents. The propensity score represents the probability that a given person responds to the survey, controlling for known socio-demographic characteristics. The inverse of each propensity score will be multiplied by the corresponding base weight to bring the respondents in line with the total population of providers in the six MSAs.



3. Post-stratification weights. The application of propensity-score-adjusted non-response weights can lead to a misalignment with population totals, with some potentially excessive weights that skew the respondent data. A post-stratification weight will adjust the weights to ensure they best reflect the populations they represent.


4. Final weights. The final weight for each respondent will be calculated as the product of the base weight, the inverse-propensity-score non-response adjustment, and the post-stratification weight. Final weights will be used in conjunction with survey-specific analytical techniques within SAS-callable SUDAAN, which account for the complex survey design.
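The arithmetic of the four stages can be illustrated with a toy example. The stratum size, sample size, and propensity score below are invented for illustration; in the actual study the propensity scores would come from a logistic regression on known provider characteristics, and the analysis would run in SAS-callable SUDAAN rather than Python.

```python
# Toy illustration of the four-stage weighting, with invented numbers.
# One stratum: population N = 1000 providers, n = 100 sampled, 42 responded.
N_POP, N_SAMPLED = 1000, 100

# Stage 1: base weight = inverse probability of selection.
base_weight = N_POP / N_SAMPLED  # each sampled provider represents 10 providers

# Stage 2: non-response adjustment = inverse of the estimated propensity to
# respond (a single invented score of 0.42 here; in practice each respondent
# gets an individually modeled score).
propensity = 0.42
respondents = 42
nr_weight = 1 / propensity

# Stage 3: post-stratification ratio adjustment so weights sum to N_POP.
weighted_total = respondents * base_weight * nr_weight
post_strat = N_POP / weighted_total  # 1.0 here; differs when propensities vary

# Stage 4: final weight = product of the three factors.
final_weight = base_weight * nr_weight * post_strat
print(round(respondents * final_weight))  # weighted respondents recover N_POP
```

After all four stages, the weighted respondent count reproduces the stratum population total, which is the property the post-stratification step enforces.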


The survey team will calculate and report an adjusted response rate for the proposed study, following the American Association for Public Opinion Research (AAPOR) standard definitions for calculating response rates. The proposed study uses a mixed-mode, escalating series of invitations as described in section A3. Initially, respondents are recruited to participate through a combination of email and phone calls. Once recruited, participants will receive a postal invitation, including a cover letter and token of appreciation, followed by an email invitation. After this, participants will receive a postcard reminder and a series of three more reminder emails. If respondents have still not replied, they will receive phone reminders encouraging them to complete the survey. This methodology was developed by Altarum to conduct the CDC's HIV Medical Monitoring Project.
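The escalating contact sequence above can be written down as a simple schedule. The step labels here are informal descriptions drawn from the paragraph, not the study's operational field system:

```python
# Informal sketch of the escalating contact sequence described in section A3.
CONTACT_SEQUENCE = [
    "postal invitation (cover letter + $20 token of appreciation)",
    "email invitation",
    "postcard reminder",
    "reminder email 1",
    "reminder email 2",
    "reminder email 3",
    "phone reminder",
]

def next_contact(attempts_made):
    """Return the next contact step for a non-responder, or None if exhausted."""
    if attempts_made < len(CONTACT_SEQUENCE):
        return CONTACT_SEQUENCE[attempts_made]
    return None
```

Each non-responder simply advances one step down the list; respondents drop out of the sequence as soon as they complete the survey.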


4. Tests of Procedures or Methods to be Undertaken

The survey will be administered in six cycles of replicates, one cycle for each MSA. We will start with Baton Rouge, the least populous MSA, and end with the most populous, Washington DC. Collected cases will be monitored on an ongoing basis to identify any issues with the instrument, field protocols, or sample. The methodology is similar to the one we used in the CDC HIV Medical Monitoring Project, where the token of appreciation was also $20 and non-responders were called and reminded to complete the survey. This approach and denomination were used effectively in MMP to attain a 64% provider response rate.

The questionnaire instrument was tested with members of the study team's organization and reviewed by outside HIV subject matter experts. Additionally, where possible, we have used previously validated questions from the MMP study in our questionnaire. This ensures that questions have been validated and that data will be comparable between this study of primary care providers and MMP's study of HIV specialists.

5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

Kirk D. Henny, PhD

Behavioral Scientist

Epidemiology Branch

Division of HIV/AIDS Prevention

National Center for HIV/AIDS, Viral Hepatitis, STD and TB Prevention

Centers for Disease Control and Prevention

1600 Clifton Road NE, MS E-45

Atlanta, GA 30333

Phone:  404-639-5383

Email:  [email protected]


Madeline Y. Sutton, MD, MPH, FACOG

CAPT, USPHS

Division of HIV/AIDS Prevention

National Center for HIV/AIDS, Viral Hepatitis, STD and TB Prevention

Centers for Disease Control and Prevention

1600 Clifton Road NE, MS E-45

Atlanta, GA 30333

Phone: 404-639-1814
Email: [email protected]


Chris Duke, PhD
Project Manager

Senior Analyst

Altarum Institute
3520 Green Court, Suite 300
Ann Arbor, MI 48105
Phone: 734-302-4642
Email: [email protected]



