Examining the Efficacy of the HIV Testing Social Marketing Campaign for African American Women

Revised Section B

OMB: 0920-0752





New Application for OMB Approval





Examining the Efficacy of the HIV Testing Social Marketing Campaign

for African American Women




Supporting Statement B











Technical Monitor: Jami Fraze, PhD


Address: 8 Corporate Square, 5th Floor, Rm 5051

Atlanta, GA 30329-3013


Telephone: 404-639-3371


Fax: 404-639-2007


E-mail: [email protected]


August 9, 2007





Table of Contents


B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS

B1. Respondent Universe and Sampling Methods

B2. Procedures for the Collection of Information

B3. Methods to Maximize Response Rates and Deal with Nonresponse

B4. Test of Procedures or Methods to be Undertaken

B5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

References



List of Exhibits


Number


  1. Minimum Detectable Treatment-Control Differences for Percentage Estimates of Outcomes

  2. Number of Participants

  3. Longitudinal Completion and Retention Rates for Prior Knowledge Networks Studies

  4. Campaign Stimuli

  5. Study Design and Experimental Conditions


B. Collections of Information Employing Statistical Methods


B1. Respondent Universe and Sampling Methods


This study will include a sample of English-speaking, single, African American females aged 18 to 34 with less than 4 years of college education, drawn from a combination of the probability-based national Knowledge Networks online panel and a nonprobability-based national e-mail list sample. In addition, to avoid contaminating the study’s primary treatment condition, sample members who live in or have recently moved from either of the two targeted campaign cities (Philadelphia and Cleveland) will be excluded from the study. HIV-positive individuals will be excluded because HIV testing is the primary study outcome. Women who report having had sex only with females during the past 12 months will be excluded because the campaign materials were developed for women who have sex with men. Finally, women who report no instances of sex without a condom in the past 12 months will be excluded because the campaign materials are aimed at women who are at risk for HIV infection through unprotected sex with men.

The current Knowledge Networks panel consists of approximately 40,000 adults actively participating in research. The Web-enabled panel tracks closely to the U.S. population in terms of age, race, Hispanic ethnicity, geographic region, employment status, and other demographic elements. The differences that do exist are small and are corrected statistically in survey data (i.e., by non-response adjustments). The Knowledge Networks panel is recruited through random-digit dialing (RDD) and comprises both Internet and non-Internet households. The Knowledge Networks Web-enabled panel is also the only available method for conducting Internet-based survey research with a nationally representative probability sample (Couper, 2000; Krotki and Dennis, 2001).


Often, a probability-based sample alone is not sufficient or large enough to reach a desired subpopulation of interest. The target audience for Take Charge. Take the Test. is particularly narrowly defined. Because the available sample pool from the Knowledge Networks panel is not expected to provide enough sample to sufficiently power the study, a nonprobability-based national e-mail list sample will be used to boost the study’s sample size. The advantage of this approach is that it retains a small representative subset of the U.S. population (the Knowledge Networks panel), allowing data from the Knowledge Networks panel to be analyzed separately or jointly with the e-mail list sample, provided both sources yield sufficient sample sizes. This dual frame sampling approach provides a basis upon which to validate the overall sample and to investigate and control for potential biases. Knowledge Networks uses two firms to obtain off-panel samples to augment its existing probability-based panel: Survey Sampling International (SSI; http://www.ssisamples.com) and Global Market Insite (GMI; http://www.globaltestmarket.com). Both are nonprobability-based opt-in panels open to anyone with access to the Internet. The e-mail list samples are not biased or exclusive except for the requirement that respondents have Internet access. Because the sample for this study is extremely targeted, our participants should represent the campaign’s target population closely, except for the possibility that the campaign’s target audience has more limited Internet access. When acquiring the e-mail sample lists, Knowledge Networks specifies the gender, age, race/ethnicity, education, and marital status of potential respondents.
From the overall Knowledge Networks panel and a national e-mail list sample, we will select the sample of 1,630 participants (114 Knowledge Networks participants and 1,516 e-mail list sample participants) for our study using methods that include appropriate sample design weights, based on specific parameters of sample composition. To further reduce the effects of non-sampling error, non-response and post-stratification weighting adjustments are applied to the sample. Participants will be randomly assigned to one of two experimental conditions: exposure to Take Charge. Take the Test. messages or no exposure. Once the data are collected in the field from the Knowledge Networks panel and from the e-mail list sample, all data will be combined using a rigorous set of weighting procedures that will allow us to account for any systematic differences in study estimates by sample source.
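As a rough illustration of post-stratification (this is a sketch, not Knowledge Networks’ proprietary weighting procedure), each demographic cell receives a weight equal to its known population share divided by its observed sample share, so weighted cell totals match the target distribution. The cell labels and shares below are hypothetical.

```python
# Illustrative post-stratification sketch. NOT Knowledge Networks' actual
# procedure; cells and population shares below are hypothetical.

def poststratification_weights(sample_counts, population_shares):
    """Return one weight per cell: population share / observed sample share."""
    n = sum(sample_counts.values())
    return {cell: population_shares[cell] / (count / n)
            for cell, count in sample_counts.items()}

weights = poststratification_weights(
    sample_counts={"18-24": 700, "25-34": 930},        # hypothetical cells
    population_shares={"18-24": 0.45, "25-34": 0.55},  # hypothetical targets
)
# After weighting, each cell's weighted count equals its target share of
# the total sample (e.g., 700 * weights["18-24"] == 0.45 * 1630).
```

In practice a raking (iterative proportional fitting) step over several demographic margins would be used, but the single-margin case above conveys the core adjustment.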

For pre-post comparisons between the exposure and control groups, we expect that outcomes will be highly correlated between baseline and follow-up time periods. We therefore assumed a pre-post correlation of 0.6. Because the comparisons between the exposure and control groups will be made primarily via multivariable models, we also expect that model covariates will explain some portion of the overall variation in outcomes; we assumed this portion to be about 20%. We conducted power analyses to determine the optimal sample size for detecting statistically significant differences between treatment and control groups. Assuming an initial sample of 1,630 and a 70% retention rate for each of the follow-up surveys, our study would contain approximately 800 participants with complete baseline and follow-up sets of interviews. These 800 participants will be divided randomly between exposure conditions, yielding 400 participants per experimental condition. With 400 participants per experimental condition, we will have 80% power to detect relatively small baseline-to-final-follow-up effect sizes, between 3.3 and 4.1 percentage points. Exhibit 1 shows the study power by illustrating the treatment-control differences in the proportion of participants who indicate a specific HIV-testing-related behavior.
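As a rough illustration, a textbook minimum-detectable-difference formula can be evaluated under the stated assumptions (400 participants per arm, 80% power, two-tailed alpha of 0.05, pre-post correlation of 0.6, covariate R-squared of 0.2). This simplified sketch will not reproduce the exhibit's figures exactly, since those rest on the study's full variance model, but it shows how the assumptions combine:

```python
import math

# Textbook MDD approximation for a treatment-control comparison of pre-post
# change in a proportion. Simplified sketch only; the study's estimates
# rest on a fuller variance model and will differ.

Z_ALPHA = 1.96    # two-tailed, alpha = 0.05
Z_BETA = 0.8416   # power = 0.80

def minimum_detectable_difference(p, n_per_arm, rho=0.6, r_squared=0.2):
    variance = 2 * p * (1 - p) / n_per_arm  # two independent arms
    variance *= 2 * (1 - rho)               # variance of the change score
    variance *= 1 - r_squared               # covariate adjustment
    return (Z_ALPHA + Z_BETA) * math.sqrt(variance)

for p in (0.5, 0.6, 0.7, 0.8):
    print(f"p = {p:.0%}: MDD ~ {100 * minimum_detectable_difference(p, 400):.1f} points")
```

Note that the detectable difference shrinks as the outcome proportion moves away from 50%, mirroring the pattern in the exhibit.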


Exhibit 1. Minimum Detectable Treatment–Control Differences for Percentage Estimates of Outcomes


                         Minimum Detectable Percentage Point Differences
Percentage Estimate    T1–T2 Difference    T2–T3 Difference    T1–T3 Difference
50%                    3.7                 3.8                 4.1
60%                    3.6                 3.7                 4.0
70%                    3.4                 3.5                 3.7
80%                    3.0                 3.0                 3.3

Note: Precision estimates assume 80% power, 0.05 significance level, and two-tailed hypothesis tests.


To achieve 0.80 power, we will need a total of 800 participants (400 in the exposure group and 400 in the control group). The numbers of participants in the respondent universe and in each sample are shown in Exhibit 2. The expected response rates at 6-week follow-up account for participants who may move as well as those who may refuse to participate in the follow-up data collection.


B2. Procedures for the Collection of Information


In partnership with Knowledge Networks, a sample of single, African American women aged 18 to 34 with less than 4 years of college education will be selected from a combination of the probability-based national Knowledge Networks online panel and a national e-mail list sample. Because the available sample pool from the Knowledge Networks panel is not expected to provide a large enough sample to sufficiently power the study, nonprobability-based national opt-in e-mail list samples obtained from Survey Sampling International (SSI) and/or Global Market Insite (GMI) will be used to boost the study’s sample size. The Knowledge Networks online panel and e-mail list samples allow us to use the same technology to collect primary data from the evaluation instrument and to implement the experimental conditions with multimedia components. All data collection materials are at 8th grade reading level or below due to sample eligibility criteria and CDC requirements. For both samples, study participants will be enrolled in the study via e-mail recruitment. Sample members will be contacted initially with an e-mail message inviting them to participate in the study. Potential participants will be given relevant contact information and instructions for logging into the Web-based survey. Nonrespondents will receive two e-mail reminders from Knowledge Networks requesting their participation in the survey. Members of the Knowledge Networks panel will also receive a reminder phone call. Copies of the e-mail notifications and reminder phone script are in Attachment 7. Recruited individuals will be given a user identification (ID) number that they will be required to enter to access the study’s website. The surveys will be self-administered and accessible any time of day for a designated period.


Exhibit 2. Numbers of Participants

                                                  Treatment    Control
Numbers and Participation Rates                   Condition    Condition    Total
Number of subjects to be contacted at baseline    2,463        2,463        4,926
Expected response rate at baseline                33%          33%          33%
Number of completed baseline surveys              815          815          1,630
Expected response rate at 2-week follow-up        70%          70%          70%
Number of completed 2-week follow-up surveys      570          570          1,140*
Expected response rate at 6-week follow-up        70%          70%          70%
Number of completed 6-week follow-up surveys      400          400          800*

* A subset of the original 1,630 baseline respondents.


Our sample design is based on conservative assumptions about survey response. Thus our estimates of longitudinal retention rates shown in Exhibit 2 should be viewed as “worst case” scenarios that, if they hold true, would still ensure sufficient sample sizes to reasonably detect small message effects. We estimate that at least 70% of participants who complete the baseline survey will complete the 2-week follow-up survey. We further estimate that at least 70% of participants who complete the 2-week follow-up will be retained in the 6-week follow-up.


Exhibit 3 shows longitudinal retention rates for prior Knowledge Networks (KN) studies of various lengths. While we are assuming follow-up survey response rates as low as 70%, the average follow-up completion rate across the prior KN studies listed in Exhibit 3 is over 90%, with baseline-to-follow-up retention averaging 81% across follow-ups ranging from 3 months to 3 years. We expect similar retention patterns for this efficacy study.


Exhibit 3. Longitudinal Completion and Retention Rates for Prior Knowledge Networks Studies


                                                                                                 Time from   Follow-up Survey   Baseline to Follow-up
Project                    Institution/Client          Sample                         Survey     Baseline    Completion Rate    Retention Rate
Stress and Trauma Survey   UC Irvine                   18+ General Pop                Wave 7     3 years     94%                48%
Menopausal Women Survey    RTI                         Female 40-65                   Wave 2     2 years     89%                71%
National Seafood Study     NOAA                        18+ Primary Grocery Shoppers   Wave 2     4 months    90%                97%
                                                                                      Wave 3     8 months    92%                94%
Chronic Opioid Survey      RTI                         18+ General Pop                Wave 2     3 months    95%                96%
National Health Follow-Up  University of Pennsylvania  18+ General Pop                Wave 2     1 year      90%                78%
2004 Election Survey       Ohio State University       18+ General Pop                Wave 3     7 months    77%                84%
2004 Biotech Survey        Northwestern University     18+ General Pop                Wave 3     11 months   96%                75%


It should be noted that while attrition will inevitably occur in this study, as it does in virtually any longitudinal study, we do not expect attrition to bias any of the study’s main findings. In sample surveys, there will almost always be missing data due to the attrition (or initial nonresponse) of selected respondents. In longitudinal surveys, this problem is typically exacerbated over time because further attrition may occur at each wave of the survey. Three distinct mechanisms causing missing data can be identified, and the cause of missingness determines the extent to which bias may be introduced into the study estimates. These mechanisms are the following:


  1. Data are said to be missing completely at random (MCAR) if the probability of attrition is unrelated to study outcome variables or to the value of any other explanatory variables, including the exposure conditions. No additional bias will be introduced to estimates based on incomplete data due to missingness under MCAR. However, the reduced data set will typically result in larger standard errors.


  2. Data are said to be missing at random (MAR) if the probability of attrition is unrelated to study outcome variables after controlling for other explanatory variables. That is, attrition may vary by demographic characteristics. For example, participants of lower income may be more likely to drop out of the survey than participants of higher income. Bias would then be introduced into an overall outcome estimate but not into income-specific estimates. Thus, under MAR, the potential bias in estimates due to missingness can be eliminated (or significantly reduced) if the appropriate explanatory variables, such as income, are controlled for.


  3. Data are said to be missing not at random (MNAR) if the probability of attrition is related to the study outcome variable itself. For example, suppose that women who indicate lower knowledge about HIV testing are more likely to drop out of the survey than women who report more knowledge of HIV testing. In this case, the overall estimate of HIV testing knowledge among all female participants will be biased upward by attrition.

In practice, all three missingness mechanisms may be at work (i.e., different attriters may drop out according to different mechanisms). If MNAR is not dominant, then reasonably unbiased estimates of study outcomes can be constructed through appropriate modeling. In the case of this efficacy study, we do not expect MNAR to be present. This study differs from typical longitudinal surveys in that it has an experimental component: respondents will be randomly assigned to different levels of exposure, or “treatment.” The purpose of this study is to compare outcome variables across different levels of the exposure conditions to identify treatment effects. Thus, even if attrition exists under the worst-case scenario (i.e., MNAR), random assignment to exposure conditions ensures that any nonresponse bias will be similar across exposure conditions. Hence, although attrition may vary by demographics and other variables, bias will not be introduced into our estimates of treatment-level effects, because those variables and the associated attrition will be evenly distributed across exposure conditions due to random assignment.
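The MAR-style adjustment described above can be sketched with a minimal inverse-probability-of-retention weighting scheme: completers are weighted by the inverse of their group's observed retention rate, so groups that attrite more heavily count more per remaining respondent. The grouping variable (income) and all counts below are hypothetical.

```python
# Minimal inverse-probability-of-retention weighting sketch for attrition
# under MAR. The grouping variable and counts are hypothetical.

def retention_weights(baseline_counts, followup_counts):
    """Weight for each group's completers: baseline size / completer size."""
    return {group: baseline_counts[group] / followup_counts[group]
            for group in baseline_counts}

weights = retention_weights(
    baseline_counts={"lower_income": 800, "higher_income": 830},
    followup_counts={"lower_income": 480, "higher_income": 660},
)
# Each group's weighted completer count recovers its baseline size,
# e.g. 480 * weights["lower_income"] == 800, so the group that attrited
# more heavily receives the larger per-respondent weight.
```

A production analysis would estimate retention propensities from a model with several covariates rather than raw group rates, but the principle is the same.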


The only scenario under which our study findings may be subject to attrition bias is if the exposure conditions themselves introduce attrition. Based on our prior experience with similar projects, we do not expect this to be the case. There is no evidence, in the form of quantitative data or other information from prior KN studies, that exposure to stimuli similar to those we propose using induces attrition from longitudinal studies.


So far, we have discussed how to approach attrition at the “back-end” of a survey (i.e., how to mitigate bias due to nonresponse). Another option is to address attrition at the “front-end” of the study in order to ensure validity of the analysis. For example, suppose that the initial sample is unbalanced with regard to some important demographic variable(s) (e.g., suppose we obtain a sample that is predominantly higher-income females). Overall generalizations may then be skewed because of this imbalance. Under this scenario, it is possible to account for the imbalance by means of appropriate models. However, another solution would be to draw a subsample of the initial sample and to oversample the underrepresented strata in a ratio that offsets the under-representation.



Participants can complete each survey only once. Upon initial log-in, potential participants who indicated willingness to participate will be directed to a brief online informed consent form (Attachment 6) where they will be given general information about the study screener. Participants will provide consent to be screened for the study through point-and-click acceptance in Knowledge Networks’ software. Once participants indicate their consent to be screened, they will be screened for eligibility via a brief online screener (Attachment 5) that includes questions on gender, age, race/ethnicity, marital status, education, HIV status, and other characteristics needed to identify eligible sample members. Eligible participants are English-speaking, HIV-negative, single African American women aged 18 to 34, with less than 4 years of college education, who live outside the two targeted campaign cities (Philadelphia and Cleveland). Individuals who are eligible for the study will be presented with the more detailed online consent form (Attachment 6) covering general information about the study, topics to be covered in the survey, potential risks from the survey, and honoraria available for completing the survey. Knowledge Networks panelists and e-mail list sample participants will be given separate consent forms that differ only in the explanation of honoraria available for completing the survey. Knowledge Networks panelists will receive honoraria via the Knowledge Networks bonus points system, in which points are redeemable for cash, while e-mail list sample participants will receive checks directly in the mail. Once participants indicate their consent to participate, they will proceed directly to the online baseline survey.
Study participants will be given a designated period during which the survey will be available for them to complete, making it feasible for participants to complete the survey during their own time, in private. The Knowledge Networks mechanism therefore makes this study suitable for addressing sensitive topics among Knowledge Networks panel members and the e-mail list sample, such as sexual behavior and HIV status, while also improving the accuracy and validity of the data obtained for these sensitive topics.


Participants will self-administer a 15-minute baseline questionnaire at home on personal computers. A 20,000 Knowledge Networks bonus point honorarium (equivalent to $20 cash) will be offered to Knowledge Networks participants who complete the baseline survey. Participants selected from the e-mail lists will be offered monetary honoraria of $20 for completion of the baseline survey. Subsequent to the baseline data collection, participants will be randomly assigned to one of two conditions: (1) exposure to campaign messages or (2) no exposure. Following this random assignment, participants assigned to the exposure condition will log into a dedicated, password-protected Web site and view or listen to campaign stimuli (Exhibit 4) via multimedia components (two radio advertisements, one image of a billboard, and one electronic booklet, as shown in Attachment 9) for a period of two weeks. Knowledge Networks’ technology will allow us to track the number of times each participant views the assigned stimuli during specific time periods, allowing us to validate the fidelity of experimental implementation. Study participants assigned to the control (no exposure) condition will not view any of the stimuli and will self-administer the follow-up surveys at the same time as those assigned to the exposure condition. Two follow-up data collections will be conducted approximately 2 weeks and 6 weeks after baseline (as shown in Exhibit 5). A 10,000 Knowledge Networks bonus point honorarium (equivalent to $10 cash) will be offered to Knowledge Networks participants who complete each of the follow-up surveys (2 weeks and 6 weeks after baseline). Participants selected from the e-mail lists will be offered monetary honoraria of $10 for completion of each of the follow-up surveys. Adults are difficult to engage in a survey about this sensitive topic without the use of a small honorarium.
The honorarium is intended to recognize the time burden placed on participants, encourage their cooperation, and convey appreciation for contributing to this important study over three data collection periods. A detailed description of Knowledge Networks’ panel recruitment methodology is in Attachment 8. A toll-free telephone number for CDC INFO (1-800-CDC-INFO) will be provided to all participants during the informed consent process if they have additional questions about HIV/AIDS.
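The random assignment step described above can be sketched as a shuffle-and-split of the baseline completers into two equal conditions. The participant IDs and seed below are hypothetical; a fixed seed simply makes an illustrative assignment reproducible.

```python
import random

# Sketch of balanced random assignment: 1,630 baseline completers are
# shuffled and split evenly between exposure and control. IDs and the
# seed are hypothetical.

rng = random.Random(20070809)  # fixed seed for a reproducible illustration
participants = [f"P{i:04d}" for i in range(1, 1631)]  # 1,630 completers

rng.shuffle(participants)
half = len(participants) // 2
exposure, control = participants[:half], participants[half:]
# Each condition receives 815 participants, and no participant appears
# in both conditions.
```

Shuffling and splitting (rather than assigning each person by an independent coin flip) guarantees exactly equal group sizes, which matches the equal-sized conditions described in the design.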



Exhibit 4. Campaign Stimuli

  • Two radio advertisements, 1 minute each, received 3 times per week, for a total of 6 exposures per ad (12 minutes total burden per respondent)

  • One billboard advertisement to appear during “log-on” for each stimulus session, every 3 days, for a total of 6 exposures over 2 weeks (we estimate no burden for these exposures)

  • One reading of an electronic booklet per respondent (15 minutes burden per respondent)



B3. Methods to Maximize Response Rates and Deal with Nonresponse


The following procedures will be used to maximize cooperation and to achieve the desired high response rates:

  • Recruitment of some respondents through the Knowledge Networks Web-enabled panel, which averages a 70-75% survey response rate.

  • A 20,000 Knowledge Networks bonus point honorarium (equivalent to $20 cash) will be offered to Knowledge Networks participants who complete the baseline survey. Participants selected from the e-mail lists will be offered monetary honoraria of $20 for completion of the baseline survey. An additional 10,000 Knowledge Networks bonus point honorarium (equivalent to $10 cash) will be offered to Knowledge Networks participants who complete each of the 2-week and 6-week follow-up surveys. Participants selected from the e-mail lists will be offered monetary honoraria of $10 for completion of each of the follow-up surveys.

Exhibit 5. Study Design and Experimental Conditions


  • Nonrespondents will receive two e-mail reminders from Knowledge Networks requesting their participation in the survey. Members of the Knowledge Networks panel will also receive a reminder phone call.

  • Knowledge Networks will provide toll-free telephone numbers to all sampled individuals and invite them to call with any questions or concerns about any aspect of the study. RTI will provide a toll-free telephone number for the RTI project director and a toll-free telephone number for the RTI IRB hotline should participants have any questions about the study or their rights as a study participant.

  • Knowledge Networks data collection staff will work with RTI project staff to address concerns that may arise.

  • A study overview will be included in the introductory information for participants prior to each survey. The information will present an interesting and appealing image and alert participants to the upcoming surveys.


B4. Test of Procedures or Methods to Be Undertaken


To estimate the survey burden for each respondent, two survey specialists were consulted. The survey specialists conducted mock interviews and provided affirmative responses to most or all questions that branched to further follow-up questions. In this way, the burden estimate most closely resembles a maximum average burden, since almost all survey questions were presented in the interview. In addition, the survey specialists deliberately read each item at a slow rate of speed. RTI’s experience has shown that administering a questionnaire by interviewer generally increases the interview length compared with self-administration. The survey specialists estimated the maximum average burden to be 2 minutes for the study screener, 13 minutes for the baseline survey, 15 minutes for the first follow-up survey, and approximately 5 minutes for the second follow-up survey. The survey instrument is shown in Attachment 2 and the study screener is shown in Attachment 5.

Prior to the efficacy experiment, RTI will cognitively test the survey instrument with a small sample of no more than nine participants who meet the study’s screening criteria, recruited by RTI in the Raleigh/Durham, North Carolina, area. Cognitive interviews use a structured protocol of questions with standardized probes and tailored follow-up questions. Respondents will be told that the purpose of the interview is to improve the survey they are reviewing before it is given to a large number of people. They will be encouraged to offer feedback either spontaneously or in direct response to RTI’s questions. When the participant arrives at the study site, the RTI interviewer will read the participant an introduction to the activity and obtain the participant’s informed consent. After obtaining informed consent, the RTI interviewer will continue through the cognitive testing protocol with the participant.

For the purposes of cognitive testing, the assessment instruments will be combined into one instrument. Probes will be embedded in the protocol, and the RTI interviewer may ask additional questions or probes, as necessary. At the conclusion of the interview, the participant will be given an honorarium in appreciation for her time and effort, as well as the toll-free telephone number for the CDC Contact Center (1-800-CDC-INFO).

Before implementing the efficacy study, RTI, Knowledge Networks, and CDC staff will test the entire process of self-administering the baseline online survey, receiving a basic set of campaign stimuli, and self-administering the online follow-up surveys. RTI, Knowledge Networks, and CDC will test for any problems with navigating through the surveys or stimuli. This will enable us to pilot test the survey programming and logic and correct any potential problems before the experiment is implemented with the actual sample of participants.


B5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data


Dhuly Ahsan

Statistician

RTI International

6110 Executive Boulevard, Suite 902
Rockville, Maryland 20852-3907

301-770-8234

[email protected]


Kevin Davis

Data Collection, Analysis, and Reporting Task Leader

RTI International

3040 Cornwallis Rd.

Research Triangle Park, NC 27709

[email protected]

919-541-5801


J. Michael Dennis

Vice President and Managing Director,

Government and Academic

Knowledge Networks, Inc.

1350 Willow Road, Suite 102
Menlo Park, CA 94025

650-289-2000

[email protected]


Jennifer D. Uhrig

RTI Project Director

RTI International

3040 Cornwallis Rd.

Research Triangle Park, NC 27709

[email protected]

919-316-3311


References


Abreu, D.A., & Winters, F. (1999). Using monetary incentives to reduce attrition in the survey of income and program participation. Proceedings of the Survey Research Methods Section of the American Statistical Association.

Abma, J. C., Martinez, G. M., Mosher, W. D., & Dawson, B. S. (2004). Teenagers in the United States: Sexual activity, contraceptive use, and childbearing, 2002. Hyattsville, MD: National Center for Health Statistics.

Bureau of Labor Statistics (2006). National Compensation Survey. U.S. Department of Labor. http://www.bls.gov/ncs/

Centers for Disease Control and Prevention (CDC). (2005). Behavioral Risk Factor Surveillance System Survey Data. Atlanta, Georgia: U.S. Department of Health and Human Services, Centers for Disease Control and Prevention.

Centers for Disease Control and Prevention (CDC). (2003). Cases of HIV infection and AIDS in the United States. HIV/AIDS Surveillance Report 15, 1-46.

Couper, M. P. (2000). Web surveys: A review of issues and approaches. Public Opinion Quarterly, 64, 464-494.

The Henry J. Kaiser Family Foundation. (2006). Survey of Americans on HIV/AIDS. Retrieved May 17, 2006 from http://www.kff.org/entpartnerships/viacom/index.cfm

The Henry J. Kaiser Family Foundation. (2003). Key Facts: African Americans and HIV/AIDS. Retrieved August 3, 2006 from http://www.kff.org/kaiserpolls/pomr050806pkg.cfm

Krotki, K. & Dennis, J. M. (2001). Probability-based survey research on the internet. Presented at the Proceedings of the 53rd Conference of the International Statistical Institute, Seoul, Korea.

Lethbridge-Çejku, M., Rose, D., & Vickerie, J. (2006). Summary health statistics for U.S. adults: National Health Interview Survey, 2004. Hyattsville, MD: National Center for Health Statistics.

O’Rourke, D., Chapa-Resendez, G., Hamilton, L., Lind, K., Owens, L., & Parker, V. (1998). An inquiry into declining RDD response rates part I: Telephone survey practices. Survey Research, 29, 1-16.

Rideout, V. (2004). Assessing public education programming on HIV/AIDS: A national survey of African Americans. Washington, DC: Kaiser Family Foundation.

Shettle, C., & Mooney, G. (1999). Monetary incentives in U.S. government surveys. Journal of Official Statistics, 15, 231-250.


