Supporting Statement for Paperwork Reduction Act Submission
for
Examining the Efficacy of the HIV Testing Social Marketing Campaign for African American Women
New OMB Application
Technical Monitor: Jami Fraze, PhD
Address: 8 Corporate Square, 5th Floor, Rm 5051
Atlanta, GA 30329-3013
Telephone: 404-639-3371
Fax: 404-639-2007
E-mail: [email protected]
Date: March 28, 2007
Table of Contents
A. JUSTIFICATION
A1. Circumstances Making the Collection of Information Necessary
A2. Purpose and Use of the Information Collection
A3. Use of Improved Information Technology and Burden Reduction
A4. Efforts to Identify Duplication and Use of Similar Information
A5. Impact on Small Businesses or Other Small Entities
A6. Consequences of Collecting the Information Less Frequently
A7. Special Circumstances Relating to the Guidelines of 5 CFR 1320.5
A8. Comments in Response to the Federal Register Notice and Efforts to Consult Outside Agency
A9. Explanation of Any Payment or Gift to Respondents
A10. Assurance of Confidentiality Provided to Respondents
A11. Justification for Sensitive Questions
A12. Estimates of Annualized Burden Hours and Costs
A13. Estimates of Other Total Annual Cost Burden to Respondents and Record Keepers
A14. Annualized Cost to the Federal Government
A15. Explanation for Program Changes or Adjustments
A16. Plans for Tabulation and Publication and Project Time Schedule
A17. Reason(s) Display of OMB Expiration Date is Inappropriate
A18. Exceptions to Certification for Paperwork Reduction Act Submissions
B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS
B1. Respondent Universe and Sampling Methods
B2. Procedures for the Collection of Information
B3. Methods to Maximize Response Rates and Deal with Nonresponse
B4. Test of Procedures or Methods to be Undertaken
B5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data
LIST OF ATTACHMENTS
1. Authorizing Legislation and Other Relevant Laws
2. Evaluation Data Collection Instrument
3. Federal Register Notice - 60 day published and 30 day draft
4. CDC Institutional Review Board Approval
5. Study Screener
6. Consent Forms
7. E-mail Notification and E-mail and Telephone Reminders
8. Knowledge Network Panel Recruitment Methodology
9. Experiment Stimuli
Exhibits
1. Evaluation Research Questions
2. Take Charge. Take the Test. Evaluation Consultants
3. Social Marketing and Behavior Change Experts Consulted
4. RTI/Knowledge Networks Studies Involving Adult Respondents Receiving Honoraria and Corresponding Response Rates
5. Annualized Burden Hours
6. Cost to Respondents
7. Government Costs
8. Study Hypotheses
9. Project Time Schedule
10. Minimum Detectable Treatment-Control Differences for Percentage Estimates of Outcomes
11. Number of Participants
12. Campaign Stimuli
13. Study Design and Experimental Conditions
Examining the Efficacy of the HIV Testing Social Marketing Campaign for African American Women
The purpose of this submission is to request OMB approval to conduct a Web survey evaluating the efficacy of the Take Charge. Take the Test. social marketing campaign aimed at increasing HIV testing rates among young, single, African American women.
A wide body of research shows that African Americans are disproportionately affected by HIV/acquired immune deficiency syndrome (AIDS). Although African Americans represent approximately 13% of the U.S. population, they accounted for 40% of AIDS cases diagnosed since the beginning of the epidemic and almost half of the AIDS cases diagnosed in 2003 alone (Centers for Disease Control and Prevention [CDC], 2003). African American women accounted for 36% of new AIDS cases among African Americans overall in 2003, while white women accounted for 14% of new AIDS cases among whites overall (Henry J. Kaiser Family Foundation, 2003). African American women also accounted for two-thirds of new AIDS cases among all women in 2003, and between 2000 and 2003 the HIV/AIDS rate for African American females was 19 times the rate for white females and 5 times the rate for Hispanic females (CDC, 2003). Heterosexual contact was the leading cause of HIV infection among African American women, and injection drug use was the second leading cause (CDC, 2003).
One of the goals of CDC’s HIV Prevention Strategic Plan is to reduce the number of new HIV infections in the United States, with particular focus on eliminating racial and ethnic disparities in the rate of new HIV infections. Two objectives related to accomplishing this goal are to (1) increase, through voluntary counseling and testing, the proportion of HIV-infected people in the United States who know they are infected from the current estimate of 70% to 95%; and (2) increase the proportion of HIV-infected people in the United States who are linked to appropriate prevention, care, and treatment services from the current estimate of 50% to 80%. Take Charge. Take the Test. is an initiative in direct response to these needs. However, little is known about the efficacy of social marketing messages aimed at increasing rates of HIV testing among at-risk populations. The current study addresses the need for this assessment and is designed to determine the efficacy of the Take Charge. Take the Test. messages.
The study will include a sample of single, African American women aged 18 to 34 with less than a 4-year college education, selected from a combination of the probability-based national Knowledge Networks online panel and a nonprobability-based national e-mail list sample. Participants will self-administer the questionnaire at home on personal computers. After completing a baseline assessment, participants will be randomized into two experimental conditions: (1) exposure to campaign messages and (2) no exposure. The research will include three data collections: a baseline self-administered survey and follow-up surveys, administered 2 weeks and 6 weeks after baseline, of participants who completed the baseline survey.
The following section of the U.S. Federal Code (see Attachment 1) is relevant to this data collection: 42 USC 241, Section 301 of the Public Health Service Act authorizes conduct of “research, investigations, experiments, demonstrations, and studies relating to the causes, diagnosis, treatment, control, and prevention of physical and mental diseases and impairments of man.”
The purpose of the evaluation is to determine the efficacy of Take Charge. Take the Test. messages on desired outcomes, including main effects of exposure to campaign messages on HIV testing behaviors and mediating and moderating effects of participant characteristics on HIV testing behaviors. Changes in the following outcomes will be examined: attitudes, beliefs, and knowledge about HIV testing; receptivity to Take Charge. Take the Test. messages; perceived credibility; perceived risks of HIV and importance of testing; intentions to get an HIV test; and HIV testing behaviors. Key research questions for the evaluation are presented in Exhibit 1. A copy of the evaluation data collection instrument is provided as Attachment 2.
Exhibit 1. Evaluation Research Questions
1. Does exposure to Take Charge. Take the Test. messages increase HIV testing behaviors, relative to participants in the unexposed control group?
2. Does exposure to Take Charge. Take the Test. messages increase the exposure group's intentions to get tested for HIV, relative to participants in the unexposed control group?
3. Does exposure to Take Charge. Take the Test. messages increase the exposure group's knowledge of the importance of testing compared to the unexposed control group?
4. Does exposure to Take Charge. Take the Test. messages increase the exposure group's belief that they should get tested for HIV because they are at risk?
5. Does exposure to Take Charge. Take the Test. messages increase the exposure group's belief that it is important for women like them to get an HIV test?
6. Does exposure to Take Charge. Take the Test. messages increase the exposure group's belief that community resources and HIV treatment are available to them, if needed?
7. Do study participants in the exposure group have positive receptivity to Take Charge. Take the Test. messages, including positive reactions to specific advertising executions?
8. Do study participants in the exposure group have higher knowledge of their HIV status compared to the unexposed control group?
The information obtained from the proposed data collection activities will be used to inform the Centers for Disease Control and Prevention (CDC), policy makers, prevention practitioners, and researchers about the effects of campaign messages, delivered in a controlled setting, on HIV testing rates among young, African American women. This information will enable CDC to more effectively address HIV testing and prevention. Finally, the data provided by the proposed evaluation may be used to assess the appropriateness of continued or expanded funding and dissemination of the campaign.
CDC and RTI will disseminate results through peer-reviewed journal articles, as well as through an executive summary and a full report. The executive summary will be written in clear language that is understandable to a wide range of audiences (African American women, practitioners, policy makers, researchers). The full report will include an overview of background literature to provide contextual information about the purpose of the campaign and evaluation approach, the theoretical underpinnings of the analysis, and the specific data and methodologies used. The report will also include a synthesis of findings across all evaluation questions and an overall assessment of the efficacy of the Take Charge. Take the Test. messages, strengths and limitations of the evaluation, and recommendations for further evaluations. The report will be scientifically rigorous to capture the complexity of the analyses but also will be sensitive to nontechnical audiences and relevant to other stakeholders.
The Take Charge. Take the Test. efficacy evaluation will rely on Web surveys self-administered at home on personal computers. We anticipate a higher response rate using this technology relative to a telephone or mail survey, particularly because a portion of our sample (enrolled through Knowledge Networks) has already agreed to participate in research studies if contacted. Web administration offers several advantages: it allows treatment condition respondents to be exposed to Take Charge. Take the Test. audio messages, allows respondents to complete as much of the survey as desired in one sitting and to continue the survey at another time, minimizes the possibility of respondent error by electronically skipping questions that are not applicable to a particular respondent, and creates the least burden for the respondent. One alternative method considered was to conduct telephone surveys. However, the longer a telephone survey continues, the more likely it is that respondents will "drop out" and not fully answer all the questions. In addition, response rates for telephone surveys are decreasing as new technology (answering machines, voice mail, caller identification) becomes available (O'Rourke et al., 1998), and non-locate rates in later waves of longitudinal telephone surveys are increasing, likely due to increased use of cellular phones and frequent switching of carrier companies. We also considered mail surveys. Many mailed surveys are never returned, making the sample self-selected and less random, since there is little control over who completes and returns the survey and who does not. In sum, because of the disadvantages of alternate modes of administration and because our research objectives could not be fully met without a high response rate among selected respondents, we determined that collecting data via Web surveys was the best methodology.
In designing the proposed data collection activities, we have taken several steps to ensure that this effort does not duplicate ongoing efforts and that no existing data sets would address the proposed study questions. To ensure that this study forges new ground in our understanding of the efficacy of the Take Charge. Take the Test. campaign, we conducted an extensive review of the literature by examining several large periodical journal databases. We identified published articles or books containing the keywords "African American," "women," and "HIV testing." In addition to reviewing published information, we searched for "gray" literature on the Internet. Searches were performed using several Internet search engines and bibliographic databases, including Google, Yahoo, AltaVista, MEDLINE, and ScienceDirect, with the search terms "African American," "women," and "HIV testing."
The Take Charge. Take the Test. campaign is a new social marketing campaign for which no evaluation data exist. Although some existing surveys may contain measures of the campaign's targeted outcomes (e.g., HIV testing behaviors), no existing data sources contain measures of awareness of or exposure to the Take Charge. Take the Test. campaign. Measures of exposure to the campaign, obtained either through controlled experimental conditions or questionnaires administered to the target audience, are required to assess the campaign's association with HIV testing-related outcomes. Therefore, our evaluation requires the collection of new primary data. To date, no duplication of effort has been identified.
We have carefully reviewed existing data sets to determine whether any of them are sufficiently similar or could be modified to address CDC's need for information on the efficacy of the Take Charge. Take the Test. campaign on HIV testing behaviors. Efforts to avoid duplication included a review of CDC's administrative agency reporting requirements and of existing studies of CDC's programs. We investigated the possibility of using existing data to examine our research questions, such as data collected by the Behavioral Risk Factor Surveillance System (CDC, 2005), the National Health Interview Survey (Lethbridge-Çejku, Rose, & Vickerie, 2006), and the National Survey of Family Growth (Abma, Martinez, Mosher, & Dawson, 2004; Albert et al., 2005), as well as the evaluations of the KNOW HIV/AIDS campaign (Henry J. Kaiser Family Foundation, 2006) and the Rap It Up campaign (Rideout, 2004). However, none of these existing data sets included pre- and post-test data in a randomized design to test messages like the ones employed in the Take Charge. Take the Test. campaign.
No small businesses will be involved in this study.
The present study will provide the primary data needed for federal policy makers to assess the efficacy of the Take Charge. Take the Test. campaign and its messages. If this evaluation were not conducted, it would not be possible to determine the value or impact of Take Charge. Take the Test. messages on the lives of the people they are intended to serve. Failure to collect these data could preclude effective use of program resources to benefit African American women.
The efficacy evaluation involves three data collection points: a baseline survey and two follow-up surveys. Serious consideration has been given to the issue of how frequently to survey and re-survey participants for the efficacy evaluation. After consulting with CDC's contractor, RTI International (RTI), it was determined that the data collections would need to be sufficient in number to track and document changes in outcomes between and across individuals, from before exposure through a time point late enough for campaign message effects to be observed. A measure of potential changes in attitudes, beliefs, or behaviors among participants is necessary immediately after exposure to initial campaign messages to measure short-term changes. Follow-up data collection at 4 weeks will provide data about subsequent changes in, or maintenance of, attitudes, beliefs, or behaviors. Less frequent data collection would not allow measurement of short-term, immediate reactions to the campaign messages or of outcomes that should change over a longer time period. Because of concerns about respondent attrition, RTI staff determined that the follow-up intervals would need to be narrow enough to enable completion of survey cycles with a given individual over a reasonably short period of time. For these reasons, RTI and CDC agreed to conduct efficacy evaluation data collection with participants at baseline, 2 weeks post-baseline, and 6 weeks post-baseline. There are no legal obstacles to reducing burden.
As described in Section A6, respondents will report information more often than quarterly: at baseline, at the first follow-up assessment (2 weeks post-baseline), and at the second follow-up assessment (6 weeks post-baseline). A measure of potential changes in attitudes, beliefs, or behaviors among respondents is necessary immediately following exposure to initial campaign messages to measure short-term changes, and within 4 weeks of exposure to campaign messages to measure HIV testing behaviors and selected attitudes, beliefs, and knowledge about HIV testing, which should change over a longer time period. Less frequent data collection would not allow measurement of immediate reactions to campaign messages or of longer-term effects. There are no other special circumstances that require the data collection to be conducted in a manner inconsistent with 5 CFR 1320.5(d)(2).
A 60-Day Federal Register notice published on July 18, 2006 (Volume 71, Number 137, pages 40721-40722) solicited comments on Examining the Efficacy of the HIV Testing Social Marketing Campaign for African American Women; no comments were received. A copy of the Federal Register notice can be found as Attachment 3.
A list of key evaluation consultants for this project is provided in Exhibit 2. RTI staff consulted with a Distinguished Professor of Communication and a public health scientist on the study design and evaluation instrument, and with two survey specialists to estimate the interview burden for each respondent.
Before selecting the final campaign target audience, CDC recognized the importance of gaining valuable insights directly from members of the target audience and from organizations and individuals who work with them in the community. CDC developed an extensive formative research plan to better understand the target audiences’ daily lives, values, and priorities; explore influences in their lives; and learn about their knowledge, attitudes, and behaviors related to HIV testing and risk reduction (most specifically, their barriers and motivators to HIV testing). By talking to the community, CDC wanted to gain insight into other HIV testing programs, understand how to leverage preexisting efforts for this audience, and gain third-party perspectives on the audiences’ motivators and barriers to HIV testing. This formative research process spanned approximately 1 year and included the following key initiatives:
Reviewed hundreds of published journal articles, mass media stories, and CDC consultation reports.
Exhibit 2. Take Charge. Take the Test. Evaluation Consultants
Michael D. Slater, Social and Behavioral Sciences Distinguished Professor, School of Communication, The Ohio State University, 3022 Derby Hall, 154 North Oval Mall, Columbus, OH 43210-1339; telephone: 614-247-8768; fax: 614-292-2055
Olivia Silber Ashley, Public Health Scientist, RTI International, 3040 Cornwallis Rd., Research Triangle Park, NC 27709; telephone: 919-541-6427; fax: 919-485-5555
Douglas Currivan, Survey Methodologist, RTI International, 3040 Cornwallis Rd., Research Triangle Park, NC 27709; telephone: 919-316-3334; fax: 919-316-3866
Randolph Ottem, Survey Specialist, RTI International, 3040 Cornwallis Rd., Research Triangle Park, NC 27709; telephone: 919-541-8068; fax: 919-541-7250
Conducted interviews with “key informants”—practicing professionals in the HIV/AIDS academic, health communication, and target audience communities—including representatives from the Black AIDS Institute, the BET “Rap-It-Up” Campaign, Kaiser Family Foundation, Florida Truth Campaign, Better World Advertising, and many others.
Hosted a daylong consultation meeting with national experts in the HIV/AIDS arena, including key representatives from The Balm in Gilead, Inc.; National Alliance of State and Territorial AIDS Directors; National Black Caucus of State Legislators; National Black Leadership Commission on AIDS; National Minority AIDS Council; National Association of People With AIDS, and Whitman-Walker Clinic.
Hosted a daylong consultation meeting with social marketing and behavior change experts. Names and contact information for the expert panel participants are provided in Exhibit 3.
Exhibit 3. Social Marketing and Behavior Change Experts Consulted
Carol Bryant, Professor, Community and Family Health, University of South Florida, 13201 Bruce B. Downs Blvd., MDC 56, Tampa, FL 33612; telephone: (813) 974-6686
Thomas Chapel, Office of the Director, Office of Strategy and Innovation, CDC, 1600 Clifton Road, NE, Atlanta, GA 30333; telephone: (404) 639-5284
Susan Kirby, President, Kirby Marketing Solutions, 1051 Via Mil Cumbres, Solana Beach, CA 92075; telephone: (858) 245-2456
Nancy Lee, President, Social Marketing Services, Inc., 4001 West Mercer Way, Mercer Island, WA 98040; telephone: (206) 232-8768
Susan Maguire, Director of Social Marketing, 1825 Connecticut Avenue, NW, Washington, DC 20009-5721; telephone: (202) 884-8000
Sonja Myhre, Health Communication Consultant
Kristin Weeks-Norton, Office of AIDS, California Dept. of Health and Human Services, 1616 Capitol Ave., Suite 616, Sacramento, CA 95814; telephone: (916) 449-5900
Prior to the efficacy experiment, RTI staff will consult with no more than nine African American women in connection with cognitive testing of the survey instrument (as described in Section B.4). This process will illuminate participants' thought processes; help identify areas of the survey that are unclear, difficult to understand, or offensive to this population; and identify language that may be culturally sensitive to this population. Data collection materials may be revised based on feedback from the cognitive interviews. If these revisions significantly alter the nature of specific data collection materials, we will submit to OMB a request for approval of changes.
Participants selected from the e-mail lists will be offered an honorarium of $20 for completion of the baseline survey and $10 for completion of each of the two follow-up surveys. Upon agreeing to be a Knowledge Networks panel member, respondents are given free hardware, free Web access, free e-mail accounts for each panel member, and ongoing technical support. While these products/services are provided to facilitate the data collection methodology, respondents are given free usage of the products for personal use, and these benefits are also used as an incentive for recruiting potential panel members. In addition, a 20,000 Knowledge Networks bonus point honorarium (equivalent to $20 cash) will be offered to Knowledge Networks participants who complete the baseline survey. An additional 10,000 Knowledge Networks bonus point honorarium (equivalent to $10 cash) will be offered to Knowledge Networks participants who complete the 2-week and 6-week surveys. Knowledge Networks panel participants "cash in" their Knowledge Networks bonus points by requesting a check. Knowledge Networks panel participants can request a check for their accumulated bonus points in 5,000 point increments (equivalent to $5 cash) at any time. The honoraria are intended to recognize the time burden placed on the participants, encourage their cooperation, and convey appreciation for contributing to this important study. Numerous empirical studies have shown that honoraria can significantly increase response rates (e.g., Abreu & Winters, 1999; Shettle & Mooney, 1999). The decision to use honoraria for this study is based on findings reported in current research publications and several projects conducted by Knowledge Networks and RTI, which found that use of an honorarium increases response rates among adults. Exhibit 4 summarizes several such studies and the response rates achieved. Although these studies differ in other respects that could account for some variability in response rates, overall, honoraria of at least $10 were generally associated with higher response rates compared with no honorarium.
Because a large portion of our sample will be selected from a nonprobability-based list sample of e-mail addresses, the use of modest honoraria is expected to enhance survey response rates without biasing responses or coercing respondents to participate. A smaller honorarium would not appear sufficiently attractive to adults. We also believe that the honoraria will result in higher data validity as adults become more engaged in the survey process. The amount of the honoraria was determined through discussions with RTI staff with expertise in conducting adult surveys about HIV. Because not all selected individuals may be eligible for the study, we want to ensure efficient use of project funds and will provide monetary/bonus point honoraria only to respondents who are determined to be eligible.
OSRS staff have reviewed this application and determined that the Privacy Act does not apply to this data collection. CDC and RTI will receive data for analysis in aggregate form, and the randomly generated numbers assigned as participant ID numbers will not link data to individuals. The participant ID itself will be used only for the purpose of tracking survey completion patterns (i.e., how many people complete each wave of the survey). Although Knowledge Networks retains contact information on participants for honoraria purposes, individually identifiable information is not shared with anyone, including CDC and RTI. It is stored separately from the survey data file and is not linked in any way to participant responses.
This project has been reviewed and approved by the Institutional Review Board (IRB) at CDC (see Attachment 4 for letter of IRB approval). The study screening instrument (Attachment 5) will include a question about HIV status, but the participant will be informed that both screener data and survey data will be treated in a secure manner. The CDC IRB has granted a waiver of documentation of informed consent. Knowledge Networks will maintain a list of participant ID numbers, names, addresses, phone numbers, and e-mail addresses only for the purpose of honoraria mailings and reminders about the study. CDC and RTI will only have access to the generic, randomly generated ID numbers for the purpose of tracking survey completion patterns. Neither RTI nor CDC will see names or contact information for any participant responses.
Exhibit 4. RTI/Knowledge Networks Studies Involving Adult Respondents Receiving Honoraria and Corresponding Response Rates
Study | Population | Honorarium Provided | Response Rate Achieved
Woman Focused HIV Prevention with African-Americans | African American women who used crack on 13 days in the past 90 days, engage in high-risk sex practices, and are not currently in drug treatment | $20 at intake | 74%: 3-month
Pretreatment Intervention for African American Crack Abusers (Recruitment: 2000 to 2002) | African American men and women who used crack on 13 days in the past 90 days, and are not currently in drug treatment | $10 at intake | 90%: 3-month
The University of California Irvine Stress and Trauma Study (2001-2004) | Adult panelists and teens (13-17 with parental approval) | $10 Initial Honorarium (Pool A); $10 Initial Honorarium + $10 Completion Honorarium (Pool B) | 83%: Pool A
All respondents will be assured that the information will be used only for the purpose of this research and will be kept private to the extent allowable by law, as detailed in the study consent form (see Attachment 6). Respondents will be assured that their answers to screener and survey questions will not be shared with anyone outside the research team and that their names will not be reported with responses provided. Respondents will be told that the information obtained from all of the surveys will be combined into a summary report so that details of individual questionnaires cannot be linked to a specific participant.
It is possible that a family member could view the survey on the home computer without the participant’s knowledge, which could create family problems. At the start of the survey, each participant will be asked whether she is the only one who can see the questions and her answers. If the participant responds that others can see her screen, she will be taken to an exit screen and allowed to continue the survey at another time.
RTI maintains restricted access to all data preparation areas (i.e., receipt and coding). All data files on multi-user systems will be under the control of a database manager, with access limited to project staff on a "need-to-know" basis only. Knowledge Networks has developed a secure transmission and collection protocol, including the use of system passwords and two separate sets of firewalls to prevent unauthorized access to the system. Neither questionnaires nor survey/screener responses are stored on the WebTV box installed in Knowledge Networks respondents' homes; questionnaires are administered dynamically over the Internet. Survey and screener responses are written in real time directly to Knowledge Networks' server and are then stored in a local Oracle database. The database is protected primarily through firewall restrictions, password protection, and 128-bit encryption technology. Individual identifying information will be maintained separately from completed screeners and questionnaires and from computerized data files used for analysis. Data on ineligible individuals will be destroyed. No respondent identifiers will be contained in reports, and results will be presented only in aggregate.
The Take Charge. Take the Test. campaign is a direct response to the need to decrease the number of HIV-positive individuals who are unaware that they are infected. As such, our study entails the measurement of sensitive HIV-related questions.
Because HIV testing is the primary behavioral outcome of this study, HIV-positive individuals must be excluded from the outset of the study. Individuals who have previously tested positive for HIV will likely have no need for further HIV testing and are thus not included in the primary population of interest. Therefore, our pre-survey screening instrument (see Attachment 5) includes a question (S8) that assesses whether individuals have ever tested positive for HIV. Furthermore, because our campaign materials are targeted at women who are at risk for HIV because they are having unprotected sex with men, our pre-survey screening instrument includes two questions (S11 and S12) that assess sexual behavior.
The baseline and follow-up surveys (see Attachment 2) also include questions about HIV testing and HIV status (questions B1-B9). In addition, because HIV is transmitted through sexual contact and intravenous drug use, our survey includes questions about these behaviors to enable us to understand the transmission behaviors of our survey respondents (questions C1-C5). These questions will also enable us to determine whether change occurs in the exposure group relative to the control group at follow-up. Furthermore, the survey contains a set of questions about respondents’ HIV knowledge, attitudes, beliefs and intentions to get tested for HIV (questions D1-D29). These questions will enable us to measure knowledge, attitudes, beliefs and HIV testing intentions at baseline and determine whether change occurs in the exposure group relative to the control group at follow-up.
The total response burden is estimated at 1,127 hours. Exhibits 5 and 6 provide details about how this estimate was calculated. Timings were conducted during our instrument development process to determine the overall burden per respondent. The study screener is expected to take about 2 minutes to complete. The baseline survey is expected to take 13 minutes, while the 2-week follow-up survey is expected to take 15 minutes. Individuals assigned to the treatment condition will also be exposed to the radio advertisements (12 minutes) and a booklet (15 minutes). The 6-week follow-up survey is expected to take 5 minutes because it will include only questions about behavior change and selected behavioral intentions. We will complete 5,200 questionnaires, for a total burden of 1,127 hours (760 hours for questionnaires and 367 hours for exposure to stimuli).
The total response burden of 1,127 hours included in Exhibit 5 differs from the total response burden of 1,209 hours included in the burden table in our 60-day Federal Register Notice. We reduced the estimated burden hours because we had originally estimated exposure to three radio advertisements (18 minutes) when we calculated burden for the Federal Register Notice. However, our final research design includes exposure to two radio advertisements (12 minutes). This change accounts for the difference between the burden hours reported in the 60-day Federal Register Notice and the burden hours reported in this supporting statement.
Because it is not known what the wage rate category will be for these selected participants (or even whether they will be employed at all), the figure of $6.00 per hour was used as an estimate of average minimum wage across the country (Bureau of Labor Statistics, 2006). The estimated annual cost to participants for the hour burden for collections of information will be $6,762.
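For illustration only, the totals reported in Exhibits 5 and 6 can be reproduced with a short script; the sketch below is an informal cross-check that simply re-adds the respondent counts, per-response timings, and rounded hour figures reported in this supporting statement, and it is not part of the approved data collection.

```python
# Informal cross-check of the Exhibit 5 and Exhibit 6 totals, using the
# respondent counts, per-response timings, and rounded hour figures
# reported in this supporting statement.

# activity: (number of respondents, minutes per response, hours as reported in Exhibit 5)
activities = {
    "Study screener":           (1630,  2,  54),
    "Baseline survey":          (1630, 13, 354),
    "Radio ad stimuli viewing": ( 815, 12, 163),
    "Booklet reading":          ( 815, 15, 204),
    "2-week follow-up survey":  (1140, 15, 285),
    "6-week follow-up survey":  ( 800,  5,  67),
}

questionnaires = ["Study screener", "Baseline survey",
                  "2-week follow-up survey", "6-week follow-up survey"]
stimuli = ["Radio ad stimuli viewing", "Booklet reading"]

questionnaire_hours = sum(activities[a][2] for a in questionnaires)      # 760 hours
stimuli_hours = sum(activities[a][2] for a in stimuli)                   # 367 hours
total_hours = questionnaire_hours + stimuli_hours                        # 1,127 hours

completed_questionnaires = sum(activities[a][0] for a in questionnaires) # 5,200 questionnaires

hourly_wage = 6.00
total_cost = total_hours * hourly_wage                                   # $6,762.00

print(questionnaire_hours, stimuli_hours, total_hours,
      completed_questionnaires, f"${total_cost:,.2f}")
```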
Respondents participate on a purely voluntary basis and, therefore, are subject to no direct costs other than their time to participate; there are no start-up or maintenance costs. We do not require any additional record keeping.
Exhibit 5. Annualized Burden Hours
Respondents | No. of Respondents | No. of Responses per Respondent | Average Burden per Response (in Hours) | Total Burden (in Hours)
Study screener | 1,630 | 1 | 2/60 | 54
Baseline survey | 1,630 | 1 | 13/60 | 354
Radio ad stimuli viewing | 815* | 1 | 12/60 | 163
Booklet reading | 815* | 1 | 15/60 | 204
2-week follow-up survey | 1,140* | 1 | 15/60 | 285
6-week follow-up survey | 800* | 1 | 5/60 | 67
Total | | | | 1,127
* A subset of the original 1,630 baseline respondents.
Exhibit 6. Cost to Respondents
Respondents | No. of Respondents | No. of Responses per Respondent | Average Burden per Response (in Hours) | Hourly Wage Rate | Total Burden (in Hours) | Total Cost
Study screener | 1,630 | 1 | 2/60 | $6.00** | 54 | $324.00
Baseline survey | 1,630 | 1 | 13/60 | $6.00** | 354 | $2,124.00
Radio ad stimuli viewing | 815* | 1 | 12/60 | $6.00** | 163 | $978.00
Booklet reading | 815* | 1 | 15/60 | $6.00** | 204 | $1,224.00
Follow-up survey 1 | 1,140* | 1 | 15/60 | $6.00** | 285 | $1,710.00
Follow-up survey 2 | 800* | 1 | 5/60 | $6.00** | 67 | $402.00
Total | | | | | 1,127 | $6,762.00
* A subset of the original 1,630 baseline respondents.
** Estimates of average hourly wage for participants.
The contractor’s costs are based on estimates provided by the contractor who will carry out the data collection activities. With the expected period of performance, the annual cost to the federal government is estimated to be $266,405 (Exhibit 7). This is the cost estimated by the contractor, RTI, and includes the estimated cost of coordination with the CDC, data collection, analysis, and reporting.
There is no change in burden requested, as this is a new information collection.
Exhibit 7. Government Costs
Item/Activity | Details | $ Amount
CDC oversight of contractor and project | 15% of FTE: GS-13 Health Communication Specialist and 5% of FTE: GS-13 Health Communication Specialist | $16,640.00
Data Collection (Contractor) | 683 labor hours, data collection subcontract with Knowledge Networks, and ODCs | $201,381.00
Analysis and Reporting (Contractor) | 377 labor hours and ODCs | $48,384.00
Total | | $266,405.00
Our analyses will consist of two phases: (1) preliminary analyses of simple pre–post comparisons between participants in the treatment and control groups on primary outcome variables and (2) multivariable analyses of the association between the treatment condition and changes in outcome variables between the baseline and follow-up assessments. The first phase of data analysis will include basic summary statistics for the purposes of describing the sample, determining whether participants randomized to treatment/control conditions differ significantly on pretest measures, and examining the distribution of the primary outcome variables. We will also compute means for continuous, normally distributed variables of interest and frequencies for categorical variables of interest, both for the entire sample and separately for sample members in each experimental condition. Statistical tests, such as chi-square tests, will be conducted to evaluate preliminary differences by study conditions, and any variables found to differ significantly between conditions will be evaluated as potential covariates for the analysis of primary outcome variables. In addition, the distributions of primary outcome variables will be examined to determine whether the distributional assumptions of planned analytic procedures are met.
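A minimal, hypothetical sketch of the kind of baseline-equivalence check described above is shown below, assuming an analysis file in Python with pandas and SciPy; the file name and variable names (condition, ever_tested_baseline) are illustrative placeholders, not items from the actual instrument.

```python
# Hypothetical sketch of a baseline-equivalence check: a chi-square test of
# whether a categorical baseline measure differs between the treatment and
# control conditions. File and variable names are illustrative placeholders.
import pandas as pd
from scipy.stats import chi2_contingency

baseline = pd.read_csv("baseline_analysis_file.csv")  # hypothetical analysis file

# Cross-tabulate experimental condition against a baseline characteristic
table = pd.crosstab(baseline["condition"], baseline["ever_tested_baseline"])
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")

# A variable with a small p-value (e.g., p < 0.05) would be flagged as a
# potential covariate for the primary outcome models.
```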
Once preliminary analyses in the first phase are complete, we will begin to develop preliminary models that assess the association between the exposure condition and downstream mediators and outcomes. These models will include repeated measurements (merged baseline and follow-up data) on respondents within each experimental condition and will be estimated using a combination of linear and logistic regression methods. For example, our hypothesis that exposure to Take Charge. Take the Test. will increase participant HIV testing behavior (see Exhibit 8) will be tested in a regression model, where a measure of HIV testing behavior is specified as the dependent variable and the exposure condition is specified as the primary independent variable. These models will also include covariates for a number of background characteristics, including variables to control for pretest differences in study groups. The overall goal of these models is to determine the extent to which changes in HIV testing-related attitudes, beliefs, intentions, and behaviors differ by exposure conditions.
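As an illustrative sketch of the regression approach described above (using Python with pandas and statsmodels, which are not specified in the supporting statement), the model for the primary HIV testing outcome might look like the following; the file name and variable names are hypothetical placeholders rather than items from the actual instrument.

```python
# Hypothetical sketch of the primary outcome model: a logistic regression of
# follow-up HIV testing on exposure condition, with covariates for baseline
# testing and background characteristics. Names are illustrative placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("merged_baseline_followup.csv")  # hypothetical merged analysis file

model = smf.logit(
    "tested_at_followup ~ exposure + tested_at_baseline + age + education + sample_source",
    data=df,
).fit()

print(model.summary())
print(np.exp(model.params))  # odds ratios; the `exposure` term estimates the treatment effect
```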
Exhibit 8. Study Hypotheses
Our models will primarily be conducted among participants who complete all three waves of data collection. However, prior to estimation of our models, we will analyze patterns of attrition among all sample participants in order to identify factors that make some respondents more likely to complete all three surveys. As with any multi-wave cohort survey, it is expected that a certain percentage of participants who complete the baseline survey may not complete either of the two follow-up surveys. For example, because our sample will consist of a dual-frame sample of both existing Knowledge Networks panelists and individuals from e-mail list samples, we expect attrition patterns to differ between these two sample sources. Once data collection is complete, we will analyze patterns in attrition and identify, through multivariable analyses, baseline factors (such as sample source) that are most predictive of future attrition. Once these variables are identified, they will be included in our primary analysis models as a way to control for self-selection into the cohort of respondents who complete all stages of data collection.
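For illustration, the attrition analysis described above could take a form like the following hypothetical sketch (again assuming Python with pandas and statsmodels); the file name and variable names, such as completed_all_waves and sample_source, are placeholders and not drawn from the actual study files.

```python
# Hypothetical sketch of the attrition analysis: a logistic regression
# predicting completion of all three waves from baseline factors such as
# sample source. File and variable names are illustrative placeholders.
import pandas as pd
import statsmodels.formula.api as smf

baseline = pd.read_csv("baseline_with_completion_flags.csv")  # hypothetical file

attrition_model = smf.logit(
    "completed_all_waves ~ sample_source + age + education + ever_tested_baseline",
    data=baseline,
).fit()
print(attrition_model.summary())

# Baseline factors that predict attrition here would be carried forward as
# covariates in the primary outcome models to adjust for self-selection into
# the cohort completing all three data collections.
```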
For this study, we expect the findings to be disseminated to a number of audiences. Therefore, the evaluation reports will be written in a way that emphasizes scientific rigor for more technical audiences but remains intuitive, easily understood, and relevant to less technical audiences. The reporting and dissemination mechanism will consist of two primary components: (1) a final evaluation report and (2) peer-reviewed journal articles.
The final evaluation report will be the central focus of dissemination efforts and will be written in clear language that is understandable to a wide range of audiences (African American women, practitioners, policy makers, researchers). This evaluation report will include a 10-page executive summary; a report of less than 100 pages (including an overview of background literature to provide contextual information about the purpose of the campaign and evaluation approach; a detailed summary of evaluation methods and activities; the evaluation results; a discussion of findings in comparison with those of other relevant program evaluations; strengths and limitations of the evaluation; and recommendations for future evaluations of this scope for practitioners, evaluators, and policy makers); and appendices. The results of our study also will be used to develop at least one peer-reviewed journal article (e.g., in the American Journal of Public Health, Sexually Transmitted Diseases, or AIDS and Behavior) that summarizes findings on the overall efficacy of the Take Charge. Take the Test. campaign.
The key events and reports to be prepared are listed in Exhibit 9.
Exhibit 9. Project Time Schedule
Project Activity | Time Schedule
Conduct cognitive testing | Upon IRB approval
Baseline data collection | 2 months after OMB approval
Experimental stimuli delivery | 3 months after OMB approval
First follow-up data collection | 3 months, 2 weeks after OMB approval
Second follow-up data collection | 5 months after OMB approval
Data analysis | 7 months after OMB approval
Submit final report | 10 months after OMB approval
Submit at least one manuscript | 12 months after OMB approval
We do not seek approval to eliminate the expiration date.
There are no exceptions to the certification statement.