

Supporting Justification for OMB Clearance of Teen Pregnancy Prevention Replication Evaluation (OMB Control #0990-NEW)



Part A: Justification for the Collection of Follow-Up Data



November 2012

The Office of the Assistant Secretary for Planning and Evaluation (ASPE), in collaboration with the Office of Adolescent Health (OAH) in the Office of the Assistant Secretary for Health (OASH), U.S. Department of Health and Human Services (HHS), is overseeing the TPP Replication Study evaluation. The TPP Replication Study is specifically designed to address the question: “Do evidence-based program models, replicated and funded as part of the OAH Teen Pregnancy Prevention Program, demonstrate impacts on sexual risk behaviors that are comparable to the originally reported impacts, and are they effective in preventing teen pregnancy and reducing sexually transmitted infections?” The evaluation focuses on the replication of a small number of program models across multiple sites, with the goals of determining the extent to which program impacts are replicated and addressing questions about the extent to which aspects of program implementation are associated with program impacts. In the fall of 2011, ASPE awarded a contract to Abt Associates Inc. to conduct the evaluation.


For the purpose of this clearance, OAH is seeking OMB approval for two administrations of follow-up survey data collection for the Teen Pregnancy Prevention (TPP) Replication Study. The first administration will be a short-term follow-up data collection 6 to 12 months post-baseline, and the second will be a longer-term follow-up data collection 18 to 24 months post-baseline. The 60-day notice for the follow-up survey data collection was published March 15, 2012. A request for approval for the study and for the baseline data collection was approved on June 8, 2012 under OMB clearance number 0990-0394.

OAH is overseeing and coordinating adolescent pregnancy prevention evaluation efforts as part of the Teen Pregnancy Prevention Initiative. In order to ensure that these Federal evaluation efforts across the Department are aligned, OAH is coordinating the submission of OMB Packages related to them. In support of these coordinated evaluation efforts, OAH has collaborated with other agencies that implement and evaluate teen pregnancy prevention and related issues in order to address a range of research and policy questions that complement rather than duplicate one another. These agencies include the Administration for Children and Families (ACF), the Office of the Assistant Secretary for Planning and Evaluation (ASPE), and the Centers for Disease Control and Prevention (CDC). HHS has created a Federal Teen Pregnancy Prevention Coordination Workgroup to develop and manage a coordinated strategy of HHS teen pregnancy prevention activities and evaluation efforts. The workgroup involves research and program staff from ACF, ASPE, CDC, and OAH. The workgroup has enabled the Department to collaborate on the new evaluation efforts and maximize the questions we can answer across the initiative, including the development of common core measures to be used across evaluation studies.


HHS created a “core follow-up instrument” to use across federal teen pregnancy prevention evaluation studies. The core follow-up instrument identifies as core those items that the Federal Teen Pregnancy Prevention Coordination Workgroup agreed should be included on each follow-up survey instrument administered as part of any federal teen pregnancy prevention evaluation study. The TPP Replication study follow-up instruments consist of the core follow-up instrument plus ancillary measures and OAH TPP grantee performance measures. Further description of the TPP Replication study follow-up instrument for which approval is requested may be found at the end of A1. The TPP Replication Study follow-up instruments can be found in Attachments D through F (the crosswalk identifies those items on the follow-up survey that have been added to the approved baseline instrument).


A1. Circumstances Making the Collection of Information Necessary

For decades, policymakers and the general public have remained concerned about the prevalence of sexual activity among adolescents. Although adolescents today are waiting somewhat longer before having sex than they did in the 1990s, 60 percent of teenage girls and more than 50 percent of teenage boys report having had sexual intercourse by their 18th birthday.1 Approximately one in five adolescents has had sexual intercourse before turning 15.2 Rates of teenage pregnancy declined by 34 percent between 1991 and 2005 for teens aged 15-19, before rising 5 percent between 2005 and 2007.3 The teen birth rate again dropped between 2007 and 2011, falling 25 percent for teens aged 15-19.4 Preliminary data for 2011 indicate an overall birth rate for teens aged 15-19 of 31.3 per 1,000, an 8 percent decline from 2010.5

HHS is interested in identifying and evaluating approaches to reduce teen pregnancy, associated risk behaviors, and their consequences. One of the key policy questions is whether programs that have demonstrated evidence of effectiveness can be replicated in new settings with positive impacts. Of the 31 programs on the HHS list of evidence-based programs, only one program model has been replicated and shown to have positive effects through a rigorous evaluation. The follow-up data collection described in this ICR will provide important information to guide policy decisions aimed at replicating evidence-based programs.

Legal or Administrative Requirements that Necessitate the Collection


On December 19, 2009, the President signed the Consolidated Appropriations Act of 2010 (Public Law 111-117). Division D, Title II of the Act created the Teen Pregnancy Prevention Program, which is consistent with the Administration’s interest in establishing an evidence-based program to prevent teen pregnancy. The Act provides $110 million to fund this program within OAH, which is responsible for both program implementation and administration. The Teen Pregnancy Prevention Program is a two-tiered program that includes: (1) $75 million for replicating evidence-based programs that have been proven effective through rigorous evaluation (Tier 1); and (2) $25 million for research and demonstration grants to develop and test additional models and innovative strategies (Tier 2).


In addition, Public Law 111-117, which set fiscal year (FY) 2010 appropriations levels, included the following language: “$4,455,000 shall be available from amounts available under section 241 of the Public Health Service Act to carry out evaluations (including longitudinal evaluations) of adolescent pregnancy prevention approaches.” The same language appropriated $4,455,000 in FY 2011. These funds have been used to fund several ongoing federal evaluation efforts, including this TPP Replication Study. In addition to these funds, the FY 2012 Appropriations Act provided $8.455 million in PHS evaluation funds, an increase of $4 million over the FY 2011 level, which is also supporting longitudinal evaluations of teen pregnancy prevention approaches.

As previously mentioned, the TPP Replication Study is focused on evaluating replications of evidence-based program models funded through the OAH TPP Program Replication (Tier 1) grants. Another evaluation, the Evaluation of Pregnancy Prevention Approaches (PPA), is focused on evaluating untested and innovative program models funded through the OAH TPP Program Research and Demonstration (Tier 2) grants as well as other funding streams.

To accomplish the objective of the appropriation, OAH seeks OMB approval of the TPP follow-up survey instrument.

Objectives of the TPP Replication Study

The goal of the TPP Replication Study is to determine the extent to which evidence-based program models that have been shown to be effective in an earlier trial, usually conducted by the program developer, demonstrate effects on adolescent sexual risk behavior and teenage pregnancy when they are replicated in similar and in different settings and for different populations. The evaluation will help OAH provide guidance to program managers and state and local policymakers about evidence-based program models and about the factors necessary to support successful replication.


For this evaluation, HHS has identified three evidence-based program models that represent different approaches to the prevention of teenage pregnancy and that are being widely replicated as part of the TPP Program and through other federal and state funding initiatives. The three program models are: Safer Sex, a clinic-based individualized intervention for sexually active female youth; Cuidate!, a culturally sensitive small-group intervention aimed at Latino youth; and Reducing the Risk, a classroom-based sexual health curriculum that can also be implemented as an after-school program and in non-school settings. For each model, the agencies have identified three grantee replications, for a total of nine replications, which vary in scope (the number of youth served, the scale of implementation, and the populations served). The setting for the programs varies across the three models but is consistent within each model. One program model is implemented only in clinics; the other two models are being replicated in school settings, although their developers allow for variation in setting. The study will use a sample of approximately 8,550 youth across the nine grantee replication sites, a sufficient size to detect policy-relevant impacts of the program replications. Sample size assumptions at each survey administration point are provided in the table below.


Program Model                         Baseline    Short-Term Follow-Up   Longer-Term Follow-Up
                                                  (86% retention)        (80% retention)
Cuidate! (3 replications)             950/site    817/site               760/site
Reducing the Risk (3 replications)    950/site    817/site               760/site
Safer Sex (3 replications)            950/site    817/site               760/site
Total                                 8,550       7,353                  6,840
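
For readers tracing the arithmetic in the table, the retention-adjusted counts follow directly from the 950-youth-per-site enrollment target. The short sketch below is illustrative only; the constants mirror the table and it is not part of the study's analysis code:

# Reproduces the retention-adjusted sample sizes shown in the table above.
BASELINE_PER_SITE = 950   # enrolled youth per replication site
NUM_SITES = 9             # 3 program models x 3 replications each
RETENTION = {"short-term follow-up": 0.86, "longer-term follow-up": 0.80}

print(f"baseline: {BASELINE_PER_SITE}/site, {BASELINE_PER_SITE * NUM_SITES:,} total")
for wave, rate in RETENTION.items():
    per_site = round(BASELINE_PER_SITE * rate)   # 817 and 760
    print(f"{wave}: {per_site}/site, {per_site * NUM_SITES:,} total")  # 7,353 and 6,840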


In each of the replications selected, youth will be assigned to receive the intervention or to be part of a control group that does not receive it. In clinics and other community-based settings, individual youth will be randomly assigned. In the three sites where the Reducing the Risk curriculum is being implemented in schools, the unit of random assignment will be classes within a school (for example, health or physical education classes). In all cases, the intervention will be delivered by grantee staff who are health educators, not by the regular classroom teacher; this avoids the contamination that can arise when the same teacher delivers both the intervention to the treatment group and the regular class to the control group.


The follow-up survey will be conducted at two time-points with youth in both treatment and control groups after youth in the treatment group have been exposed to the intervention. Depending on the program model, the first follow-up survey will be administered 6-12 months after the baseline survey; the final follow-up will be administered 18-24 months after baseline. To the extent feasible, the self-administered first follow-up survey will be completed in the school setting; otherwise the survey will be completed in a setting of convenience for the respondent via the web.6


Through the baseline and follow-up surveys HHS will address the following research questions:

  • What are the impacts on adolescent sexual risk behavior and teen pregnancy rates when an evidence-based program is replicated?

  • Do impacts vary for different youth populations (e.g., females vs. males, different age ranges, different ethnicities)?

  • Are impacts replicated across sites implementing a specific program model?



Major activities for the TPP Replication Study include the following:

  • Selecting replication sites from the Teen Pregnancy Prevention Initiative grantees funded to replicate evidence-based programs (Tier 1). All of these grantees are replicating “evidence-based” program models and are required to take steps to ensure fidelity to the model.

  • Recruiting grantees to participate in a rigorous experimental evaluation and working with them to design and support a strong study.

  • Collecting data on the research sample at baseline and at two subsequent time points (i.e., the short-term and longer-term follow-up survey administrations).

  • Conducting a comprehensive implementation study in each replication site.

  • Analyzing data and reporting the results.


The Follow-Up Survey


The proposed TPP Replication Study follow-up survey will be conducted with all study participants and contains: (a) many of the same questions as the TPP Replication Study baseline survey approved on June 8, 2012 under OMB clearance number 0990-0394; and (b) a limited number of additional questions that address outcomes specifically appropriate to the program models being evaluated. Additionally, because of OAH efforts to ensure comparability of outcome measures across federal studies, the proposed TPP Replication Study follow-up survey contains many of the same questions as the PPA first follow-up survey, which was approved by OMB on September 27, 2011 (0990-0382).


A2. Purpose and Use of the Information Collection

If this request is approved, the evaluation will collect follow-up data on sample members’ demographic characteristics, knowledge about and attitudes toward their sexual health, sexual and other risk behaviors, prior receipt of information related to reproductive health, and information on how they can be contacted later. These data will be obtained from a follow-up survey administered to sample youth at two points: between 6 and 12 months post-baseline (short-term follow-up) and between 18 and 24 months post-baseline (longer-term follow-up), depending on the program model, as shown in the table below.7


Program Model       Baseline Survey    Short-Term Follow-Up      Longer-Term Follow-Up
                    Administration     Survey Administration     Survey Administration
Cuidate!            Pre-intervention   6 months post-baseline    18 months post-baseline
                                       (beginning Feb. 2013)     (beginning Feb. 2014)
Reducing the Risk   Pre-intervention   12 months post-baseline   24 months post-baseline
                                       (beginning Sept. 2013)    (beginning Sept. 2014)
Safer Sex           Pre-intervention   9 months post-baseline    18 months post-baseline
                                       (beginning April 2013)    (beginning Jan. 2014)



The data will serve several purposes. Identifying and updating contact information will help the study teams track sample youth throughout the evaluation, and locate them for follow-up if they have graduated, moved to another school, or dropped out. Follow-up data are important primarily for assessing the program’s impact on expected outcomes, including the primary outcomes of sexual behavior as well as mediating outcomes on knowledge, motivations, and intentions. It is important to collect follow-up data at two time points in order to assess program impacts in the short-term (6 to 12 months post-baseline) as well as in the longer-term (18-24 months post-baseline). One of the key program and policy questions this study will address is whether, if the program models impact sexual behavior, these changes in behaviors are sustained over time.

Follow-up data will measure teens’ demographic and socioeconomic characteristics; dating experience; knowledge, attitudes, and expectations, including about sexual activity and contraception; stressors and supports; and school and community characteristics; the survey will also collect updated contact information. There are three versions of the follow-up survey: one for the Safer Sex program model, one for sexually experienced participants in the Cuidate! and Reducing the Risk replication sites, and one for sexually inexperienced participants in the Cuidate! and Reducing the Risk replication sites. The three versions are nearly identical, with slight differences in items to tailor the instrument to the program model and the target intervention audience. For example, the Safer Sex program model serves only sexually active females, so it is not necessary to include items specific to males or to youth who are not sexually active. Sexually experienced youth will respond to questions about their sexual behavior, whereas youth who have never had sex will answer questions unrelated to sex, so that the survey length is equivalent for the different groups; this is important for settings in which the surveys are administered to a group. Attachment A is a table that provides:

  • A crosswalk between the versions of the TPP Replication Study follow-up survey and the TPP Replication Study OMB baseline survey indicating which items appear on which survey(s); the question source; and how the data will be used.

Attachment B lists the topics covered in the follow-up instrument and our justification for their inclusion. A list of national surveys reviewed in developing the follow-up survey instrument is provided in Attachment C together with detailed references for sensitive questions. The follow-up survey instrument is broken into the following three versions:

  • Attachment D: Follow-up survey to be used for Safer Sex sites;

  • Attachment E: Follow-up survey to be used for sexually-experienced youth in Cuidate! and Reducing the Risk sites;

  • Attachment F: Follow-up survey to be used for sexually-inexperienced youth in Cuidate! and Reducing the Risk sites.

A3. Use of Improved Information Technology and Burden Reduction

The data collection plan reflects sensitivity to issues of efficiency, accuracy, and respondent burden. Where feasible, information will be gathered from existing data sources; the information being requested through surveys is limited to that for which the youth are the best or only information sources. For all surveys, both baseline and follow-up, state-of-the-art technology will be used to reduce burden, improve comprehension and accuracy of responses, and ensure data security. To the extent possible, all survey data will be collected via web-based Audio Computer-Assisted Self-Interview (ACASI), which captures and stores data in real time: each response is sent immediately, as it is entered, to a central and secure database, and no information is stored on local computers. This web-based ACASI technology has been successfully used in several large clinical trials, including studies that deal with drug use or exposure to HIV/AIDS. Research has demonstrated that surveys administered online are characterized by higher levels of self-disclosure, an increased willingness to answer sensitive questions, and a reduction in socially desirable responses.

All sample members will be encouraged to complete the web-based survey, which will contain an embedded audio option. This strategy is well suited to young survey respondents and reinforces the idea that no one else will see or hear the survey questions. Once approved, the survey instrument will be translated into Spanish, so that respondents can choose the language in which they take it. Attachment G provides additional information on the use and administration of web-based ACASI surveys and research references.

A4. Efforts to Identify Duplication and Use of Similar Information

The information collection requirements for the evaluation have been carefully reviewed to determine what information is already available from existing studies and what will need to be collected for the first time. Although the information from existing studies adds to our understanding of teenage sexual risk behavior, HHS believes that the extant research literature lacks the robust evidence on the effectiveness of evidence-based programs (i.e., evidence from independent evaluation of a program or from more than one study) needed by policymakers and stakeholders interested in reducing this behavior. The data collection for the evaluation is an essential step in providing this information.

HHS has created a Federal Teen Pregnancy Prevention Coordination Workgroup to develop and manage a coordinated strategy of HHS teen pregnancy prevention activities and evaluation efforts. The workgroup involves research and program staff from ACF, ASPE, CDC, and OAH. The workgroup has enabled the Department to collaborate on the new evaluation efforts and maximize the questions we can answer across the initiative, including the development of common core measures to be used across evaluation studies. We have collaborated to design research and evaluation efforts that will enable the Department to answer a range of research and policy questions that are complementary to, rather than duplicative of, one another. Specifically, we are interested in (1) adding to the evidence base by evaluating new and untested program models and innovative strategies; and (2) understanding how to effectively replicate and implement evidence-based program models and how to achieve impacts that were found in the original evaluations. The TPP Replication study addresses the latter research question. The federal evaluation strategy includes a combination of federal-led and grantee-led evaluation efforts described briefly below.

Federal-Led Evaluations: There are four federally managed evaluation studies that address unique questions about the implementation and effectiveness of a subset of HHS grantees.


  • Evaluation of Pregnancy Prevention Approaches (PPA): An experimental evaluation study focused on assessing the implementation and impacts of innovative strategies and untested approaches for preventing teenage pregnancy in seven sites. Three of the sites are from the TPP research and demonstration grantees, three sites are PREP Innovative Strategies grantees, and one is a non-federally funded site. Implementation reports are expected between November 2012 and October 2013 and internal short-term impact memos are expected between January 2014 and July 2015 across the sites. The contractor is Mathematica Policy Research.


  • Teen Pregnancy Prevention (TPP) Replication Study Evaluation: An experimental evaluation study that will examine the implementation and impacts of three TPP replications of three different evidence-based program models, for a total of 9 sites. The study will examine whether program models that were commonly chosen by replication grantees and widely used in the field can achieve impacts with different populations and settings. Implementation and short-term impact findings are anticipated in 2015. The contractor is Abt Associates.


  • CDC Community-Wide Evaluation: A quasi-experimental evaluation study to examine the effects of integrating services, programs, and strategies. Initial impact findings are expected in 2016. The contractor is ICF Macro.


  • State PREP Multi-Component Evaluation: This study will document program design and implementation within states and includes an experimental evaluation to assess the effectiveness of 4 or 5 selected programs. Preliminary descriptive findings are expected in 2013 and impact findings are expected in 2016. The contractor is Mathematica Policy Research.


In addition, there are 40 grantee-led rigorous evaluations of both TPP and PREP Innovative Strategies replication and research and demonstration grants, supported by a federally sponsored evaluation technical assistance contractor (Mathematica Policy Research). The contractor has reviewed each of the local evaluation designs to ensure they are rigorous and feasible and continues to provide ongoing evaluation technical assistance to grantees.



A5. Impact on Small Businesses or Other Small Entities

Programs in many of the sites are operated by community-based organizations. The data collection plan is designed to minimize burden on such sites by providing staff from the evaluation contractor team to assist in group data collection. For respondents who do not complete the survey in a group setting, Abt Associates (through its subcontractor for data collection, DIR) will provide passwords for web completion.

A6. Consequences of Collecting Information Less Frequently

Follow-up data are essential to conducting a rigorous evaluation of pregnancy prevention programs, as the appropriations language requires. The follow-up data are necessary for determining whether the interventions had short-term or longer-term impacts on program participants relative to youth in comparison groups. Furthermore, without additional study, funding decisions about teen pregnancy prevention programs will continue to be based on insufficient and outdated information on program effectiveness.

A7. Special Circumstances Relating to the Guidelines of 5 CFR 1320.5

There are no special circumstances for the proposed data collection.

A8. Comments in Response to the Federal Register Notice and Efforts to Consult Outside the Agency

The 60-day notice was published in the Federal Register on March 15, 2012. The text is found in Attachment H. At this time there are no comments or responses to questions.

As explained in earlier sections, the TPP Replication Study follow-up survey is very similar to the TPP Replication Study baseline survey that was approved by OMB. The development of the items in the survey was primarily done by Mathematica under the PPA contract, with the intention on the part of HHS that the surveys developed under that contract would form the basis for all subsequent federal evaluations of the TPP Initiative. In Section B5 we provide the names and contact information of persons consulted in the drafting and refinement of the follow-up survey instrument, and a list of members of the Technical Work Group for the PPA evaluation who provided comments on a near-final draft of the baseline instrument, which served as the basis for the follow-up instrument.

A9. Explanation of Any Incentive or Gift to Respondents

The population targeted for the evaluation presents a challenge for the study, one heightened by the goal of measuring long-term program impacts beyond the immediate end-of-program measures typical in this research field. By design, the programs in this evaluation target youth who are at the highest risk for sexual risk behavior: inner-city youth in cities like St. Louis; low-income Latino youth, many of whose families are recent immigrants to the US; and young females ages 14-19 who are already sexually active and engaging in unprotected sex. These populations are more likely to drop out of school than their more advantaged counterparts, and they are often extremely mobile and hard to reach. To ensure that we achieve the required 80% response rate at the end of two years, it is important to attach sample members firmly to the study at the outset and to maintain that attachment over time. These steps are also essential to prevent differential attrition, which leads to response bias, since members of the control group are not receiving program services and are not in contact with program staff.

To this end, we propose to provide modest incentives to each participant at each survey point. These incentives will be uniform across program models and replication sites, as well as across the three survey time points (baseline, short-term follow-up, and long-term follow-up). All study participants who completed the baseline survey received a $25 gift card; in this clearance request for the follow-up surveys, we propose that the same incentive be offered for completion of those surveys. The gift card is intended to encourage completion of the survey and, even more importantly, to reinforce the importance of subsequent surveys. In addition to regular efforts to track youth between survey points, the gift cards are intended to increase attachment to the study so as to keep attrition to a minimum and ensure that any attrition is not differential in favor of the control group.

We should point out that, although three of the sites are implementing Reducing the Risk as a classroom-based intervention, it may not be possible in these sites to administer the follow-up surveys in the classroom. Our plan is to work with schools to determine the ideal schedule and setting for survey administration. This could be in small groups of study participants, at times when students have free time in study hall or during after-school hours. This strategy is highly dependent on cooperation from students in keeping scheduled appointments. We hope that the incentive will increase the level of cooperation and retention at both follow-up points.

To develop this strategy we reviewed the research literature on the problem of attrition in both panel and longitudinal surveys and the effectiveness of incentives to address the issue (Exhibit A9.1 below). We know of no experimental studies that compare the effects of different forms of incentives. Therefore, in selecting gift cards, we were guided by our IRB and the OAH grantees, all youth-serving organizations, who were unanimous in believing that gift cards would be the most effective form of incentive for their population. We are working with each grantee to identify the most appropriate gift card for youth in their area (Visa or Target for example).

The research studies in Exhibit A9.1 demonstrate that larger incentives ($40) have greater effects than smaller ones ($30, $20), but that incentives generally improve completion and retention rates. Some but not all of these studies focused on adolescents as opposed to adult respondents. Aside from this group of studies, most studies have chosen a single incentive level, so we cannot with certainty attribute the completion and retention rates achieved to the incentive. However, ACF’s evaluation of Building Strong Families, conducted by Mathematica, found a $25 incentive for low-income youth (and a $25 incentive for parents) effective in attaching the dyads to the study over time and achieving the required completion rates. In Abt’s multiple studies for the Corporation for National Service, a $25 incentive has been the standard incentive used over the last two decades to achieve the desired completion rates with youth populations at all socio-economic levels, although the incentive has sometimes been raised to reach the hardest-to-reach youth. All these incentives were approved by OMB.

In settling on $25 incentives, we attempted to balance the demonstrated effectiveness of greater incentives with the reasonableness of the total cost to the study. All of the youth populations targeted by the program interventions are high-risk and often highly mobile. We have, therefore, as noted in our revised submission, chosen to make the incentives uniform across replication sites and across time, since the challenges of retention are likely to be similar in all sites.



Exhibit A9.1: REFERENCES ON THE EFFECT OF INCENTIVES IN LONGITUDINAL/MULTI-MODE SURVEYS


Impact of incentives on initial and subsequent response rates of adult survey takers

Goldenberg, Karen L., David McGrath, and Lucilla Tan. 2009. “The Effects of Incentives on the Consumer Expenditure Interview Survey.” Proceedings of the Survey Research Methods Section, American Statistical Association (ASA). Accessed via http://www.amstat.org/sections/srms/proceedings/allyearsf.html


An incentives experiment was conducted in the Consumer Expenditure (CE) Quarterly Interview Survey to determine whether offering prepaid incentives of $20 or $40 prior to the first interview would improve response rates in the current wave and the subsequent four waves. Offering $40 significantly increased response rates by 4.5% compared with offering no incentive, and the effect, while smaller, persisted across all five interviews. The $20 incentive increased response rates by 2.2% in the first wave compared with no incentive, although this difference was not statistically significant.


Impact of incentives on attrition from a multi-modal panel study of teenagers

Jäckle, Annette and Peter Lynn. 2007. Respondent Incentives in a Multi-Mode Panel Survey: Cumulative Effects on Nonresponse and Bias. Institute for Social & Economic Research (ISER) Working Paper. Accessed at https://www.iser.essex.ac.uk/publications/working-papers/iser/2007-01.pdf


This working paper considered the cumulative effects of conditional and unconditional incentives in a multi-mode (mail and telephone) panel study of teenagers in the UK. Unconditional incentives significantly reduced attrition in a multi-mode panel study, with no impact on attrition bias, regardless of mode or type of incentive. The results suggest that incentives are also effective in maintaining sample sizes in a panel study.


Impact of incentives on response rates, sample composition and attrition bias.

Laurie, Heather, and Peter Lynn. 2009. “The Use of Respondent Incentives on Longitudinal Surveys.” Chapter 12 in Peter Lynn (ed.), Methodology of Longitudinal Surveys. Hoboken, NJ: John Wiley and Sons.


Chapter 12 provides a comprehensive review of the literature on incentives in longitudinal surveys, including the effect of incentives on response rates, sample composition and bias, and data quality.


Impact of incentives on response rates and attrition rates for adult survey-takers

Mack, Stephen, Vicki Huggins, Donald Keathley, and Mahdi Sundukchi. 1998. “Do Monetary Incentives Improve Response Rates in the Survey of Income and Program Participation?” JSM Proceedings, Survey Research Methods Section. Alexandria, VA: American Statistical Association, 529-34.


This paper describes incentive experiments undertaken by the U.S. Census Bureau in the Survey of Income and Program Participation (SIPP), a high-burden, face-to-face panel-design interview survey, to deal with rising nonresponse to government surveys in the 1990s. The SIPP research demonstrated that incentive effects for large, interview-administered government surveys were similar to those for non-government surveys, and that these effects continued to hold through the 6th interview wave two years after an incentive was provided.


Impact of incentives on attrition rates in an adult panel study

Martin, Elizabeth, Denise Abreu, and Franklin Winters. 2001. “Money and Motive: Effects of Incentives on Panel Attrition in the Survey of Income and Program Participation.” Journal of Official Statistics 17 (2): 267-284.


This paper describes an experiment that compared the effects of offering a prepaid incentive of $20, $40, or no incentive on panel attrition in a household survey. Both $20 and $40 significantly improved conversion rates of prior non-interviews compared to offering no incentive, particularly for households with higher poverty rates.


Impact of initial incentives on initial and subsequent response rates in a longitudinal study

Rodgers, Willard. 2011. “Effects of Increasing the Incentive Size in a Longitudinal Study.” Journal of Official Statistics 27 (2): 279-299.


In this study, participants in one wave of a longitudinal study were offered $20, $30, or $50. Offering the highest incentive of $50 showed the greatest improvement in response rates and also had a positive impact on response rates for the next four waves.


Impact of promised incentives on refusal conversion in a panel study

Zagorsky, Jay L. and Patricia Rhoton. 2008. “The Effect of Promised Monetary Incentives on Attrition in a Long-Term Panel Survey.” Public Opinion Quarterly 72 (3): 502-513.


In a face-to-face longitudinal study of women, promised incentives of up to $40 had a positive effect on response rates among panel members who had participated in earlier waves of the survey but had initially refused to participate in the current wave.



A10. Assurance of Confidentiality Provided to Respondents

HHS has embedded protections for privacy in the study design. Data collection will occur only if informed consent is provided: by a parent or legal guardian if the respondent is a minor, or by respondents themselves if they are 18 or older. For the Safer Sex replication sites, the contractor obtained a waiver of parental permission. Federal regulations permit the IRB to approve research without parent permission “if the IRB determines that a research protocol is designed for conditions or for a subject population for which permission is not a reasonable requirement to protect the subjects, provided an appropriate mechanism for protecting the children who will participate as subjects in the research is substituted and provided further that the waiver is not inconsistent with federal, state or local law.” In sites such as the clinics that will implement Safer Sex, where adolescents can consent to treatment and procedures, such as contraceptive services and pregnancy and disease testing, without parental knowledge, we have the waiver in place to protect the privacy of the adolescent.

The approved parent permission form in the baseline submission included permission for the follow-up data collections in addition to the baseline. Youth themselves will be required to provide written assent at each follow-up survey administration. The assent form in Attachment I explains the data being collected and their use. The form indicates that answers will be kept private to the extent permissible by law, that youths’ participation is voluntary, and that they may refuse to participate at any time.

A11. Justification for Sensitive Questions

Many of the measures in the follow-up survey ask for information of a sensitive nature because the programs we will be evaluating are designed specifically to reduce sexual activity and associated risk behaviors among teens. Comprehensive measures of behavior are included because they will provide more accurate representations of teen sexual behavior, and the responses will significantly supplement the knowledge currently available on program effectiveness. Attachment B provides the justification for these and other questions and Attachment C provides detailed references.

Sensitive questions are drawn from previously-successful youth surveys and evaluations (see Attachment C). The items have been carefully selected, and we have been guided by past experience in determining whether or not the benefits of measures may outweigh concerns about the heightened sensitivity among sample members, parents, and program staff to specific issues. Although these questions are sensitive, they are commonly and successfully asked of youth similar to those who will be in the study, and all of these specific survey questions have been pretested among a diverse group of teens without any concerns raised about the questions’ sensitivity. Most of the sensitive items related to sexual activity will be asked only of sample members who report being or having been sexually active.

A12. Estimates of Annualized Burden Hours and Costs

Exhibit A12.1 summarizes the reporting burden on study participants. Enrollment for the TPP evaluation will take place over two years, so the short-term follow-up data collection will occur over 1.5 years and the longer-term follow-up data collection will occur over a further 1.5 years, for a total of three years of follow-up data collection. With the sample surveyed at two time points, a total of 17,100 completed follow-up questionnaires is expected; the annualized burden is based on one-third of that total (5,700 questionnaires per year). Questionnaire response times were estimated from baseline pretests with student respondents and from prior experience. The annual burden for questionnaire response is estimated from the total number of completed questionnaires proposed and the time required to complete the questionnaires. The total annual burden is expected to be 2,850 hours.
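
The annualized figures above follow from simple arithmetic on the totals stated in this section; the sketch below is illustrative only and simply reproduces that calculation:

# Reproduces the annualized burden arithmetic described above.
SAMPLE_SIZE = 8550        # youth enrolled across the nine replication sites
FOLLOW_UPS = 2            # short-term and longer-term surveys per youth
COLLECTION_YEARS = 3      # total span of follow-up data collection
HOURS_PER_SURVEY = 0.5    # estimated response time per questionnaire

total_questionnaires = SAMPLE_SIZE * FOLLOW_UPS                   # 17,100
annual_questionnaires = total_questionnaires // COLLECTION_YEARS  # 5,700
annual_burden_hours = annual_questionnaires * HOURS_PER_SURVEY    # 2,850.0
print(annual_questionnaires, annual_burden_hours)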

Exhibit A12.1. Reporting Burden on Study Participants


Form Name                                      Type of Respondent       Annual Number    Responses per   Average Burden       Total Annual
                                                                        of Respondents   Respondent      Hours per Response   Burden Hours

Impact Evaluation of the Teen Pregnancy Prevention Program Grantees (TPP Evaluation)

Attachment D: Safer Sex Intervention           Sexually active youth    1,900            2               0.5                  1,900
Attachment E: Reducing the Risk and Cuidate!,  Sexually active youth    1,900            2               0.5                  1,900
  sexually active youth
Attachment F: Reducing the Risk and Cuidate!,  Sexually inexperienced   1,900            2               0.5                  1,900
  sexually inexperienced youth                 youth
Total                                                                   5,700                                                 5,700



A13. Estimates of Other Total Annual Cost Burden to Respondents and Record Keepers

The estimated one-year annualized cost to respondents is shown in the table below. The majority of youth participating in the programs are school age (10-18). We estimate that approximately 1,868 youth may be 18 or older and could be earning the federal minimum wage of $7.25 per hour during the survey time.

Respondents                    Form Name                                      Youth 18+      Responses per   Average Burden       Total Burden Hours   Average       Total Annual
                                                                              Years of Age   Respondent      Hours per Response   per Respondent       Hourly Wage   Response Cost

Impact Evaluation of the Teen Pregnancy Prevention Program Grantees (TPP Evaluation)

Sexually active youth          Attachment D: Safer Sex Intervention           1,710          2               0.5                  1                    $7.25         $12,397.50
Sexually active youth          Attachment E: Reducing the Risk and Cuidate!   63             2               0.5                  1                    $7.25         $456.75
Sexually inexperienced youth   Attachment F: Reducing the Risk and Cuidate!   95             2               0.5                  1                    $7.25         $688.75
Total                                                                         1,868                                                                                  $13,543.00

Notes: Assumes 90% of youth in SSI (Safer Sex Intervention) will be 18 or older at follow-up; La Alianza is the only Cuidate! site where youth will be 18 or older at follow-up (25% of youth).
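
The cost figures in the table follow from the burden assumptions (two surveys of 0.5 hours each, i.e., one burden hour per respondent) and the $7.25 hourly wage; the sketch below is illustrative only:

# Reproduces the respondent-cost arithmetic from the table above.
WAGE = 7.25                     # federal minimum wage, dollars per hour
HOURS_PER_RESPONDENT = 2 * 0.5  # two surveys x 0.5 hours each

youth_18_plus = {               # respondents assumed to be 18 or older
    "Attachment D (Safer Sex Intervention)": 1710,
    "Attachment E (RtR/Cuidate!, sexually active)": 63,
    "Attachment F (RtR/Cuidate!, sexually inexperienced)": 95,
}

total_cost = 0.0
for form, n in youth_18_plus.items():
    cost = n * HOURS_PER_RESPONDENT * WAGE
    total_cost += cost
    print(f"{form}: ${cost:,.2f}")
print(f"Total: ${total_cost:,.2f}")  # $13,543.00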



A14. Annualized Cost to the Federal Government

This clearance request is specifically for collecting data at two follow-up points: short-term follow-up occurring 6 to 12 months post-baseline and longer-term follow-up occurring 18-24 months post-baseline. The total estimated cost to the government for the TPP Replication Study follow-up data collection is $3,374,051. Because follow-up data collection will be carried out over three years, the estimated annualized cost to the government for the follow-up data collection is $1,124,684.

A15. Explanation for Program Changes or Adjustments

No program adjustments are anticipated based on this data collection.

OMB gave approval on August 31, 2009 under a generic clearance (0970-0355) to conduct pre-tests of the baseline instrument. The PPA contractor, Mathematica Policy Research, Inc., conducted the pre-test and took the results into account, as well as advice from experts in the field, in redrafting the instrument. On August 17, 2011, the PPA baseline survey instrument was approved (under 0970-0360, currently 0990-0382), and on June 8, 2012, the TPP Replication Study baseline survey instrument was approved (under 0990-0394).

HHS now seeks OMB approval for the follow-up survey for the TPP Replication Study, which is very similar to the TPP Replication study baseline survey approved by OMB (new items for the TPP Replication Study follow-up survey are noted in the table in Attachment A). The data will be used for the impact analysis. The Implementation Study for the TPP Replication Study was approved on July 3, 2012 (under 0990-0397).

A16. Plans for Tabulation and Publication and Project Time Schedule

1. Analysis Plan

Before estimating impacts, HHS will conduct two analyses of the data from the baseline survey. First, HHS will use the data to describe the study sample and help define subgroups of policy interest. This step will enable HHS to compare the characteristics of youth in the study with youth nationwide and provide guidance on how the study sample and findings might generalize to a broader policy setting. Second, HHS will assess whether random assignment resulted in similar baseline characteristics of youth, on average, for the treatment and control groups.

To estimate program impacts, HHS will compare the outcomes of treatment and control group members in each site at two time-points, after the completion of the short-term and long-term follow-up data collection in each site. The analytic strategy used will be the same at both time-points.

Random assignment ensures that, at the point of randomization, there are no systematic differences between the treatment and control groups, in expectation, on either measured or unmeasured characteristics. This ensures that any differences in their outcomes can be attributed with some confidence to the impacts of the intervention (and not to other factors, such as selection bias).

While the simple comparison of treatment and control mean outcomes provides an unbiased estimate of the true impact, HHS will estimate regression models that control for variation across the sample in baseline measures. Control variables increase the statistical precision of the impact estimates for a given sample size, reduce the sample size required to achieve a given minimum detectable effect size, and reduce attrition bias from missing data; that is, for a given sample size, regression-adjusted estimates will have smaller standard errors.

For replications in which individual sample members are randomized to treatment or control, HHS will estimate an equation like equation (1) below. In equation (1), β1 is the overall treatment effect, known as the Intent-to-Treat effect of the program:

(1) Yi = β0 + β1Ti + β2Di + Σm λmXmi + εi ,

Where:

Yi is the outcome of interest (e.g., consistent condom use) for student i;

Ti is a dummy variable equal to 1 if student i was assigned to the treatment group;

Di is a clinic dummy (which accounts for blocking by clinic);

Xmi is the mth baseline characteristic or control variable for student i (e.g., = 1 for males); and

εi is the residual error for student i.

The coefficient on the treatment dummy, β1, is the primary coefficient of interest. For an unfavorable outcome (e.g. teen pregnancy), a negative and statistically significant coefficient would be interpreted to mean that the program was effective in reducing the rate of that outcome. Impact estimates will be reported as standardized effect sizes.
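
For concreteness, the sketch below shows one way a model of this form could be estimated in standard statistical software (here, Python's statsmodels). It is illustrative only: the input file and variable names (treat, clinic, male, age_bl) are hypothetical placeholders, not the study's actual specification.

# Illustrative estimation of equation (1) by OLS; all column names and the
# input file are hypothetical placeholders, not the study's actual data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("followup_analysis_file.csv")  # hypothetical analysis file

# Outcome regressed on the treatment dummy, clinic (blocking) dummies, and
# baseline covariates; the coefficient on `treat` is the ITT estimate (beta_1).
model = smf.ols("consistent_condom_use ~ treat + C(clinic) + male + age_bl",
                data=df).fit(cov_type="HC2")  # heteroskedasticity-robust SEs
print(model.params["treat"], model.bse["treat"])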

For replications in which classrooms are randomized to treatment or control, we propose to estimate a regression model that accounts for the clustering of students within classrooms. The clustering of students within classrooms increases the variance of the impact estimates. Two methods are often used to correct standard errors for clustering: cluster-robust standard errors and multilevel modeling. Because we believe that readers of the teen pregnancy prevention literature will be more familiar with multilevel modeling, HHS will take that approach. Hierarchical Linear Modeling (HLM) has the added advantage that it enables the researcher to estimate what portion of the variance is attributable to each level of the model, which would be useful information in the design of future teen-pregnancy prevention evaluations.

Equations (2a) and (2b) provide a stylized version of the model we will use to estimate program impacts when classrooms are randomized:

(2a) Level 1: Yij = β0j + Σk βkXkij + εij

(2b) Level 2: β0j = γ0 + γ1Tj + µj ,

Where at level 1 (the individual level):

Yij is the outcome of interest (e.g. sex in prior 90 days) for student i in classroom j.

Xkij is the kth baseline characteristic or control variable for student i in classroom j (e.g., = 1 for males).

β0j is the mean value of the outcome measure in classroom j

εij is the residual error for student i from classroom j, which is assumed to be independently and identically distributed.

At level 2 (the classroom level):

Tj is a dummy variable equal to 1 if class j was assigned to the treatment group

γ1 is the coefficient of interest, which represents the estimated impact of treatment

µj is the residual error for classroom j, which is assumed to be independently and identically distributed.

The coefficient on the treatment dummy, γ1, is the primary coefficient of interest. As with the previous model, for an undesirable outcome (e.g. teen pregnancy), a negative and statistically significant coefficient would be interpreted to mean that the program was effective at reducing the prevalence of that outcome. As before, impact estimates will be reported as standardized effect sizes.
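
Again for concreteness, the sketch below shows one way a random-intercept model of this form could be fit. It is illustrative only, with hypothetical variable names, and is not the study's actual estimation code.

# Illustrative two-level (random-intercept) model for classroom-randomized
# sites; `treat` varies at the classroom level, so its coefficient plays the
# role of gamma_1 in equation (2b). All names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("followup_analysis_file.csv")  # hypothetical analysis file

m = smf.mixedlm("sex_last_90_days ~ treat + male + age_bl",
                data=df, groups=df["classroom"]).fit()
print(m.params["treat"], m.bse["treat"])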

After estimating these regression models for each replication, HHS will then compute pooled impact estimates across the replications for each of the three program models. In creating pooled impact estimates, HHS will weight the replications based on the number of individuals in the treatment group in that replication. This will produce estimates of the impact of each program for the average person who received the intervention as part of the three replications conducted under this evaluation.
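
As an illustration of this pooling rule, the sketch below combines site-level impact estimates for one program model, weighting each by the number of treatment-group youth in that replication; the effect sizes and counts shown are hypothetical placeholders.

# Pools site-level impact estimates for one program model, weighting each
# replication by its number of treatment-group youth. The effect sizes and
# counts below are hypothetical placeholders.
site_estimates = {
    "replication_1": (-0.10, 450),  # (standardized effect size, n treated)
    "replication_2": (-0.05, 500),
    "replication_3": (-0.12, 480),
}

total_n = sum(n for _, n in site_estimates.values())
pooled = sum(effect * n for effect, n in site_estimates.values()) / total_n
print(f"pooled impact estimate: {pooled:.3f}")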

2. Time Schedule and Publications

The TPP Replication Study evaluation will be conducted over a six-year period that began in fall 2010 with a feasibility and design study. The contractor for the feasibility and design study (Abt Associates) assisted HHS with the identification of program models and replications and, beginning in spring 2011, recruited the sites selected by HHS. The baseline data collection is occurring over a two-year period that began in summer 2012 and will end in spring 2014. Follow-up data collections are projected to occur between March 2013 and September 2015. The implementation study will be conducted between spring 2013 and fall 2014. Publication of the short-term program impacts based on the short-term follow-up data is expected in spring 2015, and publication of the longer-term program impacts based on the longer-term follow-up data is expected in fall 2016.

A17. Reason(s) Display of OMB Expiration Date Is Inappropriate

All instruments will display the OMB number and the expiration date.

A18. Exceptions to Certification for Paperwork Reduction Act Submissions

No exceptions are necessary for this information collection.

1 Abma, J. C., G. M. Martinez, W. D. Mosher, and B. S. Dawson. “Teenagers in the United States: sexual activity, contraceptive use, and childbearing”, Vital and Health Statistics, vol. 23, no. 24, 2004, pp. 1–48.

2 Albert, B., S. Brown, and C. Flannigan, eds. 14 and Younger: The Sexual Behavior of Young Adolescents. Washington, DC: National Campaign to Prevent Teen Pregnancy, 2003.

3 Hamilton, B.E., Martin, J.A., and Ventura, S.J. (December 2010). Births: Preliminary data for 2009. National vital statistics reports web release, Vol. 59, no. 3. Hyattsville, MD: National Center for Health Statistics.

4 Hamilton BE, Martin, JA, and Ventura SJ. Births: Preliminary Data for 2011 (October 2012). National vital statistics reports web release; Vol. 61(5). Hyattsville, MD: National Center for Health Statistics. 2012.

5 Hamilton BE, Martin, JA, and Ventura SJ. Births: Preliminary Data for 2011 (October 2012). National vital statistics reports web release; Vol. 61(5). Hyattsville, MD: National Center for Health Statistics. 2012.

6 Paper surveys will only be used if it is not possible to complete the survey via the web.

7 The longest follow-up period is proposed for Reducing the Risk. The short-term follow-up period reflects a difference in the duration of the interventions and our desire to allow a period of three months or more to elapse, post-intervention, before any assessment of program impact.

