Teen Pregnancy Prevention Replication Evaluation: Replication Study

OMB: 0990-0397


Supporting Justification for OMB Clearance of Teen Pregnancy Prevention Replication Evaluation



Part A: Justification for the Collection of Implementation Data



September 2011



(Updated June 2012)

The Office of the Assistant Secretary for Planning and Evaluation (ASPE) in collaboration with the Office of Adolescent Health (OAH), Office of the Assistant Secretary for Health (OASH) in the U.S. Department of Health and Human Services (HHS) is overseeing the TPP Replication Study evaluation. The TPP Replication study is specifically designed to address the question “Do evidence-based program models, replicated and funded as part of the OAH Teen Pregnancy Prevention Program, demonstrate impacts on sexual risk behaviors that are comparable to the originally-reported impacts and are they effective in preventing teen pregnancy and reducing sexually transmitted infections?” This evaluation focuses on the replication of a small number of program models across multiple sites with the goals of determining the extent to which program impacts are replicated as well as addressing questions about the extent to which aspects of program implementation are associated with program impacts.  In Fall 2011, ASPE awarded a contract to Abt Associates Inc. to conduct the evaluation.


This submission is one of three clearance requests that have been or will be made for the Teen Pregnancy Prevention (TPP) Replication Study. The first submission, a request for clearance of the baseline survey, received OMB approval on June 7, 2012. In this clearance request, OAH is seeking OMB approval for collection of implementation data. A final request for clearance will be made later in the summer for the follow-up surveys planned for the study.


The Office of Adolescent Health (OAH), Office of the Assistant Secretary for Health (OASH), U.S. Department of Health and Human Services is overseeing and coordinating adolescent pregnancy prevention evaluation efforts as part of the Teen Pregnancy Prevention Initiative. In order to ensure that teen pregnancy prevention efforts across the Department are aligned, OAH is coordinating the submission of OMB packages related to these federal evaluations. In support of these coordinated evaluation efforts, OAH has collaborated with other agencies that implement and evaluate teen pregnancy prevention and related issues in order to answer a range of research questions that are complementary to, rather than duplicative of, one another. These agencies include the Administration for Children and Families (ACF), the Office of the Assistant Secretary for Planning and Evaluation (ASPE), and the Centers for Disease Control and Prevention (CDC). HHS has created a Federal Teen Pregnancy Prevention Coordination Workgroup to develop and manage a coordinated strategy of HHS teen pregnancy prevention activities and evaluation efforts. The workgroup involves research and program staff from ACF, ASPE, CDC, and OAH. The workgroup has facilitated Department collaboration on the new evaluation efforts, which has resulted in the development of common core measures to be used across evaluation studies and, subsequently, an increase in the range of questions that can be answered across the initiative.


The implementation data collection instruments for which clearance is requested here are very similar to those approved by OMB for the PPA Evaluation on April 7, 2011 under OMB clearance number 0990-0375. The PPA implementation data collection instruments submitted with this request have been revised slightly to expand the master topics and implementation data collection questions, though the vast majority are the same as those already approved. Further description of the instruments for which approval is requested may be found at the end of A1. The instruments themselves can be found in Attachments A-D.



A1. Circumstances Making the Collection of Information Necessary

For decades, policymakers and the general public have remained concerned about the prevalence of sexual activity among adolescents. Although adolescents today are waiting somewhat longer before having sex than they did in the 1990s, 60 percent of teenage girls and more than 50 percent of teenage boys report having had sexual intercourse by their 18th birthday.1 Approximately one in five adolescents has had sexual intercourse before turning 15.2 Rates of teenage pregnancy declined by 34 percent between 1991 and 2005 for teens aged 15-19, before rising 5 percent between 2005 and 2007.3 The rate of teen births again dropped between 2007 and 2009, falling 8 percent for teens aged 15-19.4 Preliminary data for 2009 indicate an overall teen birth rate for teens aged 15-19 of 39.1 per 1,000.5

HHS is interested in identifying and evaluating promising approaches to reduce teen pregnancy, associated risk behaviors, and their consequences. A key policy question is whether programs that have demonstrated evidence of effectiveness can be replicated in new settings with positive impacts. Of the 31 programs on the HHS list of evidence-based programs, only one program model has been replicated and shown to have positive effects through a rigorous evaluation. The implementation data collection described in this ICR, combined with previous baseline data collections and subsequent short- and long-term follow-ups, will provide important information to guide policy decisions aimed at replicating evidence-based programs. The implementation data collection will aid the interpretation of impact findings and also provide much needed information on the practical experiences and challenges of replicating these programs in new settings.

Legal or Administrative Requirements that Necessitate the Collection

On December 19, 2009, the President signed the Consolidated Appropriations Act of 2010 (Public Law 111-117). Division D, Title II of the Act created the Teen Pregnancy Prevention Program, which is consistent with the Administration’s interest in establishing an evidence-based program to prevent teen pregnancy. The Act provides $110 million to fund this program within OAH, which is responsible for both program implementation and administration.


In addition, Public Law 110-161, which set fiscal year (FY) 2008 appropriations levels, included the following language: “$4,455,000 shall be available from amounts available under section 241 of the Public Health Service Act to carry out evaluations (including longitudinal evaluations) of adolescent pregnancy prevention approaches.” The same language appropriated $4,455,000 in each of FYs 2009 and 2010. These funds have been used to fund both the PPA evaluation and the TPP Replication evaluation, to assess grants funded under the Teen Pregnancy Prevention Program. In addition to these funds, the FY 2012 Appropriations Act provided $8.455 million in PHS evaluation funds, an increase of $4 million over the FY 2011 level, which will be used to support longitudinal evaluations of teen pregnancy prevention approaches.


As previously mentioned, the TPP Replication Study will evaluate replications of evidence-based program models funded through the OAH TPP Program Tier 1 replication grants. In comparison, the PPA study is focused on evaluating untested and innovative program models funded through the OAH TPP Program Tier 2 research and demonstration grants as well as other funding streams.

To accomplish the objective of the appropriation, OAH seeks OMB approval of the implementation study protocols.

Objectives of the TPP Replication Evaluation

The goal of the TPP Replication evaluation is to determine the extent to which evidence-based program models that have been shown to be effective in an earlier trial, usually conducted by the program developer, demonstrate effects on adolescent sexual risk behavior and teenage pregnancy when they are replicated in similar and in different settings and for different populations. The evaluation will help OAH provide guidance to program managers and state and local policymakers about program models whose effects are robust and about the factors necessary to support successful replication.


For this evaluation, HHS has identified three evidence-based program models that represent different approaches to the prevention of teenage pregnancy, and that are being widely replicated as part of the TPP Program and through other federal and state funding initiatives. The three program models are: Safer Sex, a clinic-based individualized intervention for sexually active female youth; ¡Cuidate!, a culturally sensitive small-group intervention aimed at Latino youth; and Reducing the Risk, a classroom-based sexual health curriculum that can also be implemented as an after-school program and in non-school settings. For each model, the agencies have identified three grantee replications, for a total of nine replications. The nine vary in the scope of the replication (number of sites within a replication, number of youth served) and in the populations served. A good deal of variation can be expected in the settings in which the program is implemented. While one program model is implemented only in clinics, the others can be implemented in a variety of settings, including schools, churches, and other community-based settings that provide services to youth. The study will use a sample of approximately 8,550 youth across nine grantee replication sites, a sufficient size to detect policy-relevant impacts of each of the program replications. In each of the replications selected, youth will be assigned to receive the intervention or to be part of a control group that does not receive it. In clinics and other community-based settings, and in some school settings, individual youth will be randomly assigned. In the three sites where Reducing the Risk is being implemented as a classroom-based curriculum, the unit of random assignment will be classes within a school (for example, health or PE classes). In all cases, the intervention will be delivered by grantee staff who are health educators, not by the regular class teacher; this avoids the contamination that could arise if the same teachers delivered both the intervention to the treatment group and the regular class to the control group.


Descriptions of the program model (as intended) will be obtained from the grantee’s initial proposal and subsequent refunding applications. All other data (fidelity, performance, interviews, focus groups, annual reports, OAH program officer reports) will be used to compare actual implementation with planned or intended implementation. Fidelity measures are a part of the performance measures that were developed by OAH and the program developers, and are implemented and reported on a schedule set by OAH (much more frequently than the schedule for the collection of implementation data by the federal evaluator). Local evaluators, or grantee staff who are not responsible for implementing the program, will collect fidelity and performance data. The federal evaluation will use these data on fidelity and program performance (e.g., program attendance and retention, staff training) collected for OAH, rather than attempting to collect parallel measures on a much less frequent schedule.


Baseline surveys will be conducted with youth in both treatment and control groups before youth in the treatment group have been exposed to the intervention. In schools, the self-administered survey will be completed in a space that can accommodate small groups and assure privacy; in other settings, notably clinics where entry to the program is on a rolling basis, the survey will be completed in a setting where the individual’s privacy is protected. Web surveys and telephone follow-up will be used when necessary to increase response rates.


Additionally, implementation data collection instruments, requested for clearance in this package, will enable HHS, through the TPP Replication Evaluation, to document program activities (and activities in control sites) in order to better understand any impacts found, as well as provide guidance for any future replications.


Through the baseline and follow-up surveys and implementation data collection, HHS will address the following research questions:

  • What are the impacts on adolescent sexual risk behavior and teen pregnancy rates when an evidence-based program is replicated?

  • Do impacts vary for different youth populations (i.e., females vs. males, different age ranges, ethnicities)?

  • Do impacts vary depending on the setting in which the program is replicated?

  • Are impacts replicated across sites that implement the same program model?

  • To what extent were grantees able to replicate the program as planned?

  • What internal and external factors influence the ability of the grantees to implement the program model with fidelity?

  • How does variation in implementation relate to program impacts?


Major activities for the TPP Replication evaluation will include the following:

  • Selecting program models and replications from the Teen Pregnancy Prevention Initiative grantees funded to replicate evidence based programs (Tier 1). All of the grantees are replicating “evidence-based” program models and are required to take steps to ensure fidelity to the model.

  • Recruiting grantees to participate in a rigorous experimental evaluation and working with them to design and support a strong study.

  • Collecting data on the research sample at baseline and at two subsequent time points (i.e., short-term and longer-term follow-up survey administrations).

  • Conducting a comprehensive implementation study in each replication site.

  • Analyzing data and reporting the results.



The Implementation Data Collection Instruments

There are four data collection instruments being proposed:

  • Staff and community interviews: a master topic guide for these interviews is included in Attachment A.

  • Interviews and focus groups with frontline staff: a guide for these discussions is included in Attachment B.

  • Interviews and focus groups with participating youth: a guide for these discussions is included in Attachment C.

  • Discussions with control group schools and others in the counterfactual: a guide for these discussions is included in Attachment D.


Through the implementation study, HHS will address five main objectives. First, the study will help us understand how each program is intended to operate in the participating sites and how it is expected to affect youths. What is the plan for each intervention’s implementation? How is the program expected to work?


Second, the study will document the actual implementation of each program. How was each program actually delivered? What services and activities were offered, how were they carried out, and to what extent did youths participate and become engaged in them? In what context were these services and activities provided? How did these services and activities differ from those of other similar programs in the community?


The third objective is to assess the extent to which program implementation adhered to the program model (fidelity) and to site implementation plans and to understand the factors that affected implementation, either to support or to undermine it. These include the readiness of the grantee and partners to replicate the model, the extent to which the documentation of and training for the program model supported replication, the administrative and supervisory processes that supported front-line staff in their implementation of the model, and the external forces and influences that affected it.


The fourth objective is to relate variation in implementation to outcomes for youth and to the impact of the program on youth. To what extent is strong adherence to program fidelity and service standards associated with stronger program impacts?


Finally, the implementation study will describe the contrast between the program as implemented and the “business as usual” counterfactual. How were the activities and services provided by the program similar to and different from those provided to control group youths? How did the experiences of program group youths differ from those of control group youths?


Understanding the programs, documenting their implementation and context, assessing fidelity of implementation and the factors that influence it, and describing the counterfactual will enable us to describe each implemented program and the treatment-control contrast evaluated in each site. This information will help us interpret impact analysis findings and may help explain any unexpected findings, differences in impacts across programs, and differences in impacts across locations or population subgroups.


A2. Purpose and Use of the Information Collection


If this request is approved, the TPP Replication evaluation will collect data on program implementation. This will include information about the readiness of each grantee and partner and their preparation for replication, the design and logic model of the program model, the replication plan at each site, program administrative and supervisory processes, resources required to implement the program, the implementation of key program components, dimensions of program delivery and youth participation, fidelity to the curriculum or program guidelines and site plans, and adaptations of the programs to fit the context. Information on these topics will be obtained from existing program documents (including the reports prepared by OAH program officers); individual and group interviews with program administrators and staff, participating youth, school staff (as appropriate), program partners, other stakeholders, and other community members; and observations of program activities. Attachment E lists the topics on which information will be collected, and the planned sources of information for each topic.


The data will serve three main purposes. First, the information will enable the study team to produce clear, detailed descriptions of each intervention that is evaluated and the counterfactual in each site. This documentation is critical for understanding the meaning of impact estimates. Second, it will provide an understanding of real-world challenges to implementation, both internal and external, and how programs addressed them. Third, the data will be used to assess fidelity of implementation. This information is essential for determining whether the interventions were implemented as planned and whether the evaluation provided a good test of each site’s intervention.


In addition, the data will be analyzed both qualitatively and through exploratory quantitative analyses, to link variation in implementation to program outcomes and impacts.


A3. Use of Improved Information Technology and Burden Reduction

The data collection plan reflects sensitivity to issues of efficiency, accuracy, and respondent burden. Where feasible, information will be gathered by extracting needed information from existing documents. Protocols for interviews and group discussions during site visits will be customized for each site to focus on information that is relevant for that site and that could not be obtained from documents.

Improved information technology will be used when appropriate. For example, when program documents can be sent electronically, we will not request a hard copy of the documents.

A4. Efforts to Identify Duplication and Use of Similar Information

The information collection requirements for the TPP Replication evaluation have been carefully reviewed to determine what information is already available from existing studies and what will need to be collected for the first time. Although prior studies contribute to our understanding of teenage sexual risk behavior and past efforts to reduce it, HHS does not believe they provide policymakers and stakeholders with sufficient information on a sufficiently broad range of programs. Furthermore, only one of the evidence-based program models eligible for funding under the TPP program has demonstrated evidence from more than one rigorous evaluation. Finally, Congress requires evaluations, including longitudinal evaluations, of adolescent pregnancy prevention approaches. The data collection for the TPP Replication evaluation is an essential step in providing this information.

HHS has created a Federal Teen Pregnancy Prevention Coordination Workgroup to develop and manage a coordinated strategy of HHS teen pregnancy prevention activities and evaluation efforts. The workgroup involves research and program staff from ACF, ASPE, CDC, and OAH. The workgroup has enabled the Department to collaborate on the new evaluation efforts and maximize the questions we can answer across the initiative, including the development of common core measures to be used across evaluation studies. We have collaborated to design research and evaluation efforts that will enable the Department to answer a range of research and policy questions that are complementary to, rather than duplicative of, one another. Specifically, we are interested in (1) adding to the evidence base by evaluating new and untested program models and innovative strategies; and (2) understanding how to effectively replicate and implement evidence-based program models and how to achieve impacts that were found in the original evaluations. The TPP Replication study addresses the latter research question. The federal evaluation strategy includes a combination of federal-led and grantee-led evaluation efforts described briefly below.

Federal-Led Evaluations: There are four federally managed evaluation studies that address unique questions about the implementation and effectiveness of a subset of HHS grantees.


  • Evaluation of Pregnancy Prevention Approaches (PPA): An experimental evaluation study focused on assessing the implementation and impacts of innovative strategies and untested approaches for preventing teenage pregnancy in seven sites. Three of the sites are from the TPP research and demonstration grantees, three sites are PREP Innovative Strategies grantees, and one is a non-federally funded site. Implementation reports are expected between March 2012 and October 2013 and internal short-term impact memos are expected between January 2014 and July 2015 across the sites. The contractor is Mathematica.


  • Teen Pregnancy Prevention (TPP) Replication Study Evaluation: An experimental evaluation study that will examine the implementation and impacts of three TPP replications of three different evidence-based program models, for a total of 9 sites. The study will examine whether program models that were commonly chosen by replication grantees and widely used in the field can achieve impacts with different populations and settings. Implementation and short-term impact findings are anticipated in 2015. The contractor is Abt Associates.


  • CDC Community-Wide Evaluation: A quasi-experimental evaluation study to examine the effects of integrating services, programs, and strategies. Initial impact findings are expected in 2016. The contractor is ICF Macro.


  • State PREP Multi-Component Evaluation: This study will document program design and implementation within states and includes an experimental evaluation to assess the effectiveness of 4 or 5 selected programs. Preliminary descriptive findings are expected in 2013 and impact findings are expected in 2016. The contractor is Mathematica.


In addition, there are 41 grantee-led rigorous evaluations of both TPP and PREP Innovative Strategies replication and research and demonstration grants, supported by a federally sponsored evaluation technical assistance contractor (Mathematica). The contractor has reviewed each of the local evaluation designs to ensure they are rigorous and feasible and continues to provide ongoing technical assistance to grantees.



A5. Impact on Small Businesses or Other Small Entities

Programs in some sites may be operated by or in collaboration with small community-based organizations. The implementation data collection plan is designed to minimize burden on such organizations by focusing interviews with their staff on their direct role in the intervention and its development or planning.

A6. Consequences of Collecting Information Less Frequently

Implementation data are essential to conducting a rigorous evaluation of pregnancy prevention programs, per appropriations. In the absence of such data, the meaning of estimated program impacts may be uncertain, and future funding and operational decisions about teen pregnancy prevention programs will be based on insufficient information about program implementation issues.

Collecting implementation data less frequently would make it impossible to assess fidelity to program developers’ standards and site implementation plans. Moreover, we would lose the opportunity to document the evolution of site operations during the evaluation and provide lessons based on the experiences of the sites.

A7. Special Circumstances Relating to the Guidelines of 5 CFR 1320.5

There are no special circumstances for the proposed data collection.

A8. Comments in Response to the Federal Register Notice and Efforts to Consult Outside the Agency

The 60-day notice was published in the Federal Register on September 23, 2011, and the 30-day notice was published on December 29, 2011. The text is found in Attachment F. At this time there are no comments or responses to questions.

In Attachment G we provide the names and contact information of the persons consulted in the drafting and refinement of the implementation study protocols, a list of institutions from which we received input on drafts of the protocols, and a list of members of the PPA Technical Work Group who attended a review meeting in spring 2010; plans for the implementation study were presented to this group, and the ensuing discussion contributed to the refinement of the plan as presented here.

A9. Explanation of Any Payment or Gift to Respondents

No payment or gift will be made to program staff and community members for being interviewed during site visits. We propose to offer refreshments to staff who participate in focus groups, and refreshments and a small incentive (e.g., a $10 iTunes card) to youth who participate in focus groups.

A10. Assurance of Confidentiality Provided to Respondents

Privacy of study participation and responses will be protected by the Privacy Act. HHS has embedded protection of privacy in the study design. Implementation study respondents (program developers, site staff members, and community members) will receive information about privacy protection when arrangements are made for meeting with them, and information about privacy will be repeated as part of the study field staff’s introductory comments during site visits (see Attachment B for an example of these introductory comments). Site visit staff will be informed about privacy procedures during training and will be prepared to describe them and to answer questions raised by local program staff.

Youth who comprise the sample for the TPP Replication evaluation study must have parental consent to participate, in addition to providing their own assent for each data collection. The consent form that parents sign at the time their child is being enrolled allows the study team to collect baseline data and follow-up data through questionnaires and to invite their child to participate in a focus group to discuss his/her experiences in the program. (This consent form was approved by OMB on June 8, 2012 as part of the Baseline ICR under OMB clearance number 0990-0394). Before completing questionnaires, the sample member youth will also complete an assent form. Youth invited to a focus group as part of the implementation study will be asked at that time to complete another assent form before the focus group is conducted. This form will state that answers will be kept private, that youths’ participation is voluntary, that they may refuse to participate, and that identifying information about them will not be released or published. The assent form that sample members will be asked to sign before the start of a focus group is included as Attachment H.

A11. Justification for Sensitive Questions

The implementation study protocols do not contain sensitive questions. Interview guides for data collection from staff focus on the components of the pregnancy prevention programs being evaluated and the experiences of staff in implementing them. Focus groups with youth will address their experiences in the program, and not their sexual experiences or other personal behaviors.

A12. Estimates of Annualized Burden Hours and Costs

Exhibit A12.1 summarizes the estimated annual reporting burden on implementation study participants at the requested stage of information collection. The burden estimates are based on:

  • Site visits to nine programs over three years. We expect to conduct up to three visits to each program. Because the sample is being built up over two (or in some cases three) years, we will need to collect implementation data that reflect the experience of each cohort of youth and that allow us to chart the development of the program over time. Interview times were estimated based on prior experience.

Exhibit A12.2 summarizes the estimated response costs. Average hourly wages for program staff and community members were estimated from the latest (May 2009) National Occupational Employment and Wage Estimates published on the Bureau of Labor Statistics, Department of Labor website. For youths under age 18 participating in a focus group discussion, the average hourly wage is assumed to be $0. For youth aged 18 or older, we have assumed an average hourly wage of $7.25.

The annual burden is estimated from the average total anticipated annual number of respondents, the number of sites, and the estimated time required to complete the interviews. The average total annual burden is expected to be 513 hours.
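To make the arithmetic behind the 513-hour figure transparent, the following sketch reproduces the per-instrument and total burden-hour calculations from the respondent counts and per-response times shown in Exhibit A12.1 below. It is provided for illustration only and is not part of the study's data collection; all figures are taken directly from the exhibit.

```python
# Illustrative check of the annual burden-hour totals in Exhibit A12.1.
# Each tuple: (annual respondents, responses per respondent, burden hours per response)
instruments = [
    ("Discussion guide, grantee head",           9,   1, 1.5),
    ("Discussion guide, program director",       9,   1, 1.5),
    ("Discussion guide, frontline supervisor",   9,   1, 1.5),
    ("Discussion guide, frontline staff",        27,  1, 1.5),
    ("Discussion guide, community partners",     27,  1, 1.0),
    ("Discussion guide, school stakeholders",    27,  1, 1.0),
    ("Discussion guide, community stakeholders", 27,  1, 1.0),
    ("Focus group guide, frontline staff",       54,  1, 1.5),
    ("Focus group guide, youth participants",    180, 1, 1.5),
]

total_hours = 0.0
for name, respondents, responses, hours_per_response in instruments:
    row_hours = respondents * responses * hours_per_response
    total_hours += row_hours
    print(f"{name:<42} {row_hours:6.1f} hours")

print(f"Total annual burden: {total_hours:.0f} hours")  # prints 513 hours
```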

Exhibit A12.1. Annual Reporting Burden on Implementation Study Participants

Instrument | Annual Number of Respondents | Number of Responses per Respondent | Average Burden Hours per Response | Total Annual Burden Hours
Discussion guide for grantee head (1) | 9 | 1 | 1.5 | 13.5
Discussion guide for program director (1) | 9 | 1 | 1.5 | 13.5
Discussion guide for supervisor of frontline staff (1) | 9 | 1 | 1.5 | 13.5
Discussion guide for frontline staff (3 per site) | 27 | 1 | 1.5 | 40.5
Discussion guide for community partners (3 per site) | 27 | 1 | 1 | 27
Discussion guide for school stakeholders (3 per site) | 27 | 1 | 1 | 27
Discussion guide for community stakeholders (3 per site) | 27 | 1 | 1 | 27
Focus group guide for frontline staff (6 per site) | 54 | 1 | 1.5 | 81
Focus group guide for youth participants (20 per site) | 180 | 1 | 1.5 | 270
Totals | 369 | | | 513



Exhibit A12.2. Estimated Response Costs

Instrument | Annual Number of Respondents | Average Hourly Wage of Respondents | Total Annual Burden Cost
Discussion guide for grantee head (1) | 9 | $30 | $365
Discussion guide for program director (1) | 9 | $25 | $338
Discussion guide for supervisor of frontline staff (1) | 9 | $25 | $338
Discussion guide for frontline staff (3 per site) | 27 | $20 | $810
Discussion guide for community partners (3 per site) | 27 | $20 | $540
Discussion guide for school stakeholders (3 per site) | 27 | $20 | $540
Discussion guide for community stakeholders (3 per site) | 27 | $20 | $540
Focus group guide for frontline staff (6 per site) | 54 | $15 | $1,215
Focus group guide for youth participants (8 per site who are 18 or older) | 72 | $7.50 | $810
Totals | 310 | | $5,496


A13. Estimates of Other Total Annual Cost Burden to Respondents and Record Keepers

These information collection activities do not place any additional cost on respondents.

A14. Annualized Cost to the Federal Government

This clearance request is specifically for the implementation study. The total estimated cost to the government, for data collection, analysis, and reporting, is $1,000,000. Because the implementation study data collection, analysis and reporting will be carried out over a period of three years, the estimated annualized cost to the government for implementation data collection is approximately $333,333 per year.

A15. Explanation for Program Changes or Adjustments

No program adjustments are anticipated based on this data collection.

Data collection instruments for the implementation evaluation for the PPA evaluation were submitted to OMB on November 23, 2010 and approved on April 7, 2011 under OMB Control No. 0990-0375. With this current submission, HHS seeks OMB approval to use a revised version of these approved implementation instruments in the TPP Replication evaluation. Revisions to the previously approved instruments add questions so that alternate questions may be asked of specific program types, since the data will be used, in part, to aid interpretation of the impact analysis.

A16. Plans for Tabulation and Publication and Project Time Schedule

1. Analysis Plan

The analytic strategy we propose moves from site-level descriptive analyses through qualitative evaluative analyses at both the individual site level and across replication sites within a program model, to quantitative analyses that use outcome data on all study participants (both treatment and control) to link variation in implementation to impacts on outcomes for youth. Below, we describe our approach to each type of analysis.

Descriptive Analyses

The first step in the implementation analysis is to construct a site-specific description that “tells the story” of what happened in a comprehensive way, tracing the process of replication and the context in which it occurred. The description has two main components: one non-quantitative (the program narrative) and one quantitative (descriptive statistics). The program’s theory of change will frame the analysis. The analytic meetings held after each round of site visits will help to identify important topics or themes that emerged across replication sites. From these frameworks and the analytic meetings, a set of detailed research questions will be developed.


No one informant will provide comprehensive descriptions of the entire program and its results. The story of the program, from planning and preparation through start-up to full operation and outcomes, will need to be built up from multiple partial views. The account will include a discussion of the challenges encountered by program staff and the strategies they developed to address them. When accounts of the same topic agree, a simple summary of the topic can be developed. Where accounts of the same topic disagree, the analyst must decide on the meaning of the disagreement and document it. The evaluation will use NVivo qualitative software for coding and analyzing the site visit data. NVivo enables the researcher to systematically synthesize qualitative data across sites, provide quantitative description, and identify trends in the qualitative data. It can be used to generate visual displays (charts, models) to show connections between themes and enables efficient retrieval of data behind the analysis at any point.


The narrative will be supplemented by diagrams and timelines, as well as tables that summarize topics such as participant characteristics, participation in the program, fidelity to program components, and changes in levels of fidelity over time. The counterfactual condition in each site will be described.


Evaluative and Explanatory Analyses

The next set of analyses will look both within a site and across the three replications of each program model to assess the extent to which the program was implemented as planned and to identify potential explanations for variations in implementation and in participant outcomes. The standards by which the adequacy of implementation is judged include: the requirements of the program model; the grantee’s own plan for replicating the model and theory of change; OAH expectations for fidelity and performance; and stakeholder opinions and judgments.


Understanding why a replication is not working as planned is a particularly useful function of implementation research, since it allows both policymakers and program operators to make needed adjustments either to the model itself or to plans for future replications. The analysis will examine how implementation was supported or undermined by differing levels of grantee and partner readiness and preparation; the appropriateness and adequacy of the program model selected; the complexity (i.e., number of different locations within a site) and realism of the plan for implementing it; and external factors such as community norms and the availability of sexual health services in the community.


One approach to quantifying implementation characteristics is to create an index such as a fidelity index composed of a checklist of core program elements for which each site would receive a score denoting the degree of adherence to the program model. Composites of variables in each of the key areas of interest (e.g., quality of service, adaptation) could be created to summarize level of implementation at the individual site level and aggregated to the replication site level. These same variables would then be used in analyses linking implementation to program impact (as described in the following section).
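As a purely illustrative sketch of this approach, and not the study's actual scoring rule, the example below shows how a checklist-based fidelity index might be computed for hypothetical performance sites and then aggregated to the replication level; the core element names and the example scores are assumptions introduced only for illustration.

```python
# Hypothetical sketch of a checklist-based fidelity index. Element names and
# example scores are illustrative assumptions, not the study's actual measures.
from statistics import mean

# Each performance site is scored 0-1 on adherence to each core program element.
CORE_ELEMENTS = [
    "all_sessions_delivered",
    "dosage_met",
    "required_content_covered",
    "trained_facilitator",
    "intended_group_size",
]

def fidelity_index(scores: dict) -> float:
    """Unweighted mean adherence across core elements for one performance site."""
    return mean(scores[element] for element in CORE_ELEMENTS)

# Hypothetical performance-site checklists within one grantee replication.
performance_sites = {
    "clinic_a": {"all_sessions_delivered": 1.0, "dosage_met": 0.8,
                 "required_content_covered": 1.0, "trained_facilitator": 1.0,
                 "intended_group_size": 0.6},
    "clinic_b": {"all_sessions_delivered": 0.9, "dosage_met": 0.7,
                 "required_content_covered": 0.8, "trained_facilitator": 1.0,
                 "intended_group_size": 0.9},
}

site_indices = {name: fidelity_index(scores) for name, scores in performance_sites.items()}
replication_index = mean(site_indices.values())  # aggregate to the replication (grantee) level
print(site_indices, round(replication_index, 2))
```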


The final step in the explanatory analyses is to attempt to link variations in aspects of implementation such as fidelity to the program model and quality of the services provided, as well as other factors, to service and participant outcomes. As with the analyses that precede them, the primary approach is a qualitative one, although we will explore the potential of methods such as performance analysis (Mead, 2003, cited in Werner, 2004) to model the relationships between program activities and processes and outcomes.


Quantitative Approach to Linking Implementation to Program Impact

The final set of analyses will attempt to link key dimensions of implementation to program impact at two levels of policy interest. First, within each program model, relationships between key aspects of implementation and program impact will be examined. This analytic strategy gains statistical power by pooling data from the relatively small studies within each multi-site replication, enhancing the ability to detect relationships and possibly explain variation in the impacts of an individual model. Second, the analyses will explore the possibility of using pooled data from all three multi-site replications to yield similar kinds of information about the TPP replications overall. The approach to these analyses borrows from the work of Bloom, Hill, and Riccio (2003) in linking implementation and effectiveness. Although their topic was welfare-to-work and their sample was considerably larger, the basic approach is promising, and the current study provides a sufficient number of rigorously designed experiments to support at least exploratory analysis of links between key aspects of implementation and program impacts.


The overall framework for the study posits that administrative and supervisory supports, fidelity of implementation, quality of service delivery, and adaptation will be important aspects of implementation over which programs have some control, in which one would expect some variation, and which one would expect to be linked to program outcomes. An additional component that mediates the effect of implementation on outcomes is, of course, participant responsiveness. Quantitative measures of these five key components of implementation will be constructed.


The greatest amount of variation is likely to be at the sub-site or performance site level (for example, the individual clinic in a replication with multiple clinics, or the classroom/workshop group). The analytic approach entails two steps. First, a small number of quantitative measures of implementation will be constructed. The key dimensions of implementation to be used in the analysis are: administrative and supervisory supports, fidelity to the program model, quality of service, adaptation, and participant responsiveness. Second, measured program impacts and multilevel modeling (in which participants (level 1) are grouped by performance site/sub-site (level 2), which are in turn grouped within grantee sites (level 3)) will be used to examine the relationship between program implementation and short-term and longer-term TPP program impacts. This approach enables us to explain variation in the experimental impact findings by isolating the independent influences of the implementation factors of interest.
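As one hedged illustration of what such a specification could look like, the sketch below fits a mixed-effects model with random intercepts for grantee sites and a variance component for performance sites/sub-sites, interacting treatment status with a sub-site-level implementation measure such as a fidelity index. The variable names and the analysis file are hypothetical placeholders, and the use of the statsmodels mixed-effects API is an illustrative choice rather than the study's actual analysis code.

```python
# Illustrative multilevel model linking an implementation measure to program impact.
# Column names and the input file are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# Expected columns: outcome (e.g., a sexual risk behavior measure), treatment (0/1),
# fidelity (sub-site-level implementation index), grantee_site, sub_site.
df = pd.read_csv("analysis_file.csv")

model = smf.mixedlm(
    "outcome ~ treatment * fidelity",            # fixed effects: impact, implementation, interaction
    data=df,
    groups="grantee_site",                       # level-3 grouping: grantee replication site
    re_formula="1",                              # random intercept for each grantee site
    vc_formula={"sub_site": "0 + C(sub_site)"},  # level-2 variance component: clinic/classroom
)
result = model.fit()
print(result.summary())

# A positive treatment:fidelity coefficient would suggest that impacts are larger
# where the implementation measure is stronger, subject to the exploratory,
# non-experimental nature of this comparison.
```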


2. Time Schedule and Publications

The TPP evaluation will be conducted over a five-year period. The contractor for the design and feasibility study, currently underway, will assist HHS with the identification of program models and replications and will recruit the sites selected by HHS in spring 2011. The baseline data collection will take place over a three-year period beginning in January 2012 and ending in late fall 2014. Follow-up data collections at 9-12 and 18-24 months after baseline are projected to occur between July 2012 and February 2016. The implementation study will be conducted between winter 2012 and fall 2014. We will produce an implementation report in fall 2014, after the implementation data have been collected and analyzed. In addition, we will include contextual information on implementation and services offered at the intervention and control sites for the short-term and longer-term impact reports, and one or two topical research briefs that convey information that policy and program decision makers need on key implementation aspects of interest.

A17. Reason(s) Display of OMB Expiration Date is Inappropriate

All protocols will display the OMB number and the expiration date.

A18. Exceptions to Certification for Paperwork Reduction Act Submissions

No exceptions are necessary for this information collection.



1 Abma, J.C., Martinez, G.M., Mosher, W.D., & Dawson, B.S. (2004). Teenagers in the United States: Sexual activity, contraceptive use, and childbearing. Vital and Health Statistics, 23(24), 1-48.

2 Albert, B., Brown, S., & Flannigan, C. (Eds.). (2003). 14 and Younger: The Sexual Behavior of Young Adolescents. Washington, DC: National Campaign to Prevent Teen Pregnancy.

3 Hamilton, B.E., Martin, J.A., & Ventura, S.J. (2010, December). Births: Preliminary data for 2009. National Vital Statistics Reports, 59(3). Hyattsville, MD: National Center for Health Statistics.

4 Hamilton, B.E., Martin, J.A., & Ventura, S.J. (2010, December). Births: Preliminary data for 2009. National Vital Statistics Reports, 59(3). Hyattsville, MD: National Center for Health Statistics.

5 Hamilton, B.E., Martin, J.A., & Ventura, S.J. (2010, December). Births: Preliminary data for 2009. National Vital Statistics Reports, 59(3). Hyattsville, MD: National Center for Health Statistics.

