Evaluation of Pregnancy Prevention Approaches - Baseline

OMB Control No. 0970-0360

MEMORANDUM



TO: Joseph Borson, OMB


FROM: Amy Farb, HHS/OAH and Seth Chamberlain, HHS/ACF

DATE: 7/27/2011

SUBJECT: Additional Information to Supporting Statement A of Evaluation of Adolescent Pregnancy Prevention Approaches (PPA) Emergency Clearance Package, OMB #0970-0360



This memorandum provides additional information to Supporting Statement A of the OMB Clearance Package submitted for emergency review for the Evaluation of Adolescent Pregnancy Prevention Approaches (PPA). It provides additional justification for the need for site-specific baseline instruments and explains the implications of site-specific instruments for the study’s statistical power.


A. Context

The PPA study is being conducted in seven separate evaluation sites. In each site, the study team is testing a different intervention, and program impacts will be analyzed and reported separately for each site. This differs from studies that have multiple sites but pool data across sites for analysis and reporting; in PPA, the sites are entirely separate.


B. Need for Site-Specific Baseline Instruments

There are three main reasons that PPA requires a different baseline instrument for each site:

  1. Different program models. The data needs vary across sites in part because the program models are very different. For example, one site is testing a clinic-based program to reduce repeat pregnancies among pregnant or parenting teens. Another site is testing a program targeted to high-risk youth in group foster care homes. Two other sites are testing more traditional classroom-based curricula for middle or high school students. Each program has a different logic model and aims to change teens’ behaviors in different ways.

  2. Different samples. The evaluation samples also vary across sites. In some sites, the sample will be limited to a high-risk population like foster care youth or pregnant or parenting teens. In other sites, the sample will be a more general population of middle or high school students. Some baseline questions are not relevant for all these populations. For example, it is not necessary to ask pregnant or parenting teens whether they have ever had sex, but we do need to know information about their pregnancy histories. For foster care youth, some of the questions we ask in other sites about family relationships and parents are either not relevant or potentially too sensitive to ask of youth whose backgrounds often include experience of abuse.

  3. Need to coordinate with grantees and local evaluators. In six of our seven sites, we are working with local organizations that have received federal grants from either the Administration for Children and Families (ACF) or the Office of Adolescent Health (OAH) to implement and test their programs. To meet a requirement of their grant awards, each grantee has contracted with a local evaluator to conduct an independent study of its program. Now that these grantees have been selected as federal evaluation sites, we must work with each of them (and the local evaluators) to merge our federal study with the study that they had already been planning. As one component of this, we must work with each grantee and its local evaluator to develop a single survey instrument that meets the needs and preferences of both studies. A critical part of this process has involved allowing the grantees and local evaluators some input on the survey content, while still ensuring that there is no negative effect on the quality of the federal evaluation and our ability to achieve its goals.

C. Implications for Statistical Power

The PPA study was designed from the outset to recruit sites that could enroll a sufficient sample to provide adequate statistical power for each individual site. The number of participants (and, in some cases, clusters) expected to be enrolled at each site meets or exceeds the target sample for each type of site.


On July 26, 2010, ACF received OMB approval for the general PPA data collection plans (OMB Control No. 0970-0360). That approval included the power calculations laid out in Supporting Statement B of that Clearance Package. The PPA site recruitment plans targeted programs that could support samples of (1) 1,600 youth for a school-based program using a clustered design with random assignment of schools, or (2) 600 youth for an elective program using random assignment of individuals, because these target samples ensure minimum detectable impacts (MDIs) for each site consistent with those observed previously in the literature (see Table 1 below).
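
For reference, these targets can be related to MDIs through the standard minimum detectable impact formula shown below. The formula is a conventional one rather than something reproduced in the clearance package itself, and the parameter values are those listed in the notes to Table 1.

\[
\mathrm{MDI} \;=\; \bigl(z_{1-\alpha/2} + z_{1-\beta}\bigr)\,\sigma\,
\sqrt{\mathrm{DEFF}\,\bigl(1 - R^{2}\bigr)\Bigl(\tfrac{1}{n_T} + \tfrac{1}{n_C}\Bigr)}
\]

Here n_T and n_C are the numbers of treatment and control respondents, sigma is the standard deviation of the outcome (0.5 for a 50 percent proportion), and DEFF is a design effect equal to 1 when individuals are randomly assigned and greater than 1 when schools or other groups are randomly assigned. Under the Table 1 assumptions (80 percent power, a two-tailed 10 percent test, an 80 percent response rate, and an R-squared of 0.30), a site of 600 individually randomized youth yields an MDI of roughly 9.5 percentage points, matching the corresponding entry in Table 1.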


All of the PPA sites now recruited plan to achieve samples that meet or exceed the target samples laid out and approved in the previous OMB package. The two new school-based sites recruited (Live the Life Ministries and Princeton Center for Leadership Training) will be evaluated through a clustered design with random assignment of schools. As a result, both sites plan to enroll relatively large samples – at least 1,600 students per site across a minimum of 16 schools – consistent with the targeted sample size and power for this design. The remaining sites all feature random assignment of individuals or, in the case of the foster-care program being implemented by the Oklahoma Institute for Child Advocacy, random assignment of such a large number of clusters (42 foster care agencies) that the MDIs for that site are approximately equivalent to what they would be if individuals were randomly assigned (see the illustration below). All of these sites plan to enroll at least 600 youth, consistent with the targeted sample size and power for this design. Specifically, the sample sizes across these four sites are 600 (OhioHealth Research and Innovation Institute), 1,080 (Oklahoma Institute for Child Advocacy), 1,124 (EngenderHealth), and 1,400 (Children’s Hospital of Los Angeles). The variation across these sites largely reflects the opportunity to enroll even larger samples, improving our power to detect meaningful program impacts beyond the standard level (80 percent) targeted for the evaluation.
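
To illustrate the clustering point, a standard approximation inflates the variance of the impact estimate by a design effect that depends on the average cluster size and the intraclass correlation (see note (c) to Table 1). The foster-care ICC is not specified in the table, so the calculation below simply borrows the "low" school-based value of 0.01 for illustration:

\[
\mathrm{DEFF} = 1 + (\bar{m} - 1)\,\rho,
\qquad
\mathrm{MDI}_{\text{clustered}} \;\approx\; \sqrt{\mathrm{DEFF}}\times\mathrm{MDI}_{\text{individual}},
\]
\[
\bar{m} \approx 1{,}080 / 42 \approx 26,
\qquad
\rho = 0.01 \;\Rightarrow\; \mathrm{DEFF} \approx 1.25,
\qquad
\sqrt{\mathrm{DEFF}} \approx 1.12 .
\]

Randomizing 42 agencies of roughly 26 youth each therefore increases the MDI by only about 12 percent relative to individual random assignment of the same number of youth.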

Table 1. Projected Minimum Detectable Impacts for Alternative Program Settings (a)

                                             Minimum Detectable Impact for
                                             Illustrative Outcome:
                                             50% Proportion [std dev = 0.5]
                                             (e.g., Sexually Active)              Effect Size (b)
Sample (specification)                       Low ICC (c)     High ICC (c)         Low ICC (c)    High ICC (c)

Random Assignment of Schools (in-school, non-elective programs): Assumes 16 high schools randomized evenly between program and control groups with a total sample of 1,600 youth per site (or about 100 youth per school).

1 Program Site (1,600 youth)
  - Full sample                                  7.2             9.4                 0.14           0.19
  - 50% subgroup                                 9.4            11.1                 0.19           0.22

Random Assignment of Youth (in-school, elective and out-of-school, voluntary programs): Assumes 600 students per site randomized individually into program and control groups.

1 Program Site (600 youth)
  - Full sample, full participation              9.5                                 0.19
  - Full sample, 75% participation              12.7                                 0.25
  - 50% subgroup, full participation            13.4                                 0.27

(a) The minimum detectable impact, or MDI, is the smallest program impact that can be detected for a given sample size at an acceptable level of statistical power. The MDIs shown assume the commonly preferred level of statistical power (80 percent), a response rate of 80 percent on the follow-up survey, a regression R-squared of 0.30, and a two-tailed test of statistical significance at the 10 percent level (equivalent to a one-tailed test at the 5 percent level). For the school-based random assignment, the assumed intra-class correlation (ICC) is 0.01 for “low” and 0.035 for “high”. These assumptions are based on school-level data on teen sexual risk from the National Adolescent Health Survey. An illustrative calculation applying these assumptions appears after these notes.


(b) The effect size is calculated as the ratio of the MDI measured in nominal (percentage point) units to the standard deviation of the outcome (which in the illustration is equal to 50 percentage points). Because it is a standardized measure, the effect size can be used to calculate the MDI in percentage points for any alternative proportion. For example, for a proportion of 10 percent (which has a standard deviation of 30 percentage points), an effect size of 0.08 translates into an MDI of 2.4 percentage points.


(c) The intraclass correlation (ICC) reflects the proportion of an outcome’s variance that is attributable to the school (or other group) to which youth are randomly assigned. It is the key determinant of how much statistical power is lost through random assignment of schools or other groups: the higher the ICC, the more statistical power declines.
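
As an illustrative cross-check on the individual-assignment rows of Table 1 (it is not part of the approved clearance package), the short Python sketch below applies the assumptions listed in note (a). It reproduces the 9.5, 12.7, and 13.4 percentage-point MDIs and their effect sizes, along with the 2.4 percentage-point conversion described in note (b).

from statistics import NormalDist
from math import sqrt

# Assumptions from note (a) to Table 1: 80 percent power, a two-tailed 10 percent
# significance test, an 80 percent survey response rate, R-squared = 0.30, and a
# 50 percent proportion outcome (standard deviation 0.5, i.e., 50 percentage points).
z_multiplier = NormalDist().inv_cdf(0.95) + NormalDist().inv_cdf(0.80)  # about 2.49
sd, r_squared, response_rate = 0.5, 0.30, 0.80

def mdi(n_per_site, participation=1.0, subgroup=1.0):
    """Minimum detectable impact (as a proportion) for an even program/control split."""
    n_per_arm = n_per_site * subgroup * response_rate / 2        # respondents per arm
    se = sd * sqrt((1 - r_squared) * (1 / n_per_arm + 1 / n_per_arm))
    return z_multiplier * se / participation                     # no-show adjustment

rows = [("Full sample, full participation", {}),
        ("Full sample, 75% participation", {"participation": 0.75}),
        ("50% subgroup, full participation", {"subgroup": 0.5})]
for label, kwargs in rows:
    impact = mdi(600, **kwargs)
    # prints 9.5 / 0.19, 12.7 / 0.25, and 13.4 / 0.27, matching Table 1
    print(f"{label}: MDI = {100 * impact:.1f} pp, effect size = {impact / sd:.2f}")

# Note (b) conversion: effect size times the outcome's standard deviation gives the
# MDI in percentage points, e.g. 0.08 * 30 pp = 2.4 pp for a 10 percent proportion.
print(f"{0.08 * sqrt(0.10 * 0.90) * 100:.1f} pp")                # 2.4 pp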




