Supporting Justification for OMB Clearance of Teen Pregnancy Prevention Replication Evaluation (OMB Control #0990-NEW)



Part B: Statistical Methods for Baseline Data Collection

January 2012


(Updated June 2012)


B1. Respondent Universe and Sampling Methods

For the TPP Replication Study, HHS has selected three program models, representing different approaches to the prevention of teen pregnancy, and will select several replications of each model. Of the nine replications selected, five will be entirely school-based and four will operate in community settings (clinics, social service or other public and private agencies and organizations, churches) or a mix of both types of setting. The total sample of youth for the study is approximately 8,550, sufficient to detect policy-relevant impacts of individual program replications. For each replication (which will occur across multiple sites), youth will be assigned to a treatment group that receives the intervention or to a control group that does not.

Selection of the unit of randomization will be driven by (a) the setting in which the replication is implemented, (b) the need to minimize disruption of the program's normal operation, and (c) the desire to minimize contamination across groups, to the greatest extent possible. Although random assignment of schools is the most desirable option for school-based studies, it requires a level of resources that the TPP grantees who were candidates for the evaluation do not have. Specifically, many of the grantees do not have access to the number of schools needed for a school-based random assignment study, either because there are not enough schools in the locality the grantee is serving or because of a lack of funds to serve youth in so many different schools. Given these resource constraints, and because all the interventions will be delivered by trained program staff rather than school staff (reducing, though not eliminating, potential contamination), classes (e.g., health or wellness classes) will be the unit of random assignment within each school in the three replications of Reducing the Risk. In the three replications of the ¡Cuídate! program model, where the program may be delivered after school or as a "pullout" from a regular class, individual youth will be randomly assigned. In community-based settings, it would be impossible to randomly assign organizations (even if grant resources were adequate for such a design), since the settings are quite heterogeneous within a locality (YMCAs, social service or child welfare offices, other youth-serving locations). In these cases, individual youth will be randomly assigned.

A baseline survey will be conducted with both program and control groups before the youth in the program group are exposed to the pregnancy prevention intervention. The survey will be web-based and will use audio computer-assisted self-interview (ACASI) technology. Where possible, the survey will be administered in groups; when necessary to increase response rates or to deal with absences, this method will be augmented with individual web surveys and telephone follow-up. For programs that are individually delivered, such as a program delivered in a school-based or community-based clinic, baseline surveys will be completed by individual youth as they enter the study, with appropriate privacy safeguards. We will also use program participation data, collected and reported by the grantee, as required under the terms of the grant.

The universe of potential respondents will vary across study sites, depending on the type of program in place at each site. Hence, we first describe the possible types of program structures and the corresponding study design.

We expect that, in three of the five school-based replications, classes will be randomly assigned. Random assignment will occur after students have been assigned to classes but before the classes are scheduled to begin. Classes will be assigned equally to treatment and control conditions. The number of classes needed will vary with the number of students in each class. For the burden calculations, we have assumed a sample of 48 classes in each of the three replication sites where classes will be randomly assigned, with 19-20 students per class who have parental consent to participate, for a beginning sample of approximately 950 students per site (approximately 2,850 across the three sites). In the remaining six replications, individual youth will be randomly assigned, with a sample size of approximately 950 in each site (a total of 5,700 across the six sites). The initial total sample size for the TPP evaluation is therefore approximately 8,550 youth.

Power Calculations

We will first conduct site-level analyses, followed by a set of pooled analyses that use data from the three replications of each model. All power calculations are based on the analytic sample at final follow-up (80 percent of originally consented youth).


The statistical power of the design depends on several parameters that are not directly observable but can be estimated with some precision. In particular, in a cluster-randomized design, minimum detectable effects (MDEs) depend on the intraclass correlation (ICC), the proportion of level-2 variance explained by covariates (level-2 R-squared), and the proportion of level-1 variance explained by covariates (level-1 R-squared). To obtain plausible values of the ICC and R-squared parameters for the current study design, we analyzed relevant data from the National Longitudinal Study of Adolescent Health (Add Health). From these data, we estimate the ICC to be 0.025, the level-1 R-squared to be 0.35, and the level-2 R-squared to be 0.65.
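
For reference, the following restates the standard MDE formula for a binary outcome in a cluster-randomized design (a sketch of the conventional formula consistent with the parameters above, not an equation quoted from the study's own methodology):

$$\mathrm{MDE} \;=\; M_{\alpha,\beta}\,\sqrt{\left[\rho\,(1-R_2^2)+\frac{(1-\rho)(1-R_1^2)}{\bar{n}}\right] p\,(1-p)\left(\frac{1}{J_T}+\frac{1}{J_C}\right)}$$

where $\rho$ is the ICC, $R_2^2$ and $R_1^2$ are the level-2 and level-1 R-squared values, $\bar{n}$ is the average number of analyzed students per classroom, $p$ is the control-group prevalence of the outcome, $J_T$ and $J_C$ are the numbers of treatment and control clusters, and $M_{\alpha,\beta} \approx 2.80$ for a two-tailed test at $\alpha = .05$ with 80 percent power. Setting $\rho = 0$ and $\bar{n} = 1$, with individual sample sizes in place of cluster counts, gives the corresponding formula for individual random assignment.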


The MDEs for the site-level and pooled impact analyses are presented in Exhibit A16.1. These estimates suggest that the study is adequately powered to detect impacts on sexual behavior outcomes at the individual site level. However, given the low prevalence of teen pregnancy and STIs, it is very unlikely that the evaluation will be able to detect program impacts on those outcomes in the site-level analyses. The larger samples in the pooled analyses increase the likelihood that we will be able to detect effects on these outcomes.


The sample sizes shown for the analytic sample at final follow-up are the expected available samples (i.e., 80 percent of the originally consented and randomly assigned sample). For behavioral outcomes such as "sex in the last 90 days," our calculation of the analytic sample size needed (and hence of the initial sample to be randomly assigned) in each site-specific design was guided by findings from other evaluations of sexual health interventions for teens and by prevalence estimates derived from Add Health data. Pooling the data across replications will allow us to detect smaller impacts on such behaviors. No comparable guidance existed for calculating the sample size needed to detect impacts on teen pregnancy, births, or STIs, since prior evaluations have not focused on these outcomes; prevalence data on these outcomes provided some assurance that we might be able to detect program impacts using the pooled data.


We will report the sample size at final follow-up as a percentage of the initially consented sample size, as a measure of the internal validity of the findings. The cumulative response rate, which bears on the external validity of the study's findings, will be reported separately. It is of course essential to report and assess both measures of validity when considering the study's findings.


Exhibit A16.1: Minimum Detectable Effects for Site-Level Analysis in Each Site or Pooled for Three Sites at Longer-Term Follow-Up
(entries are minimum detectable effects, in percentage points, for each outcome variable)

Design (Treatment:Control ratio)                    Teen Pregnancy   Sex in Previous 90 Days   STI

A: Safer Sex
Single site (1:1), n = 720 individuals                   5.8                 7.4               3.3
Single site (2:1), n = 720 individuals                   6.1                 7.8               3.5
Three pooled sites (1:1), n = 2,160 individuals          3.3                 4.3               1.9
Three pooled sites (2:1), n = 2,160 individuals          3.5                 4.5               2.0

B: Reducing the Risk
Single site (1:1), n = 56 classrooms                     3.3                 8.2               2.4
Single site (2:1), n = 56 classrooms                     3.5                 8.8               2.6
Three pooled sites (1:1), n = 168 classrooms             1.9                 4.8               1.4
Three pooled sites (2:1), n = 168 classrooms             2.0                 5.1               1.5

C: ¡Cuídate!
Single site (1:1), n = 800 individuals                   3.1                 7.7               2.4
Single site (2:1), n = 800 individuals                   3.3                 8.2               2.5
Three pooled sites (1:1), n = 2,400 individuals          1.8                 4.4               1.4
Three pooled sites (2:1), n = 2,400 individuals          1.9                 4.7               1.5



For all power calculations, we set the alpha level to 5 percent for a two-tailed test and the power of the test to 80 percent. We assumed that 35 percent of control group members would have had sex in the 90 days prior to the follow-up survey (www.cdc.gov/mmwr/pdf/ss/ss5905.pdf); the exception is the Safer Sex Intervention (SSI), in which all participants are sexually active at baseline and we assume that 75 percent will be sexually active at follow-up. We assumed that 2 percent of control group members would have contracted an STI (4 percent in SSI, due to the higher rate of sexual activity) (http://www.cdc.gov/std/stats09/tables/10.htm). We further assumed that 132 out of 1,000 teens in the control group will become pregnant during the course of the SSI study, and 45 out of 1,000 during the course of the Reducing the Risk and ¡Cuídate! studies; these assumptions are based on pregnancy rates in high-risk groups in the relevant age ranges and on the length of the follow-up period. Finally, we assumed that variables collected in the baseline survey, including baseline measures of the outcome variable, would explain 35 percent of the variation in the outcome measure under individual random assignment. For cluster random assignment, we assume that those variables will also explain 65 percent of the variation at the group level and that the classroom-level ICC is 0.025, as explained in the text.
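
To illustrate how these assumptions map onto the exhibit, the sketch below applies the MDE formula given earlier to two representative cells. The multiplier of 2.80, the assumed 16 analyzed students per classroom, and the function names are our own illustrative assumptions rather than specifications from the study, so small discrepancies from the exhibit (rounding, exact cluster sizes) are expected.

```python
# Hedged sketch of the MDE calculations; the parameter choices marked below
# are illustrative assumptions, not values quoted from the study.
from math import sqrt

M = 2.80  # z_(.975) + z_(.80) ~= 1.96 + 0.84 for alpha = .05 (two-tailed), 80% power

def mde_individual(p, n_t, n_c, r2=0.35):
    """MDE (as a proportion) under individual random assignment."""
    return M * sqrt((1 - r2) * p * (1 - p) * (1 / n_t + 1 / n_c))

def mde_cluster(p, j_t, j_c, n_bar, icc=0.025, r2_l2=0.65, r2_l1=0.35):
    """MDE (as a proportion) under classroom (cluster) random assignment."""
    var_term = icc * (1 - r2_l2) + (1 - icc) * (1 - r2_l1) / n_bar
    return M * sqrt(var_term * p * (1 - p) * (1 / j_t + 1 / j_c))

# SSI single site, 1:1 (360 youth per arm), STI prevalence 4%:
print(round(100 * mde_individual(0.04, 360, 360), 1))  # -> 3.3, matching the exhibit

# Reducing the Risk single site, 1:1 (28 classrooms per arm), pregnancy rate
# 4.5%, assuming ~16 analyzed students per classroom:
print(round(100 * mde_cluster(0.045, 28, 28, 16), 1))  # -> 3.4, vs. 3.3 in the exhibit
```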


B2. Procedures for Collection of Information

The evaluation will collect information on youth baseline characteristics and behaviors from approximately 8,550 youth across nine selected replication sites. To the greatest extent possible, baseline data and subsequent follow-up data will be collected using web-based ACASI technology, as described in Part A of this submission.

In clinic sites, trained clinic staff will obtain youth consent and, where indicated (i.e., when parents accompany a minor child to the clinic), parental consent. In school-based replication sites, school staff will assist in obtaining active parental consent and student assent to participate in the evaluation. Parental consent will be obtained at the beginning of the study for possible participation in the program and for the baseline and all subsequent data collections; we will not re-consent parents at any subsequent time. Youth, on the other hand, will be asked to assent at baseline and to re-assent before completing each of the two subsequent surveys.

In school-based settings, the contractor will prepare a final survey roster of all youth at each school for whom it has received parental consent and student assent and who are expected to complete the baseline questionnaire. Contractor staff will work with schools to determine dates and venues for administering the survey to consented youth. It is anticipated that non-teacher school staff (e.g., nurses, guidance counselors, school support staff) designated by the school will assist with gathering and coordinating youth for survey administration. Contractor staff will arrive at the school to oversee the survey, use the survey roster to take attendance, determine whether any youth are missing, and exclude anyone not on the roster.

Survey administration begins with contractor staff seating each sample member at a computer (in the designated private space) and giving him or her headphones. Contractor staff then log on to the ACASI system, conduct a basic sound check, enter a pre-assigned ID code, and log into the web survey. The staff member ensures that the respondent is comfortable with the equipment. The survey session begins with a display of instructions that are also narrated through the headphones; the respondent is then left to complete the survey in private. Where group administration of the baseline survey is impossible (e.g., in clinics where clinic staff are responsible for recruiting study participants at the time they present for services), a trained clinic staff member will carry out the functions described above on an individual basis, once the adolescent has agreed to participate in the study and before random assignment.

In both circumstances, staff who will oversee the survey will be trained and equipped with laptop computers, headphones and the wireless internet equipment necessary to ensure that all study participants are able to access the web-based ACASI platform.

In situations where a sample member is absent from the group administration, an alternative time for individual administration will be scheduled. If any youth are not available for the survey administration or make-up sessions, contractor staff will contact them and provide a PIN/password for web completion. English and Spanish versions of the survey will also be available in hard copy, for use in the event of unanticipated technical glitches at the time of administration. The hard copies will be designed to look like the web version and will contain the identical questions, skip and branching patterns, and overall instructions as the web-based survey. We expect the use of these hard copy surveys to be rare, but we will train data collection staff in the procedures necessary to protect respondent privacy.


Questionnaire Part A asks for background information and concludes with a single screening question about sexual experience. Sexually experienced youth will complete Questionnaire Part B1, while those who have not been sexually active will complete Questionnaire Part B2.

Once the sample member has completed the survey, the last screen will inform him or her that "the survey is now complete." The youth will leave the computer, real-time verification of completion will be recorded in the survey database, and the youth will receive a $25 gift card. In the rare cases where a hard copy survey is completed, youth will place the entire questionnaire in a return envelope, seal it, and return it to a contractor staff member. Staff will send the completed questionnaires to the contractor's office, where they will be receipted, checked for completeness, and entered into the survey database.

B3. Methods to Maximize Response Rates and Deal With Nonresponse

We expect a response rate of better than 90 percent on the baseline survey because survey administration will occur shortly after active parental consent is received (or, for clinic patients, at the time they are recruited to the study). This timing will ensure that our contact data are current (no location problems) and that surveys can be administered to most youth in the location where the program takes place (for example, the school). In addition, we expect that obtaining each site's willing assistance will be very important to maximizing the response rate; we will therefore invest significant effort in gaining sites' cooperation, minimizing burden on them, integrating an effective consent process, and assuring privacy to the youth participants. Sites will be given detailed information about the surveys, how and on what schedule they will be administered, what involvement and time will be required of school and agency staff, and how data will be used and protected. Bringing sites into the process while minimizing burden will help ensure site support for the data collection effort.

We expect to achieve an 80 percent response rate at the second and final follow-up (and an 86 percent or higher response rate on the intermediate follow-up survey). Eligibility for each data collection point does not require participation in the prior data collection point, as long as consent and assent are in place for the current one. As indicated in B.2, parental consent obtained at the beginning of the study covers the baseline and all subsequent data collections, and we will not re-consent parents; youth will be asked to assent at baseline and to re-assent before each of the two subsequent surveys.

In the study analysis and reports, we will distinguish between internal and external validity. For internal validity, we are concerned only with the survey completion rates of youth who have been randomized (or whose classes were randomized) into the study. The rates of 90 percent at baseline, 86 percent at first follow-up, and 80 percent at final follow-up are not, however, cumulative: at each time point, the percentage represents the expected proportion of originally consented youth who complete the survey. Following the What Works Clearinghouse guidelines, we believe that, with the expected completion rates at follow-up and no serious attrition bias, we can include in the follow-up analyses all youth who responded, including those for whom baseline data are missing.


For external validity, we need to calculate a cumulative response rate. In this case, the program and school response rate is assumed to be 100 percent, since grantees and their school or agency partners were required, as a condition of the grant, to participate in the evaluation if invited. If we assume a parental/youth consent rate of 90 percent (in our experience, the two rates will be the same), then the cumulative response rate is 90% x 90% (81%) at baseline, 90% x 86% (77.4%) at first follow-up, and 90% x 80% (72%) at final follow-up.



                     Completion rate    Cumulative (based on prior contact)
Consent/assent            0.90                 0.90
Baseline                  0.90                 0.81
First follow-up           0.86                 0.77
Final follow-up           0.80                 0.72


Even with such high response rates, however, survey nonresponse can bias impact estimates if the outcomes of survey respondents and non-respondents differ, or if the types of individuals who respond to the surveys differ between the treatment and control groups. To correct for differences between respondents and non-respondents on the follow-up surveys, we will construct sample weights so that the baseline characteristics of follow-up survey respondents mirror those of the full randomized sample.
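
As one illustration of how such weights can be constructed (a minimal sketch of inverse-probability-of-response weighting under our own assumptions; the document does not specify the estimation method, and the variable and function names below are hypothetical):

```python
# Minimal sketch of nonresponse weighting; the column names and the use of
# logistic regression are illustrative assumptions, not study specifications.
import pandas as pd
import statsmodels.api as sm

def nonresponse_weights(df: pd.DataFrame, covariates: list) -> pd.Series:
    """Weight follow-up respondents by the inverse of their estimated
    probability of responding, so that weighted respondents mirror the
    full randomized sample on baseline characteristics."""
    X = sm.add_constant(df[covariates])
    model = sm.Logit(df["responded"], X).fit(disp=0)
    p_hat = model.predict(X)
    weights = 1.0 / p_hat
    # Normalize so the weights sum to the number of respondents.
    resp = df["responded"] == 1
    weights[resp] *= resp.sum() / weights[resp].sum()
    return weights.where(resp, 0.0)

# Usage (hypothetical data frame with baseline covariates and a
# follow-up response indicator):
# df["weight"] = nonresponse_weights(df, ["age", "female", "sex_ever_baseline"])
```

Normalizing the weights to sum to the number of respondents keeps the weighted sample size interpretable while preserving the correction for differential response.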

Methods to achieve high response rates at follow-up will be discussed in future information collection requests.

B4. Tests of Procedures or Methods to be Undertaken

The instrument submitted for clearance here is, overwhelmingly, the same measure as that approved by OMB for the Evaluation of Pregnancy Prevention Approaches study (0970-0360). That measure was pretested by Mathematica as a paper-and-pencil survey. Mathematica staff recruited pretest participants and talked directly with all interested teens to explain the pretest and the need to obtain parental consent prior to participation.

Youth were asked to participate in one of five pretest administrations, during which small groups of four or five teens completed the self-administered questionnaire in a group setting and then took part in a one-hour, one-on-one debriefing with a researcher.

Because we need to ensure that the administration of the pretest mirrors as closely as possible what will happen during the actual study, the TPP baseline measure will be translated into a web-based ACASI format and pretested in both a school and a non-school setting, with both English- and Spanish-speaking respondents. A pretest report will be submitted to OMB, together with any changes to the instrument or data collection procedures recommended on the basis of the pretest results.

B5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

Administration of the baseline survey for the TPP evaluation will be overseen by the contracting organization, Abt Associates Inc., and its subcontractor, DIR. The same contractor will analyze the data with support from evaluation colleagues at the University of Chicago's Chapin Hall. Individuals whom OAH has consulted on the collection and/or analysis of the baseline data are listed below.


Alan Hershey

Mathematica Policy Research, Inc.

P.O. Box 2391

Princeton, NJ 08543

(609) 275-2384


Christopher Trenholm

Mathematica Policy Research, Inc.

P.O. Box 2391

Princeton, NJ 08543

(609) 936-279-6384


Laura Kalb

Mathematica Policy Research, Inc.

955 Massachusetts Avenue, Suite 801

Cambridge, MA 02139

(617) 301-8989


Kristin Moore

Child Trends

4301 Connecticut Ave. NW
Washington, DC 20008-2333
(202) 362-5580


Jennifer Manlove

Child Trends

4301 Connecticut Ave. NW
Washington, DC 20008-2333
(202) 362-5580


Meredith Kelsey

Abt Associates

55 Wheeler St.

Cambridge, MA 02138


Christine Markham

The University of Texas School of Public Health

P.O. Box 20186

Houston, TX 77225

(713) 500-9646


Pat Paluzzi

President

Healthy Teen Network

1501 Saint Paul St., Suite 124

Baltimore, MD 21202

(410) 685-0410


Susan Philliber

Philliber and Associates

16 Main St.

Accord, NY 12404

(845) 626-2126


Michael Resnick

Division of Adolescent Health and Medicine

717 Delaware St. SE, Suite 370

Minneapolis, MN 55414-2959

(612) 624-9111


Matt Stagner

Chapin Hall – University of Chicago

Executive Director

1313 E. 60th St.

Chicago, IL 60637

[email protected]


Melissa Gilliam, MD MPH

Department of Obstetrics and Gynecology

The University of Chicago

5841 S. Maryland Ave., MC2050

Chicago, IL 60637

[email protected]





Inquiries regarding statistical aspects of the study design should be directed to:

Amy Feldman Farb, Ph.D.

Office of Adolescent Health

U.S. Department of Health and Human Services

1101 Wootton Parkway, Suite 700

Rockville, MD 20852

(240) 453-2836


or


Lisa Trivits, Ph.D.

Office of the Assistant Secretary for Planning and Evaluation (ASPE)

U.S. Department of Health and Human Services

200 Independence Ave, SW

Washington, DC 20201

(202) 205-5750


Dr. Feldman Farb and Dr. Trivits are the TPP Evaluation project officers. Both have overseen the current baseline instrument.

Inquiries related to the Teen Pregnancy Prevention Program, or evaluations of it, may be directed to:

Amy Farb, Ph.D.

Office of Adolescent Health

Office of the Assistant Secretary for Health

U.S. Department of Health and Human Services

1101 Wootton Parkway, Suite 700

Rockville, MD 20852

(240) 453-2836






