Supporting Justification for OMB Clearance of Evaluation of Pregnancy Prevention Approaches

Part A: Justification for the Collection of Follow-up Data
CONTENTS

Abstract
A1. Circumstances Making the Collection of Information Necessary
  1. Legal or Administrative Requirements that Necessitate the Collection
  2. Study Objectives
A2. Purpose and Use of the Information Collection
A3. Use of Improved Information Technology and Burden Reduction
A4. Efforts to Identify Duplication and Use of Similar Information
A5. Impact on Small Businesses or Other Small Entities
A6. Consequences of Collecting Information Less Frequently
A7. Special Circumstances Relating to the Guidelines of 5 CFR 1320.5
A8. Comments in Response to the Federal Register Notice and Efforts to Consult Outside the Agency
A9. Explanation of Any Payment or Gift to Respondents
A10. Assurance of Privacy Provided to Respondents
A11. Justification for Sensitive Questions
A12. Estimates of Annualized Burden Hours and Costs
A13. Estimates of Other Total Annual Cost Burden to Respondents and Record Keepers
A14. Annualized Cost to the Federal Government
A15. Explanation for Program Changes or Adjustments
A16. Plans for Tabulation and Publication and Project Time Schedule
  1. Analysis Plan
  2. Time Schedule and Publications
A17. Reason(s) Display of OMB Expiration Date is Inappropriate
A18. Exceptions to Certification for Paperwork Reduction Act Submissions
REFERENCES

TABLES
A.1 Instruments and Estimated Burden Included in this Submission
A.2 Description of the PPA Sites: Program Description, Target Sample, and Targeted Outcomes
A.3 PPA Evaluation Sites: Follow-up Schedules
A.4 Incentives for Follow-up Data Collection, by Site

EXHIBITS
A.11.1 Summary of Sensitive Questions and their Justification
A.12.1 Reporting Burden on Study Participants for Follow-Up Data Collection
ATTACHMENTS
ATTACHMENT A: Chicago Public Schools: Follow-up Instrument
ATTACHMENT B: OhioHealth: Follow-up Instrument
ATTACHMENT C: Children’s Hospital Los Angeles (CHLA): Follow-up Instrument
ATTACHMENT D: Oklahoma Institute for Child Advocacy (OICA): Follow-up Instrument
ATTACHMENT E: EngenderHealth: Follow-up Instrument
ATTACHMENT F: Live the Life Ministries (LtL): Follow-up Instrument
ATTACHMENT G: Princeton Center for Leadership Training (PCLT): Follow-up Instrument
ATTACHMENT H: Crosswalks of Follow-up and Baseline Items for Each Site
ATTACHMENT I: 60-day Federal Register Notice
Abstract

The U.S. Department of Health and Human Services (HHS) is conducting the Evaluation of Adolescent Pregnancy Prevention Approaches (PPA), an eight-year demonstration designed to study the effectiveness of promising policy-relevant strategies to reduce teen pregnancy. The study was designed to include up to eight evaluation sites, and at this point it appears that there will be seven sites:
one site (Chicago Public Schools, implementing the HealthTeacher curriculum) has been recruited, and baseline and follow-up surveys have been implemented; and
six federally funded grantees have been recruited, and a baseline survey has been implemented in three of those sites.
Approval for outreach discussions with stakeholders, experts in the field, and program developers was received on November 24, 2008 (OMB Control No. 0970-0360). Approval for the baseline survey data collection and the collection of youth participant records was received on July 26, 2010 (OMB Control No. 0970-0360). Emergency clearance for site-specific variants of the baseline survey questionnaire was received on August 22, 2011 (OMB Control No. 0970-0360). Per the conditions of the emergency approval, a request for standard clearance of the site-specific baseline instruments has been submitted and is currently under review.
Similar to the baseline survey effort, a large group of federal staff has collaborated to modify a previously drafted PPA follow-up instrument into a “concordance follow-up instrument” suitable for all HHS pregnancy prevention evaluations, including but not limited to PPA. HHS is trying to maximize consistency across evaluations of federal pregnancy prevention grant programs. In 2010 and 2011, the Administration for Children and Families (ACF) and the Office of Adolescent Health (OAH), in coordination with other HHS offices overseeing pregnancy prevention evaluation, collaborated to consider revisions to the previously drafted PPA instrument. Approval for the first follow-up data collection, the follow-up “concordance” instrument (to be used in Chicago) and one site-specific follow-up questionnaire was received on September 27, 2011 (OMB Control No. 0970-0360). We now seek OMB approval for the remaining site-specific variants of the follow-up instrument1 and the follow-up data collection, including all rounds of follow-up using the instruments submitted for review.
As in the case of baseline data collection, site-specific variation in follow-up data collection instruments is planned because of the differences among the seven PPA sites. As PPA sites were recruited, we found that variations in their target populations and program models make it essential to tailor data collection, at both baseline and follow-up, to analytical priorities in each site. Developing those site-specific instruments involves working closely with the six sites that are federal pregnancy prevention grantees, and with the local evaluators they have engaged as a condition of their grants.
The collaboration with the six grantee sites also involves specifying the exact schedule for follow-up data collection. Across these sites, there is variation in the length of the program being tested, the age of the target population, and the key outcomes on which impacts are of greatest interest, and thus in the most suitable schedule for follow-up surveys. The PPA technical work group (TWG) provided important guidance for the timing of two follow-up surveys: a first follow-up no earlier than 3-6 months after program completion, and a second no later than 18-24 months after program completion. This guidance has been closely followed, with well-justified exceptions. In two cases, negotiation with local evaluators led to plans for three follow-ups, with the third follow-up inserted as an early survey. In one case, the final follow-up timing deviates from the TWG guidance because the program lasts 18 months; follow-ups are scheduled at 6, 18, and 30 months after enrollment, which means there will be a follow-up during the intervention, immediately after it ends, and 12 months after it ends.
The process of working out these instruments and survey schedules has now been completed site by site, and the result determines when the first follow-up survey must be administered in each site, and thus determines for which sites approval of follow-up data collection is most urgent. A previous submission focused on first follow-up data collection in the two earliest sites: Chicago and Oklahoma (approval received September 27, 2011; OMB Control No. 0970-0360). The current submission presents follow-up questionnaires and estimated burden for the remaining sites and rounds of follow-up data collection.2 Table A.1 provides a summary of instruments and estimated burden included in this submission.
Table A.1. Instruments and Estimated Burden Included in this Submission

| Site | Instrument: Previously Approved, With Minor Modifications | Instrument: Previously Approved, With No Changes | Instrument: New Submission | Burden Estimate: FU1 | Burden Estimate: FU2 | Burden Estimate: Additional Early Follow-Up |
| --- | --- | --- | --- | --- | --- | --- |
| Chicago Public Schools | X | | | | X | |
| OhioHealth | | | X | X | X | X |
| CHLA | | | X | X | X | |
| Oklahoma Institute for Child Advocacy (OICA) | X | | | | X | |
| EngenderHealth | | | X | X | X | |
| Live the Life (LtL) | | | X | X | X | |
| Princeton Center for Leadership Training | | | X | X | X | |
A1. Circumstances Making the Collection of Information Necessary
For decades, policymakers and the general public have remained concerned about the prevalence of sexual intercourse among adolescents. Although adolescents today are waiting somewhat longer before having sex than they did in the 1990s, 60 percent of teenage girls and more than 50 percent of teenage boys report having had sexual intercourse by their 18th birthday.3 Approximately one in five adolescents has had sexual intercourse before turning 15.4 Rates of teenage pregnancy declined by 38 percent from 1990 to 2004, and the rate of teen births followed a similar decline5 until recently, when the rate of births rose by 5 percent from 2005 to 2007 for teens aged 15-19.6
HHS is interested in identifying and evaluating promising approaches to reduce teen pregnancy, associated risk behaviors, and their consequences. Combined with the baseline data collection, the follow-up data collection described in this ICR will provide important information to guide policy decisions aimed at addressing this serious concern.
The need to tailor the content of the PPA follow-up questionnaires to specific sites reflects how the sites' programs have been funded. The PPA site programs are supported by two major funding streams. The first stream, the Teen Pregnancy Prevention (TPP) Program, administered by the HHS Office of Adolescent Health, has two funding tiers: 75 percent of funds go to discretionary grants to replicate evidence-based programs, and 25 percent go to discretionary grants to conduct innovative demonstration evaluations. The second funding stream, the Personal Responsibility Education Program (PREP), administered by the Administration for Children and Families, provides formula grants to states to replicate evidence-based teen pregnancy prevention programs or substantially incorporate elements of such programs. PREP also provides funding for discretionary grants for Innovative Strategies demonstration evaluations, as well as a Tribal program. Many grantees funded under these two streams are required to conduct their own local evaluations, and this is true of the grantees selected as PPA sites.
In addition to local evaluations, these grantees are required, if selected, to participate in one of several federal evaluation studies currently being planned or implemented that examine the impact of teen pregnancy prevention programs. Collaboration between grantees and the PPA evaluation is mandated. One part of this collaboration is to develop "blended" questionnaires that address PPA research objectives but also incorporate the site-specific research priorities established by local evaluators in their required plans. The result is that tailored versions of all questionnaires, baseline and follow-up, are required for the PPA sites.
1. Legal or Administrative Requirements that Necessitate the Collection

Public Law 110-161, which set fiscal year (FY) 2008 appropriations levels, included the following language: "$4,500,000 shall be available from amounts available under section 241 of the Public Health Service Act to carry out evaluations (including longitudinal evaluations) of adolescent pregnancy prevention approaches." The same language appropriated $4,450,000 in each of FYs 2009, 2010, and 2011. These funds have been used for the PPA evaluation.
In FYs 2008 and 2009, these funds were overseen by ACF’s Family and Youth Services Bureau (FYSB). In FYs 2010 and 2011, these funds were overseen by HHS’ Office of Adolescent Health (OAH). However, through all FYs, FYSB and OAH have asked ACF/OPRE to assist in facilitating the research contract. ACF is now assisting OAH in facilitating the contract.
To accomplish the objective of the appropriations, ACF and OAH (hereafter referred to as HHS) seek OMB approval of the remaining site-specific follow-up instruments and of all rounds of follow-up data collection described in this submission.
2. Study Objectives

The objective of the PPA evaluation is to test selected promising approaches to prevent teen pregnancy among middle school- and high school-aged teens. The evaluation will help HHS determine the effectiveness of various approaches in affecting key outcomes related to pregnancy prevention (for example, sexual debut, pregnancy, sexually transmitted disease [STD] infection, and so on). Ultimately, the purpose of the evaluation is to provide stakeholders, including practitioners and federal and other policymakers, with information on a range of approaches that hold promise for preventing teen pregnancy, and, through the follow-up surveys, to assess rigorously the effectiveness of these approaches.
In the PPA evaluation, HHS has identified seven study sites that will implement different pregnancy prevention approaches. In three of these sites, the programs to be tested will be school-based, operated in high schools or middle schools. In the other sites, the programs to be tested will be operated in community-based organizations (CBOs). The study will use a sample of approximately 9,000 teens across all sites. In each site, youth will be assigned to a treatment group that receives the program of interest or to a control group that does not. In five sites, random assignment will generally be done at the cluster level (that is, the school or CBO) to ensure that the behavior of control group youth is not affected, or "contaminated," by interaction with treatment group youth. In the other two sites, random assignment will be done at the individual level, because risks of contamination are low.
Table A.2 provides a description of each of the sites, including the program to be evaluated, the expected sample size, the target population, and key outcomes to be measured. Creating site-specific instruments will enhance what can be learned from each site and avoid awkward incongruities between standard questions and site circumstances. For example, Children’s Hospital Los Angeles and OhioHealth will serve pregnant and parenting mothers; asking sample members in these sites whether they had ever had sexual intercourse would be irrelevant and perhaps offensive. On the other hand, analysis of impacts on repeat pregnancies would be well served by including items about the respondent’s relationship with the child’s father, which may be a predictor of repeat pregnancy and therefore an important covariate, but is not part of the initial PPA questionnaire. Similarly, questionnaire tailoring is important in the Oklahoma Institute site, which will serve teenagers in foster care. For these youth, many of whom have been sexually abused, special care must be taken to modify questions about sexual activity to be sure they capture information on consensual sex. In several sites where programs will operate in quite conservative communities, questions about oral and anal sex had to be pared down or dropped for the evaluation to be accepted. The negotiation of these and other adjustments has involved repeated and detailed discussions with grantees and local evaluators.
Table A.2. Description of the PPA Sites: Program Description, Target Sample, and Targeted Outcomes

| Site | Program Description | Targeted Sample | Expected Total Sample Enrolled | Targeted Outcomes |
| --- | --- | --- | --- | --- |
| Chicago Public Schools (Chicago, Illinois) | HealthTeacher: a comprehensive sex education curriculum originally developed by the University of Chicago. An enhanced version was developed in conjunction with CPS, consisting of 10-12 45-minute lessons taught to students in their health class, with an emphasis on family health and sexuality. | Youth in 7th grade in participating schools | 1,583 | |
| OhioHealth Research and Innovation Institute (Columbus, Ohio) | T.O.P.P.: an 18-month clinic/hospital-based program to delay repeat pregnancies among adolescents ages 10-19 by improving access to reproductive health services and contraceptive care. The program consists of (1) monthly telephone calls from a nurse educator to provide contraceptive information and help coordinate access to contraceptive services, and (2) access to contraceptive services through a mobile OB/GYN trailer and transportation to clinic services. | Pregnant and parenting teens, ages 10-19, in OhioHealth hospitals and clinics | 600 | |
| Children’s Hospital Los Angeles (Los Angeles, CA) | Project AIM: an evidence-based youth development program for teen parents under age 21 receiving case management as part of California’s Adolescent Family Life and Cal-Learn programs. The program consists of six 60-minute individual sessions and three 90-minute group sessions, in addition to ongoing case management and access to referrals for other services. It has been adapted for use in preventing repeat pregnancies among 15-18 year old females by focusing on aspirations and future planning while incorporating content specific to teen mothers (such as contraceptive use, relationship issues, and balancing their roles as adolescents and young mothers). | Pregnant and parenting teen mothers receiving case management services through clinic sites, ages 15-18 | 1,400 | |
| Oklahoma Institute for Child Advocacy (Oklahoma, Illinois, Maryland, California) | Power Through Choices: a sexuality education curriculum implemented in foster care group homes, consisting of ten 90-minute sessions (15 hours of curriculum in total). It teaches youth how to avoid sexual risk behaviors, pregnancy, and sexually transmitted infections. Topics include anatomy/reproductive health, increasing communication skills, avoiding sexually transmitted infections/HIV, and preventing pregnancy through the use of contraception. | Youth in foster care group homes, ages 14-18 | 1,080 | |
| EngenderHealth (Austin, TX) | Gender Matters: a 20-hour program focused on helping teens achieve a sound understanding of healthy gender roles, healthy relationships, and empowerment, to delay sexual initiation and increase consistency of condom use. It focuses on concepts of masculinity and femininity and their connections to sexual risk behavior. | Youth participating in the Travis County Summer Youth Employment Program, ages 14-15 | 1,125 | |
| Live the Life Ministries | WAIT Training: an 8-hour abstinence-based curriculum delivered by teachers as a required class in each of grades 7 and 8, for a total of 16 hours. The intervention is delivered in a short, intensive period, typically over eight consecutive school days each year. It focuses on educating young people on pregnancy prevention, setting future goals, responsible behavior, and healthy relationships, and emphasizes that young adolescents should postpone sexual activity and that practicing abstinence is the only way to eliminate the risk of pregnancy and STDs, including HIV. | Youth in 7th grade in participating schools | 1,600 | |
| Princeton Center for Leadership Training (New Jersey, North Carolina) | TeenPEP: a school-based peer-to-peer program in which trained faculty advisors select youth to become a cohesive team of peer educators who serve as sexual health advocates and role models. These peer educators conduct five 90-minute structured and scripted outreach workshops, under the supervision of faculty advisors, for high school 9th graders. Topics include sexual health information, communication with partners and parents, problem-solving, decision-making, negotiation, refusal skills, and self-management skills. | Youth in grade 9 in participating schools | 1,600 | |
A2. Purpose and Use of the Information Collection
Baseline data (whose collection is already approved) will serve several important purposes. They will be used to establish baseline equivalence of the treatment and control groups and thus to confirm the integrity of the random assignment process. Baseline variables will be used to define subgroups for which impacts will be estimated and to adjust impact estimates for the baseline characteristics of nonrespondents to the follow-up survey. Many baseline variables are measures of outcomes that will be measured again at follow-up; their baseline values can be included as covariates in the impact models to improve the precision of impact estimates.
The follow-up data collection for which approval is now sought will focus on two types of outcomes, both of which can only be measured through surveys of youth. The first are sexual risk outcomes, including the extent and nature of sexual activity, use of contraception (if sexually active), pregnancy, and testing for and diagnoses of STDs. The second are a series of intermediate outcomes that may be associated with the sexual risk outcomes and are therefore important to measure as potential pathways of any program effects on sexual risk behavior. Examples of these outcomes include participation in and exposure to pregnancy prevention programs and services, intentions and expectations of sexual activity, relationships with family and friends, knowledge of contraception and sexual risks, dating behavior, and alcohol and drug use. In addition, the survey includes a small number of questions that identify socio-demographic or other characteristics of youth in the study sample, which may be used either for descriptive purposes or as potential covariates in the regression models for measuring program effects. Finally, for sample youth who report not being sexually active, the survey includes questions to support a descriptive analysis of these youth and a future investigation of their potential transition into sexual activity (to protect the privacy of youth who respond to the surveys, the series of questions for non-sexually active youth has been timed to approximate the length of the series for sexually active youth). Follow-up data will be used to address the following research questions on program impact:
Are the (selected) approaches effective at meeting their immediate objectives (for example, improving knowledge of pregnancy risks)?
Are the approaches effective at reducing adolescent pregnancy?
What are their effects on related outcomes, such as postponing sexual activity and reducing or preventing sexual risk behaviors and STDs?
Do these approaches work better for some groups of adolescents than for others?
In each site, the follow-up survey is similar to the already-approved baseline instrument. Some items on the baseline were dropped from the follow-up questionnaire to make room for items that are more relevant for follow-up data collection, such as those that address services received. Additional items were added to address local evaluator interests and collect data on program-specific outcomes. For instance, in the two sites working with pregnant and parenting teens, questions were added to account for the possibility of additional pregnancies and births. In a few sites, the recall period for certain questions has been adjusted to reflect the time between rounds of data collection. Attachments A through G present the follow-up instruments for each of the sites.7 Attachment H provides site-specific "crosswalks" between the questions approved for each site's baseline survey and the questions included in its follow-up questionnaire.
A baseline survey will be conducted with both the program and control groups before the youth in the program group are exposed to the pregnancy prevention programs. In most sites, two rounds of follow-up data collection are planned.8 In most instances, pursuant to the TWG guidance, the first follow-up survey will be conducted no sooner than 3-6 months after the end of the scheduled program intervention for each sample member. The final follow-up survey will be conducted with participating youth no later than 18-24 months after the scheduled end of the program. The exact timing of the two follow-up surveys has been determined in each site, taking into account the length of the program, the age of the target population, and the priority outcomes of interest. Wherever possible, the self-administered survey will be administered in groups; when necessary to increase response rates, or whenever group administration is not feasible, web surveys and telephone interviews (recorded on hard copy) will be used. Table A.3 provides a summary of the schedule for the follow-up data collection in each of the sites.
Major evaluation activities will include the following:
Identifying promising strategies and programs through a review of the literature and interviews with the "field" (for example, researchers, policy experts, and program developers) in order to focus the evaluation on interventions that are of substantial interest to the field and show the most promise for reducing rates of teen sexual activity and pregnancy (completed).
Recruiting sites to participate in an evaluation of selected interventions (from among those identified by the field) and providing assistance on evaluation support activities (completed).
Collecting data on the research sample at baseline and at two follow-up data collections.
Analyzing data collected and preparing reports with the results.
Table A.3. PPA Evaluation Sites: Follow-up Schedules

| Site (Grantee) | Length of Intervention (elapsed time) | Timing of Early Follow-up | Timing of Final Follow-up (from end of program) |
| --- | --- | --- | --- |
| Chicago Public Schools (CPS) | 16 weeks (fall 2010-spring 2011) | 5-6 months | 13-14 months |
| OhioHealth Research and Innovation Institute | 18 months | Early follow-up during program (6 months after enrollment); follow-up at end of intervention (18 months after enrollment) | 12 months (30 months after enrollment) |
| Children’s Hospital Los Angeles (CHLA) | 12 weeks | 9 months | 21 months |
| Oklahoma Institute for Child Advocacy (OICA) | 10 weeks | At program completion; 6 months | 12 months |
| EngenderHealth | 5 days | 6 months | 18 months |
| Live the Life (LtL) | 2 school years (8-day dose each year) | 3-5 months (spring of 8th grade) | 15-17 months (spring of 9th grade) |
| Princeton Center for Leadership Training (PCLT) | 5-16 weeks (depending on school schedule) | 6-7 months | 18-19 months |
HHS is conducting this evaluation through a lead contractor, Mathematica, and its subcontractors: Child Trends, Twin Peaks, LLC, and National Abstinence Education Association.
A3. Use of Improved Information Technology and Burden Reduction
The data collection plan reflects sensitivity to issues of efficiency, accuracy, and respondent burden. Where feasible, information will be gathered from existing data sources; the information being requested through surveys is limited to that for which the youth are the best or only information sources. Improved information technology will be used when appropriate and cost-effective. During the first follow-up data collection, self-administered paper-and-pencil instruments (PAPI) will be used for all group-based completions. In those instances in which the survey must be administered to individuals outside of a group setting, respondents will be provided a PIN/password for web completion or will be administered a telephone survey. The advantages of PAPI over more technologically innovative approaches, such as laptops or personal digital assistants (PDAs), are that it enables respondents to set their own pace; elicits accurate responses to sensitive questions; reduces costs; and simplifies administration logistics, as the majority of interviews will be conducted in a classroom setting. This method is also consistent with other recent youth surveys and evaluations. Studies have shown no difference between PAPI and computer-assisted self-interviewing (CASI) in reports of most measures of male-female sexual activity, including reports such as ever having had sexual intercourse, recent sexual activity, number of partners, condom use, and pregnancy.9,10,11,12,13,14 Turner et al.9 found that CASI improved reporting on low-prevalence behaviors such as male-male sex, injection drug use, and sexual contact with intravenous drug users.
A4. Efforts to Identify Duplication and Use of Similar Information
The information collection requirements for the PPA evaluation have been carefully reviewed to determine what information is already available from existing studies and what will need to be collected for the first time. Although the information from existing studies provides value to our understanding of reducing teenage sexual risk behavior, HHS does not believe that it provides sufficient information on a sufficient range of programs to policymakers and stakeholders aiming to reduce this behavior. The data collection for the PPA evaluation is an essential step to providing this information.
A5. Impact on Small Businesses or Other Small Entities
Programs in some sites may be operated by community-based organizations. The data collection plan is designed to minimize burden on such sites by providing staff from Mathematica Policy Research to assist in group data collection. For respondents who do not complete the survey in the group setting, Mathematica will provide passwords for web completion or will conduct a telephone data collection, thus minimizing requirements for extensive “sample pursuit” by site staff.
A6. Consequences of Collecting Information Less Frequently
Follow-up data are essential to conducting a rigorous evaluation of pregnancy prevention programs, per appropriations. In the absence of such data, funding decisions on teen pregnancy prevention programs will continue to be based on insufficient and outdated information on program effectiveness.
A7. Special Circumstances Relating to the Guidelines of 5 CFR 1320.5
There are no special circumstances for the proposed data collection.
A8. Comments in Response to the Federal Register Notice and Efforts to Consult Outside the Agency
The 60-day notice was published in the Federal Register on December 6, 2011, on page 76164, with the document identifier OS-0990-0382. The text is found in Attachment I. No comments or questions were received.
A9. Explanation of Any Payment or Gift to Respondents
Table A.4 provides a summary of the incentives to be offered in each of the sites. For the school-based administrations (Chicago, Live the Life, PCLT), no incentive is offered at baseline. In the other sites, baseline incentive amounts vary slightly depending on the mode of administration. For follow-up data collection in the school-based sites, a $10 gift card is provided to participants completing the survey in a group setting; a $25 gift card is provided to those completing the survey by phone or on the web. A higher incentive is offered to these respondents because completion outside of the group administration requires greater initiative and cooperation on the part of the respondent, as well as additional time outside of the school day. In the other sites, the incentive amounts vary slightly based on the mode of administration (and associated burden), the mobility of the population, and the length of time from enrollment. A report evaluating the use of incentives in this study will be provided to OMB.
Table A.4. Incentives for Data Collection, by Site

| Site (Grantee) | Baseline | First Follow-up | Second Follow-up |
| --- | --- | --- | --- |
| Chicago Public Schools | None | $10 group; $25 phone/web* | $10 group; $25 phone/web |
| OhioHealth | $10 individual PAPI | $10 for 6 months; $25 for 18 months | $50 |
| Children’s Hospital Los Angeles (CHLA) | $20 in-person ACASI | $20 in-person ACASI | $30 in-person ACASI |
| Oklahoma Institute for Child Advocacy (OICA) | $10 group | $15 for immediate post-test; $25 for 6 months* | $35+ |
| EngenderHealth | $20 group PAPI | $25 web/phone | $25 web/phone |
| Live the Life (LtL) | None | $10 group; $25 phone/web | $10 group; $25 phone/web |
| Princeton Center for Leadership Training (PCLT) | None | $10 group; $25 phone/web | $10 group; $25 phone/web |
* Incentive was included in previous submission to OMB (approval received September 27, 2011; OMB Control No. 0970-0360).
+ We are pursuing IRB approval to reduce the incentive from the originally approved $50 to $35.
A10. Assurance of Privacy Provided to Respondents

HHS has embedded protections for privacy in the study design. Data collection will occur only if informed consent is provided by a parent or legal guardian if the respondent is a minor, or by respondents themselves if they are 18 or older. Consent for the duration of the study will be collected prior to baseline data collection. Youth without consent will not be included in the study sample, and no data will be collected from them. The consent form, which was approved through the baseline survey ICR, explains what data are being collected and how they will be used. The form also states that answers will be kept private, that youths' participation is voluntary, and that they may refuse to participate at any time. Participants and their parents/guardians are told that, to the extent allowable by law, individual identifying information will not be released or published; rather, data will be published only in summary form, with no identifying information at the individual level. The form also notes that the evaluation has obtained a Certificate of Confidentiality from the National Institutes of Health (NIH). In addition, student assent will be obtained prior to each survey administration.

Our protocol during the self-administration of the paper-and-pencil instrument will provide reassurance that we take the issue of privacy seriously. It will be made clear to respondents that identifying information will be kept separate from questionnaires. The questionnaire and envelope will have a label with a unique ID number; no identifying information will appear on the questionnaire or return envelope. Before turning completed questionnaires in to field staff, respondents will place them in blank envelopes and seal them. This approach has been shown in research to yield the same reports of sexual activity as computer-assisted surveys in school settings, and a lower incidence of student concerns about privacy. Identifying and contact information will be stored in secure files, separate from survey and other individual-level data.
Telephone surveys are completed by interviewers recording respondents' answers on a hard copy (PAPI) of the survey. Prior to beginning the survey, a statement ensuring privacy and requesting student assent is read aloud, and respondents are given a chance to verbally opt out of the survey. As with the hard copy for the group administrations, no identifying information is attached to the questionnaire; only a unique study ID will be included on the questionnaire.
For the web surveys, a unique password and PIN will be sent to respondents to log into the survey. A statement ensuring privacy will be presented at the beginning of the survey, and we will have a screen where respondents can choose to opt out of the survey. No names will be attached to the data – only the student’s unique study ID.
A11. Justification for Sensitive Questions

As in the baseline survey, many of the measures in the follow-up survey ask for information of a sensitive nature (Exhibit A.11.1), because the programs we will be evaluating are designed specifically to reduce sexual activity and associated risk behaviors among teens. Comprehensive measures of behavior are included because they will provide more accurate representations of teen sexual behavior, and the responses will significantly supplement the knowledge currently available on program effectiveness.
Sensitive questions are drawn from previously successful youth surveys and evaluations. The items have been carefully selected, and we have been guided by past experience in determining whether the benefits of measures outweigh concerns about the heightened sensitivity of sample members, parents, and program staff to specific issues. Although these questions are sensitive, they are commonly and successfully asked of youth similar to those who will be in the study. Many of the sensitive items related to sexual activity will be asked only of sample members who report being sexually active.
Exhibit A.11.1. Summary of Sensitive Questions and their Justification

| Topic | Justification |
| --- | --- |
| Intentions regarding sexual activity | Intentions regarding engaging in sex and other risk-taking behaviors are extremely strong predictors of subsequent behavior (Buhi and Goodson, 2007). Intentions are strongly related to behavior and will be an important mediator predicting behavior change. |
| Drug and alcohol use | There is a substantial body of literature linking various high-risk behaviors of youth, particularly drug and alcohol use, sexual intercourse, and risky sexual behavior. The effectiveness of various program strategies is expected to differ for youth who are and are not experimenting with or using drugs and alcohol (Tapert et al., 2001; Li et al., 2001; Boyer et al., 1999; Fergusson and Lynskey, 1996; Sen, 2002; Dermen et al., 1998; Santelli et al., 2001). |
| Sexting | The relationship between the use of technology among youth and sexual behavior is an emerging topic of interest that has not yet been heavily researched (National Campaign to Prevent Teen and Unplanned Pregnancy, Sex and Tech Survey, 2008). Questions will be asked of non-sexually active youth to examine this relationship, identify potential pathways leading to the transition from non-sexually active to sexually active, and identify factors affecting the rate of that transition. |
| Sexual activity, incidence of pregnancy and STDs, and contraceptive use | Sexual activity, incidence of pregnancy and STDs, and contraceptive use are all key outcomes for the evaluation. The majority of these questions are asked only of youth who report being sexually active. |
A12. Estimates of Annualized Burden Hours and Costs

Exhibit A.12.1 summarizes the reporting burden on study participants for first follow-up data collection in five sites, for second follow-up data collection in all seven sites, and for the additional early follow-up in one site (OhioHealth).15 Enrollment will occur over three years, so this burden is based on one-third of the expected sample. Questionnaire response times were estimated from pretests with student respondents and from prior experience. The annual burden for questionnaire response is estimated from an average of the total number of completed questionnaires expected (response rates of 85 percent at first follow-up, 80 percent at second follow-up, and 82 percent for the additional early follow-up in Ohio) and the time required to complete the questionnaires. The total annual burden for first follow-up and additional early follow-up data collection is expected to be 1,766 hours; this includes 582 hours, in two sites, already approved by OMB (September 27, 2011; OMB Control No. 0970-0360). The total annual burden for second follow-up data collection is expected to be 1,424 hours. Combined, the total estimated annual burden for follow-up data collection is expected to be 3,190 hours.16
Exhibit A.12.1. Reporting Burden on Study Participants for Follow-Up Data Collection

| Site/Program | Annualized Number of Respondents* | Number of Responses per Respondent | Average Burden Hours per Response | Total Burden Hours (Annual) |
| --- | --- | --- | --- | --- |
| Chicago Public Schools / HealthTeacher | 409 | 1 | 36/60 | 245 |
| OhioHealth / T.O.P.P. | 165 | 3 | 42/60 | 347 |
| Children’s Hospital Los Angeles / Project AIM | 275 | 2 | 36/60 | 330 |
| Oklahoma Institute for Child Advocacy / Power Through Choices | 288 | 1 | 36/60 | 173 |
| EngenderHealth / Gender Matters | 310 | 2 | 36/60 | 372 |
| Live the Life Ministries / WAIT Training | 440 | 2 | 42/60 | 616 |
| Princeton Center for Leadership Training (PCLT) / TeenPEP | 440 | 2 | 36/60 | 528 |
| Total | 2,327 | | | 2,611 |
* The annualized number of respondents is an average of the expected responses for each round of data collection, assuming response rates of 85 percent at first follow-up, 80 percent at second follow-up, and 82 percent at the additional early follow-up in Ohio.
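For readers who want to trace the arithmetic, each row's total is the product of its three inputs (annual hours = respondents x responses per respondent x hours per response). The following is a minimal Python sketch of that computation; it is our own illustration, with shorthand site labels and an assumed round-half-up convention:

```python
# Reproduces the burden arithmetic in Exhibit A.12.1.
# Inputs per site: (annualized respondents, responses per respondent,
# burden minutes per response).
sites = {
    "CPS / HealthTeacher": (409, 1, 36),
    "OhioHealth / T.O.P.P.": (165, 3, 42),
    "CHLA / Project AIM": (275, 2, 36),
    "OICA / Power Through Choices": (288, 1, 36),
    "EngenderHealth / Gender Matters": (310, 2, 36),
    "LtL / WAIT Training": (440, 2, 42),
    "PCLT / TeenPEP": (440, 2, 36),
}

total_respondents = total_hours = 0
for name, (respondents, responses, minutes) in sites.items():
    hours = int(respondents * responses * minutes / 60 + 0.5)  # round half up
    total_respondents += respondents
    total_hours += hours
    print(f"{name}: {hours} annual burden hours")

print(f"Total: {total_respondents} respondents, {total_hours} hours")
# Prints 2,327 respondents and 2,611 hours, matching the exhibit.
```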
The PPA information collection does not impose a financial burden on youth respondents. Respondents will not incur any burden other than the time spent answering the questions contained in the questionnaires.
A13. Estimates of Other Total Annual Cost Burden to Respondents and Record Keepers

These information collection activities do not place any additional cost on respondents.
A14. Annualized Cost to the Federal Government

Total estimated cost to the government for first and second follow-up data collection across all sites is $5,920,551. Because follow-up data collection will be carried out over a total of three years as successive sites start up and enroll samples, the estimated annualized cost to the government for follow-up data collection is $1,973,517 per year.
A15. Explanation for Program Changes or Adjustments

The annual burden for Chicago was previously approved at 215 hours; this submission adds a program change of 245 hours, for a total of 460 burden hours. The increase in burden per response is based on our experience in the field. OICA was previously approved for 367 burden hours, and we are increasing burden by 173 hours, for a total of 540. The total number of requested respondents has been adjusted to reflect expected response rates.
A16. Plans for Tabulation and Publication and Project Time Schedule

This phase of the PPA demonstration and evaluation involves collecting follow-up data that will be used for the impact evaluation.
1. Analysis Plan

Before estimating impacts, HHS will conduct two analyses of the data from the baseline survey. First, HHS will use the data to describe the study sample and help define subgroups of policy interest. This step will enable HHS to compare the characteristics of youth in the study with youth nationwide and provide guidance on how the study sample and findings might generalize to a broader policy setting. Second, HHS will assess whether random assignment resulted in similar average baseline characteristics of youth in the treatment and control groups.
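As one illustration of this baseline-equivalence check, the sketch below compares group means with Welch t-tests using pandas and scipy (our choice of tools, not the study's specified software; the column names `treatment`, `age`, and so on are hypothetical). In cluster-randomized sites, a complete test would also account for clustering, which this simple version ignores.

```python
import pandas as pd
from scipy import stats

def baseline_balance(df: pd.DataFrame, covariates: list[str]) -> pd.DataFrame:
    """Compare treatment and control means on each baseline covariate."""
    rows = []
    for var in covariates:
        treated = df.loc[df["treatment"] == 1, var].dropna()
        control = df.loc[df["treatment"] == 0, var].dropna()
        # Welch's t-test: does not assume equal variances across groups
        _, p_value = stats.ttest_ind(treated, control, equal_var=False)
        rows.append({"covariate": var,
                     "treatment_mean": treated.mean(),
                     "control_mean": control.mean(),
                     "p_value": p_value})
    return pd.DataFrame(rows)

# Hypothetical usage:
# print(baseline_balance(df, ["age", "female", "ever_had_sex"]))
```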
Pregnancy prevention approaches emphasize different outcomes. Some focus on promoting abstinence; others focus on use of contraceptives and avoiding STDs. The baseline data collected from program participants will ultimately be used to evaluate the effectiveness of these promising approaches with particular emphasis on the outcomes they target, as well as common outcomes across all approaches.
Given the underlying experimental design, unbiased impact estimates can be obtained from the simple, cross-sectional difference in average outcomes between the treatment and control groups, measured at follow-up. This means that baseline data on outcomes are not necessary to obtain unbiased impact estimates; however, baseline data can still be useful for the analysis. In particular, we can use baseline data to construct covariates for the regression models used to estimate program impacts. Including covariates improves the precision of the impact estimates by reducing the residual variance in the models (that is, the portion of the variance in outcomes left unexplained after accounting for treatment status). This gain in precision is often largest when a baseline measure of the outcome can be included as a covariate. Ideally, therefore, one would measure the outcome variables consistently over time and word survey questions related to particular outcomes as similarly as possible between the baseline and follow-up surveys. However, such consistency is not essential to achieve valid impact estimates, since those estimates are obtained cross-sectionally within an experimental design.
The empirical specification for the model will depend on the unit of random assignment, which will depend on the type of program provided at a specific site. As we discuss further in section B1, most sites will use random assignment of entire schools, but some sites will employ random assignment of individuals within the site. With random assignment of students, our model can be expressed as:
(1) \( y_i = \alpha + \beta' x_i + \delta T_i + \varepsilon_i \),

where \( y_i \) is the outcome of interest for student \( i \); \( x_i \) is a vector of baseline characteristics for student \( i \), including baseline measures of the key outcomes; \( T_i \) is an indicator equal to one if the student is in the treatment group and zero if in the control group; and \( \varepsilon_i \) is a random error term for student \( i \). The vector of baseline characteristics \( x_i \) will include demographic characteristics such as age, gender, and race/ethnicity, and baseline measures of key outcomes. The parameter estimate for \( \delta \) is the estimated impact of the program.
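A minimal sketch of estimating Equation (1), using the statsmodels formula interface (our choice of tool; the data frame and variable names such as `outcome`, `treatment`, and `outcome_baseline` are hypothetical, not the study's actual field names):

```python
import pandas as pd
import statsmodels.formula.api as smf

def estimate_impact(df: pd.DataFrame):
    """Fit Equation (1) by OLS and return the estimated program impact."""
    # Baseline covariates (age, gender, race/ethnicity, baseline outcome)
    # reduce residual variance and so tighten the confidence interval.
    fit = smf.ols(
        "outcome ~ treatment + age + C(gender) + C(race_eth) + outcome_baseline",
        data=df,  # one row per student
    ).fit()
    # Coefficient on `treatment` is the impact estimate (delta).
    return fit.params["treatment"], fit.conf_int().loc["treatment"]
```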
In most sites, schools will be randomly assigned, and the estimation must account for the correlation of outcomes between students in the same school, as they may be exposed to similar influences not otherwise captured in the regression model. Therefore, each student cannot be considered statistically independent. We can modify the previous regression model as:

(2) \( y_{is} = \alpha + \beta' x_{is} + \delta T_s + \mu_s + \varepsilon_{is} \).

The general structure of the model is the same, but now \( y_{is} \) is the outcome measure for student \( i \) in school \( s \) (and similarly for the vector of baseline characteristics \( x_{is} \) and the error term \( \varepsilon_{is} \)). The treatment status \( T_s \) is now defined by school rather than by individual. Most importantly, the error structure in Equation (2) accounts for the clustering of students within schools through the inclusion of the school-level error term \( \mu_s \), a school "random effect." If this error term is excluded, the precision of the impact estimates could be seriously overstated. As in Equation (1), the estimated impact of the program is \( \delta \).
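One way to fit Equation (2) is a linear mixed model with a school random intercept, sketched below with statsmodels (again our choice; `school_id` is a hypothetical column name). An alternative with similar intent is OLS with cluster-robust standard errors.

```python
import pandas as pd
import statsmodels.formula.api as smf

def estimate_clustered_impact(df: pd.DataFrame):
    """Fit Equation (2): a school random intercept captures the
    within-school correlation of student outcomes."""
    fit = smf.mixedlm(
        "outcome ~ treatment + age + C(gender) + outcome_baseline",
        data=df,
        groups=df["school_id"],  # students clustered within randomized schools
    ).fit()
    return fit.params["treatment"]

# Cluster-robust alternative:
# smf.ols("outcome ~ treatment + outcome_baseline", data=df).fit(
#     cov_type="cluster", cov_kwds={"groups": df["school_id"]})
```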
The specific maximum-likelihood methods for estimating the parameters of the models will depend on the form of the dependent variable. Logistic regression procedures will be specified for binary outcomes (such as whether the student has an STD) and multinomial regression procedures will be specified for categorical outcomes (such as the number of sexual partners).
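A sketch of the corresponding statsmodels calls for binary and categorical outcomes (the outcome and covariate names are placeholders). In cluster-randomized sites, these would need a cluster-aware variant, such as generalized estimating equations, as the last call illustrates:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_binary_and_categorical(df: pd.DataFrame):
    # Binary outcome (e.g., any STD diagnosis): logistic regression.
    logit_fit = smf.logit("std_diag ~ treatment + age + C(gender)", data=df).fit()

    # Categorical outcome (e.g., banded number of sexual partners):
    # multinomial logistic regression.
    mnl_fit = smf.mnlogit("partners_cat ~ treatment + age + C(gender)",
                          data=df).fit()

    # Cluster-aware alternative for binary outcomes: GEE with an
    # exchangeable working correlation within schools.
    gee_fit = smf.gee("std_diag ~ treatment + age + C(gender)",
                      groups="school_id", data=df,
                      family=sm.families.Binomial(),
                      cov_struct=sm.cov_struct.Exchangeable()).fit()
    return logit_fit, mnl_fit, gee_fit
```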
Random assignment provides an unbiased estimate of the impact on all eligible youth, but some youth may never show up for services or classes. Assuming the program has no effect on youth who never show up, we can make a simple adjustment to calculate the impact on participants by dividing the impact on eligible youth by the participation rate. (However, this adjustment cannot be used in the more likely scenario that youth receive some, but not all, of the intervention.)
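This adjustment, commonly called a no-show (or Bloom) adjustment in the evaluation literature (our label, not the document's), is simple division, as the sketch below illustrates:

```python
def impact_on_participants(itt_impact: float, participation_rate: float) -> float:
    """Rescale an intent-to-treat impact to participants, assuming the
    program has zero effect on eligible youth who never show up."""
    if not 0.0 < participation_rate <= 1.0:
        raise ValueError("participation_rate must be in (0, 1]")
    return itt_impact / participation_rate

# Example: a 3 percentage-point ITT reduction with 75 percent participation
# implies a 4 percentage-point reduction among participants.
print(impact_on_participants(-0.03, 0.75))  # -0.04
```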
The effects of pregnancy prevention approaches may differ for different groups of youth. We will estimate impacts for subgroups of youth by adding to Equations (1) and (2) a term that interacts the treatment indicator with a binary indicator for whether the youth is in the subgroup, as sketched below. The estimated coefficient on this interaction term provides an estimate of the difference in the program effect across the subgroups.
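In the notation of Equation (1), the subgroup specification can be written as follows (the symbols for the subgroup indicator and the interaction coefficient are ours):

```latex
% S_i = 1 if student i belongs to the subgroup, 0 otherwise
y_i = \alpha + \beta' x_i + \delta T_i + \kappa S_i
      + \lambda \, (T_i \times S_i) + \varepsilon_i
```

Here \( \delta \) is the program impact for youth outside the subgroup, and \( \lambda \) is the estimated difference in impacts between the two groups; in the statsmodels formula interface used above, this corresponds to writing `treatment * subgroup` on the right-hand side.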
Certain exploratory analyses may also be conducted that further exploit the longitudinal (combined baseline and follow-up) data. For example, analyses can examine which baseline variables correlate with sexual risk behavior at follow-up, regardless of treatment status. While such analyses are inherently correlational rather than causal, they can nevertheless indicate which potential mediators of sexual risk behavior (for example, attitudes or knowledge) are most predictive, and thereby offer some guidance to both programs and evaluators on which mediators to emphasize in their work. In addition, should the models above reveal statistically significant evidence of program impacts at later follow-up(s), models can be estimated that introduce measures of mediators from the first follow-up as covariates, to observe how much of the impact can be explained by them. While again non-experimental, findings from these models can offer suggestive evidence of the mediator(s) through which program impacts are emerging, again providing some guidance for the direction of future research and program development.
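A sketch of this exploratory mediation step, using the same formula interface (all variable names hypothetical): re-estimate the impact model with first-follow-up mediators added as covariates and compare the treatment coefficients. The attenuation is suggestive, not causal.

```python
import pandas as pd
import statsmodels.formula.api as smf

def mediator_attenuation(df: pd.DataFrame) -> float:
    """Return how much the treatment coefficient shrinks when candidate
    mediators measured at first follow-up are added as covariates."""
    base = smf.ols("outcome_fu2 ~ treatment + outcome_baseline",
                   data=df).fit()
    mediated = smf.ols(
        "outcome_fu2 ~ treatment + outcome_baseline"
        " + knowledge_fu1 + intentions_fu1",
        data=df,
    ).fit()
    return base.params["treatment"] - mediated.params["treatment"]
```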
2. Time Schedule and Publications

The entire PPA evaluation will be conducted over an eight-year period. HHS began consultation with stakeholders about the design of the study and the identification of potential programs and sites in September 2008, continuing through March 2011. The baseline data collection, for which HHS received OMB approval on July 26, 2010 (OMB Control No. 0970-0360), will take place over a three-year period beginning in November 2010 and ending by May 2013. The first and second follow-up data collections are projected to occur between fall 2011 and fall 2015. An interim report on program impacts, based on the first follow-up survey, will be completed in June 2014, and a final report based on the second follow-up survey will be completed in June 2016.
A17. Reason(s) Display of OMB Expiration Date is Inappropriate

All instruments will display the OMB number and the expiration date.
A18. Exceptions to Certification for Paperwork Reduction Act Submissions

No exceptions are necessary for this information collection.
SUPPORTING REFERENCES FOR INCLUSION OF SENSITIVE QUESTIONS OR GROUPS OF QUESTIONS
Boyer, Cherrie B., Jeanne M. Tschann, and Mary-Ann Shafer. "Predictors of Risk for Sexually Transmitted Diseases in Ninth Grade Urban High School Students." Journal of Adolescent Research, vol. 14, no. 4, 1999, pp. 448-465.

Buhi, Eric R., and Patricia Goodson. "Predictors of Adolescent Sexual Behavior and Intention: A Theory-Guided Systematic Review." Journal of Adolescent Health, vol. 40, no. 1, 2007, p. 4.

Dermen, K. H., M. L. Cooper, and V. B. Agocha. "Sex-Related Alcohol Expectancies as Moderators of the Relationship between Alcohol Use and Risky Sex in Adolescents." Journal of Studies on Alcohol, vol. 59, no. 1, 1998, p. 71.

DiClemente, R. J., M. Durbin, D. Siegel, F. Krasnovsky, N. Lazarus, and T. Comacho. "Determinants of Condom Use among Junior High School Students in a Minority, Inner-City School District." Pediatrics, vol. 89, no. 2, 1992, pp. 197-202.

DiClemente, R. J., M. Lodico, O. A. Grinstead, G. Harper, R. L. Rickman, P. E. Evans, and T. J. Coates. "African-American Adolescents Residing in High-Risk Urban Environments Do Use Condoms: Correlates and Predictors of Condom Use among Adolescents in Public Housing Developments." Pediatrics, vol. 98, no. 2, 1996, pp. 269-278.

DiIorio, Colleen, William N. Dudley, Johanna E. Soet, and Frances McCarty. "Sexual Possibility Situations and Sexual Behaviors among Young Adolescents: The Moderating Role of Protective Factors." Journal of Adolescent Health, vol. 35, no. 6, 2004, p. 528.

Dittus, P. J., and J. Jaccard. "Adolescents' Perceptions of Maternal Disapproval of Sex: Relationship to Sexual Outcomes." Journal of Adolescent Health, vol. 26, no. 4, 2000, pp. 268-278.

Fergusson, David M., and Michael T. Lynskey. "Alcohol Misuse and Adolescent Sexual Behaviors and Risk Taking." Pediatrics, vol. 98, no. 1, 1996, p. 91.

Li, Xiaoming, Bonita Stanton, Lesley Cottrell, James Burns, Robert Pack, and Linda Kaljee. "Patterns of Initiation of Sex and Drug-Related Activities among Urban Low-Income African-American Adolescents." Journal of Adolescent Health, vol. 28, no. 1, 2001, p. 46.

Santelli, John S., Leah Robin, Nancy D. Brener, and Richard Lowry. "Timing of Alcohol and Other Drug Use and Sexual Risk Behaviors among Unmarried Adolescents and Young Adults." Family Planning Perspectives, vol. 33, no. 5, 2001.

Sen, Bisakha. "Does Alcohol-Use Increase the Risk of Sexual Intercourse among Adolescents? Evidence from the NLSY97." Journal of Health Economics, vol. 21, no. 6, 2002, p. 1085.

Tapert, Susan F., Gregory A. Aarons, Georganna R. Sedlar, and Sandra A. Brown. "Adolescent Substance Use and Sexual Risk-Taking Behavior." Journal of Adolescent Health, vol. 28, no. 3, 2001, p. 181.
1 Within each site, the same instrument will be used for all rounds of follow-up data collection. Minor updates may be needed to adjust references to periods of time or specific dates.
2 Specifically, the current submission includes follow-up instruments for all seven sites. (The Chicago first follow-up instrument and the Oklahoma first follow-up (6 months) and additional early follow-up (immediate post-test) instruments were previously approved on September 27, 2011, under OMB Control No. 0970-0360; they are now under OMB Control No. 0990-0382.) This submission also includes the estimated burden for the following: first follow-up data collection in the five remaining sites (OhioHealth, CHLA, TeenPEP, EngenderHealth, and Live the Life); an additional early follow-up data collection in OhioHealth; and second follow-up data collection in all seven sites.
3 Abma, J. C., G. M. Martinez, W. D. Mosher, and B. S. Dawson. “Teenagers in the United States: sexual activity, contraceptive use, and childbearing”, Vital and Health Statistics, vol. 23, no. 24, 2004, pp. 1–48.
4 Albert, B., S. Brown, and C. Flannigan, eds. 14 and Younger: The Sexual Behavior of Young Adolescents. Washington, DC: National Campaign to Prevent Teen Pregnancy, 2003.
5 Teen birth rates declined by 34% from 1991–2005. See: Hamilton, B. E., J. A. Martin, and S. J. Ventura. “Births: Preliminary data for 2006.” National Vital Statistics Reports, vol. 56, no. 7. Hyattsville, MD: National Center for Health Statistics, 2007.
6 Hamilton BE, Martin JA, Ventura SJ. Births: Preliminary data for 2007. National vital statistics reports, Web release; vol 57 no 12. Hyattsville, MD: National Center for Health Statistics. Released March 18, 2009.
7 OMB approval was received for the Chicago and Oklahoma follow-up instruments on September 27, 2011 (OMB Control No. 0970-0360). The instruments are included in this submission because the estimated burden associated with additional rounds of follow-up data collection in these two sites is part of this submission.
8 In two sites, an additional early follow-up has been scheduled. In the Oklahoma (OICA) site, an immediate post-test will allow analysis of immediate effects on knowledge and attitudes, using the progression of three follow-up data points to model the role of intermediate outcomes in long-term impacts. In the OhioHealth site, where intervention effects on the short-term contraceptive practice of teen mothers after the birth of their child are an important goal, the plan includes a follow-up six months after enrollment, while the program sample is still active in the program. The addition of this early follow-up should have no effect on the quality of data collected at later follow-ups. The interval between the early follow-up and the next is six months or more; that interval, and even shorter ones, are commonly used in teen pregnancy prevention studies with multiple follow-up surveys. We will work in each of these sites to ensure that the same procedures are used in the early follow-up as in later ones, and to maintain respondent commitment to sustained participation in the study.
9 Turner, C.F., L. Ku, S.M. Rogers, L.D. Lindberg, J.H. Pleck, and F.L. Sonenstein. “Adolescent Sexual Behavior, Drug Use, and Violence: Increased Reporting with Computer Survey Technology.” Science, vol. 280, 1998, pp. 867–873.
10 Beebe, Timothy J., Patricia A. Harrison, James A. McCrae Jr., Ronald E. Anderson, and Jayne A. Fulkerson. “An Evaluation of Computer-Assisted Self-Interviews in a School Setting.” Public Opinion Quarterly, vol. 62, 1998, pp. 623–632.
11 Beebe, Timothy J., Patricia A. Harrison, Eunkyung Park, James A. McRae, Jr., and James Evans. “The Effects of Data Collection Mode and Disclosure on Adolescent Reporting and Health Behavior.” Social Science Review, vol. 24, no. 4, 2006, pp. 476–488.
12 Brener, Nancy D., Danice K. Eaton, Laura Kann, JoAnne Grunbaum, Lori A. Gorss, Tonja M. Kyle, and James G. Ross. “The Association of Survey Setting and Mode with Self-Reported Health Risk Behaviors Among High School Students.” Public Opinion Quarterly, vol. 70, 2006, pp. 354–374.
13 Webb, P.M., G.D. Zimet, J.D. Fortenberry, and M.J. Blythe. “Comparability of a Computer-Assisted Versus Written Method for Collecting Health Behavior Information from Adolescent Patients.” Journal of Adolescent Health, vol. 24, no. 6, 1999, pp. 383–388.
14 Schochet, Peter Z. “An Approach for Addressing the Multiple Testing Problem in Social Policy Impact Evaluations.” Evaluation Review, vol.33, no.6, December 2009.
15 Burden for the first follow-up in Chicago and Oklahoma and the additional early follow-up in Oklahoma was previously approved (a total of 582 hours and 1,042 responses, approved on September 27, 2011, under OMB Control No. 0970-0360; now under OMB Control No. 0990-0382).
16 The 3,190 hours covers all rounds of data collection, including those already approved by OMB. The estimated total burden for new data collections is 2,611 hours.