Supporting Justification for OMB Clearance of Evaluation of Adolescent Pregnancy Prevention Approaches
Part A: Justification for the Collection of Site-Specific Baseline Data – Request for Emergency Review
CONTENTS
Abstract
A1. Circumstances Making the Collection of Information Necessary
  1. Background
  2. Study Objectives
  3. Current Federal Pregnancy Prevention Programming and Evaluation Activity
  4. Collaboration between PPA and Grantees Resulting in Site-Specific Instruments
  5. Legal or Administrative Requirements that Necessitate the Collection
A2. Purpose and Use of the Information Collection
A3. Use of Improved Information Technology and Burden Reduction
A4. Efforts to Identify Duplication and Use of Similar Information
A5. Impact on Small Businesses or Other Small Entities
A6. Consequences of Collecting Information Less Frequently
A7. Special Circumstances Relating to the Guidelines of 5 CFR 1320.5
A8. Comments in Response to the Federal Register Notice and Efforts to Consult Outside the Agency
A9. Explanation of Any Payment or Gift to Respondents
A10. Assurance of Confidentiality Provided to Respondents
A11. Justification for Sensitive Questions
A12. Estimates of Annualized Burden Hours and Costs
A13. Estimates of Other Total Annual Cost Burden to Respondents and Record Keepers
A14. Annualized Cost to the Federal Government
A15. Explanation for Program Changes or Adjustments
A16. Plans for Tabulation and Publication and Project Time Schedule
  1. Analysis Plan
  2. Time Schedule and Publications
A17. Reason(s) Display of OMB Expiration Date is Inappropriate
A18. Exceptions to Certification for Paperwork Reduction Act Submissions
SUPPORTING REFERENCES FOR INCLUSION OF SENSITIVE QUESTIONS OR GROUPS OF QUESTIONS

TABLES
A.1 PPA Program Description, Samples, and Targeted Outcomes, by Site
A.2 Domains Included in Site-Specific PPA Baseline Questionnaires Beyond Standard Concordance Instrument
A.3 Justification of Sensitive Questions
A.4 Estimated Annual Respondent Burden – Baseline Survey
Abstract

The U.S. Department of Health and Human Services (HHS) is conducting the Evaluation of Adolescent Pregnancy Prevention Approaches (PPA), an eight-year demonstration designed to study the effectiveness of promising policy-relevant strategies to reduce teen pregnancy. The study was designed to include up to eight evaluation sites, and at this point it appears that there will be seven sites:
one site – Chicago Public Schools, implementing the HealthTeacher curriculum – has been recruited, and a baseline survey has been implemented; and
six federally-funded grantees have been recruited, and this emergency clearance is for baseline instruments specific to these sites. Clearance is needed by July 15, 2011.
Approval for outreach discussions with stakeholders, experts in the field, and program developers was received on November 24, 2008 (OMB Control No. 0970-0360). Approval for the baseline survey data collection and the collection of youth participant records was received on July 26, 2010 (OMB Control No. 0970-0360). Data collection under that latter approval began in the Chicago Public Schools site in fall 2010, after we had informed OMB of the site (per conditions of the Notice of Approval). At the time approval for baseline data collection was received, only one site had been recruited into the study (Chicago Public Schools). The original expectation was that the study would use the same baseline survey in sites subsequently recruited.
We are now seeking emergency clearance for revision to the baseline data collection instrument. Specifically, we are applying for approval of “site-specific instruments” – that is, variations that would be appropriate for the other six sites recruited into the study – based on a modified version of the previously-approved instrument. Clearance is needed by July 15, 2011, so that a site-specific instrument can be printed in time for survey administration in one site, which is slated to begin the second week of August.
Two developments have prompted this request.
First, a large group of federal staff has collaborated to modify the previously-approved PPA baseline instrument into a “concordance baseline instrument” suitable for all HHS pregnancy prevention evaluations, including but not limited to PPA. HHS is trying to maximize consistency across evaluations of federal pregnancy prevention grant programs. In 2010 and 2011, the Administration for Children and Families (ACF) and the Office of Adolescent Health (OAH), in coordination with other HHS offices overseeing pregnancy prevention evaluation, collaborated to consider revisions to the PPA instrument. Through this group and other efforts, HHS:
Defined core items that should, if possible, be included in surveys associated with all federal evaluations
Identified other ancillary measures that could be drawn on depending on the particular features of evaluation site programs and target populations
Defined a set of performance measures on which all grantees would report, stipulating that all grantees recruited into federal evaluations would collect participant-level data for these performance measures through their evaluation efforts
The resultant “concordance baseline instrument” thus began with the approved PPA instrument and has been revised to contain core items, ancillary measures, and performance measures; it therefore differs slightly from the already-approved PPA baseline instrument. A copy of the concordance baseline instrument is found in Attachment A.
The second development is the need for instruments that are appropriate for each of the remaining six sites recruited into the study – instruments we are calling “site-specific baseline instruments.” As recruitment of PPA sites has progressed, we have found that variations in the target populations and the program models we will evaluate call for site-by-site variation in the focus of baseline questions. The “concordance baseline instrument” was the starting point for the baseline data collection instrument in each of the PPA sites, and further refinements have been made to make the instruments as appropriate for each site as possible. For example, two of the six sites recruited into the study work with pregnant and parenting youth. Some questions in the concordance instrument would be inappropriate for these populations; for instance, it would be irrelevant (and perhaps insulting) to ask pregnant and parenting youth if they had ever had sexual intercourse. Additions and deletions have therefore been integrated into each site-specific version of the instrument to make it as effective as possible. Each site-specific baseline questionnaire thus reflects:
The “concordance baseline instrument” revisions (i.e., revisions to the previously-approved instrument)
Site-specific adaptations of that new standard instrument.
Each site-specific instrument will consistently measure the core constructs: each will have a set of common core items and performance measures, while also permitting evaluation of outcomes specific to each site.
To help communicate differences between the concordance instrument and the site-specific instruments, differences are pointed out in an introductory table before each site’s site-specific instrument submitted with this justification.
The feasibility of implementing a set of “site-specific instruments” depends on rapid OMB approval. Most of the new evaluation sites are scheduled to begin sample enrollment in fall 2011, some as early as August or September 2011. Following normal review procedures would make it impossible for site organizations to proceed with their scheduled program startup and to meet their evaluation sample enrollment requirements. If we are not able to make these adjustments in the data collection instruments, we risk losing critical information (for example, on target populations or interventions of policy interest) and substantially decreasing the value of the federal money spent on pregnancy prevention evaluation.
HHS is therefore requesting emergency clearance by July 15, 2011 for the site-specific baseline data collection. Emergency review appears justified by the guidance memorandum (M-11-07) on streamlining the PRA process, and in particular by that memo’s reference to risks of disruption of collection as a basis for invoking the emergency review process. Many of the items in the site-specific instruments, and in some sites most of the items, have already been reviewed, because they were included in the baseline instrument already approved by OMB. A memo detailing the justification for emergency clearance is found in Attachment B. This memo was approved by OMB in an email dated May 4, 2011.
A1. Circumstances Making the Collection of Information Necessary

1. Background

For decades, policymakers and the general public have remained concerned about the prevalence of sexual intercourse among adolescents. Although adolescents today are waiting somewhat longer before having sex than they did in the 1990s, 60 percent of teenage girls and more than 50 percent of teenage boys report having had sexual intercourse by their 18th birthday.1 Approximately one in five adolescents has had sexual intercourse before turning 15.2 Rates of teenage pregnancy declined by 38 percent from 1990 to 2004, and the rate of teen births followed a similar decline3 until recently, when the rate of births rose by 5 percent from 2005 to 2007 for teens aged 15-19.4 There is still ample reason for HHS to be interested in identifying and evaluating promising approaches to reduce teen pregnancy, associated risk behaviors, and their consequences.
2. Study Objectives

The objective of the PPA evaluation is to test selected promising approaches to prevent teen pregnancy among middle school- and high school-aged teens. The evaluation will help HHS determine the effectiveness of various approaches in affecting key outcomes related to pregnancy prevention (for example, sexual debut, pregnancy, sexually transmitted disease [STD] infection, and so on). Ultimately, the purpose of the evaluation is to provide stakeholders—including practitioners and federal and other policymakers—with information on a range of approaches that hold promise for preventing teen pregnancy, and, through the follow-up surveys, to assess rigorously the effectiveness of these approaches.
In the PPA evaluation, HHS has identified seven study sites (including six federal grantees) that will implement different pregnancy prevention approaches.5 In three of these sites, the programs to be tested will be operated in high schools or middle schools. In the other sites, the programs to be tested will be operated in community-based organizations (CBOs) or other non-school settings. The study will include a sample of approximately 9,000 teens across these seven sites, and the sample size in each site is sufficient to detect policy-relevant impacts by site. In each site, youth will be assigned to a treatment group that receives the program of interest or to a control group that does not. To ensure that the behavior of control group youth is not affected, or “contaminated,” by interaction with treatment group youth attending the same school or CBO program, random assignment in five of the sites will be done at the organization level (that is, the school, CBO, or other group). In the other two sites, where youth receive services on an individual basis and the risks of contamination are low, random assignment will be done at the individual level.
In general, there will be three rounds of survey data collection. A baseline survey will be conducted with both the program and control groups before the youth in the program group are exposed to the pregnancy prevention programs. The first follow-up survey will be conducted with participating youth no sooner than 3-6 months after the end of the scheduled program intervention for each sample member (and a corresponding schedule for the control group). A second follow-up survey will be conducted with participating youth no later than 18-24 months after the scheduled end of the program. (Approval of follow-up surveys will be sought in separate submissions.) The exact timing of the two follow-up surveys will be determined in each site, taking into account the length of the program and the age of the target population. In a few sites, additional follow-up surveys may be required to fulfill analysis plans of grantees’ local evaluators. Wherever possible, there will be group administration of the self-administered survey; when necessary to increase response rates, this method will be augmented with web survey and telephone follow-up6.
Through the baseline and follow-up surveys (as well as school records where relevant), HHS will address the following research questions on program impact:
Are the (selected) approaches effective at meeting their immediate short-term objectives (for example, improving knowledge of pregnancy risks)?
Are the approaches effective at reducing adolescent pregnancy?
What are their effects on related outcomes, such as postponing sexual activity and reducing or preventing sexual risk behaviors and STDs?
Do these approaches work better for some groups of adolescents than for others?
HHS is interested in evaluating fairly intensive programs and strategies that can reasonably be expected to produce change, but that are also practical for host organizations to deliver. Thus, some programs involve participants over an extended period (for example, curricula covering one or more semesters, sequenced courses provided during different years in middle school, or year-long community programs). Others are delivered in a shorter, more intensive schedule.
Major evaluation activities include the following:
Identifying promising strategies and programs through a review of the literature and interviews with the “field” (for example, researchers, policy experts, and program developers) in order to focus the evaluation on interventions that are of substantial interest to the field and show the most promise for reducing rates of teen sexual activity and pregnancy.
Recruiting sites to participate in an evaluation of selected interventions (from among those identified by the field) and providing assistance on evaluation support activities.
Collecting data on the research sample at baseline and at two follow-up data collections. The first follow-up will be administered no sooner than 3-6 months after the program end date; the second follow-up will be administered no later than 18-24 months after the program end date.
Analyzing data collected and preparing reports with the results.
OAH is conducting this evaluation, with ACF assisting, through a lead contractor, Mathematica, and its subcontractors: Child Trends, National Abstinence Education Association, and Twin Peaks, LLC.
To date, one of the seven selected sites has begun implementing the evaluation. In the Chicago Public Schools, where an enhanced version of the HealthTeacher curriculum is being tested, baseline data collection using the OMB-approved baseline instrument (OMB Control No. 0970-0360) started in November 2010. Schools there were randomly assigned, and the treatment group schools began program implementation with their seventh graders in January 2011.
Baseline data will serve several important purposes. Baseline information will be used to establish the initial equivalence of the treatment and control groups and thus to confirm the integrity of the random assignment process. Baseline variables will be used to define subgroups for which impacts will be estimated, and to adjust impact estimates for the baseline characteristics of nonrespondents to the follow-up survey. Many baseline variables capture outcomes that will be measured again at follow-up; including their baseline values as covariates in the impact models will improve the precision of impact estimates.
3. Current Federal Pregnancy Prevention Programming and Evaluation Activity
Current federal pregnancy prevention programming has two aims: to replicate evidence-based programs and to spur innovation. There are two major funding streams, and both promote both aims. The first stream – the Teen Pregnancy Prevention (TPP) Program, administered by the HHS Office of Adolescent Health – promotes both aims with two funding tiers: 75% of funds go to discretionary grants to replicate evidence-based programs, and 25% go to discretionary grants to conduct innovative demonstration evaluations. The second funding stream – the Personal Responsibility Education Program (PREP), administered by the Administration for Children and Families – provides a formula grant to states to replicate evidence-based teen pregnancy prevention programs or substantially incorporate elements of such programs. PREP also provides funding for discretionary grants for Innovative Strategies demonstration evaluations, as well as a Tribal program.
A subset of grantees in each stream noted in the previous paragraph applied to implement new, innovative programs, or replications of existing evidence-based programs with major adaptations. This subset is sometimes referred to, in shorthand, as “Tier II” (Tier I grantees are those replicating evidence-based programs with fidelity). All of these Tier II grantees were required, as part of their grant applications, to propose plans to conduct their own rigorous “local” evaluations (a randomized controlled trial [RCT] or a high-quality quasi-experimental design [QED]). Thus, as part of its grant application, each grantee had to propose activities in which more participants were recruited than could be served, so that program and control groups could be created. Additionally, each grantee was expected to propose a “local evaluator” (an individual, firm, university, or other entity) and an evaluation plan that the local evaluator would carry out.
Grantees in this subset were required not only to propose their own “local evaluations” but also, if selected, to participate in one of several federal evaluation studies currently being planned or implemented that examine the impact of teen pregnancy prevention programs. One of these evaluations is PPA. PPA is focusing its evaluation sites on those grantees that are implementing innovative programs, i.e., TPP Tier II and PREP Innovative Strategies (PREIS) grantees. Six of these Tier II-type grantees have now been recruited as PPA evaluation sites.
4. Collaboration between PPA and Grantees Resulting in Site-Specific Instruments

Collaboration between grantees and the PPA evaluation is required. If a grantee that is required to conduct a rigorous local evaluation is selected for a federal evaluation (e.g., PPA), the grantee is required under the relevant Funding Opportunity Announcements (FOAs) to have its local evaluator collaborate with the federal evaluation. The PPA evaluation contractor, likewise, is required to enter into this collaboration to fulfill its evaluation role. One part of this collaboration is to develop a “site-specific baseline instrument” that:
Includes core items and performance measures established in the “concordance baseline instrument”
Incorporates measures related to the site-specific research priorities established by local evaluators in their required plans
The collaboration between the grantees and PPA is, thus, mutually beneficial. Most survey items used will be the same across evaluation sites, thus enabling cross-site comparison at a later point (though that is not an objective of the PPA evaluation) and consistent interpretation of important measures. However, survey questions important to analysis of each site’s program impacts may vary depending on the population served or the nature of the program.
Negotiations between the PPA evaluation team and the six grantees recruited as additional PPA sites have resulted in “site-specific baseline instruments” that reflect the diversity of the grantees. The six additional grantees vary in their programmatic approach and in the definition of the populations they serve:
Focus on preventing repeat pregnancy among pregnant or parenting teenage females (OhioHealth Research and Innovation Institute and Children’s Hospital Los Angeles)
Serve youth in foster care group homes (Oklahoma Institute for Child Advocacy)
Teach healthy relationships and empowerment to pursue healthy gender roles, in the context of a summer youth employment program (EngenderHealth)
Teach abstinence in schools (Live the Life)
Deliver healthy relationship and safe-sex information through a peer educator approach (Princeton Center for Leadership Training)
Table A.1 provides a description of each of the six sites, including the program to be evaluated, the expected sample size, the target population, and key outcomes to be measured. Creating “site-specific instruments” will enhance what can be learned from each site and avoid awkward incongruities between standard questions and site circumstances. For example, Children’s Hospital Los Angeles and OhioHealth will serve pregnant and parenting mothers; asking sample members in these sites whether they had ever had sexual intercourse would be irrelevant and perhaps offensive. On the other hand, analysis of impacts on repeat pregnancies would be well served by including items about the respondent’s relationship with the child’s father, which may be a predictor of repeat pregnancy and therefore an important covariate, but is not part of the initial PPA questionnaire. Similarly, questionnaire tailoring is important in the Oklahoma Institute site, which will serve teenagers in foster care. For these youth, many of whom have been sexually abused, special care must be taken to modify questions about sexual activity to be sure they capture information on consensual sex. In several sites where programs will operate in quite conservative communities, questions about oral and anal sex have had to be pared down or dropped for the evaluation to be accepted. The negotiation of these and other adjustments has involved repeated and detailed discussions with grantees and local evaluators.
Table A.1 PPA Program Description, Samples, and Targeted Outcomes, by Site
OhioHealth Research and Innovation Institute (Columbus, Ohio)
Program description: T.O.P.P.: 18-month clinic/hospital-based program to delay repeat pregnancies among adolescents ages 10-19 by improving access to reproductive health services and contraceptive care. The program consists of (1) monthly telephone calls from a nurse educator to provide contraceptive information and help coordinate access to contraceptive services, and (2) access to contraceptive services through a mobile OB/GYN trailer and transportation to clinic services.
Targeted sample: Pregnant and parenting teens, ages 10-19, in OhioHealth hospitals and clinics
Expected total sample enrolled: 600

Children’s Hospital Los Angeles (Los Angeles, CA)
Program description: Project AIM: Evidence-based youth development program for teen parents under age 21 receiving case management as part of California’s Adolescent Family Life and Cal-Learn programs. The program consists of six 60-minute individual sessions and three 90-minute group sessions, in addition to ongoing case management and access to referrals for other services. Adapted for use in preventing repeat pregnancies among 15-18 year old females by focusing on aspirations and future planning while incorporating content specific to teen mothers (such as contraceptive use, relationship issues, and balancing their roles as adolescents and young mothers).
Targeted sample: Pregnant and parenting teen mothers receiving case management services through clinic sites, ages 15-18
Expected total sample enrolled: 1,400

Oklahoma Institute for Child Advocacy (Oklahoma, Illinois, Maryland, California)
Program description: Power Through Choices: Sexuality education curriculum implemented in foster care group homes, consisting of ten 90-minute sessions (15 hours of curriculum in total). Teaches youth how to avoid sexual risk behaviors, pregnancy, and sexually transmitted infections. Topics include anatomy/reproductive health, increasing communication skills, avoiding sexually transmitted infections/HIV, and preventing pregnancy through the use of contraception.
Targeted sample: Youth in foster care group homes, ages 14-18
Expected total sample enrolled: 1,080

EngenderHealth (Austin, TX)
Program description: Gender Matters: 20-hour program focused on helping teens achieve a sound understanding of healthy gender roles, healthy relationships, and empowerment to delay sexual initiation and increase consistency of condom use. Focuses on concepts of masculinity and femininity and their connections to sexual risk behavior.
Targeted sample: Youth participating in the Travis County Summer Youth Employment Program, ages 14-15
Expected total sample enrolled: 1,125

Live the Life Ministries
Program description: WAIT Training: 8-hour abstinence-based curriculum, delivered by teachers in schools as a required class in 7th and 8th grades, for a total of 16 hours. The intervention is delivered in a short, intensive period, typically over eight consecutive school days each year. Focuses on educating young people on pregnancy prevention, setting future goals, responsible behavior, and healthy relationships. Emphasizes that young adolescents should postpone sexual activity and that practicing abstinence is the only way to eliminate the risk of pregnancy and STDs, including HIV.
Targeted sample: Youth in 7th grade in participating schools
Expected total sample enrolled: 1,600

Princeton Center for Leadership Training (New Jersey, North Carolina)
Program description: TeenPEP: School-based peer-to-peer program in which trained faculty advisors select youth to become a cohesive team of peer educators who serve as sexual health advocates and role models. These peer educators conduct five 90-minute structured and scripted outreach workshops, under the supervision of faculty advisors, for high school 9th graders. Topics include sexual health information, communication with partners and parents, problem-solving, decision-making, negotiation, refusal skills, and self-management skills.
Targeted sample: Youth in Grade 9 in participating schools
Expected total sample enrolled: 1,600
Despite this tailoring to site circumstances, the site-specific baseline instruments retain a solid core of common questions drawn from the “core items” and the performance measures that are part of the “concordance baseline instrument” (discussed above). This will enable consistent interpretation of important outcome measures and cross-site comparison, if desired at a later point (cross-site comparison is not an objective of the PPA evaluation). Questions specifically required for analysis of each site’s program impacts have been integrated into the “concordance baseline instrument” to create the site-specific instruments.
In summary form, Table A.2 shows the additional domains or broad topics that have been added to the concordance instrument for particular sites to respond to the research priorities of local evaluators because of the particular program focus or target population in those sites. Note that some adaptations are more subtle than adding totally new question domains: for some site-specific instruments, greater emphasis is placed on certain topics already assessed in the concordance instrument, in the form of additional or more detailed questions.
As noted above, the differences between the concordance instrument and each site-specific instrument are itemized in an introductory table preceding that site’s instrument as submitted with this justification.
Negotiation of these site-specific instruments has been completed, but much remains to be done for the PPA evaluation to proceed. OMB approval of these instruments is essential by July 15, 2011, so that they can be printed in time for survey administration. In one site, survey administration would begin the second week of August, and in most of the other sites it will begin in September. If the 60-day FRN were published in June, and the 30-day FRN in August, surveys could not begin on the required schedule. The normal clearance process would thus prevent initiation of the information collection and evaluation sample enrollment for the first (and critical) program cohorts. In most cases, grantees would not be able to “make up” for that loss of sample within the period of their grant, and the evaluation would be seriously degraded as a result. Rapid emergency review of the site-specific instruments is therefore essential. A memo detailing the justification for emergency clearance is found in Attachment B. This memo was approved by OMB in an email dated May 4, 2011.
Table A.2 Domains Included in Site-Specific PPA Baseline Questionnaires Beyond Standard Concordance Instrument
Domain | CHLA | OICA | OhioHealth | EngenderHealth | TeenPEP* | LtL*
Date/age of entry into foster care |  | X |  |  |  |
Country of birth |  |  | X |  |  |
Amount of time living in the U.S. |  |  | X |  |  |
Religious/spiritual affiliation |  |  | X |  |  |
Socioeconomic status/indicators |  |  | X |  |  |
Alternative education | X |  | X |  |  |
Marital status | X |  |  |  |  |
Relationship with baby’s father | X |  | X |  |  |
Pregnancy history | X |  | X |  |  |
  Previous pregnancies |  |  | X |  |  |
  Most recent/current pregnancy |  |  | X |  |  |
Intention to become pregnant | X |  | X |  |  |
Baby’s health/prenatal care |  |  | X |  |  |
Physical and emotional support with the baby | X |  |  |  |  |
Father’s relationship with the child | X |  |  |  |  |
Use of child care | X |  |  |  |  |
Future pregnancies (intentions, expectations) | X |  | X |  |  |
Reasons for not using contraception | X |  |  |  |  |
Attitudes towards/perceptions of gender stereotypes |  |  |  | X |  |
Knowledge of human anatomy/reproductive system |  | X |  |  |  |
Knowledge of laws regulating access to birth control |  |  |  | X |  |
Use of dual contraceptives |  |  |  | X |  |
Future goals | X |  |  |  |  |
Self-perception/motivation | X |  |  |  |  |
* No additional questions were added to the TeenPEP instrument. Additional questions were added to the LtL instrument; however, those questions built on existing domains.
5. Legal or Administrative Requirements that Necessitate the Collection

Public Law 110-161, which set fiscal year (FY) 2008 appropriations levels, included the following language: “$4,500,000 shall be available from amounts available under section 241 of the Public Health Service Act to carry out evaluations (including longitudinal evaluations) of adolescent pregnancy prevention approaches.” The same language appropriated $4,450,000 in each of FYs 2009, 2010, and 2011. These funds have been used for the PPA evaluation.
The regulations for implementing the Paperwork Reduction Act specify the requirements for requesting emergency processing:
§ 1320.13 Emergency processing:
An agency head or the Senior Official, or their designee, may request OMB to authorize emergency processing of submissions of collections of information.
(a) Any such request shall be accompanied by a written determination that:
(1) The collection of information:
(i) Is needed prior to the expiration of time periods established under this Part; and
(ii) Is essential to the mission of the agency; and
(2) The agency cannot reasonably comply with the normal clearance procedures under this part because:
(i) Public harm is reasonably likely to result if normal clearance procedures are followed; or
(ii) An unanticipated event has occurred; or
(iii) The use of normal clearance procedures is reasonably likely to prevent or disrupt the collection of information or is reasonably likely to cause a statutory or court ordered deadline to be missed.
(b) The agency shall state the time period within which OMB should approve or disapprove the collection of information.
(c) The agency shall submit information indicating that it has taken all practicable steps to consult with interested agencies and members of the public in order to minimize the burden of the collection of information.
(d) The agency shall set forth in the Federal Register notice prescribed by § 1320.5(a)(1)(iv), unless waived or modified under this section, a statement that it is requesting emergency processing, and the time period stated under paragraph (b) of this section.
(e) OMB shall approve or disapprove each such submission within the time period stated under paragraph (b) of this section, provided that such time period is consistent with the purposes of this Act.
(f) If OMB approves the collection of information, it shall assign a control number valid for a maximum of 90 days after receipt of the agency submission.
To accomplish the objective of the appropriations, HHS seeks emergency OMB approval of the site-specific baseline survey instruments. Emergency review for the site-specific instruments appears justified because the collection of the information is essential to the mission of the agency, and the use of normal clearance procedures would prevent or disrupt the collection of information.
A2. Purpose and Use of the Information Collection

The PPA evaluation requires baseline collection of three types of data on study sample members: baseline characteristics, service receipt (e.g., from program records), and baseline outcomes. These data will be obtained from a baseline survey administered to sample youth and from records data available from the programs and/or schools participating in the study.
The data will serve several important logistical and analytical purposes in the PPA evaluation. Detailed identifying information and contact information will be obtained to help track sample youth throughout the evaluation, and to locate them for follow-up if they have graduated, moved to another school, dropped out, or were absent for group follow-up data collection. Baseline variables are also important for the analysis to establish baseline equivalence of the treatment and control groups, to define subgroups for which impacts will be estimated, to adjust impact estimates for the baseline characteristics of nonrespondents to the follow-up survey, and to improve the precision of impact estimates by their inclusion as covariates in the impact models.
HHS will use the baseline data to measure teens’ demographic and socioeconomic characteristics; knowledge, attitudes, and expectations; history of romantic relationships; and stressors and supports; and to collect contact information. Site-specific outcomes will also be assessed. Because using baseline values of key outcomes can increase the precision of impact estimates, we will collect information at baseline on prior and current sexual activity and other variables that will be measured again at follow-up.
In addition to the concordance instrument, materials for each site are presented in separate files delivered with this justification statement. For each site, an instrumentation set is contained in a separate file, including the baseline questionnaire, the consent form, and the assent form, all tailored to the specific site. (No materials are presented for the Chicago Public Schools site, because that site is using the already-approved forms.)
A3. Use of Improved Information Technology and Burden Reduction

The data collection plan reflects sensitivity to issues of efficiency, accuracy, and respondent burden. Where feasible, information will be gathered from existing data sources; the information being requested through surveys is limited to that for which the youth are the best or only information sources. Improved information technology will be used when appropriate and cost-effective. During the baseline data collection, self-administered paper-and-pencil instruments (PAPIs) will be used for all group-based completions. In those instances in which the survey must be administered to individuals, respondents will be provided a PIN/password for web completion or will be administered a telephone survey, or in one case will complete the survey through an audio-CASI instrument because of local grantee concerns about respondent literacy. The advantages of PAPI over more technologically complex approaches, such as laptops or personal digital assistants (PDAs), are that it enables respondents to set their own pace; elicits accurate responses to sensitive questions; reduces costs; and simplifies administration logistics when interviews are conducted in a group setting. This method is also consistent with other recent youth surveys and evaluations. Studies have shown no difference between PAPI and computer-assisted self-interviewing (CASI) in reports of most measures of male-female sexual activity, including reports such as ever having had sexual intercourse, recent sexual activity, number of partners, condom use, and pregnancy.7,8,9,10,11,12 Turner et al.7 found that CASI improved reporting on low-prevalence behaviors such as male-male sex, injection drug use, and sexual contact with intravenous drug users.
A4. Efforts to Identify Duplication and Use of Similar Information

The information collection requirements for the PPA evaluation have been carefully reviewed to determine what information is already available from existing studies and what will need to be collected for the first time. Although the information from existing studies provides value to our understanding of reducing teenage sexual risk behavior, HHS does not believe that it provides sufficient information on a sufficient range of programs to policymakers and stakeholders aiming to reduce this behavior. The data collection for the PPA evaluation is an essential step to providing this information.
A5. Impact on Small Businesses or Other Small Entities

Programs in some sites will be operated by community-based organizations. The data collection plan is designed to minimize burden on such sites by providing staff from Mathematica Policy Research to assist in group data collection. For respondents who do not complete the survey in the group setting, Mathematica will provide passwords for web completion or will conduct a telephone data collection, thus minimizing requirements for extensive “sample pursuit” by site staff.
A6. Consequences of Collecting Information Less Frequently

Baseline data are essential to conducting a rigorous evaluation of pregnancy prevention programs supported under Public Law 110-161. In the absence of such data, funding decisions on teen pregnancy prevention programs will continue to be based on insufficient and outdated information on program effectiveness.
A7. Special Circumstances Relating to the Guidelines of 5 CFR 1320.5

There are no special circumstances for the proposed data collection.
A8. Comments in Response to the Federal Register Notice and Efforts to Consult Outside the Agency

The 60-day notice for the initial PPA baseline data collection was published in the Federal Register on July 12, 2010.
The text of the required Federal Register Notice signaling HHS’s request for emergency review by OMB is included as Attachment C.
A9. Explanation of Any Payment or Gift to Respondents

In four sites (including Chicago Public Schools, the site in which the baseline instrument has already been implemented), no payment or gift to youth respondents will be made during the baseline interview. Three sites will provide payment to respondents to the baseline questionnaire.13 In two of the sites (Children’s Hospital of Los Angeles and OhioHealth), a gift card will be given at baseline because the programs are recruiting adolescent mothers who have recently given birth and will be completing the baseline survey in their homes or in the hospital, most likely with their newborns present. In addition, in these sites some respondents will not have had any prior connection to the grantee organization, so providing a gift card as a thank you for participating seems essential to obtain high response rates and encourage participation in future rounds of follow-up data collection. In the third site (Oklahoma Institute), participants are youth living in foster care homes who could potentially transition out of the foster care system prior to follow-up, losing their connection to the grantee organization. As with the previous sites, providing a gift card as a thank you seems essential to obtain high response rates and encourage participation in future rounds of data collection.
A10. Assurance of Confidentiality Provided to Respondents

HHS has embedded protections for privacy in the study design. Data collection will occur only if informed consent is provided by a parent or legal guardian if the respondent is a minor, or by respondents themselves if they are 18 or older. Consent for the duration of the study will be collected prior to baseline data collection. Consent forms explain the data being collected, and its use. The forms also state that answers will be kept private, that youths’ participation is voluntary, and that they may refuse to participate at any time. Participants and their parents/guardians are told that, to the extent allowable by law, individual identifying information will not be released or published; rather, data collection will be published only in summary form with no identifying information at the individual level. The forms note that the evaluation has obtained a Certificate of Confidentiality from the National Institutes of Health (NIH). In addition, student assent will be obtained prior to each group survey administration.
As with the baseline questionnaires, consent forms will be site-specific, that is, tailored to each site, based on the standard consent form already approved by OMB. Local evaluators in the grantee sites are required, as a condition of their grants, to obtain approval of a cognizant Institutional Review Board (IRB). Most local evaluators had already obtained IRB approval for questionnaires and consent and assent forms before they were selected for the PPA evaluation. As a result, the PPA evaluation team and the local evaluators have collaborated to arrive at consent and assent forms that meet the requirements specified above while also meeting the requirements of the local evaluators’ IRBs (as well as the IRB reviewing the overall PPA study). Copies of the IRB-approved parent letter, consent form, and student assent form for each site are found in the separate files presented with this justification statement for each site.
Our protocol during administration of the baseline survey will provide reassurance that we take the issue of privacy seriously. It will be made clear to respondents that identifying information will be kept separate from questionnaires. The questionnaire and envelope will have a label with a unique ID number; no identifying information will appear on the questionnaire or return envelope. Before turning completed questionnaires in to field staff, respondents will place them in blank envelopes and seal them. This approach has been shown in research to yield the same reports of sexual activity as computer-assisted surveys in school settings, and a lower incidence of student concerns about privacy. Identifying and contact information will be stored in secure files, separate from survey and other individual-level data.
A11. Justification for Sensitive Questions

Many of the measures in the OMB-approved baseline survey ask for information of a sensitive nature because it is not possible to avoid sensitive questions in a study of programs designed specifically to reduce sexual activity and associated risk behaviors among teens. Comprehensive measures of behavior are included because they will provide more accurate representations of teen sexual behavior, and the responses will significantly supplement the knowledge currently available on program effectiveness. Most of the sensitive questions in the site-specific instruments come from the baseline survey that OMB has already approved.
The items have been carefully selected, and we have been guided by past experience in determining whether or not the benefits of measures may outweigh concerns about the heightened sensitivity among sample members, parents, and program staff to specific issues. Although these questions are sensitive, they are commonly and successfully asked of youth similar to those who will be in the study, and we have pretested all of these specific survey questions among a diverse group of teens without any concerns raised about the questions’ sensitivity. Table A.3 provides justification for sensitive questions.
Table A.3 Justification of Sensitive Questions
Intentions regarding sexual activity: Intentions regarding engaging in sex and other risk-taking behaviors are an extremely strong predictor of subsequent behavior, and will be an important mediator predicting behavior change (Buhi and Goodson, 2007).

Sexual activity questions: The expected intervention strategies and outcomes differ between youth who have previously had sexual intercourse and those who have not. Thus, it is critical that we obtain this information from youth at baseline, as well as at follow-up.

Drug and alcohol use: There is a substantial body of literature linking various high-risk behaviors of youth, particularly drug and alcohol use, sexual intercourse, and risky sexual behavior. The effectiveness of various program strategies is expected to differ for youth who are and are not experimenting with or using drugs and alcohol (Tapert et al., 2001; Li et al., 2001; Boyer et al., 1999; Fergusson and Lynskey, 1996; Sen, 2002; Dermen et al., 1998; Santelli et al., 2001).

Sexual victimization: Sexual victimization is associated with age at first intercourse, number of sexual partners, and STD history among adolescent girls (David and Friel, 2001; Upchurch and Kusunoki, 2004).

Sexual orientation: There is mixed evidence linking reported sexual orientation with early or late sexual initiation, risky behavior, and number of partners (Blake et al., 2001; Goodenow et al., 2002; Resnick et al., 1998; Magura et al., 1994; Raj et al., 2000). Nevertheless, we expect to control for baseline differences in this measure given its potential importance across a multi-site study. In addition, for interventions that focus particular attention on gay, lesbian, and bisexual youth, we will use this measure to estimate impacts separately for this subgroup.

Pregnancy (past, current, and future intentions): Two of the programs are targeted toward teen mothers/parents, and one of the primary outcomes is preventing rapid repeat pregnancies.
A12. Estimates of Annualized Burden Hours and Costs

The PPA information collection does not impose a financial burden on youth respondents. Respondents will not incur any burden other than the time spent answering the questions contained in the questionnaires.
Table A.4 summarizes the reporting burden on study participants at each of the sites. Enrollment will occur over two or three years, depending on the site; burden estimates are annualized and therefore based on one-third of the expected sample. Questionnaire response times were estimated from pretests of the original PPA baseline instrument and prior experience. The annual burden for questionnaire response is estimated from the total number of completed questionnaires proposed and the time required to complete the questionnaires. Even when the burden for the Chicago site, which is omitted from the table, is added in, the total burden shown below is less than what OMB previously approved, because the estimated number of sites has been reduced from eight to seven.
Table A.4 Estimated Annual Respondent Burden – Baseline Survey
Site/Program | Annualized Number of Respondents | Number of Responses per Respondent | Average Burden Hours per Response | Total Burden Hours (Annual)
Children’s Hospital of Los Angeles/Project AIM | 467 | 1 | 0.7 | 327
Oklahoma Institute for Child Advocacy/Power Through Choices | 360 | 1 | 0.6 | 216
EngenderHealth/Gender Matters | 375 | 1 | 0.6 | 225
OhioHealth/T.O.P.P. | 200 | 1 | 0.7 | 140
Live the Life Ministries/WAIT Training | 533 | 1 | 0.7 | 373
Princeton Center for Leadership Training (PCLT)/TeenPEP | 533 | 1 | 0.6 | 320
Total | 2,468* |  |  | 1,601
* The total annual number of respondents does not include the 1,518 respondents in the Chicago Public Schools site, whose baseline data were collected in 2010-2011 under the already-approved instrument.
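Each row of Table A.4 follows the same burden computation; as a worked example for the Children’s Hospital of Los Angeles site:

$1{,}400 \text{ (total sample)} \div 3 \text{ (years)} \approx 467 \text{ annualized respondents}; \qquad 467 \times 1 \times 0.7 \approx 327 \text{ annual burden hours}.$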
A13. Estimates of Other Total Annual Cost Burden to Respondents and Record Keepers

These information collection activities do not place any additional cost on respondents.
A14. Annualized Cost to the Federal Government

This clearance request is specifically for collecting data at baseline. The total estimated cost to the government is $1,676,325 for instrument development and data collection. Because baseline data collection will be carried out over three years, as successive sites start up and enroll samples, the estimated annualized cost to the government for baseline data collection is approximately $558,775 per year.
A15. Explanation for Program Changes or Adjustments

OMB gave approval on November 24, 2008, for outreach discussions with stakeholders, experts in the field, and program developers (OMB Control No. 0970-0360). OMB also gave approval on July 26, 2010, for baseline survey data collection and the collection of youth participant records (OMB Control No. 0970-0360).
HHS now seeks OMB approval for the site-specific baseline surveys. The collection of these data will take place over three years, as successive sites continue evaluation sample enrollment and implementation of their programs.
A16. Plans for Tabulation and Publication and Project Time Schedule

This phase of the PPA demonstration and evaluation involves collecting baseline information that will be used for the impact evaluation during the follow-up data collection.
1. Analysis Plan

Before estimating impacts, HHS will conduct two analyses of the data from the baseline survey. First, HHS will use the data to describe the study sample and help define subgroups of policy interest. This step will enable HHS to compare the characteristics of youth in the study with youth nationwide and provide guidance on how the study sample and findings might generalize to a broader policy setting. Second, HHS will assess whether random assignment resulted in similar baseline characteristics of youth, on average, for the treatment and control groups.
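For the second analysis, a simple equivalence check compares treatment and control means on key baseline characteristics. The sketch below is illustrative only, with hypothetical file and column names; for sites that randomize schools rather than individuals, the comparisons would additionally need to account for clustering.

```python
# Illustrative baseline-equivalence check for an individually randomized
# site. All file and column names are hypothetical placeholders.
import pandas as pd
from scipy import stats

df = pd.read_csv("site_baseline.csv")  # hypothetical analysis file
covariates = ["age", "female", "ever_had_sex", "knowledge_score"]

for var in covariates:
    treat = df.loc[df["treatment"] == 1, var].dropna()
    ctrl = df.loc[df["treatment"] == 0, var].dropna()
    # Welch's t-test of the treatment-control difference in means
    t_stat, p_val = stats.ttest_ind(treat, ctrl, equal_var=False)
    print(f"{var}: treatment mean = {treat.mean():.3f}, "
          f"control mean = {ctrl.mean():.3f}, p = {p_val:.3f}")
```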
Pregnancy prevention approaches emphasize different outcomes. Some focus on providing information about avoiding sexual risk behavior. Others emphasize abstinence, and some focus on use of contraceptives and avoiding STDs. The baseline data collected from program participants will ultimately be used to evaluate the effectiveness of these promising approaches with particular emphasis on the outcomes they target, as well as common outcomes across all approaches.
Unbiased impact estimates can be obtained from the difference in the mean outcomes between the treatment and control groups. However, we can improve precision by controlling in our regression model for covariates, especially baseline measures of outcomes. Regression adjustment can also address any differences between the treatment and control groups in baseline characteristics that arose by chance or from survey nonresponse.
The empirical specification for the model will depend on the unit of random assignment, which will depend on the type of program provided at a specific site. As we discuss further in section B1, most sites will use random assignment of entire schools, but some sites will employ random assignment of individuals within the site. With random assignment of students, our model can be expressed as:
(1)  $y_i = \beta' x_i + \gamma T_i + \varepsilon_i$,

where $y_i$ is the outcome of interest for student $i$; $x_i$ is a vector of baseline characteristics for student $i$, including baseline measures of the key outcomes; $T_i$ is an indicator equal to one if the student is in the treatment group and zero if in the control group; and $\varepsilon_i$ is a random error term for student $i$. The vector of baseline characteristics $x_i$ will include demographic characteristics such as age, gender, and race/ethnicity, as well as baseline measures of key outcomes. The parameter estimate for $\gamma$ is the estimated impact of the program.
In most sites, schools will be randomly assigned and the estimation must account for the correlation of outcomes between students in the same school, as they may be exposed to similar influences not otherwise captured in the regression model. Therefore, each student cannot be considered statistically independent. We can modify the previous regression model as:
(2)  $y_{is} = \beta' x_{is} + \gamma T_s + \mu_s + \varepsilon_{is}$.

The general structure of the model is the same, but now $y_{is}$ is the outcome measure for student $i$ in school $s$ (and similarly for the vector of baseline characteristics $x_{is}$ and the error term $\varepsilon_{is}$). The treatment status $T_s$ is now defined by school rather than by individual. Most importantly, the error term in Equation (2) accounts for the clustering of students within schools through the inclusion of the school-level error term $\mu_s$, a school "random effect." If this error term is excluded, the precision of the impact estimates could be seriously overstated. As in Equation (1), the estimated impact of the program is $\gamma$.
The specific maximum-likelihood methods for estimating the parameters of the models will depend on the form of the dependent variable. Logistic regression procedures will be specified for binary outcomes (such as whether the student has an STD) and multinomial regression procedures will be specified for categorical outcomes (such as the number of sexual partners).
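To make the estimation concrete, the sketch below illustrates one way the binary-outcome model could be fit, assuming hypothetical variable names. It uses standard errors clustered at the school level as a practical stand-in for the school random effect described above; a formal mixed (random-effects) logistic model would be an equally valid alternative.

```python
# Illustrative logistic impact model for a binary outcome (Equation 2),
# with standard errors clustered by school, the unit of random assignment
# in most sites. All file and variable names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# Keep complete cases so the cluster groups align with the estimation sample.
df = pd.read_csv("site_analysis_file.csv").dropna()

# Follow-up outcome regressed on treatment status and baseline covariates,
# including the baseline measure of the outcome to improve precision.
model = smf.logit(
    "outcome ~ treatment + age + C(gender) + C(race_eth) + outcome_baseline",
    data=df,
)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["school_id"]})

# The coefficient on `treatment` is the (log-odds) impact estimate.
print(result.summary())
```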
Random assignment provides an unbiased estimate of the impact on all eligible youth, but some youth may never show up for services or classes. Assuming the program has no effect on youth who never show up, we can make a simple adjustment to calculate the impact on participants by dividing the impact on eligible youth by the participation rate. (However, this adjustment cannot be used in the more likely scenario that youth receive some, but not all, of the intervention.)
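Expressed as a formula (a standard "no-show" adjustment; the notation here is introduced for illustration):

$\hat{\Delta}_{\text{participants}} = \hat{\Delta}_{\text{eligible}} / r$,

where $r$ is the participation rate. For example (hypothetical numbers), if the estimated impact on all eligible youth is a 4 percentage point reduction in a risk behavior and 80 percent of eligible treatment group youth actually participate, the implied impact on participants is $4 / 0.8 = 5$ percentage points.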
The effects of pregnancy prevention approaches may differ for different groups of youth. We will estimate impacts for subgroups of youth by adding to Equations (1) and (2) a term that interacts the treatment indicator with a binary indicator for whether the youth is in the subgroup. The estimate of the coefficient on this term provides an estimate of the difference in the program effect across the subgroups.
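In the notation of Equation (1), one way to write this specification (with $S_i$, $\delta$, and $\lambda$ introduced here for illustration) is:

$y_i = \beta' x_i + \gamma T_i + \delta (T_i \times S_i) + \lambda S_i + \varepsilon_i$,

where $S_i$ equals one if youth $i$ belongs to the subgroup and zero otherwise; the estimate of $\delta$ gives the difference in program impacts between the two subgroups.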
2. Time Schedule and Publications

The entire PPA evaluation will be conducted over an eight-year period. HHS began consultation with stakeholders about the design of the study and identification of potential programs and sites in September 2008. Recruitment of sites is being completed in May-June 2011. The baseline data collection, for which HHS received OMB approval on July 26, 2010 (OMB Control No. 0970-0360), will take place over a three-year period beginning in November 2010 and ending by May 2013. The first and second follow-up data collections are projected to occur between fall 2011 and fall 2015.
Reports on program impacts will be prepared for each site separately. Interim reports on program impacts, based on the first follow-up survey in each site, will be completed between spring 2012 and 2014. Final reports will be completed between summer 2013 and June 2016.
A17. Reason(s) Display of OMB Expiration Date is Inappropriate

All instruments will display the OMB number and the expiration date.
A18. Exceptions to Certification for Paperwork Reduction Act Submissions

No exceptions are necessary for this information collection.
SUPPORTING REFERENCES FOR INCLUSION OF SENSITIVE QUESTIONS OR GROUPS OF QUESTIONS
Boyer, Cherrie B., Jeanne M. Tschann, and Mary-Ann Shafer. "Predictors of Risk for Sexually Transmitted Diseases in Ninth Grade Urban High School Students." Journal of Adolescent Research, vol. 14, no. 4, 1999, pp. 448-465.

Buhi, Eric R., and Patricia Goodson. "Predictors of Adolescent Sexual Behavior and Intention: A Theory-Guided Systematic Review." Journal of Adolescent Health, vol. 40, no. 1, 2007, p. 4.

Dermen, K. H., M. L. Cooper, and V. B. Agocha. "Sex-Related Alcohol Expectancies as Moderators of the Relationship between Alcohol Use and Risky Sex in Adolescents." Journal of Studies on Alcohol, vol. 59, no. 1, 1998, p. 71.

DiClemente, R. J., M. Durbin, D. Siegel, F. Krasnovsky, N. Lazarus, and T. Comacho. "Determinants of Condom Use among Junior High School Students in a Minority, Inner-City School District." Pediatrics, vol. 89, no. 2, 1992, pp. 197-202.

DiClemente, R. J., M. Lodico, O. A. Grinstead, G. Harper, R. L. Rickman, P. E. Evans, and T. J. Coates. "African-American Adolescents Residing in High-Risk Urban Environments Do Use Condoms: Correlates and Predictors of Condom Use among Adolescents in Public Housing Developments." Pediatrics, vol. 98, no. 2, 1996, pp. 269-278.

DiIorio, Colleen, William N. Dudley, Johanna E. Soet, and Frances McCarty. "Sexual Possibility Situations and Sexual Behaviors among Young Adolescents: The Moderating Role of Protective Factors." Journal of Adolescent Health, vol. 35, no. 6, 2004, p. 528.

Dittus, P. J., and J. Jaccard. "Adolescents' Perceptions of Maternal Disapproval of Sex: Relationship to Sexual Outcomes." Journal of Adolescent Health, vol. 26, no. 4, 2000, pp. 268-278.

Fergusson, David M., and Michael T. Lynskey. "Alcohol Misuse and Adolescent Sexual Behaviors and Risk Taking." Pediatrics, vol. 98, no. 1, 1996, p. 91.

Li, Xiaoming, Bonita Stanton, Lesley Cottrell, James Burns, Robert Pack, and Linda Kaljee. "Patterns of Initiation of Sex and Drug-Related Activities among Urban Low-Income African-American Adolescents." Journal of Adolescent Health, vol. 28, no. 1, 2001, p. 46.

Santelli, John S., Leah Robin, Nancy D. Brener, and Richard Lowry. "Timing of Alcohol and Other Drug Use and Sexual Risk Behaviors among Unmarried Adolescents and Young Adults." Family Planning Perspectives, vol. 33, no. 5, 2001.

Sen, Bisakha. "Does Alcohol-Use Increase the Risk of Sexual Intercourse among Adolescents? Evidence from the NLSY97." Journal of Health Economics, vol. 21, no. 6, 2002, p. 1085.

Tapert, Susan F., Gregory A. Aarons, Georganna R. Sedlar, and Sandra A. Brown. "Adolescent Substance Use and Sexual Risk-Taking Behavior." Journal of Adolescent Health, vol. 28, no. 3, 2001, p. 181.
1 Abma, J. C., G. M. Martinez, W. D. Mosher, and B. S. Dawson. “Teenagers in the United States: sexual activity, contraceptive use, and childbearing”, Vital and Health Statistics, vol. 23, no. 24, 2004, pp. 1–48.
2 Albert, B., S. Brown, and C. Flannigan, eds. 14 and Younger: The Sexual Behavior of Young Adolescents. Washington, DC: National Campaign to Prevent Teen Pregnancy, 2003.
3 Teen birth rates declined by 34% from 1991–2005. See: Hamilton, B. E., J. A. Martin, and S. J. Ventura. “Births: Preliminary data for 2006.” National Vital Statistics Reports, vol. 56, no. 7. Hyattsville, MD: National Center for Health Statistics, 2007.
4 Hamilton, B. E., J. A. Martin, and S. J. Ventura. "Births: Preliminary Data for 2007." National Vital Statistics Reports, web release, vol. 57, no. 12. Hyattsville, MD: National Center for Health Statistics, released March 18, 2009.
5 The feasible number of evaluation sites has been adjusted from eight (projected in earlier submissions to OMB) to seven because one site tentatively recruited was unable to enlist the requisite number of schools.
6 In two sites, the baseline survey will be administered to individuals: one using a paper-and-pencil instrument (PAPI), the other using audio computer-assisted self-interview (ACASI) to address literacy concerns. In a third site, the baseline survey will be read aloud to respondents in a group-administered setting (using PAPI).
7 Turner, C.F., L. Ku, S.M. Rogers, L.D. Lindberg, J.H. Pleck, and F.L. Sonenstein. “Adolescent Sexual Behavior, Drug Use, and Violence: Increased Reporting with Computer Survey Technology.” Science, vol. 280, 1998, pp. 867–873.
8 Beebe, Timothy J., Patricia A. Harrison, James A. McCrae Jr., Ronald E. Anderson, and Jayne A. Fulkerson. “An Evaluation of Computer-Assisted Self-Interviews in a School Setting.” Public Opinion Quarterly, vol. 62, 1998, pp. 623–632.
9 Beebe, Timothy J., Patricia A. Harrison, Eunkyung Park, James A. McRae, Jr., and James Evans. “The Effects of Data Collection Mode and Disclosure on Adolescent Reporting and Health Behavior.” Social Science Review, vol. 24, no. 4, 2006, pp. 476–488.
10 Brener, Nancy D., Danice K. Eaton, Laura Kann, JoAnne Grunbaum, Lori A. Gorss, Tonja M. Kyle, and James G. Ross. “The Association of Survey Setting and Mode with Self-Reported Health Risk Behaviors Among High School Students.” Public Opinion Quarterly, vol. 70, 2006, pp. 354–374.
11 Webb, P.M., G.D. Zimet, J.D. Fortenberry, and M.J. Blythe. “Comparability of a Computer-Assisted Versus Written Method for Collecting Health Behavior Information from Adolescent Patients.” Journal of Adolescent Health, vol. 24, no. 6, 1999, pp. 383–388.
12 Schochet, Peter Z. “An Approach for Addressing the Multiple Testing Problem in Social Policy Impact Evaluations.” Evaluation Review, vol.33, no.6, December 2009.
13 The amounts of these payments or gift cards vary by site, because they were determined by grantees and their local evaluators in applying for the grants they have been awarded. Gift card amounts are: $10 for Oklahoma and Ohio Health grantee sites; and $20 for the Children’s Hospital of Los Angeles grantee site, where the baseline will be administered in the respondent’s home.