
Personal Responsibility Education Program (PREP) Multi-Component Evaluation

OMB: 0970-0398


U.S. Department of Health
and Human Services

Office of Planning, Research and Evaluation & Family and Youth Services Bureau, Administration for Children and Families

7th floor West Aerospace Building

370 L'Enfant Promenade, SW

Washington, DC 20447

Project Officers: Clare DiSalvo, Dirk Butler




Part A: Justification for the Collection of Performance Measures, Implementation and Outcome Data - Personal Responsibility Education Program (PREP) Multi-Component Evaluation

0970-0398

Draft

June 2013








CONTENTS

Part A. Introduction

A1. Circumstances Making the Collection of Information Necessary

1. Legal or Administrative Requirements that Necessitate the Collection

2. Study Objectives

A2. Purpose and Use of the Information Collection

A3. Use of Information Technology to Reduce Burden

A4. Efforts to Identify Duplication and Use of Similar Information

A5. Impact on Small Businesses

A6. Consequences of Not Collecting the Information/Collecting Less Frequently

A7. Special Circumstances

A8. Federal Register Notice and Consultation Outside the Agency

A9. Payments to Respondents

A10. Assurance of Confidentiality

A11. Justification for Sensitive Questions

A12. Estimates of the Burden of Data Collection

1. Annual Burden for Youth Participants

2. Annual Burden for Grantees and Implementation Sites

3. Overall Burden

A13. Estimates of Other Total Annual Cost Burden to Respondents and Record Keepers

A14. Annualized Cost to Federal Government

A15. Explanation for Program Changes or Adjustments

A16. Plans for Tabulation and Publication and Project Time Schedule

1. Analysis Plan

2. Time Schedule and Publications

A17. Reason(s) Display of OMB Expiration Date is Inappropriate

A18. Exceptions to Certification for Paperwork Reduction Act Submissions

SUPPORTING REFERENCES FOR INCLUSION OF SENSITIVE QUESTIONS OR GROUPS OF QUESTIONS











TABLES

A.1 PREP Evaluation Instruments – Approved and Requested Burden, by Study

A2.1 Collection Frequency for CPREP Performance Measures Data

A9.1 Thank You Gifts for the IIS Data Collections

A11.1 Summary of Sensitive Questions to be Included on the Participant Entry and Exit Surveys and Their Justification

A11.2 Summary of Sensitive Questions to be Included on the IIS Follow-Up Surveys and Their Justification

A12.1 Estimate of Burden and Cost for the PREP Evaluation for Youth Participants

A12.2 Estimate of Burden and Cost for the Grantees and Implementation Sites

A12.3 Estimate of Burden and Cost for the PREP Evaluation – Approved and Requested Burden





INSTRUMENTS

INSTRUMENT #1: PARTICIPANT ENTRY SURVEY (PAS)

INSTRUMENT #2: PARTICIPANT EXIT SURVEY (PAS)

INSTRUMENT #3: PERFORMANCE REPORTING SYSTEM DATA ENTRY FORM (PAS)

INSTRUMENT #4: IMPLEMENTATION SITE DATA COLLECTION PROTOCOL (PAS)

INSTRUMENT #5: MASTER FOLLOW-UP SURVEY (IIS)

INSTRUMENT #6: HFSA FOLLOW-UP SURVEY (IIS)

INSTRUMENT #7: MASTER LIST OF TOPICS FOR STAFF INTERVIEWS (IIS)

INSTRUMENT #8: STAFF SURVEY (IIS)

INSTRUMENT #9: TOPIC GUIDE FOR FOCUS GROUP DISCUSSION WITH PARTICIPATING YOUTH (IIS)

INSTRUMENT #10: PROGRAM ATTENDANCE DATA COLLECTION PROTOCOL (IIS)

INSTRUMENT #11: NY BASELINE SURVEY (IIS)

INSTRUMENT #12: NY FOLLOW-UP SURVEY (IIS)







ATTACHMENTS

ATTACHMENT A: OVERVIEW OF THE PREP EVALUATION

ATTACHMENT B: ANALYSIS PLAN FOR PREP IIS STUDY

ATTACHMENT C: QUESTION BY QUESTION SOURCE TABLE FOR THE FOLLOW-UP SURVEY

ATTACHMENT D: SOURCES REFERENCED FOR THE FOLLOW-UP SURVEY

ATTACHMENT E: QUESTION BY QUESTION SOURCE TABLE FOR THE STAFF SURVEY

ATTACHMENT F: SOURCES REFERENCED FOR THE STAFF SURVEY

ATTACHMENT G: 60-DAY FEDERAL REGISTER NOTICE

ATTACHMENT H: PERSONS CONSULTED ON COLLECTION OF THE PAS AND IIS DATA

ATTACHMENT I: CONSENT LETTERS AND FORMS

ATTACHMENT J: ADVANCE LETTERS AND PROMPTS

Part A. Introduction

In March 2010, Congress authorized the Personal Responsibility Education Program (PREP) as part of the Patient Protection and Affordable Care Act (ACA). PREP provides grants to states, tribes, tribal communities, and local organizations to support evidence-based programs to reduce teen pregnancy and sexually transmitted infections (STIs). The programs are required to provide education on both abstinence and contraceptive use. The programs will also offer information on adulthood preparation subjects such as healthy relationships, adolescent development, financial literacy, parent–child communication, education and employment skills, and healthy life skills. Grantees are encouraged to target their programming to high-risk populations—for example, homeless youth, youth in foster care, pregnant or parenting teens, youth residing in geographic areas with high teen birth rates, and Native American youth. The program is administered by the Family and Youth Services Bureau (FYSB), within the Administration for Children and Families, in the U.S. Department of Health and Human Services.

The PREP program has multiple components; this ICR relates specifically to two of them – state formula grant funding and “Competitive PREP” discretionary grant funding. States and territories could acquire PREP funding through a formula grant program. In 2010-2011, forty-five states, the District of Columbia, the Virgin Islands, Puerto Rico, and the Federated States of Micronesia applied for and received PREP funding. (In addition, in 2011, FYSB awarded 16 competitive grants to tribes and tribal communities.) Ten states and territories chose not to apply for PREP formula grant funding; in those states and territories, the unallocated funding was made available to local organizations via a competitive grant process. In 2012, FYSB awarded these “Competitive PREP” grants directly to 37 organizations.1 The Competitive PREP grantees, as a whole, are being added only to the Performance Analysis Study, which entails the collection of performance measures data from grantees.

In line with PREP’s emphasis on evidence-based programming, Congress also mandated a federal evaluation of the PREP program. To meet this need, FYSB and the Office of Planning, Research and Evaluation (OPRE) within the Administration for Children and Families (ACF) of the U.S. Department of Health and Human Services (HHS) have contracted with Mathematica Policy Research and its subcontractors to conduct the PREP Multi-Component Evaluation, a seven-year evaluation to document how PREP-funded programs are operationalized in the field, collect performance measure data from PREP grantees, and assess the effectiveness of selected PREP-funded programs on reducing teenage pregnancies, sexual risk behaviors, and STIs.

Components of the PREP Evaluation. The evaluation includes three complementary components, each with distinct data collection activities:

  1. the Design and Implementation Study (DIS), a broad descriptive analysis of how states are using PREP grant funding to support evidence-based teen pregnancy and STI prevention programs;

  2. the Performance Analysis Study (PAS), focused on the collection and analysis of performance management data from state grantees, tribal grantees, and CPREP grantees; and

  3. the Impact and In-Depth Implementation Study (IIS), designed to assess the impacts and implementation of funded programs in four to five selected PREP sites.

Attachment A provides an overview of the components of the PREP evaluation, including the components that have received OMB approval and the components included in this ICR. Attachment B provides an overview of the analysis plan for the IIS component of the evaluation.

Previous Information Clearance Requests Approved by OMB. OMB has previously approved three information collection requests related to the PREP Evaluation:

  • November 6, 2011 – OMB approved “Field Data Collection” as part of the IIS, which involved collecting data on various program models and assessing the feasibility of conducting a rigorous evaluation (OMB Control # 0970-0398).

  • March 7, 2012 – OMB approved the “Design Survey” conducted as part of the DIS, which involved interviewing state administrators about key decisions they made in designing their PREP programs (OMB Control # 0970-0398).

  • March 12, 2013 – OMB approved the instruments associated with two data collection efforts: (1) collection of PREP performance measures from state and tribal PREP grantees for the PAS through participant entry and exit surveys and the Performance Reporting System Data Entry; and (2) collection of baseline data for the IIS through a baseline survey (OMB Control # 0970-0398).

Current Information Clearance Request. In this submission, ACF is now requesting OMB approval for instruments related to two of the three study components – the Performance Analysis Study (PAS) and the Impact and In-Depth Implementation Study.

  • Instruments Related to the Performance Analysis Study (PAS). For the PAS, we are seeking clearance to increase the burden associated with the performance measures data collection efforts approved in the last ICR to include collecting performance data from “Competitive PREP” (CPREP) grantees. There are four data collection instruments associated with that effort:

    1. Participant Entry Survey (Instrument 1),

    2. Participant Exit Survey (Instrument 2),

    3. Performance Reporting System Data Entry Form (Instrument 3), and

    4. Implementation Site Data Collection Protocol (Instrument 4).


With the exception of minor revisions to Instrument 3 to reflect the fact that there are no subawardees for CPREP grants, the instruments are identical to those approved in the last ICR.

  • Instruments Related to the Impact and In-Depth Implementation Study (IIS). For the IIS, we are requesting clearance for instruments associated with the impact analysis and instruments associated with the implementation analysis.

For the IIS Impact Analysis, we are requesting clearance for four instruments:

  1. Master Follow-up Survey (Instrument 5). The Master Follow-up Survey is almost identical to the IIS baseline survey approved in the last ICR. The differences are a small number of background questions that were dropped for the follow-up and a small set of questions concerning knowledge of STIs that were added for the follow-up. Attachment C illustrates the similarities between these two instruments and the few instances where the follow-up survey contains new questions. The same Master Follow-up Survey will be administered to youth twice, once 8-12 months after random assignment and then again 12 months later.

  2. Healthy Families San Angelo (HFSA) Follow-up Survey (Instrument 6). The Master Follow-up Survey will be administered in two sites; Healthy Families San Angelo (HFSA) will administer a slightly modified version. HFSA offers a program for pregnant and parenting adolescent mothers, which necessitated a revised version of the Master Follow-up Survey. The HFSA Follow-up Survey (Instrument 6) is a modified version of the Master Follow-up Survey (Instrument 5), revised to reflect the fact that the sample is all female and that all youth have already had sexual intercourse, and therefore do not need to be asked about sexual initiation. In addition, questions have been added about subsequent pregnancies, parenting behaviors, and couple relationships to assess program effectiveness on these outcomes, which are goals of that program.

  3. New York Baseline and Follow-up Surveys (Instruments 11 and 12). The Master Baseline and Follow-up Surveys administered in Iowa and Kentucky will be slightly modified for the Teen Choice (TC) program in New York. The population served by the TC program is middle and high school students attending one of four schools. These schools serve a combination of residential and day students (who return to their homes in the evening). Most youth in this sample have emotional and behavioral issues, as well as low literacy levels; roughly 40% are expected to be foster care youth. Additionally, most youth (approximately 80%) are anticipated to be sexually active. Note: The burden associated with Instrument 11, the NY Baseline Survey, is not discussed in this package. The burden for this instrument has already been approved and was included in the package that OMB approved in March 2013. The changes to this survey do not change the burden.

The New York Baseline Survey (Instrument 11) and the New York Follow-up Survey (Instrument 12) have been revised to reflect the characteristics of the population in this site. Both surveys have been streamlined and shortened slightly to accommodate the low literacy levels of this population. Specifically, the Master Baseline and Follow-up Surveys have been consolidated into one part, following a format similar to the HFSA survey. The Master Baseline and Follow-up Surveys have three parts: Part A, Part B1, and Part B2. All participants complete Part A; at the end of Part A, participants are asked to complete either Part B1 or Part B2, depending on whether or not they are sexually active. In the New York site, the survey has a single part in order to reduce its complexity. Participants in the New York site will follow a skip pattern within this consolidated survey so that they answer only the questions that apply to them, as in the sketch below.
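To make the consolidated design concrete, the following is a minimal sketch of skip-pattern routing of the kind described above, written in Python. The question wording, identifiers, and routing conditions are hypothetical placeholders and do not reproduce the actual NY instrument; the sketch only illustrates how a single-part survey can route each participant past questions that do not apply.

    # Minimal sketch of skip-pattern routing in a single-part survey.
    # Question IDs, wording, and conditions are hypothetical placeholders.

    QUESTIONS = [
        # (question_id, prompt, applicability condition or None for everyone)
        ("q1", "How old are you?", None),
        ("q2", "Have you ever had sexual intercourse?", None),
        ("q3", "The last time, did you or your partner use a condom?",
         lambda answers: answers.get("q2") == "yes"),
        ("q4", "How likely is it that you will remain abstinent next year?",
         lambda answers: answers.get("q2") == "no"),
    ]

    def administer(get_answer):
        """Walk the consolidated instrument, asking only applicable questions."""
        answers = {}
        for qid, prompt, condition in QUESTIONS:
            if condition is None or condition(answers):
                answers[qid] = get_answer(prompt)
        return answers

    # Example: a scripted respondent who reports being sexually active.
    scripted = iter(["16", "yes", "no"])
    print(administer(lambda prompt: next(scripted)))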

Additionally, in New York, several questions have been dropped from the Master Baseline and Follow-up Surveys in order to shorten the survey. Participants in the New York site are anticipated to have lower literacy levels than participants at the other sites (e.g., Kentucky, Iowa, San Angelo), which may make the survey take longer to complete. Questions have been dropped to ensure that the survey will not take these participants longer than the Master Baseline and Follow-up Surveys take other participants, keeping the burden the same for all participants regardless of site. Note: the dropped questions are not part of the core questions that will be analyzed across all sites.

Furthermore, some of the question responses have been altered to reflect the characteristics of the population. For example, many of the participants are expected to be foster care youth and may not have a mother or father figure. Therefore, any questions asking about a participant’s mother or father figure have been altered to allow participants to select an option indicating that they do not have a mother or father figure.

For the IIS In-Depth Implementation Analysis, we are requesting clearance for four instruments:

  1. The Master List of Topics for Staff Interviews (Instrument 7),

  2. The Staff Survey (Instrument 8),

  3. The Topic Guide for Focus Group Discussions with Participating Youth (Instrument 9), and

  4. The Program Attendance Data Collection Protocol (Instrument 10).

The data collected from these instruments will provide a detailed understanding of program implementation.

Table A.1 provides a list of all of the data collection instruments for the PREP evaluation that OMB has received thus far and a notation for whether the instrument has already been approved or is being submitted for approval in this ICR.

Table A.1. PREP Evaluation Instruments – Approved and Requested Burden, by Study

Data Collection Instrument | Type of Respondent | Status of Instrument

Performance Analysis Study

Participant Entry Survey | PREP State and Tribal Participants | Approved
Participant Exit Survey | PREP State and Tribal Participants | Approved
Performance Reporting System Data Entry Form | PREP State and Tribal Grantee Administrators | Approved
Sub-awardee Data Collection and Reporting | PREP State and Tribal Sub-Awardee Administrators | Approved
Implementation Site Data Collection | PREP State and Tribal Site Facilitators | Approved
Participant Entry Survey | CPREP Participants | Current Request
Participant Exit Survey | CPREP Participants | Current Request
Performance Reporting System Data Entry Form | CPREP Grantees | Current Request
Implementation Site Data Collection Protocol | CPREP Implementation Sites | Current Request

Design and Implementation Study

Design Survey: Discussion Guide for Use with PREP State-Level Coordinators and State-Level Staff | State-Level Coordinators and State-Level Staff | Approved

Impact and In-Depth Implementation Study

Impact Analysis

Discussion Guide for Use with Macro-Level Coordinators | Macro-Level Coordinators | Approved
Discussion Guide for Use with Program Directors | Program Directors | Approved
Discussion Guide for Use with Program Staff | Program Staff | Approved
Discussion Guide for Use with School Administrators | School Administrators | Approved
Baseline Survey | Participants | Approved
NY Baseline Survey | Participants | Burden Approved – Requesting Approval for a Revised Instrument
Master Follow-Up Survey | Participants | Current Request
HFSA Follow-Up Survey | Participants | Current Request
NY Follow-Up Survey | Participants | Current Request

Implementation Analysis

Master List of Topics for Staff Interviews | State, Grantee, Subawardee, and Implementation Site Staff | Current Request
Staff Survey | Implementation Site Staff | Current Request
Focus Group Discussion Guide | Participants | Current Request
Program Attendance Data Collection Protocol | Implementation Site Staff | Current Request

Value of the PREP Evaluation. The multiple components of the PREP Evaluation play a unique role in the mix of current federal evaluation efforts designed to expand the evidence base on teen pregnancy prevention programs. First, unlike other evaluations, the PREP effort will provide information on large-scale replication of evidence-based programs, with particular emphasis on (1) lessons learned from replication among high-risk populations in new settings, such as youth in foster care group homes, youth in the juvenile justice system, or youth living on tribal lands; (2) how and why states, tribes, and localities choose and implement the evidence-based programs most appropriate for their local contexts; and (3) adaptations made to support the unique PREP requirements, such as the inclusion of adulthood preparation subjects. Data from both the DIS and PAS will help answer these questions about large-scale replication. Second, the evaluation will offer a unique opportunity to test the effectiveness of four or five program models on various high-risk populations, contributing further to a more comprehensive evidence base on effective programming. The IIS will complement the broad national perspectives of the DIS and PAS with a much more detailed look at a subset of four or five PREP programs. The IIS features an in-depth implementation study, with in-person site visits, stakeholder interviews, and focus groups in all in-depth study sites. The implementation findings will be analyzed in conjunction with program impacts to examine the relationship between the quality of implementation and the observed outcomes for youth.

A1. Circumstances Making the Collection of Information Necessary

1. Legal or Administrative Requirements that Necessitate the Collection

On March 23, 2010, the President signed into law the Patient Protection and Affordable Care Act, H.R. 3590 (Public Law 111-148, Section 2953). In addition to its other requirements, the act amended Title V of the Social Security Act (42 U.S.C. 701 et seq.) to include formula grants to states and territories, and competitive grants to tribes and local organizations, to “replicate evidence-based effective program models or substantially incorporate elements of effective programs that have been proven on the basis of scientific research to change behavior, which means delaying sexual activity, increasing condom or contraceptive use for sexually active youth, or reducing pregnancy among youth.” The legislation mandates that the Secretary evaluate the programs and activities carried out with funds made available through PREP. To meet this requirement, FYSB and OPRE within ACF have contracted with Mathematica Policy Research and its subcontractors to conduct the PREP Multi-Component Evaluation.

In addition, the collection of performance measures, one component of this evaluation and of this request for OMB approval, will support compliance with the GPRA Modernization Act of 2010 (Public Law 111-352).

2. Study Objectives

The objectives of the PREP Evaluation are to document how PREP-funded programs are operationalized in the field, collect performance measure data from PREP grantees, and assess the effectiveness of selected PREP-funded programs on reducing teenage pregnancies, sexual risk behaviors, and STIs. The evaluation will expand the evidence base on teen pregnancy prevention programs and serve as a case study on the successes and challenges of replicating, adapting, and scaling up evidence-based programs through federal grant-making to states, tribes, and tribal communities.

As described above, the evaluation has three main components: (1) the Design and Implementation Study; (2) the Performance Analysis Study; and (3) the Impact and In-Depth Implementation Study. We provide more background on the relevant aspects of these components for this ICR below. Attachment A also provides an overview of the multiple components of the PREP evaluation.

Performance Analysis Study. The purpose of the PAS for CPREP grantees is identical to the purpose of the PAS for state and tribal grantees, which was approved March 12, 2013 (OMB Control Number 0970-0398).

For CPREP grantees, the PAS will collect information from all grantees on the extent to which the federal PREP objectives are being met. The PAS data can also be used to create a foundation for program improvement efforts based on federal, grantee, and implementation site examination of the data. The PAS will not be used to evaluate program effectiveness, which will be estimated only in the four or five sites participating in the IIS component.

CPREP grantees (like the state and tribal grantees) will be responsible for ensuring that all performance measures are reported to ACF. The data that the CPREP grantees will report to ACF will originate from two levels – the grantee and the implementation sites. For some performance measures, grantees will provide data about activities or decisions that they undertake directly at the grantee level. For other measures, data will need to be gathered from each implementation site that provides direct programming to youth. In addition, some data will come from the youth themselves, who will be asked to complete entry and exit surveys. The efforts expected to be undertaken at each level and the estimated levels of burden are further explained in Section A.12.

ACF is currently seeking clearance to increase the burden associated with the performance measures data collection efforts approved in the last ICR to include collecting performance data from “Competitive PREP” (CPREP) grantees. Specifically, ACF is requesting OMB approval for the Participant Entry and Exit Surveys (see Instruments 1 and 2), as well as the data collected through the Performance Reporting System Data Entry Form and the Implementation Site Data Collection Protocol (see Instruments 3 and 4). Importantly, with the exception of minor revisions to Instrument 3 to reflect the fact that there are no subawardees for CPREP grants, the instruments are identical to those approved in the last ICR. Section A.16 describes what will be collected through the participant surveys and grantee reporting for the PAS and how it will be used.

Impact and In-Depth Implementation Study (IIS). The objective of the IIS is to assess the impacts and implementation of four to five selected PREP-funded programs. The study will help ACF determine the effectiveness of PREP-funded programs in improving key outcomes related to teen pregnancy, sexually transmitted infections, and associated sexual risk behaviors. It will also provide important information on the successes and challenges sites face in implementing evidence-based teen pregnancy prevention programs and on the quality with which the programs are implemented.

The evaluation team is currently working with ACF to identify four or five PREP-funded sites to participate in this component of the evaluation. The sites are not meant to be representative of PREP-funded programs as a whole. Rather, site selection is focusing on grantees that (1) are large enough to support an impact and in-depth implementation study, (2) are implementing programs in a way that is amenable to random assignment for the program impact study (discussed below), and (3) address priority gaps in the existing research literature on evidence-based approaches to teen pregnancy prevention. These gaps include evidence on effective programs for high-risk populations such as youth living in rural areas, youth in foster care, or pregnant and parenting teens.

In each site, youth will be randomly assigned to a treatment group that receives the program being tested or to a control group that does not. The evaluation team will work collaboratively with site leaders to develop a plan for randomly assigning either individuals or organizations (such as schools, clinics, or group homes) to the treatment or control groups. Random assignment of individuals will be preferred when the risk of cross-over is low or when the program focuses more on individualized services or voluntary group programs; a cluster design is optimal when the risk of cross-over is high or when the program model features group- or community-level components intended to have broad contextual effects on the target population.
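For illustration, the sketch below shows one generic way to carry out individual-level random assignment within strata (for example, within a school or enrollment cohort), written in Python. It is not the evaluation team's actual assignment procedure; the youth IDs, stratum labels, assignment ratio, and fixed seed are placeholders.

    # Generic sketch of stratified individual-level random assignment.
    # IDs, strata, ratio, and seed are placeholders, not study values.

    import random

    def assign(ids, strata, ratio=0.5, seed=12345):
        """Randomly assign youth to treatment or control within each stratum."""
        rng = random.Random(seed)  # a fixed seed makes the assignment reproducible
        by_stratum = {}
        for youth, stratum in zip(ids, strata):
            by_stratum.setdefault(stratum, []).append(youth)
        assignment = {}
        for members in by_stratum.values():
            rng.shuffle(members)
            cut = round(len(members) * ratio)  # first `cut` shuffled youth get treatment
            for i, youth in enumerate(members):
                assignment[youth] = "treatment" if i < cut else "control"
        return assignment

    print(assign(ids=[101, 102, 103, 104], strata=["school A"] * 4))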

In each site, ACF expects to recruit and enroll a sample of 1,200 to 1,500 youth (for a total of 6,000 youth across four or five sites). Each site will be analyzed separately, so the relatively large samples of 1,200 to 1,500 youth per site are needed to detect policy-relevant impacts on key behavioral outcomes. ACF does not plan to pool data across sites or compare the effectiveness of one program versus another. The target sample sizes have been determined to support this goal of site-specific analyses. Minimum detectable impacts for the target sample sizes are presented in Part B.
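As a rough illustration of why samples of this size are needed, the sketch below applies the standard minimum-detectable-impact approximation for a two-arm, individually randomized design; the 2.8 multiplier corresponds approximately to 80 percent power for a two-sided test at the 5 percent significance level. The 50 percent outcome prevalence and the even treatment/control split are assumptions chosen for illustration, not the actual calculations presented in Part B.

    # Back-of-the-envelope minimum detectable impact for a binary outcome
    # in a two-arm, individually randomized design. Inputs are illustrative.

    import math

    def mde(n_treatment, n_control, p=0.5):
        """Minimum detectable impact (as a proportion) for outcome prevalence p."""
        se = math.sqrt(p * (1 - p) * (1 / n_treatment + 1 / n_control))
        return 2.8 * se  # ~80% power, two-sided 5% test

    # A site enrolling 1,200 youth split evenly, outcome prevalence 50 percent:
    print(f"{100 * mde(600, 600):.1f} percentage points")  # about 8.1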

IIS Impact Analysis. We have already received clearance for the IIS baseline survey, approved March 12, 2013 (OMB Control # 0970-0398), which includes the burden for the NY Baseline Survey (see Instrument 11). In this submission, we are seeking OMB approval only for the revisions to the NY Baseline Survey, not for additional burden.

Additionally, we are seeking OMB approval for the Master and HFSA-specific Follow-up Surveys (see Instruments 5 and 6), as well as the NY Follow-up Survey (see Instrument 12). These instruments will be administered twice: (1) approximately eight to 12 months after random assignment, and (2) approximately 12 months after the short-term follow-up survey. The exact timing of data collection rounds in each site will depend on the length of the site's program and, in school-based programs, on what timing works best with the school calendar. Attachment B provides an analysis plan for the IIS component of the PREP evaluation and describes how the data collected with these instruments will be used.

We are submitting three versions of the IIS follow-up survey for approval (Instruments 5, 6, and 12). The Master Follow-up Survey (Instrument 5) is the same as the IIS baseline survey, which was previously approved by OMB, with the exception of a small number of questions that were added or dropped for the follow-up (see Attachment C for a list of the differences). This instrument includes the full list of questions the evaluation team plans to ask in all IIS sites except HFSA and NY, each of which will use its own specially tailored follow-up instrument (Instruments 6 and 12), described below. For IIS sites using the Master Follow-up Survey, the evaluation team will drop questions in certain sites that are not appropriate for the population served there. For instance, in a site that serves middle school youth, the evaluation team will not ask about high school completion.

The HFSA Follow-up Survey (Instrument 6) will be used only in that one evaluation site. HFSA is a home visiting program that serves pregnant and parenting adolescent mothers, and the instrument has been tailored to the unique circumstances of that site. For example, the instrument does not use the three-part design of the Master Follow-up Survey, in which youth complete either the second or the third section depending on whether they are sexually experienced; that structure avoids asking non-sexually experienced youth detailed questions about their sexual activity. Since all youth served by HFSA will be sexually experienced, the three-part design is not necessary. The evaluation team also revised various question wordings to reflect the fact that all sample members are female and all will be parents, and removed some questions due to local sensitivities to the content. Finally, this instrument includes additional questions to capture subsequent pregnancies and births, parenting, and the relationship of the adolescent mother with the father of her baby, since HFSA aims to improve these outcomes. See Attachment C for the additional questions and sources for both Instruments 5 and 6.

Similar to the HFSA Follow-up Survey, the NY Follow-up Survey (Instrument 12) will be used in only one evaluation site. The NY program serves youth attending residential or day schools for youth with emotional or behavioral issues; over 40% of these youth are anticipated to be foster care youth. Additionally, these youth typically have lower literacy levels, and the vast majority (approximately 80%) are anticipated to be sexually active. The instrument has been tailored to the unique characteristics of this site. For example, like the HFSA survey, this revised survey does not follow the three-part design of the Master Follow-up Survey; rather, participants in the NY site will complete a single-part survey. This design reduces the complexity of the survey, making it easier to administer to youth with lower literacy levels. Additionally, questions have been dropped to shorten the survey. Finally, question responses have been adapted to reflect the characteristics of this population. For example, questions involving a mother or father figure include an added answer choice allowing respondents to indicate that they do not have such a figure.

Wherever possible, the evaluation team will use group administration of a self-administered pencil-and-paper instrument (PAPI) for the follow-up surveys used in the IIS Impact Analysis. This method is preferred when it is feasible because it greatly reduces data collection costs. Many potential IIS sites are operating school-based programs, which are well suited to this kind of group administration. When necessary to increase response rates because not all youth will be present at the group administration, or when group administration is not feasible (for example, for programs that are not school based), the evaluation team will use telephone surveys as either a supplemental or a primary data collection mode. When this method is used to supplement PAPI administration, trained interviewers will read the survey aloud to respondents over the telephone and record the respondents' answers on a hard-copy survey. When telephone interviewing is the primary data collection mode (because there will be no group administration), the evaluation team will use computer-assisted telephone interviewing (CATI) to administer the survey: the survey questions and skip logic are programmed, and the interviewer reads the questions from a computer screen and records the responses directly into a computer. Because HFSA is not a school-based program and thus group administration is not feasible, the evaluation team plans to administer both rounds of the follow-up survey using CATI.

IIS In-Depth Implementation Analysis. Through the IIS implementation analysis, ACF seeks to understand how each program actually operated in the participating sites and how the services actually delivered help to interpret the impacts that are observed. The evaluation team will obtain detailed information on the services delivered and the extent to which youth participate and engage with them. The analysis will document the location and community context in which these services and activities were provided, whether each program was implemented with fidelity to the developer's intentions and the site's implementation plans, and the quality of the delivery of the key program components. Finally, the implementation analysis will describe the contrast between the program as implemented and the “business as usual” counterfactual. Specifically, it will address how the activities and services provided by the program were similar to and different from those available to control group youth, and how the experiences of program group youth differed from those of control group youth. Understanding the programs, documenting their implementation and context, and assessing fidelity of implementation will facilitate descriptions of the implemented program and the treatment-control contrast evaluated in each site. This information will help to interpret impact analysis findings and may help explain any unexpected findings, differences in impacts across programs, and differences in impacts across locations or population subgroups. It may also identify key elements of implementation important for program replication and scale-up.

The implementation analysis will include two rounds of data collection. Each round will include (1) in-person site visits and staff interviews (Instrument 7), (2) a web-based staff survey (Instrument 8), and (3) focus groups with youth participants (Instrument 9). The timing of site visits will be determined after sites are selected and specific implementation plans are known. The goal will be to conduct the first site visit early in the implementation period and the second visit later in the implementation period. State-level and program staff interviews will be guided by the Master List of Topics for Staff Interviews (Instrument 7). Frontline staff and their supervisors will be invited to complete an online survey (Instrument 8) around the time of each visit. Finally, the evaluation team plans to hold two sets of focus groups with youth – once during early implementation and a second time later in the implementation process to allow for program maturation and to help capture variations in youth experiences over time. The focus groups will be guided by the Topic Guide for Focus Group Discussion with Participating Youth (Instrument 9). The implementation analysis will also use data on program participation. Administrative data on program attendance will be collected through extracts from pre-existing program administrative data systems or through spreadsheets provided by Mathematica (Instrument 10). This information will allow the evaluation team to document the proportion of program services that were actually delivered to participants.

ACF is currently requesting OMB approval for the four IIS Implementation instruments (Instruments 7, 8, 9, and 10) that will be used to gather the IIS In-Depth Implementation Analysis data. Section A.16 and Attachment B provide overviews of how the data will be used.

A2. Purpose and Use of the Information Collection

Performance Analysis Study. As with state and tribal PREP grantees, the purpose of performance measurement for the CPREP grantees is to track outputs and outcomes over time in order to provide information on how CPREP grantees and the programs they operate are performing. Through the PAS, CPREP grantees will be required to submit data annually on two broad topics – PREP program structure and PREP program delivery. These are the same reporting requirements approved in the recent ICR for state and tribal PREP grantees.

  • PREP program structure refers to how grant funds are being used, the program models selected, the ways in which grantees support program implementation, and the characteristics of youth served.

  • PREP program delivery refers to the extent to which the intended program dosage was delivered, youths’ attendance and retention, youths’ perceptions of program effectiveness and their experiences in the programs, and challenges experienced implementing the programs.

To understand PREP program structure, CPREP grantees will be asked to report the amount of their grant allocated for various activities, including direct service provision; their approach to staffing at the grantee level; their provision of training, technical assistance, and program monitoring; and the number of program facilitators, facilitators' training on the program model, and the extent to which facilitators are monitored to ensure program quality (Instrument 3). Grantees will also describe the characteristics of youth enrolling in CPREP programs, using Instrument 1 to collect the information from the youth participants, and Instrument 3 to submit the aggregated measures to ACF.2

To understand PREP program delivery, CPREP grantees will be asked to provide the number of completed program hours for each cohort; the number of youth who ever attended a CPREP program, overall and by subpopulation (such as youth in foster care or the juvenile justice system); youths' attendance and retention;3 youths' perceptions of program effectiveness and program experiences; and the challenges providers face implementing their programs. This information will be collected from youth completing the programs (Instrument 2) and from implementation sites involved in the direct delivery of programs (Instrument 4), and submitted to ACF by the grantees (Instrument 3). Unlike the IIS, participants' responses on the PAS will not be linked; the two surveys are not intended to measure change over time. Rather, this information will be used to understand the characteristics of program completers relative to program initiators. Grantees will not report individual-level responses, but will instead report aggregated measures, as in the sketch below.
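As a simple illustration of that aggregated reporting, the sketch below rolls hypothetical individual entry-survey responses up into grantee-level totals. The field names and categories are placeholders and do not correspond to the actual PREP reporting system variables.

    # Sketch of aggregating individual survey responses into the kinds of
    # grantee-level measures reported to ACF. Fields are hypothetical.

    from collections import Counter

    entry_surveys = [
        {"age": 15, "gender": "female", "ever_in_foster_care": True},
        {"age": 16, "gender": "male", "ever_in_foster_care": False},
        {"age": 15, "gender": "female", "ever_in_foster_care": False},
    ]

    aggregated = {
        "youth_served": len(entry_surveys),
        "by_gender": dict(Counter(s["gender"] for s in entry_surveys)),
        "by_age": dict(Counter(s["age"] for s in entry_surveys)),
        "foster_care_youth": sum(s["ever_in_foster_care"] for s in entry_surveys),
    }
    # Only these totals, never individual records, would be reported.
    print(aggregated)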

The frequency with which performance data will be collected from grantees is summarized in Table A2.1.

Table A2.1. Collection Frequency for CPREP Performance Measures Data

Category | Collection Frequency^a

Demographic Items: Age, Grade, Gender, Ethnicity, Race | Program Entry and Exit
Risk Behaviors and Intentions3 | Program Entry
Participant Perceptions of Program Effects | Program Exit
Participant Assessments of the Program Experience | Program Exit
Features and Structures: Grantees, Programs | Once a Year
Program Fidelity (Dosage) | At Program Sessions
Participant Engagement (Attendance, Reach, Retention) | At Program Sessions and Cohort Completion
Staff Perceptions of Quality Challenges and Technical Assistance Needs | Once a Year

^a "Collection frequency" refers to when grantees and program staff collect the data that will later be compiled and reported to ACF.

ACF will then use the performance measures data to (1) track how CPREP grantees are allocating their PREP funds; (2) assess whether the PREP objectives are being met (for example, in terms of the populations served); and (3) help drive CPREP programs toward continuous improvement of service delivery. In addition, ACF will use this information to fulfill reporting requirements to Congress and the Office of Management and Budget concerning the PREP initiative. ACF also intends to share grantee level findings with each CPREP grantee to inform their own program improvement efforts.

The Participant Entry Survey (Instrument 1), Participant Exit Survey (Instrument 2), the Performance Reporting System (Instrument 3), and the Implementation Site Data Collection Protocol (Instrument 4) are attached.

Impact and In-Depth Implementation Study. Attachment B provides a detailed analysis plan for the IIS component of the PREP evaluation.

IIS Impact Analysis.

Data collected on the PREP follow-up surveys (Instruments 5, 6, and 12) will be used to measure youth outcomes, with the ultimate purpose of measuring program impacts. The follow-up data collection for which approval is now sought will focus on two types of outcomes, both of which can be measured only through surveys of youth. The first are sexual risk outcomes, including the extent and nature of sexual activity, use of contraception (if sexually active), pregnancy, and testing for and diagnoses of STDs. The second are a series of intermediate outcomes that may be associated with the sexual risk outcomes and are therefore important to measure as potential pathways of any program effects on sexual risk behavior. Examples of these outcomes include participation in and exposure to pregnancy prevention programs and services, intentions and expectations of sexual activity, relationships with family and friends, knowledge of contraception and sexual risks, dating behavior, and alcohol and drug use. In addition, the survey includes a small number of questions that identify socio-demographic or other characteristics of youth in the study sample, which will be used for descriptive purposes. Finally, for sample youth who report not being sexually active, the survey includes questions to support a descriptive analysis of these youth and a future investigation of their potential transition into sexual activity. To ensure the privacy of youth who respond to the surveys, the series of questions for non-sexually active youth has been timed to approximate the length of the series for sexually active youth.

Follow-up data will be used to address the following research questions on program impact:

  • Are the approaches effective at meeting their immediate objectives (for example, improving knowledge of pregnancy risks)?

  • Are the approaches effective at reducing adolescent pregnancy?

  • What are their effects on related outcomes, such as postponing sexual activity and reducing or preventing sexual risk behaviors and STDs?

  • Do these approaches work better for some groups of adolescents than for others?

The evaluation team will use the specially tailored HFSA Follow-up Survey (Instrument 6) to collect outcome data in that site. The tailored NY Follow-up Survey (Instrument 12) will be used to collect outcome data only in the NY site. Outcome data in all other IIS sites will be collected via the Master Follow-up Survey (Instrument 5).

Data collected on the NY Baseline Survey (Instrument 11) will be used as a central component of the impact study. Specifically, the data will be used to establish baseline equivalence of the treatment and control groups and thus to confirm the integrity of the random assignment process, as sketched below. Baseline data will also be used to define subgroups for which impacts will be estimated, and to adjust impact estimates to account for survey non-response. The burden associated with this survey was already approved in March 2013; we are requesting clearance for the modified version of the survey.
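For illustration, the sketch below shows one simple form a baseline-equivalence check can take: comparing treatment and control group means on a baseline measure using a two-sample (Welch) t statistic. The data are fabricated placeholders; the evaluation team's actual analysis plan is described in Attachment B.

    # Minimal sketch of a baseline-equivalence check on fabricated data.

    import math

    def t_statistic(x, y):
        """Welch t statistic for the difference in means between two groups."""
        nx, ny = len(x), len(y)
        mx, my = sum(x) / nx, sum(y) / ny
        vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
        vy = sum((v - my) ** 2 for v in y) / (ny - 1)
        return (mx - my) / math.sqrt(vx / nx + vy / ny)

    treatment = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # baseline indicator, coded 0/1
    control   = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1]
    print(f"t = {t_statistic(treatment, control):.2f}")  # near zero suggests balance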

IIS In-Depth Implementation Analysis. Data collected for the implementation analyses will provide rich material to contextualize the analysis of program impacts. Data will be obtained from the following four sources: (1) individual and group interviews with program developers, program leaders and staff, and program partners and other stakeholders (Instrument 7); (2) an online survey of frontline staff and supervisors (Instrument 8); (3) group interviews with participating youth (Instrument 9); and (4) program attendance data (Instrument 10). Through these data collection efforts, the study will document the program context in each site, the planned intervention, the implementing organization, other organizational partners participating in implementation, implementation systems, youths' program dosage, and youths' experiences and satisfaction with the programs.

The data will serve two main purposes. First, the information will enable the study team to produce clear, detailed descriptions of each intervention that is evaluated and the counterfactual in each site. This documentation is critical for understanding the meaning of impact estimates. Second, the data will be used to assess fidelity of implementation and the quality of program delivery. This information is essential for determining whether the interventions were implemented well and whether the evaluation provided a good test of each site’s intervention.

A3. Use of Information Technology to Reduce Burden

Performance Analysis Study. To comply with the Paperwork Reduction Act of 1995 (Pub. L. 104-13) and to reduce CPREP grantee burden, ACF is streamlining the performance data reporting process and the generation of reports for CPREP grantees, just as it has done for state and tribal PREP grantees. The performance data reporting process for state and tribal grantees was approved March 12, 2013 (OMB Control Number 0970-0398).

ACF will accomplish this by (1) providing common data element definitions across all PREP grantees and program models, (2) collecting those data in a uniform manner from all PREP grantees through the PREP reporting system, and (3) using the PREP reporting system to calculate common performance measures across all PREP grantees and program models. Using the PREP reporting system will reduce reporting burden and minimize PREP grantee costs related to implementing the reporting requirements.

Impact and In-Depth Implementation Study.

IIS Impact Analysis. The data collection plan for the IIS follow-up surveys reflects sensitivity to issues of efficiency, accuracy, and respondent burden. Wherever possible, there will be a group administration of a self-administered pencil and paper survey instrument (PAPI). The advantages of PAPI over other data collection approaches, such as laptops or personal digital assistants (PDAs), are that it enables respondents to set their own pace (allowing for more accurate responses to sensitive questions); reduces costs; and simplifies administration logistics. Studies have shown no difference between PAPI and computer-assisted self-interviewing (CASI) in reports of most measures of male-female sexual activity, including reports such as ever having had sexual intercourse, recent sexual activity, number of partners, condom use, and pregnancy.4,5,6,7,8,9 This method is also consistent with other national youth surveys (for example, the National Youth Risk Behavior Survey) and the ongoing Evaluation of Adolescent Pregnancy Prevention Approaches (PPA), sponsored by the Office of Adolescent Health (OAH) within HHS.

In those instances in which the survey must be administered outside a group-based setting, respondents will be surveyed via telephone. For example, in HFSA, the structure of the home visiting program does not provide a natural group setting for survey administration. Therefore, the follow-up surveys will all be conducted via computer assisted telephone interviewing (CATI). Telephone interviewing is more cost efficient than CASI and has been used successfully on other teen pregnancy prevention evaluations, including the ongoing federal PPA study for OAH. In PPA, telephone interviewing is being used in several evaluation sites, including one site serving pregnant and parenting teens, similar to the HFSA site in PREP. In PPA, telephone interviewing has so far yielded response rates over 80 percent and shown no evidence of underreporting of sexual risk behaviors or other key outcome measures.


IIS Implementation Analysis. The data collection associated with two of the implementation instruments, the semi-structured staff interviews and the youth focus groups, will be conducted in person by the data collection team, without the use of information technology. The staff survey will be administered via the web and is expected to take no longer than 30 minutes to complete. The web instrument will offer the easiest means of providing data: because it will be programmed to automatically skip questions not relevant to the respondent, it will reduce respondent burden. The instrument will also allow respondents to complete the survey at a time convenient to them, without the risk of losing a paper questionnaire. Since the survey instrument will automatically advance to the next appropriate question based on a respondent's answers, it will also provide high-quality data. If respondents are unable to complete the survey in one sitting, they may save their place and return to the questionnaire at another time, further reducing burden. In addition to the web instrument, participants may request a paper questionnaire or receive telephone assistance in completing the survey from the contractor's site liaison.

For program attendance data, sites will be able either to submit an extract from their existing information systems or to use a spreadsheet to facilitate data entry (Instrument 10), whichever method is least burdensome to them. The spreadsheet has been designed based on experience from prior studies with similar types of grantees and staff. As such, it is flexible and easy to use, while ensuring the quality of the data collected. A sketch of how such records could be aggregated appears below.
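As an illustration of how such attendance records could feed the dosage and participation measures described above, the sketch below tallies each youth's share of sessions attended. The record layout and values are hypothetical, not the actual Instrument 10 spreadsheet format.

    # Sketch of rolling session-level attendance up into per-youth dosage.
    # Records are hypothetical (youth_id, session_number, attended) rows.

    from collections import defaultdict

    records = [
        (101, 1, True), (101, 2, True), (101, 3, False),
        (102, 1, True), (102, 2, False), (102, 3, False),
    ]
    TOTAL_SESSIONS = 3

    attended = defaultdict(int)
    for youth_id, _session, was_present in records:
        attended[youth_id] += was_present  # True counts as 1

    for youth_id, count in sorted(attended.items()):
        share = count / TOTAL_SESSIONS
        print(f"youth {youth_id}: {count}/{TOTAL_SESSIONS} sessions ({share:.0%})")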

A4. Efforts to Identify Duplication and Use of Similar Information

ACF has carefully reviewed the information collection requirements for PREP to avoid duplication with either existing studies or other ongoing federal teen pregnancy prevention evaluations and believes that the PREP Evaluation complements, rather than duplicates, the existing literature and the other ongoing federal teen pregnancy prevention evaluations.

As background, the other federal teen pregnancy prevention-related evaluations currently in the field are (1) the Evaluation of Adolescent Pregnancy Prevention Approaches, sponsored by the Office of Adolescent Health within HHS; (2) the Teen Pregnancy Prevention Replication Study, also sponsored by the Office of Adolescent Health within HHS; and (3) the Evaluation of Community-Based Approaches, sponsored by the Centers for Disease Control and Prevention.

Each of these three evaluations has a specific focus. The Evaluation of Adolescent Pregnancy Prevention Approaches is focused on testing promising and innovative new models for reducing teen pregnancy. The Teen Pregnancy Prevention Replication Study is focused on the testing of evidence-based models for reducing teen pregnancy (which are being scaled up through the Teen Pregnancy Prevention Program administered by the HHS Office of Adolescent Health). The Evaluation of Community-Based Approaches is focused on testing community saturation models for reducing teen pregnancy.

Although the information from these other federal evaluations will increase understanding of reducing teenage sexual risk behavior, the focus of the PREP Evaluation is different from the foci of the other three federal evaluations. Specifically, ACF believes that the PREP evaluation complements the other evaluations by providing the following unique opportunities:

  • Opportunity to learn about using a state formula grant to scale up evidence-based programs. The PREP Evaluation will allow us to learn about both the opportunities and the challenges of scaling up evidence-based teen pregnancy prevention programs through a state formula grant process (as opposed to the competitive discretionary grant process being used for the Teen Pregnancy Prevention Program). It is the only federal evaluation to do so.

  • Opportunity to understand the special components of the PREP program. The PREP Evaluation will help us to understand the unique components of the programs funded through PREP, such as the adulthood preparation topics which are being incorporated in the teen pregnancy prevention programming funded through PREP. These components are not part of the other teen pregnancy prevention models being evaluated.

  • Opportunity to test programs being implemented with high-risk populations. In recruiting and selecting sites for the impact evaluation component of the PREP Evaluation, the evaluation team is targeting programs implemented with high-risk and vulnerable populations, such as foster care youth, homeless youth, and youth in the juvenile justice system (although a range of programs is being considered for the impact evaluation). These high-risk groups, which are a priority population of interest to ACF, are currently underrepresented in the teen pregnancy prevention literature and are not the focus of other ongoing federal teen pregnancy prevention evaluations.

In addition, the evaluation team will also take steps to avoid duplication across the different components of the evaluation. For example, data collected through the PAS Participant Entry Survey are also included in the IIS baseline survey. To avoid duplication of data collection among youth enrolled in programs selected for inclusion in the IIS, these youth will complete only the PREP baseline survey at program entrance. Participant entry data required for submission via the PREP reporting system will be obtained from these baseline surveys.

A5. Impact on Small Businesses

Programs in some sites may be operated by community-based organizations. The data collection plan is designed to minimize burden on such sites by providing staff from Mathematica Policy Research to manage the group administered data collection for the IIS. For respondents who do not complete the survey in the group setting, Mathematica will conduct a telephone data collection, thus minimizing requirements for extensive “sample pursuit” by site staff.

A6. Consequences of Not Collecting the Information/Collecting Less Frequently

Performance Analysis Study. The Government Performance and Results Act (GPRA) requires federal agencies to report annually on measures of program performance. Therefore, it is essential that all PREP grantees, including CPREP grantees, report the performance data described in this ICR to ACF on an annual basis. The information requested in this ICR for performance management purposes is the same as that approved by OMB for state and tribal grantees on March 12, 2013 (OMB Control Number 0970-0398). Further, collection and reporting of data for performance measurement is a requirement of all grantees, as stated in the PREP funding opportunity announcements for the state formula grants, as well as the tribal and CPREP funding opportunity announcements.

Selecting only a subset of the Competitive PREP grantees from which to collect performance measure data would not allow us to meet these goals. One of the key goals of this portion of the project is to promote accountability. Collecting performance measure data allows us to ensure that grantees are delivering the agreed-upon services to clients in a timely manner and consistent with PREP legislation (e.g., with regard to adulthood preparation subjects). Without this information from all grantees, we would be unable to ascertain whether each grantee was following the guidelines of the program. By acquiring performance measurement data from all CPREP grantees, we are able to ensure accountability for each CPREP grantee (and to intervene with assistance if necessary).


Program improvement is another major goal. Programming varies considerably among grantees; for example, numerous grantees may administer the same curriculum but have very different levels of program attendance, and such variation in attendance is likely to produce very different participant outcomes. It is important to track performance data for all CPREP grantees because we are looking to improve aspects of program implementation, such as fidelity, for all CPREP grantees.


Moreover, the CPREP grantees will be included only in the PAS portion of the project (not the DIS or IIS portions). Therefore, it is important to obtain PAS data from all CPREP grantees, since we will not learn anything from them through the other components of the evaluation.



Impact and In-Depth Implementation Study. Outcome data are essential to conducting a rigorous evaluation of PREP programs supported under Public Law 111-148. Without outcome data, we cannot estimate program effectiveness.

Implementation data also are essential to conducting a rigorous evaluation of pregnancy prevention programs. Data collection early in program implementation is crucial for documenting site implementation plans and early program experiences, while data collection late in program implementation is essential for learning about actual service delivery and unplanned adaptations, fidelity to plans, participant engagement, and changes in program context during the evaluation period. Without implementation data, we lose the opportunity to document the evolution of program implementation during the evaluation and provide lessons based on the experiences of the sites. Collecting implementation data less frequently would either make it impossible to assess fidelity of program implementation or require reliance on program documents and respondent recall to document program implementation plans.

A7. Special Circumstances

There are no special circumstances for the proposed data collection efforts.

A8. Federal Register Notice and Consultation Outside the Agency

The 60-day Federal Register Notice was posted on February 5, 2013. No comments were received. A copy of the 60-day Federal Register Notice is included in Attachment G.

The names and contact information of the persons consulted in the drafting and refinement of the PAS and IIS instruments are found in Attachment H.

A9. Payments to Respondents

Performance Analysis Study. No payments to CPREP respondents are proposed for collection of data for the PAS.

Impact and In-Depth Implementation Study. For the IIS, gift cards will be provided to study participants in appreciation of their participation in the study. These gift cards are important because many of our respondents are members of hard-to-reach populations, such as pregnant and parenting teens or youth aging out of foster care. In addition, our surveys include highly sensitive questions and thus impose additional burden on respondents. Research has shown that incentives are effective at increasing response rates for populations similar to participants in PREP programs.10,11,12 Research also suggests that providing an incentive for earlier surveys may contribute to higher response rates for subsequent surveys.13 Therefore, providing a modest gift of appreciation at the first follow-up may reduce attrition for the second follow-up data collection.

Table A9.1 provides a summary of the gift cards to be provided to participants for the IIS data collection. For the school-based or other group administrations, a $15 gift card will be provided to participants completing the first follow-up survey and a $20 gift card will be provided to participants completing the second follow-up survey. For participants who complete the survey by phone, either because group administration is not feasible or because they are not able to attend a group administration, a $20 gift card will be provided to those completing the first follow-up survey and a $25 gift card will be provided to those completing the second follow-up survey. Slightly larger gifts are offered to respondents who complete surveys outside of group administration because of the additional burden associated with phone administration, which requires greater initiative and cooperation on the part of the respondent, as well as additional time outside of school or their ordinary day. Compared with the first follow-up, slightly larger gifts are offered for the second follow-up surveys in both group and phone administration in order to ensure high response rates: attrition from surveys tends to increase over time due to the mobility of participants and study fatigue, and the typically lower response rates for second follow-ups increase the value of each response, making slightly higher incentives cost-effective. For youth who participate in a focus group, a $25 gift card will be provided as a token of appreciation for the time commitment associated with their participation.

Table A9.1. Thank You Gifts for the IIS Data Collections

| Type of Administration | Length of Activity (minutes) | First Follow-up | Second Follow-up |
| --- | --- | --- | --- |
| Group Administration | 45 minutes | $15 gift card | $20 gift card |
| Phone Administration | 45 minutes | $20 gift card | $25 gift card |
| Focus Group | 90 minutes | $25 gift card | Not applicable |



A10. Assurance of Confidentiality

Performance Analysis Study. The assurance of confidentiality for CPREP grantees will be identical to that for state and tribal PREP grantees, as approved by OMB for state and tribal grantees on March 12, 2013 (OMB Control Number 0970-0398).

CPREP grantees will enter all PAS performance measure data into a national reporting system that will be developed and maintained by RTI International. The PREP performance measure reporting system is designed to ensure the security of data that are maintained in the system. Electronic data are stored in a location within the RTI network that provides the appropriate level of security based on the sensitivity or identifiability of the data. Further, all data reported by CPREP grantees related to program participants will be aggregated; no personal identifiers or data on individual participants will be submitted to ACF. Reports generated by the system will present data in aggregate form only.

System users designated by the individual CPREP grantees will be assigned user names and passwords that will grant them limited access to the PREP reporting system. The database server, located at RTI International, will be accessible only to authorized users. Electronic communications will occur via a secure Internet connection. All transmissions will be encrypted with 128-bit encryption through secure socket layers (SSL) and verified by VeriSign®, a leading SSL certificate authority.
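To illustrate the kind of encrypted, aggregate-only submission described above, a minimal sketch follows. The endpoint URL, payload fields, and token are hypothetical placeholders, not the actual reporting system interface.

```python
# Illustrative sketch only: the endpoint, payload fields, and token shown
# here are hypothetical stand-ins, not the actual PREP reporting system.
import requests

# Aggregate counts only; no personal identifiers or individual-level records.
aggregate_payload = {
    "grantee_id": "CPREP-EXAMPLE-001",  # hypothetical identifier
    "reporting_period": "2014",
    "participants_served": 1250,        # aggregate count
    "program_completers": 1000,         # aggregate count
}

try:
    # requests verifies the server's SSL/TLS certificate by default, so the
    # submission is encrypted in transit, consistent with the SSL-secured
    # transmission described above.
    response = requests.post(
        "https://reporting.example.invalid/api/submit",  # placeholder URL
        json=aggregate_payload,
        headers={"Authorization": "Bearer <assigned-user-token>"},  # placeholder
        timeout=30,
    )
    response.raise_for_status()
except requests.RequestException as err:
    # The placeholder endpoint does not exist; a real submission would be
    # made only by an authorized user of the actual reporting system.
    print(f"Submission not sent: {err}")
```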

To further ensure data security, all RTI project staff are required to adhere to strict standards and to sign security agreements as a condition of employment on the PREP project. All data files on multi-user systems will be under the control of a database manager, with access limited to project staff on a “need-to-know” basis only.

Participant-level data required for PAS reporting will be gathered by grantees and entered in aggregated form into the national reporting system. CPREP grantees will be responsible for ensuring the privacy of participant-level data and for securing institutional review board (IRB) approvals to collect these items, as necessary. Some CPREP grantees may need IRB approval based upon their local jurisdiction mandates. Therefore, we are informing CPREP grantees that they should determine whether they need IRB approval and follow the proper procedures of their locality. CPREP grantees will be required to inform participants of the measures being taken to protect the privacy of their answers.

These data will be reported by CPREP grantees only as aggregate counts. There will be no means by which individual responses can be identified by ACF, RTI International, Mathematica Policy Research, or other end-users of the data.

Impact and In-Depth Implementation Study. Mathematica Policy Research has secured IRB approval for the initial study design and will be responsible for securing any additional local IRB approvals for each site, and for other data collection instruments as necessary, prior to information collection.

IIS Impact Analysis. Prior to collecting baseline data, the evaluation team will seek consent from a parent or legal guardian if the respondent is a minor, or from respondents themselves if they are 18 or older. For the follow-up surveys, the evaluation team will seek assent from respondents before data are collected. The consent and assent forms were included in the earlier OMB package that included the baseline survey, which has already been cleared. The assent forms state that answers will be kept private and will not be seen by anyone outside of the study team, that participation is voluntary, and that respondents may refuse to participate at any time without penalty. Participants will be told that, to the extent allowable by law, individual identifying information will not be released or published; rather, data will be published only in summary form with no identifying information at the individual level.

Trained Mathematica field staff will administer the follow-up surveys in group settings whenever possible. In HFSA, surveys will be administered by phone by trained Mathematica interviewers because youth are served individually in their homes. All field staff and interviewers are required to sign a confidentiality pledge when hired by Mathematica. On the day of a group survey administration, field staff will distribute an assent form to participants, providing them with a chance to opt out of the follow-up data collection should they want to do so (the assent form was included with the baseline OMB package). The survey administration protocol provides reassurance that the evaluation team takes the issue of privacy seriously. Participants will be informed that all of their answers will be kept private, that identifying information will be kept separate from their answers, and that no one outside of the study team will see their responses.

The questionnaire and outer packet envelope will have a label with a unique ID number; no identifying information will appear on the questionnaire or return envelope. Before turning completed questionnaires in to field staff, respondents will place them in blank return envelopes and seal them. This approach has been shown in research to yield the same reports of sexual activity as computer-assisted surveys in school settings, and a lower incidence of student concerns about privacy. Field staff are trained to keep all data collection forms in a secure location and are instructed not to share any materials with anyone outside of the study team. Completed surveys are immediately shipped via FedEx to Mathematica’s Survey Operations Center for receipting. Any forms with identifying information (assent forms) will be shipped separately from the surveys.

All electronic data will be stored in secure files, with identifying information kept in a separate file from survey and other individual-level data. Survey responses will be stored on a secure, password-protected computer shared drive.

In sites that rely on group administration of follow-up surveys, when group administration is not feasible for some youth (e.g., they have moved or miss the group administrations), telephone surveys will be conducted by interviewers who record respondents’ answers on a hard copy of the survey (paper-and-pencil interviewing, or PAPI). In sites where all survey administration will be by telephone, such as HFSA, computer-assisted telephone interviewing (CATI) will be used. In both telephone interviewing modes, prior to beginning the survey, the interviewer will read the privacy provisions of the study to the respondent, and the respondent will be given a chance to verbally opt out of the survey. As with the hard copies used for group administrations, no identifying information is attached to the questionnaire or entered into the computer data; only a unique study ID will be included.

IIS In-depth Implementation Analysis. Site and state staff participating in group or individual interviews will receive information about privacy protection when arrangements are made for meeting with them, and information about privacy will be repeated as part of the study field staff’s introductory comments during site visits. Site visit staff will be informed about privacy procedures during training and will be prepared to describe them and to answer questions raised by local program staff.

There will be a separate consent process for participation in youth focus groups. Youth under age 18 will need a signed parental consent form, as well as youth assent, for participation in a focus group. Youth 18 or older must provide consent to participate in a focus group. A copy of these forms is included as Attachment I. Focus group consent and assent forms state that answers will be kept private, that youths’ participation is voluntary, that they may refuse to participate, and that identifying information about them will not be released or published. The focus group consent forms also include additional language explaining the unique confidentiality risks associated with participation in a group interview.

All program attendance data will be transmitted with a unique identifier rather than personally identifying information. The unique identifier is necessary to support combining the program attendance data with outcome data. All electronic data will be stored in secure files.

A11. Justification for Sensitive Questions

A key objective of PREP programs is to prevent teen pregnancy through a decrease in sexual activity and/or an increase in contraceptive use. Because this is the primary focus of the programs, some questions on the PAS Participant Entry and Exit Surveys and the IIS Follow-up Surveys are necessarily related to these sensitive issues.

Performance Analysis Study. Table A11.1 provides a list of sensitive questions that will be asked on the participant entry and exit surveys and the justification for their inclusion.

Table A11.1. Summary of Sensitive Questions to be Included on the Participant Entry and Exit Surveys and Their Justification

| Topic | Justification |
| --- | --- |
| Sexual orientation (Instrument 1 - Participant Entry Survey - question 6) | ACF has a strong interest in improving programming that serves lesbian, gay, bisexual, transgendered, and questioning (LGBTQ) youth. This question will provide documentation of the proportion of youth being served by CPREP nationwide that are part of this subpopulation. |
| Sexual activity, incidence of pregnancy, and contraceptive use (Instrument 1 - Participant Entry Survey - questions 9-15) | Intentions to engage in sexual activity, the level of sexual activity, incidence of pregnancy, and contraceptive use are all central to the PREP evaluation. Collecting this information will provide documentation of the characteristics of the population served by CPREP and the degree to which they engage in risky behavior. |
| Participants’ perceptions of PREP’s effects on their sexual activity and contraceptive use (Instrument 2 - Participant Exit Survey - questions 8a-8d) | Reducing risky adolescent sexual behavior and increasing contraceptive use for those who are sexually active are among the central goals of CPREP- and PREP-funded programs. These measures of perceived impact will not allow ACF to determine program effects; that is being done through the impact analysis component of the evaluation. However, they will serve as important “customer service” measures, which will help ACF to understand how youth perceive the effectiveness of the programs and to assess whether youth’s perceptions improve over time, as the quality of the implementation of the programs improves. |

To address concerns about asking questions about sexual behavior and sexual orientation of younger youth at program entry (before they have been through the program), CPREP grantees will not be required to collect this information from youth in middle schools or youth younger than age 14 in non-school settings. In addition, CPREP grantees will inform program participants that they may refuse to answer any or all of the questions in the entry and exit surveys. This process is identical to the one approved for the PREP grantees on March 12, 2013 (OMB Control Number 0970-0398).

Impact and In-Depth Implementation Study.

IIS Impact Analysis. Table A11.2 provides a list of the sensitive questions found on the PREP follow-up surveys, along with a justification for their inclusion.

Sensitive questions are drawn from youth surveys and evaluations that have been fielded successfully in the past (see Attachments C and D). The items have been carefully selected, and we have been guided by past experience in determining whether the benefits of each measure outweigh concerns about heightened sensitivity among sample members, parents, and program staff to specific issues. Although these questions are sensitive, they are commonly and successfully asked of youth similar to those who will be in the PREP study.

Table A11.2. Summary of Sensitive Questions to be Included on the IIS Follow-Up Surveys and Their Justification

| Topic | Justification |
| --- | --- |
| Sexual orientation (Instrument 5 - Master Follow-up Survey - question 3.5; Instrument 11 - NY Baseline Survey - question 3.5; Instrument 12 - NY Follow-up Survey - question 3.5) | ACF has a strong interest in improving programming that serves lesbian, gay, bisexual, transgendered, and questioning (LGBTQ) youth. This question will provide documentation of the proportion of youth in in-depth study sites that are part of this subpopulation. |
| Sexual activity, incidence of pregnancy and STDs, and contraceptive use (Instrument 5 - Master Follow-up Survey - questions 4.12, 5.1 in B1 and B2; 5.2-5.21 in B1; 6.1-6.7 in B1; 6.1-6.4 in B2; 7.6.f in B1 and B2; Instrument 6 - HFSA Follow-up Survey - questions 5.1-5.14 and 6.1-6.12; Instrument 11 - NY Baseline Survey - questions 5.1-5.6 and 5.8-5.13; Instrument 12 - NY Follow-up Survey - questions 5.1-5.8 and 5.10-5.20) | Sexual activity, incidence of pregnancy and STDs, and contraceptive use are all key outcomes for the evaluation. The majority of these questions are asked only of youth who report being sexually active. |
| Intentions regarding sexual activity (Instrument 5 - Master Follow-up Survey - question 5.13 in B2; Instrument 11 - NY Baseline Survey - question 5.7; Instrument 12 - NY Follow-up Survey - question 5.9) | Intentions regarding engaging in sex and other risk-taking behaviors are extremely strong predictors of subsequent behavior (Buhi and Goodson, 2007) and will be an important mediator predicting behavior change. |
| Drug and alcohol use (Instrument 5 - Master Follow-up Survey - questions 7.1-7.5 in B1 and B2; Instrument 6 - HFSA Follow-up Survey - questions 7.1-7.5; Instrument 11 - NY Baseline Survey - questions 7.1-7.5; Instrument 12 - NY Follow-up Survey - questions 7.1-7.5) | There is a substantial body of literature linking various high-risk behaviors of youth, particularly drug and alcohol use, sexual intercourse, and risky sexual behavior. The effectiveness of various program strategies is expected to differ for youth who are and are not experimenting with or using drugs and alcohol (Tapert et al., 2001; Li et al., 2001; Boyer et al., 1999; Fergusson and Lynskey, 1996; Sen, 2002; Dermen et al., 1998; Santelli et al., 2001). |



In addition, the Master Follow-up Survey instrument is designed so that only sexually active youth will receive most of these sensitive questions. The instrument is designed with three parts, Part A, Part B1, and Part B2. All participants will complete Part A. At the end of Part A, they will be directed to complete either Part B1 (for youth who report being sexually active) or Part B2 (for youth who report they are not sexually active). Many of the sensitive items related to sexual activity will be included only in Part B1 and thus asked only of sample members who report being sexually active.

The design will vary when the survey is administered to groups of adolescents who are known to be sexually active, such as the pregnant and parenting teens in HFSA. In sites serving such populations, Part B2 is eliminated, and Parts A and B1 are combined into one survey, since there is no decision point regarding which Part B the respondent should use. This structure and variation have been used successfully in other federally funded teen pregnancy prevention evaluations, such as the Evaluation of the Title V, Section 510 Abstinence Education Program and the Evaluation of Adolescent Pregnancy Prevention Approaches. In addition, the design of the survey will vary when it is administered to participants in the New York site involved in the IIS, due to the low literacy level of the program participants and the need for a simplified instrument. In this site, Parts A and B are combined into one streamlined survey, and participants will follow a skip pattern within it in order to answer the questions that are appropriate for them. The vast majority of participants in this site (approximately 80 percent) are anticipated to be sexually active.
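The following is a minimal sketch of the routing logic just described; the function and field names are illustrative and are not drawn from the instruments themselves.

```python
# Minimal sketch of the Part A -> Part B1/B2 routing described above.
# Function and field names are illustrative, not the instruments' own.

def assign_part_b(reports_sexual_activity: bool,
                  site_serves_known_sexually_active: bool) -> str:
    """Return which Part B a respondent should complete."""
    if site_serves_known_sexually_active:
        # In sites such as HFSA, Parts A and B1 are combined into one
        # survey, so there is no routing decision to make.
        return "B1 (combined with Part A)"
    # Otherwise, the decision point at the end of Part A routes the youth.
    return "B1" if reports_sexual_activity else "B2"

# Examples:
print(assign_part_b(True, False))   # -> "B1": sensitive items asked
print(assign_part_b(False, False))  # -> "B2": most sensitive items skipped
print(assign_part_b(True, True))    # -> combined instrument
```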

IIS In-Depth Implementation Analysis. There are no sensitive questions in the IIS implementation instruments. The questions focus on program experiences and context, and do not ask participants about their sexual activity or other risk-taking behavior.

A12. Estimates of the Burden of Data Collection

Tables A12.1 through A12.3 provide the estimated annual reporting burden calculations for the Performance Analysis Study and the Impact and In-Depth Implementation Study. The estimates are broken out separately for youth participants (Table A12.1) and for CPREP and PREP grantees and staff (Table A12.2). Table A12.3 provides a summary of the burden hours and costs approved to date, as well as those requested in this ICR.

1. Annual Burden for Youth Participants

Performance Analysis Study. Table A12.1 presents the hours and cost burden for the CPREP participant entry and exit surveys. The number of participants completing these surveys is based on a review of CPREP grantee applications. The amount of time it will take youth to complete the entry and exit surveys is estimated based on pretest results for each of these instruments with nine youth. The cost of this burden is estimated by assuming that a portion of the youth served by the program (25 percent for the entry survey and 10 percent for the exit survey, as explained below) will be age 18 or older and assigning a value to their time of $7.25 per hour, the federal minimum wage.

CPREP grantees are expected to serve approximately 60,420 participants over the three-year OMB clearance period. However, grantees will not collect participant entry surveys among CPREP program participants for the first grant year and for half of the second year (i.e., data collection beginning in February 2014), which reduces the estimated number of participants over the three-year OMB clearance period to 30,211. Similar to the approach taken to reduce burden for state and tribal PREP grantees, the participant entry survey will not be administered to middle school youth in school-based settings. Once we exclude those participants and apply a 95 percent response rate to the remaining participants, we anticipate 17,673 respondents to the entry survey (18,603 x 0.95 = 17,673).14 Based on pretesting of this instrument, the participant entry survey is estimated to take 5 minutes (0.08333 hour) to complete. The total burden for this data collection is estimated to be 17,673 x 0.08333 = 1,473 hours. It is estimated that 25 percent of respondents (4,418) will be 18 or older at the time of the participant survey; therefore, the total burden for youth aged 18 or older is (4,418 x 0.08333) 368 hours.15 The annualized cost of the burden is estimated to be (368/3) x $7.25 = $892.



Table A12.1. Estimate of Burden and Cost for the PREP Evaluation for Youth Participants


(Columns 2 through 9 report totals over the three-year clearance period; the final four columns are annualized.)

| Instrument (#) | Total Number of Respondents (a) | Responses per Respondent | Avg. Burden Hours per Response | Total Burden Hours | Respondents Age 18 or Older (b) | Responses per Respondent (18+) | Avg. Burden Hours per Response (18+) | Total Burden Hours (18+) | Annual Burden Hours (c) | Annual Burden Hours, 18+ (c) | Avg. Hourly Wage (18+) | Total Annualized Cost (c) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Performance Analysis Study | | | | | | | | | | | | |
| 1. Entry Survey | 17,673 | 1 | 0.08333 | 1,473 | 4,418 | 1 | 0.08333 | 368 | 491 | 123 | $7.25 | $892 |
| 2. Exit Survey | 22,961 | 1 | 0.16667 | 3,827 | 2,296 | 1 | 0.16667 | 383 | 1,276 | 128 | $7.25 | $928 |
| PAS Total | | | | | | | | | 1,767 | 251 | | $1,820 |
| Impact and In-Depth Implementation Study | | | | | | | | | | | | |
| 5/6/12. First Follow-Up Survey | 4,800 | 1 | 0.75 | 3,600 | 480 | 1 | 0.75 | 360 | 1,200 | 120 | $7.25 | $870 |
| 5/6/12. Second Follow-Up Survey | 2,250 | 1 | 0.75 | 1,688 | 225 | 1 | 0.75 | 168 | 563 | 56 | $7.25 | $406 |
| 9. Focus Group Discussion Guide | 320 | 1 | 1.5 | 480 | 32 | 1 | 1.5 | 48 | 160 | 16 | $7.25 | $117 |
| IIS Total | | | | | | | | | 1,923 | 192 | | $1,393 |

(a) Youth participating in programming offered in middle schools will complete the exit survey but not the entry survey.

(b) Twenty-five percent of youth completing the entry survey are assumed to be age 18 or older. Ten percent of youth completing the exit survey are assumed to be age 18 or older.

(c) All burden estimates are annualized over three years.

It is estimated that about 20 percent of participants will drop out of the program prior to completion, leaving approximately 24,169 (30,211 x 0.80 = 24,169) participants at the end of the program.16 Of those, we expect 95 percent, or 22,961 participants, to complete the participant exit survey. Based on pretesting, the exit survey is estimated to take youth 10 minutes (0.16667 hour) to complete. The total burden for this data collection is estimated to be 22,961 x 0.16667 = 3,827 hours. It is estimated that 10 percent of respondents (2,296) will be aged 18 or older at the time of the exit survey; therefore, the total burden for youth aged 18 or older is (2,296 x 0.16667) 383 hours.17 The annual cost of the burden is estimated to be (383/3) x $7.25 = $928.

Therefore, the total annual burden for youth participants associated with the PAS instruments included in this package is 1,767 hours (491 + 1,276), and the total annual cost is $1,820 ($892 + $928).
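As a check on the figures above, the following short calculation reproduces the PAS youth burden arithmetic; all inputs are taken directly from the text, and the same pattern applies to the IIS estimates below.

```python
# Reproduces the PAS youth survey burden arithmetic described above, using
# the figures stated in the text; rounding follows the tables.

MIN_WAGE = 7.25   # federal minimum wage, dollars per hour
YEARS = 3         # clearance period used for annualizing

# Entry survey: of 30,211 eligible participants, excluding middle school
# youth in school settings leaves 18,603; a 95 percent response rate applies.
entry_respondents = round(18_603 * 0.95)                       # 17,673
entry_hours = round(entry_respondents * 5 / 60)                # 5-minute survey -> 1,473
entry_hours_18plus = round(entry_respondents * 0.25 * 5 / 60)  # 25% age 18+ -> 368
entry_cost = round(round(entry_hours_18plus / YEARS) * MIN_WAGE)  # 123 hrs/yr -> $892

# Exit survey: 80 percent complete the program, then 95 percent respond.
completers = round(30_211 * 0.80)                              # 24,169
exit_respondents = round(completers * 0.95)                    # 22,961
exit_hours = round(exit_respondents * 10 / 60)                 # 10-minute survey -> 3,827
exit_hours_18plus = round(exit_respondents * 0.10 * 10 / 60)   # 10% age 18+ -> 383
exit_cost = round(round(exit_hours_18plus / YEARS) * MIN_WAGE)    # 128 hrs/yr -> $928

annual_hours = round(entry_hours / YEARS) + round(exit_hours / YEARS)  # 491 + 1,276 = 1,767
annual_cost = entry_cost + exit_cost                                   # $1,820
print(annual_hours, annual_cost)
```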

Impact and In-Depth Implementation Study. It is expected that 6,000 youth will be enrolled in the evaluation sample across the four to five evaluation sites for the IIS. Sample intake will take place over three years, for an average of 2,000 participants per year. The expected response rate for the IIS first follow-up survey is 80 percent, for an average of 1,600 first follow-up survey completions per year, with 4,800 total responses across the three years. The expected response rate for the second follow-up survey is 75 percent; however, since second follow-ups will occur 20 to 24 months after baseline, we estimate that only half of the second follow-up surveys will be conducted during the three years covered by this OMB package. Therefore, an average of 750 second follow-up surveys are expected to be completed per year, with 2,250 total responses across the three years. Based on previous experience with similar questionnaires, it is estimated that it will take youth 45 minutes (0.75 hour) to complete the follow-up surveys, on average. The total annual burden for the first follow-up survey is estimated to be 1,200 hours (1,600 respondents per year x 0.75 hours of burden). The annual cost of this burden is estimated to be $870 (120 hours for youth over 18 x $7.25). The total annual burden for the second follow-up survey is estimated to be 563 hours (750 respondents per year x 0.75 hours of burden). The annual cost of this burden is estimated to be $406 (56 hours for youth over 18 x $7.25).

It is expected that 320 youth across all four to five IIS sites will participate in a focus group. The focus group is expected to take 1.5 hours, yielding a burden estimate of 320 x 1.5 = 480 hours. It is estimated that 10 percent of the sample will be aged 18 or older and have a wage rate of $7.25 (see justification above). Annualizing these estimates over three years yields an annual burden of 160 hours ((320 x 1.5)/3), annual burden hours of 16 for youth over aged 18 (160 x .10), and an annual cost estimate of $117 (16 x $7.25).

Therefore, the total annual burden for youth participants associated with the IIS instruments included in this package is 1,923 hours (1,200 + 563 + 160), with a total annual cost of $1,393 ($870 + $406 + $117).

Note. The burden associated with Instrument 11, the NY Baseline Survey, is not discussed in this package. The burden for this instrument has already been approved and was included in the package that OMB approved in March 2013. The changes to this survey do not change the burden.

2. Annual Burden for Grantees and Implementation Sites

Performance Analysis Study. The 37 CPREP grantees will report performance measure data into a national reporting system developed for the PREP Performance Analysis Study. They will gather this information with the assistance of their implementation sites (estimated to be 300 across all grantees).18 The grantee and implementation site data collection and reporting efforts described below are record-keeping tasks.

Total Annual Burden and Cost for Grantees

Once per year, for two years, all 37 CPREP grantees will be required to submit all of the required performance measures into the national system. The time for a designated CPREP grantee administrator to aggregate the data from each of the grantee’s implementation sites and submit all of the required data into the system is estimated to be 10 hours for the first (half) year of data collection and 20 hours for the second (full) year of data collection per grantee. Grantee administrators will also spend an estimated 4 hours each year collecting information at the grantee level that pertains to grantee structure, cost, and support for program implementation. The Performance Reporting System Data Entry Form includes all of the required data elements that the grantee will collect, aggregate, and submit into the national system (see Instrument 3). The total burden hours are (37 x [(10+4)+(20+4)]) = 1,406 hours. Annualizing this estimate over the three-year period yields an annual burden of 469 hours. The cost burden for this activity is estimated to be 469 hours times an hourly wage of $20.76, for a total annual cost of $9,736. This hourly wage rate represents the mean hourly wage for community and social service occupations (National Occupational Employment and Wage Estimates, Bureau of Labor Statistics, Department of Labor, May 2010).

Total Annual Burden and Cost for Implementation Sites

The 300 estimated program implementation sites will collect program implementation data to support the Performance Analysis Study twice over the requested three years of clearance (see Instrument 4). They will record youth program attendance at sites operating during out-of-school time (estimated at an average of 1.5 hours for the first (half) year of data collection and 3 hours for the second (full) year of data collection for each implementation site facilitator to complete19) and will record the program session hours delivered at each implementation site (estimated at 2.5 hours for the first (half) year of data collection and 5 hours for the second (full) year of data collection for one implementation site facilitator to complete). The total burden for this data collection activity is estimated to be (300 x [(1.5+2.5)+(3+5)]) = 3,600 hours, which yields an annual estimate of 1,200 hours. The cost burden for this activity is estimated to be 1,200 hours times an hourly wage of $20.76, for a total annual cost of $24,912. This hourly wage rate represents the mean hourly wage for community and social service occupations (National Occupational Employment and Wage Estimates, Bureau of Labor Statistics, Department of Labor, May 2010).

Table A12.2. Estimate of Burden and Cost for the Grantees and Implementation Sites


(Columns 2 through 5 report totals over the three-year clearance period; the final three columns are annualized.)

| Instrument (#) | Total Number of Respondents | Responses per Respondent | Avg. Burden Hours per Response | Total Burden Hours | Annual Burden Hours (a) | Avg. Hourly Wage | Total Annualized Cost (a) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Performance Analysis Study | | | | | | | |
| 3. Performance Reporting System Data Entry Form | 37 | 2 | 19 | 1,406 | 469 | $20.76 | $9,736 |
| 4. Implementation Site Data Collection Protocol | 300 | 2 | 6 | 3,600 | 1,200 | $20.76 | $24,912 |
| PAS Total | | | | | 1,669 | | $34,648 |
| Impact and In-Depth Implementation Study | | | | | | | |
| 7. Master List of Topics for Staff Interviews | 160 | 2 | 1 | 320 | 107 | $20.76 | $2,221 |
| 8. Staff Survey | 100 | 2 | 0.5 | 100 | 33 | $20.76 | $685 |
| 10. Program Attendance Data Collection Protocol | 90 | 12 | 0.25 | 270 | 90 | $20.76 | $1,868 |
| IIS Total | | | | | 230 | | $4,774 |

(a) All burden estimates are annualized over three years.

Impact and In-Depth Implementation Study. There is no grantee, sub-awardee, or implementation site burden associated with administration of the follow-up surveys. Data collectors from Mathematica Policy Research will be responsible for the survey data collection.

It is expected that across the four to five evaluation sites for the IIS, there will be a total of 20 grantee staff, 50 sub-awardee managers, and 90 implementation site staff. Each will be interviewed twice, once at each site visit. Interviews and group discussions with staff will average 1 hour in length. The web survey of implementation site staff will be administered twice and take 30 minutes to complete. Administrative data on program attendance will be collected by extracting data from program information systems, when available. In the event that no pre-existing administrative records system has the required attendance data, program attendance will be collected by implementation site staff and entered into the spreadsheet tool (Instrument 10). To estimate burden, we use an upper-bound estimate and assume that all 90 implementation site staff across the four to five IIS evaluation sites would need to take attendance using the spreadsheet tool. Based on prior studies, we estimate each quarterly submission will take an average of 15 minutes to complete. The total annual burden for all IIS data collection at the grantee and implementation site level is estimated to be 230 hours per year. Assuming a wage rate of $20.76 (see discussion above), the cost of this burden is estimated to be 230 hours x $20.76 = $4,774.

3. Overall Burden

Table A12.3 details the overall burden approved and requested for data collection associated with the PREP Multi-Component Evaluation. A total of 31,342 hours20 (and a cost of $421,189) has been approved thus far under the prior three ICRs for this project. A total of 5,589 hours (and a cost of $42,635) is requested in this ICR. If approved, the total annual approved burden for this project (i.e., the prior burden summed with the requested burden) will be 36,931 hours (and a cost of $463,824).

A13. Estimates of Other Total Annual Cost Burden to Respondents and Record Keepers

These information collection activities do not place any additional cost on respondents. ACF will provide grantees with access to the PREP reporting system that will be used for reporting the required CPREP performance management data.

This process is identical to the one that will be used to collect PAS data from PREP grantees, which was approved March 12, 2013 (OMB Control Number 0970-0398).

A14. Annualized Cost to Federal Government

Costs for previously-approved data collection. On November 6, 2011, OMB approved field data collection, which involved collecting data on various program models and assessing the feasibility of conducting a rigorous evaluation. Annualized costs for that effort are $216,625. On March 7, 2012, OMB approved data collection for the Design Survey. Annualized costs for that effort are $83,333.21 On March 12, 2013, OMB approved data collection from the PREP grantees for the Performance Analysis Survey. Annualized costs for that effort are $196,703. Also on March 12, 2013, OMB approved data collection for the baseline survey for the Impact and In-Depth Implementation Study. Annualized costs for that effort are $382,758.

Costs for proposed data collection. The estimated cost for the completion of the PAS for CPREP grantees is $296,729 over four years. The cost over the three years for the requested clearance is $222,547. The annual cost to the federal government is estimated to be $74,183.

The total cost for the IIS instrument development and data collection is $2,101,213. Because data collection will be carried out over three years, the estimated annualized cost to the government for IIS implementation and follow-up data collection is $700,404.

Total costs. If this proposed ICR is approved, the total annual cost to the federal government for this and all previously approved collections under the PREP Multi-Component Evaluation is $1,654,006 ($216,625 + $83,333 + $196,703 + $382,758 + $74,183 + $700,404).

Table A12.3. Estimate of Burden and Cost for the PREP Evaluation – Approved and Requested Burden



(Columns 3 through 6 report totals; the remaining columns are annualized.)

| Data Collection Instrument | Type of Respondent | Total Number of Respondents | Responses per Respondent (a) | Avg. Burden Hours per Response | Total Burden Hours | Annual Burden Hours | Annual Burden Hours, Age 18+ | Hourly Wage Rate | Total Annualized Cost |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Field Data Collection for Impact and In-Depth Implementation Study (Approved November 6, 2011) | | | | | | | | | |
| Discussion Guide for Use with Macro-Level Coordinators | Macro-Level Coordinators | 30 | 1 | 1 | 30 | 10 | N/A | $33.59 | $333.90 |
| Discussion Guide for Use with Program Directors | Program Directors | 60 | 2 | 2 | 240 | 80 | N/A | $27.21 | $2,176.80 |
| Discussion Guide for Use with Program Staff | Program Staff | 120 | 1 | 2 | 240 | 80 | N/A | $23.76 | $1,900.80 |
| Discussion Guide for Use with School Administrators | School Administrators | 210 | 1 | 1 | 210 | 70 | N/A | $35.54 | $2,487.80 |
| Design and Implementation Study (Approved March 7, 2012) | | | | | | | | | |
| Design Survey: Discussion Guide for Use with PREP State-Level Coordinators and State-Level Staff | State-Level Coordinators and State-Level Staff | 90 | 1 | 1 | 90 | 30 | N/A | $37.45 | $1,123.50 |
| Performance Analysis Study and Baseline Data (Approved March 12, 2013) | | | | | | | | | |
| Participant Entry Survey | PREP State and Tribal Participants | 105,309 | 1 | 0.08333 | 8,775 | 2,925 | 731 | $7.25 | $5,300.00 |
| Participant Exit Survey | PREP State and Tribal Participants | 133,722 | 1 | 0.16667 | 22,287 | 7,429 | 743 | $7.25 | $5,386.00 |
| Baseline Survey (includes the NY Baseline Survey) | PREP State and Tribal Participants | 5,700 | 1 | 0.75 | 4,275 | 1,425 | 143 | $7.25 | $1,037.00 |
| Performance Reporting System Data Entry Form | PREP State and Tribal Grantee Administrators | 195 | 1 | 24 | 4,680 | 1,560 | N/A | $21.35 | $33,306.00 |
| Sub-awardee Data Collection and Reporting | PREP State and Tribal Sub-Awardee Administrators | 1,050 | 1 | 18.6667 | 19,600 | 6,533 | N/A | $20.76 | $135,625.00 |
| Implementation Site Data Collection | PREP State and Tribal Site Facilitators | 4,200 | 1 | 8 | 33,600 | 11,200 | N/A | $20.76 | $232,512.00 |
| Subtotal: Burden Approved To-Date | | | | | | 31,342 | | | $421,189.00 |
| Performance Analysis Study (Currently Requested) | | | | | | | | | |
| Participant Entry Survey | CPREP Participants | 17,673 | 1 | 0.08333 | 1,473 | 491 | 123 | $7.25 | $892 |
| Participant Exit Survey | CPREP Participants | 22,961 | 1 | 0.16667 | 3,827 | 1,276 | 128 | $7.25 | $928 |
| Performance Reporting System Data Entry Form | CPREP Grantees | 37 | 2 | 19 | 1,406 | 469 | N/A | $20.76 | $9,736 |
| Implementation Site Data Collection Protocol | CPREP Implementation Sites | 300 | 2 | 6 | 3,600 | 1,200 | N/A | $20.76 | $24,912 |
| Impact and In-Depth Implementation Study (Currently Requested) | | | | | | | | | |
| First Follow-Up Survey | Participants | 4,800 | 1 | 0.75 | 3,600 | 1,200 | 120 | $7.25 | $870.00 |
| Second Follow-Up Survey | Participants | 2,250 | 1 | 0.75 | 1,688 | 563 | 56 | $7.25 | $406.00 |
| Focus Group Discussion Guide | Participants | 320 | 1 | 1.5 | 480 | 160 | 16 | $7.25 | $117.00 |
| Master List of Topics for Staff Interviews | State, Grantee, Sub-awardee, and Implementation Site Staff | 160 | 2 | 1 | 320 | 107 | N/A | $20.76 | $2,221.00 |
| Staff Survey | Implementation Site Staff | 100 | 2 | 0.5 | 100 | 33 | N/A | $20.76 | $685.00 |
| Program Attendance | Implementation Site Staff | 90 | 12 | 0.25 | 270 | 90 | N/A | $20.76 | $1,868.00 |
| Subtotal: Burden Currently Requested | | | | | | 5,589 | | | $42,635 |
| Estimated Total Annual Burden | | | | | | 36,931 | | | $463,824 |

(a) Number of responses over the three-year period.

A15. Explanation for Program Changes or Adjustments

OMB gave approval on March 7, 2012 for the Design Survey under the DIS (OMB Control No. 0970-0398). Approval for two data collection efforts and the associated instruments was received on March 12, 2013 (OMB Control No. 0970-0398): (1) collection of PREP performance measures for the Performance Analysis Study (PAS) through participant entry and exit surveys and the Performance Reporting System Data Entry Form for state and tribal grantees; and (2) collection of baseline data for the Impact and In-Depth Implementation Study (IIS) through the PREP baseline survey. We now seek approval for the data collections associated with the Performance Analysis Study for CPREP grantees and for the collection of youth follow-up data, staff interviews, a staff survey, and youth focus groups under the Impact and In-Depth Implementation Study. This request will increase the total burden requested for the PREP Evaluation under OMB Control No. 0970-0398.

A16. Plans for Tabulation and Publication and Project Time Schedule

1. Analysis Plan

This phase of the PREP Evaluation involves collecting performance measures data for CPREP grantees, outcome data in four to five sites that will be used for estimating program impacts, and implementation data in four to five sites to support understanding of program impacts.

Performance Analysis Study. The analysis plan for CPREP grantees is the same as for the state and tribal PREP grantees, which OMB approved on March 12, 2013 (OMB Control Number 0970-0398).

A major objective of the performance measure analysis will be to construct, for Congress, a picture of PREP implementation that includes state, tribal, and CPREP grantees. A basic set of descriptive statistics will be constructed separately for each type of grantee. These statistics, for example, will answer questions such as:

  • What programs were implemented, and for how many youth?

  • What are the characteristics of the population served?

  • To what extent were members of vulnerable populations served?

  • How fully did programs deliver their program models?

  • How many youth participated in most program sessions or activities?

  • How many entities are involved in delivering PREP programs?

  • How do grantees allocate their resources?

  • How do participants feel about the programs, and how do they perceive the programs’ effects on them?

  • What challenges do grantees and their implementation partners see in implementing PREP programs on a large scale?

Answers to these questions will help ACF understand whether, overall, PREP objectives are being met. For CPREP, the subject of this ICR, using the performance data for accountability requires constructing indicators for the same measures being collected for state and tribal grantees. Indicators at the CPREP grantee level help fulfill federal responsibilities to hold CPREP grantees accountable for performance. The structure of the data will also allow several of these questions to be examined by program model, to better understand successes and challenges in implementing the various programmatic approaches.

The results of the performance measures analysis will help ACF identify areas for possible improvement of program implementation. For example, ACF will be able to determine which CPREP grantees deliver their complete program content and hours to a high percentage of participant cohorts, and for which program models that is true. CPREP grantees will be able to determine from the performance data which of the program models they implement are succeeding in delivering complete content, in which locations, or in getting participants to complete at least 75 percent of the program sessions. ACF will be able to generate statistics showing how programs serving vulnerable populations compare to programs serving more general teen populations with regard to participant completion and participants’ assessments and perceived effects. ACF will learn which implementation challenges are pertinent to CPREP grantees and which are topics for technical assistance. Over time, the data can demonstrate which CPREP grantees are improving with respect to elements of program delivery and which areas of technical assistance require ongoing attention.
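As one concrete illustration of the indicator construction described above, the sketch below computes the share of a hypothetical participant cohort completing at least 75 percent of program sessions; the data structure and names are illustrative only, not the reporting system's actual format.

```python
# Illustrative sketch: computing the share of participants who attended at
# least 75 percent of program sessions, one of the indicators discussed
# above. The data structure and field names are hypothetical.

def pct_completing_75(attendance: dict[str, int], total_sessions: int) -> float:
    """Percentage of participants attending >= 75% of sessions."""
    threshold = 0.75 * total_sessions
    completers = sum(1 for sessions in attendance.values() if sessions >= threshold)
    return 100.0 * completers / len(attendance)

# Hypothetical cohort of five participants in a 12-session program:
cohort = {"y1": 12, "y2": 9, "y3": 6, "y4": 11, "y5": 8}
print(f"{pct_completing_75(cohort, 12):.0f}% completed at least 75% of sessions")
# threshold = 9 sessions; y1, y2, and y4 qualify -> 60%
```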

Impact and In-Depth Implementation Study.

IIS Impact Analysis. Program impacts will be analyzed separately for each site using survey data collected at baseline, first follow-up (eight to 12 months after baseline), and second follow-up (12 months after first follow-up). Impact analysis will begin after the completion of first follow-up data collection for each site and will be repeated after the second follow-up is complete. Regression-adjusted impact estimates will be produced for each primary outcome in each site, drawing on baseline and follow-up data. The set of primary analyses for each site will be limited to a small set of key outcomes, including measures of sexual risk behavior and its health consequences. To support these analyses, the follow-up surveys will include measures of pregnancy, STIs, and associated sexual risk behaviors. Subgroup analyses will be performed according to characteristics captured in the baseline survey data, including prior sexual experience and other risk factors. Variation in impacts by participation level will be estimated using propensity score matching based on demographic characteristics taken from the baseline survey data. See Attachment B for more detail on the planned analyses.
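To make the estimation approach concrete, the following is a minimal sketch of a regression-adjusted impact estimate on simulated data; the variable names and single-covariate specification are illustrative and do not represent the study's actual models.

```python
# Minimal sketch of a regression-adjusted impact estimate, as described
# above. Variable names and the simple specification are illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

treated = rng.integers(0, 2, size=n)       # 1 = offered program, 0 = control
baseline_risk = rng.normal(size=n)         # baseline covariate (e.g., a prior risk index)
# Simulated follow-up outcome with a true treatment effect of -0.30:
outcome = 0.5 * baseline_risk - 0.30 * treated + rng.normal(size=n)

# Regress the follow-up outcome on treatment status, adjusting for the
# baseline covariate; the coefficient on `treated` is the impact estimate.
X = sm.add_constant(np.column_stack([treated, baseline_risk]))
model = sm.OLS(outcome, X).fit()
print(model.params[1])   # regression-adjusted impact estimate (close to -0.30)
print(model.bse[1])      # its standard error
```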

In-Depth Implementation Analysis. The instruments included in this OMB package for the implementation analyses will yield data that will be analyzed using qualitative and quantitative methods to describe program implementation, assess the program’s overall quality, and examine fidelity to the program model and experience with scale up. A thorough understanding of program implementation will provide context for interpreting program impacts, while a greater understanding of how programs can be implemented with high quality is expected to inform the next generation of programming.

The research team will create a coding scheme consisting of a hierarchy of conceptual categories and classifications linked to the evaluation research questions, dimensions of implementation, and program logic models. Team members will then use software (NVivo or Atlas.ti) to assign codes to specific text in the electronic file of site visit notes and other documents. Coding the qualitative data in this way will enable the team to access data on a specific topic quickly and to organize information in different ways to facilitate the identification of themes and compile the evidence supporting them. As data collection proceeds, the coding scheme will be refined to better align it with both themes and topics that emerge from the data and with the research questions (Ritchie and Spencer, 2002).22 To facilitate analyses of patterns and themes across sites, we will also code key site-level characteristics, such as type of program model and characteristics of the youths served.

After all the qualitative data have been coded, we will use the software to retrieve data on the research questions and subtopics to identify themes and triangulate across data sources and individual respondents. Much of the meaning of the data will be discerned through descriptive analyses, qualitative and quantitative, that organize data thematically; create summary statistics that characterize overall experiences in each site, as well as variations across and within sites; and examine themes and topics from multiple perspectives, highlighting the similarities and differences among them (Patton, 2002).23 We will also explore relationships across themes (for example, relationships between the types of implementation challenges sites face and their staffing patterns and partnership arrangements).
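The code-and-retrieve logic described above can be illustrated with a small sketch; the actual analysis will use NVivo or Atlas.ti, and the codes, sites, and excerpts below are hypothetical.

```python
# Conceptual sketch of the code-and-retrieve approach described above.
# (The analysis itself will use NVivo or Atlas.ti; this only illustrates
# the idea. All codes, sites, and excerpts below are hypothetical.)
from collections import defaultdict

# Each coded excerpt pairs site-visit text with codes from the coding
# scheme and site-level attributes used for cross-site comparison.
excerpts = [
    {"site": "Site A", "model": "school-based",
     "codes": {"staffing", "implementation_challenge"},
     "text": "Facilitator turnover delayed several sessions."},
    {"site": "Site B", "model": "clinic-based",
     "codes": {"partnership"},
     "text": "The clinic partner provided referral services."},
]

# Retrieve all excerpts tagged with a given code, grouped by program model,
# to support identifying themes across sites.
by_model = defaultdict(list)
for e in excerpts:
    if "implementation_challenge" in e["codes"]:
        by_model[e["model"]].append((e["site"], e["text"]))

print(dict(by_model))
```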

2. Time Schedule and Publications

The PREP evaluation will be conducted over a seven-year period. This request is for a three-year period; subsequent packages will be submitted as necessary for new collections or to extend collection periods. Below is a schedule of the data collection efforts for the Performance Analysis Study and the Impact and In-Depth Implementation Study:

Performance Analysis Study. The schedule for collecting and analyzing CPREP performance measures data is identical to the schedule for the state and tribal PREP grantees, which OMB approved on March 12, 2013 (OMB Control Number 0970-0398).

The performance analysis reporting schedule is designed to complement the timing of CPREP grantees’ program implementation and the availability of the tools to support the data collection. While the CPREP grantees will provide data once each year to ACF, the analytical results based on their reported data will be compiled into reports twice each year. With the program year ending in August, grantees could be expected to report performance measure data in October of each year, allowing time for collection of data from implementation sites. Analysis of the performance data could then proceed in two stages. Stage 1, to be completed within four months of data receipt, will focus on generating national statistics for reporting to Congress using data from both PREP grantees and CPREP grantees. Stage 2, to be completed within eight months of data receipt, will involve more detailed and exploratory analyses by CPREP grantee and program model. The exact timing of both stages will depend on the quality of data submitted to the ACF data system. Improvement in data quality over time, driven in part by technical assistance to CPREP grantees, could result in acceleration of this schedule for producing results.

Impact and In-Depth Implementation Study. One site began enrolling sample members and administering baseline surveys in April 2013. Other sites will begin later, and because ACF plans to analyze each site separately (discussed in Section A.3), it is acceptable for the data collection schedule to vary across sites. The current project schedule assumes that all sites will begin enrolling members and administering baseline surveys by November 2013. To generate sufficient sample sizes for the impact study, the project schedule allows for sample enrollment to continue for up to three years after the initial sites have started—that is, through Fall 2015. Therefore collection of follow-up survey data would continue for approximately two additional years, through Fall 2017.

The timing of site visits will be determined after sites are selected and specific implementation plans are known, but the goal is to conduct the first site visit early in the implementation period for most sites and to conduct a second visit later in the implementation period to allow for program maturation and to help capture variations in youth experiences over time. The timelines for the staff survey and the focus groups will coincide with the site visits.

We will produce several reporting products, including interim (2016) and final (2017) impact and implementation reports, implementation site profiles (2014 and 2015), and one or two topical research briefs (2016 and 2017) that convey information that policy and program decision makers need on key subtopics of interest.

A17. Reason(s) Display of OMB Expiration Date is Inappropriate

All instruments, consent and assent forms and letters will display the OMB Control Number and expiration date.

A18. Exceptions to Certification for Paperwork Reduction Act Submissions

No exceptions are necessary for this information collection.

SUPPORTING REFERENCES FOR INCLUSION OF SENSITIVE
QUESTIONS OR GROUPS OF QUESTIONS

Boyer, Cherrie B., Jeanne M. Tschann, and Mary-Ann Shafer. "Predictors of Risk for Sexually Transmitted Diseases in Ninth Grade Urban High School Students." Journal of Adolescent Research, vol. 14, no. 4, 1999, pp. 448-465.

Buhi, Eric R., and Patricia Goodson. "Predictors of Adolescent Sexual Behavior and Intention: A Theory-Guided Systematic Review." Journal of Adolescent Health, vol. 40, no. 1, 2007, p. 4.

Dermen, K. H., M. L. Cooper, and V. B. Agocha. "Sex-Related Alcohol Expectancies as Moderators of the Relationship between Alcohol Use and Risky Sex in Adolescents." Journal of Studies on Alcohol, vol. 59, no. 1, 1998, p. 71.

Fergusson, David M., and Michael T. Lynskey. "Alcohol Misuse and Adolescent Sexual Behaviors and Risk Taking." Pediatrics, vol. 98, no. 1, 1996, p. 91.

Li, Xiaoming, Bonita Stanton, Lesley Cottrell, James Burns, Robert Pack, and Linda Kaljee. "Patterns of Initiation of Sex and Drug-Related Activities among Urban Low-Income African-American Adolescents." Journal of Adolescent Health, vol. 28, no. 1, 2001, p. 46.

Santelli, John S., Leah Robin, Nancy D. Brener, and Richard Lowry. "Timing of Alcohol and Other Drug Use and Sexual Risk Behaviors among Unmarried Adolescents and Young Adults." Family Planning Perspectives, vol. 33, no. 5, 2001.

Sen, Bisakha. "Does Alcohol-Use Increase the Risk of Sexual Intercourse among Adolescents? Evidence from the NLSY97." Journal of Health Economics, vol. 21, no. 6, 2002, p. 1085.

Tapert, Susan F., Gregory A. Aarons, Georganna R. Sedlar, and Sandra A. Brown. "Adolescent Substance Use and Sexual Risk-Taking Behavior." Journal of Adolescent Health, vol. 28, no. 3, 2001, p. 181.



1 Competitive PREP funding was available to organizations in Florida, North Dakota, Texas, Virginia, Indiana, Guam, American Samoa, the Northern Mariana Islands, the Marshall Islands, and Palau.

2 Middle school youth are not required to answer the questions on sexual risk behavior to avoid asking sensitive questions of younger youth. To reduce burden, middle school youth served by CPREP programs in school-based settings will not complete entry surveys. This is consistent with the plan for State and Tribal PREP grantees that has already been cleared by OMB.

3 To reduce burden, attendance will not be collected for youth participating in CPREP programs during the school day for the PAS. ACF assumes that attendance in these programs will be high. This is consistent with the plan for State and Tribal PREP grantees that has already been cleared by OMB.

4 Turner, C.F., L. Ku, S.M. Rogers, L.D. Lindberg, J.H. Pleck, and F.L. Sonenstein. “Adolescent Sexual Behavior, Drug Use, and Violence: Increased Reporting with Computer Survey Technology.” Science, vol. 280, 1998, pp. 867–873.

5 Beebe, Timothy J., Patricia A. Harrison, James A. McCrae Jr., Ronald E. Anderson, and Jayne A. Fulkerson. “An Evaluation of Computer-Assisted Self-Interviews in a School Setting.” Public Opinion Quarterly, vol. 62, 1998, pp. 623–632.

6 Beebe, Timothy J., Patricia A. Harrison, Eunkyung Park, James A. McRae, Jr., and James Evans. “The Effects of Data Collection Mode and Disclosure on Adolescent Reporting and Health Behavior.” Social Science Review, vol. 24, no. 4, 2006, pp. 476–488.

7 Brener, Nancy D., Danice K. Eaton, Laura Kann, JoAnne Grunbaum, Lori A. Gross, Tonja M. Kyle, and James G. Ross. “The Association of Survey Setting and Mode with Self-Reported Health Risk Behaviors Among High School Students.” Public Opinion Quarterly, vol. 70, 2006, pp. 354–374.

8 Webb, P.M., G.D. Zimet, J.D. Fortenberry, and M.J. Blythe. “Comparability of a Computer-Assisted Versus Written Method for Collecting Health Behavior Information from Adolescent Patients.” Journal of Adolescent Health, vol. 24, no. 6, 1999, pp. 383–388.

9 Schochet, Peter Z. "An Approach for Addressing the Multiple Testing Problem in Social Policy Impact Evaluations." Evaluation Review, vol. 33, no. 6, December 2009.

10 Berlin, Martha, Leyla Mohadjer, Joseph Waksberg, Andrew Kolstad, Irwin Kirsch, D. Rock, and Kentaro Yamamoto. 1992. An experiment in monetary incentives. In JSM proceedings, 393–98. Alexandria, VA: American Statistical Association.

11 James, Jeannine M., and Richard Bolstein. 1990. The effect of monetary incentives and follow-up mailings on the response rate and response quality in mail surveys. Public Opinion Quarterly 54 (3): 346–61.

12 Singer, Eleanor, and Richard A. Kulka. 2002. Paying respondents for survey participation. In Studies of welfare populations: Data collection and research issues, eds. Michele Ver Ploeg, Robert A. Moffitt, and Constance F. Citro, 105–28. Washington, DC: National Academy Press.

13 Singer, Eleanor, John Van Hoewyk, and Mary P. Maher. 1998. Does the payment of incentives create expectation effects? Public Opinion Quarterly 62:152–64.

14 This figure also excludes those youth participating in programs at impact study sites who will complete only an IIS Baseline Survey at program entry. The baseline survey will include the items on the entry survey.

15 It is assumed that 25 percent of the sample, not 10 percent, will be 18 or older because middle school youth in schools settings will not complete an entry survey.

16 Based on our review of CPREP plans and other documents, we estimate that 60 percent of the youth served in the CPREP programs will be in school-based programs and that 40 percent will be served in out-of-school programs. We assume that 90 percent of youth in school-based CPREP programs will complete the program and that 65 percent of youth in out-of-school CPREP programs will complete the program. These assumptions yield an overall program completion rate of 80 percent.

17 The assumption that 10 percent of youth will be 18 or older at the exit survey, as opposed to 25 percent at the entry survey, is because middle school youth, while not eligible for the entry survey, are eligible for the exit survey.

18 Our initial estimates were compiled based upon CPREP grantee applications. CPREP grantees do not have subawardees.

19 These estimated burden hours are being reduced to reflect the lower number of expected participants attending programs, and that attendance data will not be processed for PREP programs operating during the school day.

20 The burden for the second package approved was originally annualized over two years. Since the current request is for three years, burden for all packages has been annualized over three years.

21 Annual costs for the ICR approving the Design Survey data collection were reported as $125,000. However, reported costs were calculated over two years. The figure reported in this ICR – $83,333 – is the annualized cost, that is, the cost calculated over three years.

22 Ritchie, J., and Spencer, L. (2002). Qualitative data analysis for applied policy research. In Huberman, A.M., and Miles, M.B. The qualitative researcher’s companion. Thousand Oaks, CA: Sage Publications.

23 Patton, M.Q. (2002). Qualitative research and evaluation methods: Third edition. Thousand Oaks, CA: Sage Publications.


