
Part A: Justification for the Collection of Baseline Survey Data – Federal Evaluation of Making Proud Choices!

OMB Control Number 0990 - new

July 2016

Submitted to:

U.S. Department of Health and Human Services
Office of Adolescent Health
Office of the Director

1101 Wootton Parkway, Suite 700

Rockville, MD 20852

Project Officer: Amy Farb


Part A: Justification for the Collection of Baseline Survey Data - Federal Evaluation of Making Proud Choices!

OMB Control Number 0990 - new

July 2016





CONTENTS

Part A: Introduction

A.1. Circumstances Making the Collection of Information Necessary

A.2. Purpose and Use of the Information Collection

A.3. Use of Information Technology to Reduce Burden

A.4. Efforts to Identify Duplication and Use of Similar Information

A.5. Impact on Small Businesses

A.6. Consequences of Not Collecting the Information/Collecting Less Frequently

A.7. Special Circumstances

A.8. Federal Register Notice and Consultation Outside the Agency

A.9. Payments to Respondents

A.10. Assurance of Confidentiality

A.11. Justification for Sensitive Questions

A.12. Estimates of the Burden of Data Collection

A.13. Estimates of Other Total Annual Cost Burden to Respondents and Record Keepers

A.14. Annualized Cost to Federal Government

A.15. Explanation for Program Changes or Adjustments

A.16. Plans for Tabulation and Publication and Project Time Schedule

A.17. Reason(s) Display of OMB Expiration Date is Inappropriate

A.18. Exceptions to Certification for Paperwork Reduction Act Submissions

SUPPORTING REFERENCES

TABLES

Table A1.1. Summary of outcome domains and constructs

Table A11.1. Summary of Sensitive Topics to be Included on the Baseline Survey and Their Justification

Table A12.1. Calculations of Burden Hours and Cost

Table A.16.1. Timeline for Use of Data Collection Instruments





ATTACHMENTS

ATTACHMENT A: Question-by-Question Source List for the Baseline Survey

ATTACHMENT B: Sources Referenced for the Baseline Survey

ATTACHMENT C: Persons Consulted on Instrument Development and/or Analysis of the MPC! Baseline Survey

ATTACHMENT D: Consent Letters and Forms, Youth Assent Form (Baseline Survey)

ATTACHMENT E: Consent Form – Focus Group

ATTACHMENT F: Confidentiality Pledge

ATTACHMENT G: Analysis Plan

ATTACHMENT H: Baseline Survey Pretest Report

ATTACHMENT I: MPC! 60-Day Notice



INSTRUMENTS

Instrument 1: Baseline Survey

Instrument 2: Master Topic Guide for Interviews

Instrument 3: Master Staff Survey

Instrument 4: Program Attendance Data Collection Protocol

Instrument 5: Fidelity Facilitator Log

Instrument 6: Master Protocol for Youth Focus Groups



PART A INTRODUCTION

Research on programs to prevent teen pregnancy is at a turning point. Much of the available research evidence dates to the late 1980s and early 1990s, when public health officials were facing the twin threats of the emerging HIV/AIDS epidemic and a sharp, unexpected increase in the teen birth rate in the United States. In response to these threats, researchers launched a broad and sustained effort to identify and test new programs and curricula with the potential to reduce high rates of teen pregnancy, sexually transmitted infections (STIs), and associated sexual risk behaviors. These efforts were manifest in such events as the founding of the National Campaign to Prevent Teen Pregnancy in 1996 and publication of Douglas Kirby’s seminal “No Easy Answers” report in 1997.

Much has changed in the intervening years. The teen birth rate ultimately peaked in the early 1990s and has now plunged to historic lows (Ventura et al. 2014). Researchers have succeeded in identifying dozens of prevention programs with demonstrated evidence of success in reducing adolescent sexual risk behaviors (Goesling et al. 2014), and the federal government has invested millions of dollars in disseminating knowledge of the programs and implementing them in communities around the country (Kappeler and Farb 2014; Zief et al. 2013). Overall rates of adolescent sexual activity have also declined since the early 1990s, but there has been less progress in reducing disparities in rates by race and ethnicity. The prevalent contraceptive methods among adolescents remain those with relatively high typical-use failure rates (primarily condoms and birth control pills) rather than the more effective long-acting reversible contraceptives, which have much lower failure rates under typical use (Martinez and Abma 2015). This context shifts the research agenda toward a new primary challenge: how to use existing evidence-based programs to sustain the ongoing decline in teen birth rates in the United States, reduce remaining disparities in rates across communities and between different racial/ethnic groups, and encourage the use of the most highly effective contraceptive methods.

In response to this shifting research agenda, the Office of Adolescent Health (OAH) seeks to design a new large-scale, multisite random assignment evaluation of an evidence-based teen pregnancy prevention program that will make a significant contribution to the growing portfolio of research activities OAH has sponsored since the office was established in 2010. Much of OAH's existing evaluation work focuses on documenting and evaluating the first cohort of grantees funded under the OAH Teen Pregnancy Prevention (TPP) program. With this new evaluation, OAH seeks to launch a "second generation" of evaluation activities - one that addresses a more targeted set of research questions of significant practical relevance to OAH and the broader field. In particular, the new evaluation will seek to advance the existing evidence base by identifying and testing (1) replications of a commonly used but understudied evidence-based teen pregnancy prevention program, and (2) the relative effectiveness of the two most prevalent implementation modes.

This proposed information collection activity focuses on collecting (a) baseline survey data for the impact study, and (b) data for the implementation and fidelity assessment, which will provide a detailed understanding of program implementation in the impact study sites and of differences between health teachers and outside health educators.

A.1. Circumstances Making the Collection of Information Necessary

1. Legal or Administrative Requirements that Necessitate the Collection

The current federal emphasis on evidence-based approaches to teen pregnancy prevention began in 2010 with congressional authorization of the TPP program and creation of OAH. The TPP program was one of six early evidence-based initiatives proposed by the Obama administration and authorized by Congress to increase the use of data and evidence in social policy (Haskins and Margolis 2015). The program provides roughly $100 million annually to state and local organizations to implement evidence-based and promising new teen pregnancy prevention programs. As with several of the other federal evidence-based initiatives, the TPP program features a "tiered evidence" grant structure: the majority of funding goes to disseminate and scale up Tier 1 programs that have some existing evidence of effectiveness, whereas a smaller amount supports Tier 2 demonstration projects, which foster innovation in the field by developing and rigorously testing promising new approaches to teen pregnancy prevention. Additional federal funding for evidence-based teen pregnancy prevention programs comes from the Personal Responsibility Education Program (PREP), authorized under the Affordable Care Act to provide formula block grants to states to support evidence-based approaches to teen pregnancy prevention (Zief et al. 2013).

The first cohort of TPP grantees was announced in fall 2010, consisting of five-year awards running through fall 2015 (Kappeler and Farb 2014). A total of 75 organizations received funding under Tier 1 of the TPP program, with awards ranging from roughly $400,000 to $4 million annually. In line with the program’s emphasis on evidence-based approaches, grantees were required to select from a list of 28 existing programs and curricula that the U.S. Department of Health and Human Services (HHS) had identified as having demonstrated evidence of effectiveness in reducing teen pregnancy, STIs, or associated sexual risk behaviors. More than three-quarters of these eligible programs (23 of 28) were selected for use by at least one grantee. The TPP program was successful in reaching a very large segment of the population, with about 100,000 youth per year receiving services across a broad network of schools and other community organizations (Wilson and Lawson 2014). In addition, nearly 20 of these grantees conducted impact evaluations of their TPP program.

The experience of the first cohort of TPP grantees highlighted challenges local communities can face when implementing evidence-based programs (Margolis and Roper 2014). For example, grantees needed practical guidance on how to replicate evidence-based programs with fidelity within the time and scheduling constraints of their local schools and community-based organizations. In other cases, grantees found that the content of some of the older evidence-based programs was outdated or did not resonate with local youth. Implementation fidelity was often difficult to maintain, and varied based on the implementation setting and mode.

OAH drew on these lessons when developing plans for the next cohort of TPP grantees, for which awards were announced on July 6, 2015. For this second cohort of the TPP program, OAH retained the overall tiered structure of the grant program and again directed the greatest amount of funding to the replication of evidence-based programs (Tier 1). Under Tier 1, a total of 50 organizations received funding to replicate evidence-based programs in high-need communities (Tier 1B). In addition, eight organizations received funding to serve as intermediaries to support capacity building for implementing and evaluating evidence-based programs (Tier 1A).

2. Study Objectives

OAH has designed a new research agenda to complement the second cohort of TPP funding. Building on the experiences of the first cohort of grantees and the grantee-led evaluations, OAH seeks to launch a "second generation" of evaluation activities - one that addresses a more targeted set of research questions of significant practical relevance to OAH and the broader field. In particular, the new evaluation will seek to advance the existing evidence base by identifying and testing (1) replications of a commonly used but understudied evidence-based teen pregnancy prevention program, and (2) the relative effectiveness of the two most prevalent implementation modes.

To meet these objectives, OAH is designing a three-arm randomized controlled trial of Making Proud Choices! (MPC!). The MPC! curriculum aims to increase students' knowledge of sexually transmitted diseases (STDs) and HIV and their understanding of the effectiveness of condoms in reducing the risk of STDs and pregnancy. The curriculum emphasizes abstinence as the safest choice for avoiding pregnancy and STDs, but also encourages youth to use condoms if they do have sex. Two lessons focus on developing participants' condom use and negotiation skills, including a condom demonstration. The curriculum also covers refusal skills to improve participants' sense of self-efficacy regarding condom use and sexual activity.

MPC! is a very popular program across the two largest federal grant programs for comprehensive teenage pregnancy prevention – the OAH TPP program and the Administration for Children and Families' Personal Responsibility Education Program (PREP). It is implemented by over 100 providers nationwide. The program's evidence of effectiveness, however, is limited to a single study that meets HHS evidence review standards (Jemmott et al. 1998).1 That study is nearly 20 years old and was conducted in a highly controlled implementation context by the program developers. New evidence is needed on the effectiveness of the program as it is replicated nationwide, particularly in school settings.

Across its first cohort of TPP grantees, OAH observed variation in implementation fidelity across different facilitators. A large portion of TPP programming is delivered in schools. Regular school teachers can be trained to deliver the program, or outside "health educators" (from a local health department or community-based organization) can deliver the program in the school. Having teachers deliver the curriculum can be less expensive and can help promote program sustainability. However, teachers may be less well trained in the curriculum and less comfortable with the material than outside health educators. To better understand whether either approach leads to greater implementation fidelity and improved youth outcomes, this evaluation will also address the relative effectiveness of the two types of facilitators – school health teachers and outside health educators. The study will be designed to address two questions:

  1. Does MPC!, implemented by health educators in schools, change youth sexual behavioral outcomes, relative to a business as usual sexual health program?

  2. Does MPC!, implemented by health educators in schools, change youth sexual behavioral outcomes, relative to MPC! implemented by classroom teachers?

The study will be conducted in required health classes in 39 middle and high schools. It is expected that most youth in the study will be 8th or 9th grade students. Survey data will be collected from youth study participants at baseline and about 9 and 15 months after baseline.2 The baseline survey data will be used to describe the evaluation sample and as covariates in the impact estimation models. The follow-up survey data will be used to estimate program impacts on knowledge, attitudes, beliefs, and behaviors such as sexual initiation and contraceptive use. See Table A1.1 for a summary of the outcome domains and constructs; these will be described in greater detail in a subsequent ICR that focuses on the 9- and 15-month follow-up surveys. Survey items measuring each outcome construct will serve as the dependent variables in regression analyses used to estimate intent-to-treat impacts of the MPC! program.

Table A1.1. Summary of outcome domains and constructs

Outcome Domain | Outcome Construct
Exposure to information | Attended classes on reproductive health topics; Received information about birth control from a doctor, nurse, or clinic
Knowledge | Knowledge about condoms; Knowledge about birth control; Knowledge about STIs; Knowledge about IUDs; Knowledge about other hormonal methods of birth control; Knowledge about pregnancy
Attitudes | Support for abstinence; Support for condom use
Refusal skills | Perceived refusal skills
Communication with parents | Communication about romantic relationships and sex
Intentions | Intentions to have sexual intercourse
Sexual risk behavior | Sexual initiation; Sex in the past three months; Sex without a condom in past three months


The impact study will also be complemented by the implementation and fidelity assessment. This study component will take a detailed look at program operations along four key aspects: (1) inputs required for implementation to succeed and be sustained, (2) contextual factors that influence implementation, (3) fidelity and quality of program implementation, and (4) participants’ responsiveness to service.

OAH is currently requesting OMB approval for the collection of the baseline data for the impact study and the data for the implementation and fidelity assessment. A three-year clearance is needed because enrollment into the study will be conducted in three cohorts. The first cohort will begin enrollment and baseline data collection on February 1, 2017. The second cohort will enroll throughout the 2017-2018 school year. The third cohort will enroll throughout the 2018-2019 school year; it is anticipated that baseline data collection could occur through April 2019. Implementation study data collection will occur shortly after each cohort enrolls and begins programming; therefore, a three-year clearance is needed for the implementation study as well.

A.2. Purpose and Use of the Information Collection

Baseline Survey. Data collected through the Federal Evaluation of MPC! baseline survey will be a central component of the impact study. Specifically, the data will be used to establish baseline equivalence of the treatment and control groups and thus to confirm the integrity of the random assignment process. Baseline data will also be used to adjust impact estimates to account for survey non-response at follow-up. The primary impact analysis will focus on individuals who provide follow-up survey data, regardless of their level of participation in the program or whether they complete the baseline survey; doing so enables the team to conduct a rigorous, intent-to-treat impact analysis that meets the standards of the HHS Evidence Review. Many baseline measures will be collected again at follow-up; including their baseline values as covariates in the impact models will improve the precision of impact estimates for individuals with both baseline and follow-up data.
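To make the baseline equivalence check concrete, the sketch below compares a treatment group and the control group on a baseline measure using a standardized mean difference. It is a minimal illustration only, assuming simulated data and hypothetical variable names; the actual analysis (Attachment G) would cover all three study arms and the full set of baseline measures.

```python
# Illustrative sketch only: simulated data and variable names are assumptions,
# not the study's analysis file or specification.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=1)
n = 2730  # expected consented baseline sample across the 39 schools (Section A.12)

# Simulated analysis file: one row per consented youth, two research groups for simplicity.
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, size=n),     # 1 = an MPC! arm, 0 = control
    "age": rng.normal(14.5, 0.8, size=n),        # example baseline covariate
    "ever_had_sex": rng.integers(0, 2, size=n),  # example baseline outcome measure
})

def standardized_difference(data: pd.DataFrame, var: str) -> float:
    """Difference in group means divided by the pooled standard deviation."""
    treat = data.loc[data["treatment"] == 1, var]
    control = data.loc[data["treatment"] == 0, var]
    pooled_sd = np.sqrt((treat.var(ddof=1) + control.var(ddof=1)) / 2)
    return (treat.mean() - control.mean()) / pooled_sd

for var in ["age", "ever_had_sex"]:
    print(f"{var}: standardized difference = {standardized_difference(df, var):.3f}")
```

Under random assignment, these differences should be close to zero; large differences would flag a problem with the random assignment process or with differential consent.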

We also plan to conduct exploratory analyses of subgroups defined by baseline measures. These analyses will be considered exploratory and will not be used as a primary test of the effectiveness of the intervention. Instead, they are intended to help program providers and practitioners understand whether the pattern of findings for the full sample is similar to or different from the trends observed for particular subgroups. We will examine trends for subgroups defined by (1) gender and (2) sexual experience at baseline.


We acknowledge that statistical power for these exploratory analyses may be insufficient because of the smaller sample sizes within the subgroups. For that reason, these analyses are not intended as a primary test of the intervention's effectiveness, but rather to indicate whether the overall pattern of findings is similar to the trends observed within and across particular subgroups.



Many of the items included on the baseline survey3 are taken directly from similar surveys OMB has already approved for use in the ongoing Evaluation of Adolescent Pregnancy Prevention Approaches (PPA), the Teen Pregnancy Prevention Replication Study, the Personal Responsibility Education Program (PREP - OMB Control Number 0970-0398) Multi-Component Evaluation, and the Pregnancy Assistance Fund (PAF - OMB Control Number 0990-0424 for baseline survey, OMB Control Number 0990-0428 for the implementation study) Study4. To date, the PPA baseline survey has been administered to approximately 7,441 adolescents; the Replication Study baseline has been administered to 7,945 adolescents; the PREP baseline has been administered to 3,991 youth; the PAF study has been administered to 1,349 youth.

HHS has made it a priority to align measures with those used in other federal evaluations of similar programs. For the Federal Evaluation of MPC!, the evaluation team drew items directly from the OMB-approved PPA, Replication Study, PREP, and PAF baseline instruments, making modest changes to account for the content and goals of the MPC! curriculum. Instrument 1 is the baseline survey; Attachment A includes a question-by-question source list for items on the baseline survey; and Attachment B describes each of the sources referenced.

Implementation and Fidelity Assessment. The implementation and fidelity assessment will collect and analyze data to contextualize the analysis of program impacts. Data will be obtained from the following sources: (1) individual discussions with administrators, teachers, and health educators (Instrument 2); (2) a paper-and-pencil survey of teachers and health educators (Instrument 3); (3) group interviews with participating youth (Instrument 6); and (4) protocols for recording attendance and content coverage (Instruments 4 and 5). Through these data collection efforts, the study will document the context in which the program is delivered, the planned program, the implementing organizations, administrator and teacher/educator reactions to the program, youths' program dosage, and youths' experiences and satisfaction with the programs. The master topic guide for the interviews (Instrument 2), the staff survey (Instrument 3), the youth focus group protocol (Instrument 6), and the protocol for recording attendance (Instrument 4) have been used successfully in the PREP and PAF evaluations and approved by OMB. The fidelity log for recording content coverage (Instrument 5) is modeled on the program developer's fidelity log and captures the planned lessons and their content.

The data will serve two main purposes. First, the information will enable the study team to produce clear, detailed descriptions of MPC!, as planned and as implemented, and the counterfactual in each site. This documentation is critical for understanding the meaning of impact estimates. Second, the data will be used to assess fidelity of implementation and the quality of program delivery. This information is essential for determining whether the interventions were implemented well, whether the evaluation provided a good test of each site’s intervention, and whether fidelity and quality differed by whether a school teacher or outside health educator implemented the program.

A.3. Use of Information Technology to Reduce Burden

Baseline Survey. The baseline survey will be a web-based survey administered to students in school, in a group setting. Trained Mathematica field staff will provide participants with smartphones, along with a unique PIN and password to access the survey from the device. This data collection plan reflects sensitivity to issues of efficiency, accuracy, and respondent burden.

Web-based surveys are an attractive option for surveys of adolescents and young adults, and in particular for surveys that ask sensitive questions and have various pathways based on responses to those questions. Web-based surveys can decrease respondent burden and improve data quality. Unlike paper instruments, in which respondents must determine question routes themselves, the web-based application will include built-in skips and will route respondents to the next appropriate question based on their answers. The web-based program automatically skips respondents past any questions that are not relevant to them, reducing the burden of navigating through various paths. Additionally, data checks can be programmed into the survey to eliminate responses that are out of range as well as conflicting responses.
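To illustrate the kind of skip and range-check logic described above, the sketch below routes a respondent past the sexual-behavior items unless a screening question is answered affirmatively, and re-asks any out-of-range answer. The question identifiers, wording, valid ranges, and routing rules are hypothetical assumptions for illustration, not the actual survey program.

```python
# Hypothetical sketch of web-survey skip and range-check logic; question IDs,
# valid ranges, and routing rules are illustrative, not the actual instrument.

QUESTIONS = {
    "ever_had_sex": {"text": "Have you ever had sexual intercourse?", "valid": {0, 1}},
    "sex_past_3_months": {"text": "In the past 3 months, did you have sex?", "valid": {0, 1}},
    "num_partners": {"text": "Number of partners in the past 3 months", "valid": range(0, 21)},
    "intend_abstain": {"text": "Do you intend to remain abstinent during the next year?", "valid": {0, 1}},
}

def next_question(current_id: str, answer: int) -> str:
    """Return the next question ID based on the answer (built-in skip logic)."""
    if answer not in QUESTIONS[current_id]["valid"]:
        return current_id  # out-of-range answer: re-ask instead of storing a bad value
    if current_id == "ever_had_sex":
        # Youth who report never having had sex skip the sexual-behavior items.
        return "sex_past_3_months" if answer == 1 else "intend_abstain"
    if current_id == "sex_past_3_months":
        return "num_partners" if answer == 1 else "intend_abstain"
    return "intend_abstain"

print(next_question("ever_had_sex", 0))   # -> "intend_abstain" (skips sensitive items)
print(next_question("ever_had_sex", 1))   # -> "sex_past_3_months"
print(next_question("num_partners", 99))  # -> "num_partners" (out of range, re-asked)
```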

Implementation and Fidelity Assessment. For program attendance and fidelity data, sites will be able either to submit an extract from their existing information systems or to use spreadsheets developed by Mathematica to facilitate data entry (Instruments 4 and 5), whichever method is least burdensome for them. The spreadsheets have been designed based on experience from prior studies, such as the PREP Multi-Component Evaluation and PAF, which similarly ask sites to provide attendance and fidelity data using spreadsheets. The spreadsheets are flexible and easy to use, while ensuring that high-quality data are collected.

A.4. Efforts to Identify Duplication and Use of Similar Information

The information collection requirements for the Federal Evaluation of MPC! have been carefully reviewed to avoid duplication with existing and ongoing studies of MPC!, to determine what information is already available from existing studies, and to identify what will need to be collected for the first time. Although the one prior study of MPC! that meets the HHS evidence review standards (Jemmott et al. 1998) contributes to our understanding of the curriculum's effects on behavioral outcomes, OAH does not believe that it provides current information on program effectiveness with the broader population of youth now participating in schools. The data collection for the Federal Evaluation of MPC! is therefore an essential step toward providing current information on the program's effectiveness, and on the relative effectiveness of teachers and health educators, for this widely implemented school-based program.

A.5. Impact on Small Businesses

No small businesses are expected to be impacted. Mathematica staff will work with the study sites (schools) to lead and coordinate the data collection activities. The data collection plan is designed to minimize burden on schools by providing staff from Mathematica Policy Research, and its subcontractor Decision Information Resources, to lead the data collection activities.

A.6. Consequences of Not Collecting the Information/Collecting Less Frequently

Baseline Survey. Baseline data are essential to conducting a rigorous evaluation of the MPC! curriculum. Specifically, without these baseline data, we would not be able to monitor whether random assignment was conducted correctly and created equivalent research groups. In addition, we would not be able to estimate impacts for key subgroups or to improve the precision of our impact estimates by including baseline covariates in the statistical models used to estimate program impacts.

Implementation and Fidelity Assessment. Implementation data are essential for understanding the results of a rigorous evaluation of pregnancy prevention programs. Data collection early in program implementation is crucial for documenting site implementation plans and early program experiences, while data collection later in program implementation is essential for learning about actual service delivery and unplanned adaptations, fidelity to plans, participant engagement, and changes in program context during the evaluation period. Without implementation data, we lose the opportunity to document the evolution of program implementation during the evaluation and provide lessons based on the experiences of the sites. Collecting implementation data less frequently would either make it impossible to assess fidelity of program implementation or require reliance on program documents and respondent recall to document program implementation plans.

A.7. Special Circumstances

There are no special circumstances for the proposed data collection efforts.

A.8. Federal Register Notice and Consultation Outside the Agency

A 60-day Federal Register Notice was published in the Federal Register on May 11, 2016, vol. 81, No. 91; pp. 29282-29283 (see Attachment I). There were no public comments.

A 30-day Federal Register Notice is included with this submission.

A.9. Payments to Respondents

Baseline Survey. Mathematica will provide a gift bag worth $5 to any student whose parent/guardian submits a signed consent form, regardless of whether the parent/guardian consents or refuses permission for their child to participate in the study. No payment or gift to youth respondents will be made for the baseline survey.

Achieving a high study consent rate is critical for three reasons. First, it is necessary for achieving the minimum sample size needed to demonstrate the statistical significance of the findings. With schools as the level of random assignment, we are constrained by the eligible sample in each school. It is imperative that we achieve a high consent rate across that sample in order to maintain the study's statistical power. The federal government's investment in an underpowered study would be questionable. Second, when schools are the unit of assignment, as in this study, rigorous evidence reviews (such as the HHS Teen Pregnancy Prevention Evidence Review, which will eventually assess the evidence from this study) assess study attrition beginning with the youth enrolled in the study schools at the time of random assignment (i.e., non-consent will be considered a source of study attrition). High attrition from the study can result in a low evidence rating; a low evidence rating means the federal government has invested in a study that does not have sufficient internal validity to draw conclusions about program effectiveness. Achieving high consent rates is therefore necessary in order to avoid high attrition. Third, and finally, OAH has directed its contractor to study the effectiveness of Making Proud Choices! in schools in low-income, disadvantaged areas. OAH intends for the study population to be representative of the youth in these school districts. If high consent rates are not achieved, the study sample could be biased in the direction of the higher-achieving, higher-income, more highly motivated youth with more supportive parents – the population that is likely to return a consent form early and without any or much incentive.

OAH has also funded numerous rigorous evaluations of its grant programs, conducted by independent evaluators. Four of these grantee-led evaluations in low-income areas with a similar design (schools as the level of random assignment) used a gift card to encourage a high rate of consent form return, and therefore a sufficient consent rate. The consent rates ranged from 70 percent to 81 percent, with an average of 75 percent. Two of these grantee-led evaluations in low-income areas with a similar design did not use an incentive for the return of the consent form, and consent rates were much lower - 57 percent and 60 percent.

There is a body of literature showing that a lack of incentive can result in a less representative consented sample, and in particular that incentives are useful in compensating for a lack of motivation to participate (Shettle and Mooney, 1999; Groves, Singer, and Corning, 2000). Incentives have also been shown to induce participation among sample members for whom the topic is less salient (Baumgartner and Rathbun, 1997; Martinez-Ebers, 1997). This is particularly important for OAH's efforts to find effective programs to reduce sexual risk behaviors – the youth (and their parents) who do not find the topic salient are likely to be those who are most in need of the intervention and who could contribute the most to the study.

The power calculations for the MPC! study assumed a 70 percent consent rate and were based on the experience of these recent studies. To achieve that consent rate, we propose providing a gift bag worth $5 for the return of a signed study consent form; this incentive will also help ensure that the study sample represents the majority of the youth in the study schools.

Implementation and Fidelity Assessment. The focus groups will be scheduled at a time that is most convenient for the school and its students, which may be during or after school. We will first ask schools if we can hold the focus groups at school, either during the school day or immediately after school, and will provide pizza or other snacks to the students and to the school staff who help organize the focus groups. If a focus group cannot be held at the school, and students must provide their own transportation in order to participate, we will provide a $25 gift card to defray the cost of that transportation.





A.10. Assurance of Confidentiality

Baseline Survey. Prior to collecting baseline data, Mathematica will seek active consent from a parent or legal guardian (Attachment D). The consent form will explain the purpose of the study, the data being collected, and how the data will be used. The form will also state that answers will be kept private and not seen by anyone outside of the study team, that participation is voluntary, and that participants may refuse to participate at any time without penalty. Participants and their parents/guardians will be told that, to the extent allowable by law, individual identifying information will not be released or published; rather, data will be published only in summary form with no identifying information at the individual level. In addition, our protocol during the self-administration of the web instrument will provide reassurance that we take the issue of privacy seriously. It will be made clear to respondents that identifying information will be kept separate from questionnaires. To access the web survey application, each questionnaire will require a unique PIN and password; this will ensure that no identifying information appears on the questionnaire and will also prevent unauthorized users from accessing the web application. Any personally identifiable information will be stored in secure files, separate from survey and other individual-level data. Field staff will collect the tablets or smartphones used for survey administration at the end of the survey and will be trained to keep the devices in a secure location at all times.

Trained Mathematica field staff will administer the baseline survey in a group setting. All field staff are required to sign a confidentiality pledge (see Attachment F) when hired by Mathematica. On the day of the survey administration, field staff will distribute a student assent form to participants, providing them with a chance to opt out of the baseline data collection, should they want to do so (Attachment D).

Mathematica has established security plans for handling data during all phases of the data collection. The plans include a secure server infrastructure for online data collection of the web-based survey, which features HTTPS encrypted data communication, user authentication, firewalls, and multiple layers of servers to minimize vulnerability to security breaches. Hosting the survey on an HTTPS site ensures that data are transmitted using 128-bit encryption; transmissions intercepted by unauthorized users cannot be read as plain text. This security measure is in addition to standard user PIN and password authentication that precludes unauthorized users from accessing the web application.

Implementation and Fidelity Assessment. Program facilitators and school staff participating in interviews will receive information about privacy protection when arrangements are made for meeting with them, and information about privacy will be repeated as part of the implementation study team’s introductory comments during site visits. Site visit staff will be trained on privacy procedures, and will be prepared to describe them and to answer questions raised by local program staff.

There will be a separate consent process for participation in youth focus groups. Youth under age 18 (likely to be all youth participating in the study) will need a signed parental consent form for participation in a focus group, separate from general evaluation consent. Participating youth will also be given an assent form to sign, and will have an opportunity to refuse participation at the time of the focus group, if they choose to do so. Copies of these forms are in Attachment E. Focus group consent and assent forms state that answers will be kept private, and will not be attributed to any participant. The forms also state that youths’ participation is voluntary, that they may refuse to participate, and that identifying information about them will not be released or published. The focus group consent forms also include additional language explaining the unique confidentiality risks associated with participation in a group interview.

All program attendance and fidelity data will be transmitted with a unique identifier rather than personally identifying information. The unique identifier is necessary to support combining the program attendance data with outcome data. We will also use a password protected website to exchange the files. All electronic data will be stored in secure files.

For administration of hard copy staff surveys, site visitors will provide respondents with a chance to opt out of the staff survey, should they want to do so. The questionnaire will be distributed in a sealed envelope, and the questionnaire and distribution envelope will have a label with a unique staff ID number. No identifying information will appear on the questionnaire or the return envelope.

Staff are trained to keep all data collection forms in a secure location and are instructed not to share any materials with anyone outside of the study team. Surveys completed at the time of the site visit will be collected by site visitors and brought back to the Mathematica office. Surveys completed later will be mailed back to Mathematica in postage-paid envelopes.

All electronic data will be stored in secure files, with identifying information kept in a separate file from survey and other individual-level data. Survey responses will be stored on a secure, password-protected computer shared drive. Mathematica’s Confidentiality Pledge, signed by all staff, is included in Attachment F.

A.11. Justification for Sensitive Questions

A key objective of Making Proud Choices! is to prevent teen pregnancy through a decrease in sexual activity and/or an increase in contraceptive use. Because this is the primary focus of the program, some questions on the baseline survey are necessarily related to these sensitive issues.

Table A11.1 lists the sensitive topics included on the baseline survey, along with a justification for their inclusion. Questions about sensitive topics will be drawn from previously successful youth surveys and similar federal evaluations (see Attachments A and B). The items have been carefully selected, guided by experience in judging whether the benefits of the measures outweigh concerns about heightened sensitivity among sample members, parents, and program staff to specific issues. Although these topics are sensitive, they are commonly and successfully asked of middle school and high school youth similar to those who will be in the Federal Evaluation of MPC!

Table A11.1. Summary of Sensitive Topics to be Included on the Baseline Survey and Their Justification

Topic | Justification
Sexual activity, incidence of pregnancy, and contraceptive use | Sexual activity, incidence of pregnancy, and contraceptive use are all key outcomes for the evaluation.
Drug and alcohol use and violence | There is a substantial body of literature linking various high-risk behaviors of youth, particularly drug and alcohol use, sexual intercourse, and risky sexual behavior. The effectiveness of various program strategies is expected to differ for youth who are and are not experimenting with or using drugs and alcohol (Tapert et al., 2001; Li et al., 2001; Boyer et al., 1999; Fergusson and Lynskey, 1996; Sen, 2002; Dermen et al., 1998; Santelli et al., 2001).


In addition, the baseline survey instrument will be designed so that only sexually active youth will receive most of these sensitive questions. The survey will ask all youth for background information and include a screening question about sexual experience. The survey will route youth who report ever having sexual experience to additional questions about sexual behavior and their use of contraceptives; those who report never having sex will be routed to other questions. Thus, many of the sensitive items related to sexual activity will only be asked of sample members who report being sexually active. This structure has been used successfully in other federally funded teen pregnancy prevention evaluations, such as the Evaluation of the Title V, Section 510 Abstinence Education Program, the Evaluation of Adolescent Pregnancy Prevention Approaches (PPA), the TPP Replication Study, and Personal Responsibility Education Program (PREP) Multi-Component Evaluation.

A.12. Estimates of the Burden of Data Collection

OAH is requesting three years of clearance for the Federal Evaluation of MPC!. Table A12.1 provides the estimated annual reporting burden for study participants as a result of the baseline survey for youth, the estimated annual reporting burden for staff as a result of the implementation and fidelity assessment, and the estimated annual burden for youth participating in focus groups.

Baseline Survey. It is expected that 3,900 youth across the 39 study schools will be eligible to participate in the study because they are enrolled in a required health class. Sample intake will take place in three waves over three school years (spring 2017, fall 2017-spring 2018, and fall 2018-spring 2019). We expect to recruit 70 percent of the eligible youth, for a total sample size at baseline of 2,730.

The expected response rate for the baseline survey is 95 percent, for a total of 2,594 completed surveys (865 per year). Based on experience with similar questionnaires, it is estimated that it will take youth 30 minutes (30/60 hour) to complete the baseline survey, on average. The total annual burden for this data collection is estimated to be 865 x 30/60 = 432.5 hours.

Implementation and Fidelity Assessment.

  1. Annual Burden for Program Staff

It is expected that across the 39 schools there will be up to three staff respondents per school, depending on the instrument (a school administrator and approximately two health teachers and/or health educators), for the various data components (i.e., the interview, the staff survey, and the fidelity assessment). We expect to conduct staff interviews with each of the three possible respondents, for a total of 117 respondents (39 schools x 3 respondents), or 39 per year. Each respondent will be interviewed once, for one hour during the site visit, for an annual burden of 39 hours. Two health teachers or health educators from each school ((39 x 2)/3 years = 26) will complete a staff survey, for 26 respondents annually. The staff survey will take 30 minutes to complete, for an annual burden of 26 x 30/60 = 13 hours. Administrative data on program attendance will be collected from the 52 (26 x 2) teachers or health educators providing the program in the two treatment study arms (i.e., in 26 schools), for 17.33 respondents annually. Teachers or health educators will provide attendance data for the 14 program sessions and will spend about one minute (1/60 hour) compiling attendance for each session. Annual burden hours are estimated to be (52 respondents/3 years) x 14 sessions x 1/60 hours = 4.9 hours. Teachers or health educators providing the program in the two treatment study arms will also be expected to complete a program session fidelity log to report the actual content that was covered and the program components that were completed. Completion of the fidelity log will take about 5 minutes per session, and annual burden hours are estimated to be (52 respondents/3 years) x 14 sessions x 5/60 hours = 19.4 hours.

The hourly wage rate of $20.76 represents the mean hourly wage rate for community and social service occupations (National Occupational Employment and Wage Estimates, Bureau of Labor Statistics, Department of Labor, May 2010).



  2. Annual Burden for Participating Youth

It is expected that about ten students in each of the 26 treatment schools will participate in a focus group at the time of the site visits (260 youth). Each focus group is expected to take one hour, yielding an annual burden estimate of (260 youth/3 years) x 1 hour = 87 hours. All youth in the study are expected to be under 18.

Overall Burden

Across the baseline survey and the implementation and fidelity assessment data collections, we estimate a total annual burden of 595.8 hours (and a cost of $1,583.99), as shown in Table A12.1.
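For transparency, the short sketch below simply reproduces the annual burden-hour and respondent-cost totals in Table A12.1 from the per-instrument figures reported there; the figures are taken directly from the table, and the code is an illustrative check rather than part of the approved burden methodology.

```python
# Reproduces the annual burden-hour and respondent-cost totals in Table A12.1.
HOURLY_WAGE = 20.76  # BLS mean hourly wage, community and social service occupations

# Per-instrument annual burden hours from Table A12.1, and whether the wage applies
# (youth respondents are not assigned an hourly wage in the table).
rows = {
    "Baseline survey": (432.5, False),
    "Staff interviews": (39.0, True),
    "Staff survey": (13.0, True),
    "Attendance protocol": (4.9, True),
    "Fidelity checklist": (19.4, True),
    "Youth focus groups": (87.0, False),
}

total_hours = sum(hours for hours, _ in rows.values())
total_cost = sum(hours * HOURLY_WAGE for hours, waged in rows.values() if waged)

print(f"Total annual burden: {total_hours:.1f} hours")      # 595.8 hours
print(f"Total annual respondent cost: ${total_cost:,.2f}")  # $1,583.99
```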



Table A12.1. Calculations of Burden Hours and Cost

Form Name | Type of Respondent | Number of Respondents | Number of Responses per Respondent | Average Burden Hours per Response | Total Burden Hours | Average Hourly Wage | Total Respondent Cost
Baseline survey of impact study participants | Participating program participants and control group participants | 865 | 1 | 30/60 | 432.5 | NA | NA
Master Topic Guide for Staff Interviews | School administrators, health teachers, and health educators | 39 | 1 | 1 | 39 | $20.76 | $809.64
Staff Survey | Health teachers and health educators | 26 | 1 | 30/60 | 13 | $20.76 | $269.88
Program Attendance Data Collection Protocol | Health teachers and health educators | 17.33 | 14 | 1/60 | 4.9 | $20.76 | $101.72
Program Fidelity Checklist | Health teachers and health educators | 17.33 | 14 | 5/60 | 19.4 | $20.76 | $402.74
Youth Focus Group | Participating program participants | 87 | 1 | 1 | 87 | NA | NA
Total | | | | | 595.8 | | $1,583.99



A.13. Estimates of Other Total Annual Cost Burden to Respondents and Record Keepers

These information collection activities impose no capital costs or operation and maintenance costs on respondents or record keepers.

A.14. Annualized Cost to Federal Government

Data collection will be carried out by Mathematica Policy Research, under contract with OAH to conduct the Federal Evaluation of MPC!, with a subcontract to Decision Information Resources to collect some implementation-related data. Data collection for the baseline survey and the implementation and fidelity assessment is incrementally funded. The current funding, and cost to the federal government, is $500,150.00.

A.15. Explanation for Program Changes or Adjustments

This is a new data collection.

A.16. Plans for Tabulation and Publication and Project Time Schedule

1. Analysis Plan

Baseline Survey. Data from the baseline survey will be used for two initial purposes. First, OAH will use the data to describe the study sample. This step will enable OAH to compare the characteristics of youth in the study with youth nationwide and provide guidance on how the study sample and findings might generalize to a broader policy setting. Second, OAH will assess whether random assignment resulted in similar baseline characteristics of youth, on average, for the treatment and control groups.

Ultimately, the baseline data will also be used in estimating program impacts on youth outcomes. The program impact estimates will rely primarily on data from the two planned follow-up surveys, which OAH will submit for OMB approval after the baseline survey is underway. With a random assignment design, unbiased impact estimates can be obtained by comparing mean outcomes for the treatment and control groups based on follow-up data alone. However, we can improve the precision of the impact estimates by controlling for baseline covariates, especially baseline measures of the outcomes, in our regression models. Regression adjustment can also address any differences between the treatment and control groups in baseline characteristics that arose by chance or from survey nonresponse. Baseline data will also be used for subgroup analysis, to assess whether program impacts vary by baseline characteristics.
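As an illustration of the regression adjustment described above, a covariate-adjusted impact model for this three-arm, school-level design could take roughly the following form; the notation is a sketch only and is not the study's final specification.

\[
y_{is} = \alpha + \beta_1 T_{s}^{HE} + \beta_2 T_{s}^{TCH} + X_{is}'\gamma + \varepsilon_{is}
\]

Here \(y_{is}\) is the follow-up outcome for youth \(i\) in school \(s\); \(T_{s}^{HE}\) and \(T_{s}^{TCH}\) indicate assignment of school \(s\) to the health educator-led or teacher-led MPC! arm; \(X_{is}\) is a vector of baseline covariates, including the baseline measure of the outcome where available; and \(\beta_1\) and \(\beta_2\) are the intent-to-treat impact estimates relative to the control condition. Because random assignment occurs at the school level, standard errors would need to account for the clustering of youth within schools.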

A detailed analysis plan is found in Attachment G.

Implementation and Fidelity Assessment. The instruments included in this OMB package for the implementation and fidelity assessment will yield data that will be analyzed using qualitative and quantitative methods to describe program implementation, assess the implementation fidelity and quality, and examine experience with program implementation. A thorough understanding of program implementation will provide context for interpreting program impacts, while a greater understanding of how programs can be implemented with high quality is expected to inform the next generation of programming.

The research team will create a coding scheme consisting of a hierarchy of conceptual categories and classifications linked to the evaluation research questions, dimensions of implementation, and program logic models. Team members will then use software (Atlas.ti) to assign codes to specific text in the electronic file of site visit notes and other documents. Coding the qualitative data in this way will enable the team to access data on a specific topic quickly and to organize information in different ways to facilitate the identification of themes and compile the evidence supporting them. As data collection proceeds, the coding scheme will be refined to better align it with both themes and topics that emerge from the data and with the research questions (Ritchie and Spencer, 2002).5 To facilitate analyses of patterns and themes across sites, we will also code key site-level characteristics, such as type of program model and characteristics of the youths served.

After all the qualitative data have been coded, we will use the software to retrieve data on the research questions and subtopics to identify themes and triangulate across data sources and individual respondents. Much of the meaning of the data will be discerned through descriptive analyses, both qualitative and quantitative, that organize data thematically; create summary statistics that characterize overall experiences in each site, as well as variations across and within sites; and examine themes and topics from multiple perspectives and highlight the similarities and differences among them (Patton, 2002).6 We will also explore relationships across themes (for example, relationships between the types of implementation challenges sites face and their staffing patterns and partnership arrangements).

2. Time Schedule and Publications

OAH expects that the Federal Evaluation of MPC! will be conducted over five years, beginning in September 2015. This request is for a three-year period beginning in fall 2016, and subsequent packages will be submitted as necessary for new collections or to extend collection periods. Below is a schedule of the data collection efforts for the baseline survey and the implementation and fidelity assessment, the focus of this ICR.

Table A.16.1. Timeline for Use of Data Collection Instruments

Instrument | Date of 60-Day Submission | Date of 30-Day Submission | Date Clearance Needed | Date for Use in Field
In-depth Implementation Study
Instrument 1: Baseline Survey | May 2016 | July 2016 | January 2017 | February 2017
Instrument 2: Master Topic Guide for Interviews | May 2016 | July 2016 | January 2017 | February 2017
Instrument 3: Staff Survey | May 2016 | July 2016 | January 2017 | February 2017
Instrument 4: Program Attendance Data Collection Protocol | May 2016 | July 2016 | January 2017 | February 2017
Instrument 5: Fidelity Facilitator Log | May 2016 | July 2016 | January 2017 | February 2017
Instrument 6: Youth Focus Group | May 2016 | July 2016 | January 2017 | February 2017



A.17. Reason(s) Display of OMB Expiration Date is Inappropriate

All instruments, consent and assent forms and letters will display the OMB Control Number and expiration date.

A.18. Exceptions to Certification for Paperwork Reduction Act Submissions

No exceptions are necessary for this information collection.

SUPPORTING REFERENCES

Baumgartner, R., and P. Rathbun. "Prepaid Monetary Incentives and Mail Survey Response Rates." Paper presented at the Annual Conference of the American Association for Public Opinion Research, Norfolk, VA, 1997.

Boyer, Cherrie B., Jeanne M. Tschann, and Mary-Ann Shafer. "Predictors of Risk for Sexually Transmitted Diseases in Ninth Grade Urban High School Students." Journal of Adolescent Research, vol. 14, no. 4, 1999, pp. 448-465.

Dermen, K. H., M. L. Cooper, and V. B. Agocha. "Sex-Related Alcohol Expectancies as Moderators of the Relationship between Alcohol Use and Risky Sex in Adolescents." Journal of Studies on Alcohol, vol. 59, no. 1, 1998, p. 71.

Fergusson, David M., and Michael T. Lynskey. "Alcohol Misuse and Adolescent Sexual Behaviors and Risk Taking." Pediatrics, vol. 98, no. 1, 1996, p. 91.

Groves, R. M., E. Singer, and A. D. Corning. "A Leverage-Saliency Theory of Survey Participation: Description and Illustration." Public Opinion Quarterly, vol. 64, 2000, pp. 299-308.

Li, Xiaoming, Bonita Stanton, Lesley Cottrell, James Burns, Robert Pack, and Linda Kaljee. "Patterns of Initiation of Sex and Drug-Related Activities among Urban Low-Income African-American Adolescents." Journal of Adolescent Health, vol. 28, no. 1, 2001, p. 46.

Martinez-Ebers, V. "Using Monetary Incentives with Hard-to-Reach Populations in Panel Surveys." International Journal of Public Opinion Research, vol. 9, 1997, pp. 77-86.

Santelli, John S., Leah Robin, Nancy D. Brener, and Richard Lowry. "Timing of Alcohol and Other Drug Use and Sexual Risk Behaviors among Unmarried Adolescents and Young Adults." Family Planning Perspectives, vol. 33, no. 5, 2001.

Sen, Bisakha. "Does Alcohol-Use Increase the Risk of Sexual Intercourse among Adolescents? Evidence from the NLSY97." Journal of Health Economics, vol. 21, no. 6, 2002, p. 1085.

Shettle, C., and G. Mooney. "Monetary Incentives in Government Surveys." Journal of Official Statistics, vol. 15, 1999, pp. 231-250.

Tapert, Susan F., Gregory A. Aarons, Georganna R. Sedlar, and Sandra A. Brown. "Adolescent Substance Use and Sexual Risk-Taking Behavior." Journal of Adolescent Health, vol. 28, no. 3, 2001, p. 181.

1 Jemmott, J. B., L. S. Jemmott, and G. T. Fong. “Abstinence and Safer Sex HIV Risk-Reduction Interventions for African American Adolescents: A Randomized Controlled Trial.” Journal of the American Medical Association, vol. 279, no. 19, 1998, pp. 1529–1536.

2 A separate ICR will be submitted for the 9-month and 15-month follow-up surveys.

3 The participant-facing name of the study is the Attitudes, Behaviors, and Choices (or ABC) Study. This name appears on the instrument (Instrument 1) and related consent and assent materials (Attachment D).

4 ACF received initial OMB approval for the PPA baseline survey on July 26, 2010 (OMB Control Number 0970-0360). In summer 2011, oversight of PPA was transferred to the Office of Adolescent Health (OAH) within the Office of the Assistant Secretary, and the project is now tracked with a different OMB Control Number (0990-0382). The OMB Control Number for the Teen Pregnancy Prevention Replication Study is 0990-0394. OMB approval for the PREP baseline survey was received on March 12, 2013 (OMB Control Number 0970-0398). OMB approval for the PAF baseline survey was received on August 30, 2014 (OMB Control Number 0990-0424) and approval for the Implementation Study was received on April 18, 2015 (OMB Control Number 0990-0428).

5 Ritchie, J., and Spencer, L. (2002). Qualitative data analysis for applied policy research. In Huberman, A.M., and Miles, M.B. The qualitative researcher’s companion. Thousand Oaks, CA: Sage Publications.

6 Patton, M.Q. (2002). Qualitative research and evaluation methods: Third edition. Thousand Oaks, CA: Sage Publications.
