Attachment G


EVALUATION OF ADOLESCENT PREGNANCY PREVENTION APPROACHES

KEY QUESTIONS AND RESPONSES
(Frequently Asked Questions)

Project Synopsis

The evaluation of Adolescent Pregnancy Prevention Approaches (PPA) is being undertaken to expand available evidence on effective ways to reduce teen pregnancy. The evaluation is being conducted under contract from the U.S. Department of Health and Human Services (DHHS), Administration for Children and Families (ACF), by Mathematica Policy Research and its subcontractors: Child Trends, the National Campaign to Prevent Teen and Unplanned Pregnancy, Twin Peaks Partners LLC, Public Strategies, Inc., and the National Abstinence Education Association. The evaluation will document and test a range of pregnancy prevention approaches, including comprehensive sex education, abstinence education, and STD/HIV prevention programs, in up to eight program sites. Program impacts will be estimated using a random assignment design, involving random assignment of either schools or individuals depending on the program setting. Overall, the evaluation will be based on a sample of as many as 10,800 youth. The evaluation team will collect baseline information when youth are enrolled and two waves of follow-up data on outcomes. Comparison of outcomes for the program and control groups will indicate the effectiveness of the programs in changing rates of sexual activity and abstinence; the incidence and risks of teen pregnancy, births, and STDs; and rates of depression, alcohol and drug use, school completion, and other related outcomes.

Purpose and Goals of the Evaluation

  1. What is the purpose of the project?

The purpose of this project is to evaluate a variety of strategies designed to prevent and reduce pregnancy and related risks among teens in the U.S. By thoroughly documenting these program approaches and rigorously testing their effectiveness, the project is intended to increase our understanding of the effectiveness of a range of interventions.

  2. Why is this evaluation needed? Don’t we know what works?

Most evaluations of teen pregnancy prevention approaches have relied on nonexperimental methods, which cannot rigorously establish causal links between an intervention and observed outcomes. Some evaluations have been randomized experiments, but these are rare and have produced mixed results. In some cases, such evaluations have failed to detect impacts, but they used small samples that may have made detection of true effects unlikely. Many studies are dated, have examined approaches now bypassed by further programmatic developments, or have focused only on limited populations. The recent DHHS study of Title V, Section 510 abstinence-until-marriage education programs conducted by Mathematica had some of these limitations: it examined programs no longer considered “state-of-the-art,” and those programs served only young students (e.g., middle school students).

  3. There was already an evaluation of abstinence education programs that showed no impacts, so why should we evaluate abstinence education programs again?

The evaluation of the impact of four Title V, Section 510 abstinence-until-marriage programs, completed in 2007, focused on long-term outcomes four to six years after students in upper elementary and middle school were enrolled in the programs. The main findings were:

  • The programs had no effect on whether or not youth engaged in sex or remained abstinent.

  • Program youth were no more likely to have engaged in unprotected sex than control group youth.

The study concluded that programs serving such young children may not be sufficient to alter behavior later, when youth are older and sexual activity becomes more common. The programs did not continue as youth entered high school, when a greater number of youth begin contemplating and engaging in sex.


A single study such as this, however, does not usually provide a definitive answer to important research questions. Scientific evidence has to accumulate over multiple studies before a firm consensus can be reached about its implications.


Moreover, the current evaluation will go beyond the Title V evaluation of abstinence-until-marriage programs in several ways. First, it will focus mostly on high-school-age youth, who more often face immediate decisions about abstinence and sexual activity. Second, new abstinence-until-marriage programs have been developed since the earlier evaluation and have not been rigorously studied. Third, the current evaluation will assess the effectiveness not only of abstinence-until-marriage education, but also of other program types, including comprehensive sex education and HIV/STD prevention.


Finally, this study will provide information that responds to the pressing decisions that local communities face. Although earlier research did not find positive results from abstinence-until-marriage programs, many communities are still likely to be seeking effective programs of this type, and need more evidence about what might work. More evidence about the effectiveness of other program types is also widely needed by school and community leaders.

  4. Will this evaluation tell us whether one approach is more effective than another?

The evaluation will test the effectiveness of different types of pregnancy prevention programs, including abstinence education, comprehensive sex education, and STD/HIV education and prevention. The focus of the evaluation is to measure how effective each individual program or program type is in changing behavioral outcomes, compared to outcomes for youth in each site who are part of the “control group” (i.e., those who have access to services as usual, but not the intensive pregnancy prevention program that is being tested).

  5. For whom will this evaluation be useful, and how?

On a broad level, the findings of the evaluation will be of interest to the general public, to policymakers, and to organizations interested in teen pregnancy prevention. Findings on the impacts of the programs will provide schools and educators with reliable evidence on the effectiveness of different pregnancy prevention programs and types of approaches. Findings on the implementation of the programs will offer details on the programs’ setting and operation and on the factors that may affect their success. These latter findings will be of interest to schools, community organizations, foundations, and government agencies interested in replicating successful program approaches.

How Programs will be Selected for Evaluation

  6. How many programs will be selected for evaluation, and how? What will be the process?

The evaluation will be conducted in up to eight sites, and each site will test one program. Thus a total of up to eight programs will be tested. The evaluation team and ACF will select the programs to be tested, with several goals in mind: (1) testing a range of program approaches to maximize chances of finding a variety of effective ones, (2) testing programs that already appear promising based on past research and operational experience, and (3) filling gaps in present evidence about the effectiveness of teen pregnancy prevention approaches. To identify candidate programs, the evaluation team will first review existing research, conduct interviews with a wide range of stakeholders, and visit programs currently in operation. These stakeholders will include public health organizations, professional associations, educational organizations, youth advocacy organizations, researchers, and government agencies.


The next step will be finding local sites where such programs can be tested. The evaluation team and DHHS will publicize the evaluation, and make extensive inquiries to school districts and relevant stakeholder organizations to identify potential sites. The team will then conduct discussions at potential sites with program staff, community partners, pregnancy prevention coalitions, and program or curriculum developers to determine whether they are interested in the evaluation and whether the site would provide the conditions necessary for a rigorous evaluation. The evaluation team will look for sites where programs are well managed, staff are capable of adhering to a defined program curriculum, and there will be a clear difference between the approach being tested and the alternatives available to a control group. The evaluation team will narrow the number of possible sites, and ACF will select those that meet the overall objective of testing a variety of strong and promising programs. Early sites may be selected in time to begin the evaluation in fall 2010, and later sites will start in fall 2011. Some sites may take two years to enroll a full sample of participants, so study enrollment will most likely run through fall 2012.

  7. Do local organizations get to pick the programs they want to run?

Local school districts or other interested organizations can decide whether or not they want to be involved in the study. Potential sites can suggest a program for testing, and the evaluation team will determine, with ACF, whether it meets the overall criteria for the evaluation.

  8. What kinds of programs will be considered?

A range of programs to prevent teen pregnancy will be considered, including abstinence-until-marriage education, abstinence-based education, comprehensive sex education, and STD/HIV education and prevention. The programs tested may be delivered at schools or in the community. Programs delivered at schools could include required or elective classes that meet during the regular school day, or they could include clubs or other groups that meet outside the regular school day.

  9. What kinds of organizations will be operating the programs tested at local sites?

Programs will be offered by school districts or community organizations that can support an evaluation encompassing several hundred youth. Most sites will test school-based programs. These sites may consist of programs offered by a single large district or groups of smaller districts. Community-based sites are most likely to be voluntary programs that serve youth either at schools (outside the normal school day) or at a standalone facility, such as a youth center, place of worship, or health clinic.

  10. How will the evaluators know they are picking strong or promising programs? What criteria will be applied?

Three kinds of criteria will be used. First, programs must have some evidence behind them. It will be important to focus the evaluation mostly on programs that have been the subject of some prior research showing effectiveness. Earlier research, of course, may not have been large scale or as rigorous as what is proposed for this new evaluation, but some indication of promise will be a selection criterion. Extensive consultations with policymakers, program experts, and researchers will be held to identify programs with the potential for wide application and use if the evaluation yields strong evidence of their effectiveness. These consultations may also identify programs already in widespread use that are based on scientifically grounded curricula but that have not yet been subjected to any evaluation testing. These programs may present important opportunities to add to the evidence on pregnancy prevention approaches.


Second, the sites where programs would be tested must offer the right conditions for a rigorous study. A site must be able to implement a program with fidelity for a large enough sample of youth (sample sizes are expected to be approximately 1,600 youth in school sites and approximately 600 youth in other sites). The site will also need to support a random assignment study design. The program to be tested in an evaluation site has to be clearly distinct in content and intensity from what is already available to youth in that site.


Third, the evaluation team and ACF will select a mix of programs to meet overall evaluation goals. Programs of different types will be selected to increase chances of identifying a variety of effective programs and thus widen the choice of evidence-based strategies for states and local communities.



  11. Why would developers or local users of pregnancy prevention programs want to be evaluated?

The developers of a pregnancy prevention curriculum or program may be interested in helping to find evidence of the program’s effectiveness. They may recognize that expanding use of their curriculum, and securing resources to make that happen, could hinge on the availability of rigorous evidence.


Local school districts or community organizations may wish to showcase the work they are doing, but they may also be motivated by the need for stronger evidence of program effectiveness. They may wish to play a pioneering role in research to improve approaches to reducing teen pregnancy. Being part of the evaluation will give these organizations special insights into ways of assessing program operations and effectiveness, and the results can help them make decisions about which programs they will choose to use in the future. They will also have opportunities to meet with representatives of other sites and exchange lessons learned about implementing their respective programs.

  12. If my program wants to get involved, whom should I contact?

Contact the director or co-director of the evaluation at Mathematica Policy Research. The project director is Alan Hershey (telephone 609-275-2384, email [email protected]). The co-director is Chris Trenholm (telephone 609-936-2796, email [email protected]).

Target Population for Whom Programs will be Tested

  13. Will you test programs just for youth who are attending school, or others as well?

The evaluation will test both school-based and community-based programs. It is anticipated that most of the selected sites will be school districts offering required classes to youth still in school. However, some sites are likely to be in community settings (for example, at a health clinic), and they may serve youth no longer in school, youth attending school, or a mix.

  14. Will the programs be voluntary or mandatory? Will youth have to take an abstinence education or comprehensive sex education class?

Each site will determine the type of program that it wants to have (e.g., abstinence education, abstinence-based education, comprehensive sex education, or STD/HIV prevention).

In some sites, the evaluation will focus on a program that is required of youth, such as a required high school health course. In such sites, with school district approval, a random process will be used to decide which schools will provide the new curriculum of interest and which will operate as usual (for example, by teaching an existing, less intensive health curriculum). Schools that operate as usual will form the control group for the evaluation. If parents or youth object to the new program curriculum, local school district policy would determine the procedures available to opt out of the class.

In other evaluation sites, programs will be voluntary. Voluntary programs could include elective classes during or after the school day, delivered by school staff or in collaboration with local community organizations. In other cases, these voluntary programs could be offered by community organizations at community sites rather than schools. Interested youth, with the consent of their parents if they are under 18, can apply or be referred to the program, and a random assignment process—like a coin toss—will be used to determine whether they can enroll in the program. Youth who are not chosen for the program will form the control group for the evaluation. Members of the control group would still be able to receive other services that they find on their own.

Information to be Collected and How it will be Collected

  15. What outcomes will be used to judge program effectiveness?

Program effectiveness will be judged by differences in key outcomes: rates of abstinence and sexual activity; frequency of sexual activity risk behaviors related to pregnancy; and the incidence of pregnancy, births, and STDs. Other outcomes will also be measured, such as the incidence of depression, alcohol and drug use, school completion, and attitudes and knowledge that may affect behavioral outcomes. Individual youth will only be asked about outcomes relevant to their circumstances; for example, youth who say they are abstinent will not be asked further questions about sexual activity. The same overall structure of outcomes will be used in all sites to maintain the consistency of the evaluation.

  16. How will information about these outcomes be collected?

Most outcome data will be collected through self-administered follow-up questionnaires completed by study participants approximately one year and three years after youth are enrolled in the study. These surveys will be administered to members of both the program and control groups in each site. If a site is selected that runs a program emphasizing STD/HIV prevention, most likely in a health clinic or similar setting, we will consider collecting urine samples and cheek swabs to measure the incidence of disease. If such data are collected, they would be gathered by trained health professionals on site, and procedures would be put in place for appropriate notifications and counseling for any positive test results.



  17. What protections will there be for privacy?

Surveys will be administered by trained professional staff from Mathematica Policy Research, and local school and program staff will not have any access to survey response data. The baseline and follow-up surveys will be self-administered, so youth will be able to respond in privacy. All identifying information will be kept separate from the questionnaires. Questionnaires will have no identifying information on them, and participants will place them in envelopes and seal them before turning them over to the field survey staff. Participants will be tracked through a unique ID assigned to them at the beginning of the evaluation. Only a few evaluation team members will have the ability to identify participants by these ID numbers, and only for a limited period. All data will be housed in a secure location and every member of the evaluation team will sign a statement pledging to protect the confidentiality and security of these data. All files created from these data will be handled by team members trained in security awareness, and the files will be stripped of individual identifiers such as name or address. Members of the evaluation team will further ensure that no individual participant can be identified based on his or her demographic characteristics such as age, gender, or race/ethnicity.
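
To make the separation of identifying information from survey responses concrete, the sketch below shows one way such a split can be structured. It is illustrative only: the field names, records, and function are hypothetical assumptions, not a description of the evaluation’s actual data systems.

    # Illustrative sketch only: identifying information and survey responses
    # are held in separate stores, linked solely by the study ID. All names,
    # fields, and values here are hypothetical.

    # Held securely, accessible only to the few team members authorized to
    # link study IDs to individuals, and only for a limited period.
    identifiers = {
        "ID-0001": {"name": "participant name", "address": "mailing address"},
    }

    # Analysis data: responses carry the study ID but no name or address.
    responses = {
        "ID-0001": {"wave": "baseline", "answers": {"q1": "...", "q2": "..."}},
    }

    def build_analysis_file(responses):
        """Return analysis records containing only the study ID and answers."""
        return [{"study_id": sid, **record} for sid, record in responses.items()]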


There is a possibility that, in just one site, the evaluation might involve collecting biological (or biomarker) data such as a urine sample and cheek swab, to measure the prevalence of STDs, and strong protections of participants’ privacy would be put in place at this site. Such measures would be considered only to test a program focused on STD/HIV prevention, most likely in a health clinic or similar setting with a trained nurse practitioner on site, where such biomarker data are routinely collected as part of existing program services. The program would have to conform to national standards for preserving privacy and confidentiality in service delivery and handling of protected health information. The evaluation team would work with program staff and health officials to establish a confidential system for collecting and handling specimens, obtaining test results, and communicating them to participants. Youth (and parents if the youth are minors) would have the option to withhold consent for collection of biomarkers, but still be able to participate in the program if they consent to the basic study surveys.

  18. Are self-reported data reliable?

Most studies of adolescent sexual activity rely on self-report, because youth are in the best position to report on their own behavior. Self-reported data across a variety of such studies generally indicate consistent results. Even if overall rates of reported behaviors are believed to be accurate, of course, the accuracy of each individual’s responses cannot be verified, and some reporting error is likely. We will minimize error by taking steps such as maximizing privacy in survey administration, wording questions neutrally, and using developmentally appropriate language. We will also run pre-tests to assess the quality of the responses on self-report surveys. Moreover, even if there is some inaccuracy in reporting, it can be expected to occur in both the program and control groups, and thus should not threaten our ability to estimate differences between the two groups and program impacts.

  19. Will youth be forced to answer sensitive questions about things they have never done (or heard of)?

No. Youth will be instructed that responding to the survey is voluntary, and even if they participate in the baseline or follow-up surveys, they can choose to skip any questions they do not wish to answer. Youth will be asked to report on whether they have ever had sex. However, the survey will be designed so that additional questions regarding sexual activity are asked only of youth who report having had sex, and not of youth who have remained abstinent.
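
As a purely illustrative sketch of this skip pattern (the question wording, names, and structure below are hypothetical, not the actual survey instrument), the routing logic works roughly as follows:

    # Hypothetical sketch of survey skip logic: detailed questions about
    # sexual activity are routed only to respondents who report having had
    # sex. Any question, including the first, may be skipped.

    def route_questions(ever_had_sex):
        """ever_had_sex is True, False, or None (question was skipped)."""
        questions = ["Q1: Have you ever had sexual intercourse?"]
        if ever_had_sex is True:
            # Follow-ups asked only of youth who report having had sex.
            questions.append("Q2+: follow-up questions about sexual activity")
        return questions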


Prior to completing any surveys, youth and their parents will be told about the nature of the evaluation and the topics covered in the surveys and asked to provide their consent to participate in the evaluation. Only those who consent will be asked to complete the surveys. In some sites, consent to participate in the evaluation will be required in order for youth to have an opportunity to participate in the program.

  20. Won’t you be telling kids it’s permissible to have out-of-wedlock sex if, on a survey, you ask them if they’ve had it? Therefore, won’t this project actually be encouraging teen sex?

All questions will be phrased in an objective and neutral manner, so as not to suggest either encouragement or discouragement of sexual activity. This neutrality in survey questions is essential to uncovering the effect of the program without letting the survey wording affect responses. In addition, the survey is designed so that additional questions regarding sexual activity are asked only of youth who report having had sex. They are not asked of youth who have remained abstinent.


Moreover, the only way to gather scientific evidence on the effectiveness of pregnancy prevention approaches is to conduct rigorous evaluations like this one. A core ingredient of such evidence is clear measures of the specific behaviors the intervention aims to modify and might affect. Without measures of behavior, we will not be able to determine the effectiveness of programs.

How the Evaluation Will be Conducted

  21. Why is random assignment important as a feature of the evaluation?

The evaluation will use random assignment—like a flip of a coin—to place youth in one of two groups: (1) a program group that is eligible to participate in the program being evaluated, or (2) a control group that is not eligible. Random assignment ensures that the two groups will be alike at the time of enrollment in the study, not only in ways we can measure, but in ways that cannot be easily observed or measured. As a result, it will be possible to attribute later differences in outcomes between the two groups to the intervention with a known degree of statistical confidence.
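
A minimal sketch of this logic, assuming individual-level assignment by a fair coin flip (illustrative only, not the evaluation’s actual assignment software or analysis plan):

    import random

    def randomly_assign(youth_ids, seed=None):
        """Assign each youth to 'program' or 'control' by a coin flip."""
        rng = random.Random(seed)
        return {yid: rng.choice(["program", "control"]) for yid in youth_ids}

    def estimated_impact(outcomes, assignment):
        """Impact estimate: mean outcome in the program group minus the mean
        outcome in the control group (assumes both groups are non-empty)."""
        groups = {"program": [], "control": []}
        for yid, value in outcomes.items():
            groups[assignment[yid]].append(value)
        return (sum(groups["program"]) / len(groups["program"])
                - sum(groups["control"]) / len(groups["control"]))

Because assignment is random, this difference can be attributed to the program, and in practice analysts would attach a standard error to it to express the degree of statistical confidence.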

  22. How will random assignment be carried out?

Two methods of random assignment will be used, depending on the kind of program in each site. If the site is a school district that will be offering the program as a required class, then the evaluation team will work with the district to randomly assign whole schools to the program or control group. All students in the relevant age range in program schools will be subject to the class requirement, while students in control schools will not be offered the class. In a site where a community organization is offering a voluntary program, each applicant or referred youth will be randomly assigned individually to be in the program or control group. In both situations, the impact of the program will be determined by measuring the difference in key outcomes between the two groups.


These two methods will be used because each is best suited to a rigorous evaluation in different circumstances. To evaluate in-school classes or programs that are required of all students in a certain grade or age range, random assignment of schools is preferred. In this design, all students in that grade or age range in a school are either subject to the program requirement or not subject to it. As a result, this design eliminates concerns about what evaluators call “contamination”—the risk that youth in the control group who attend the same school as those in the program may be exposed to program messages. Such exposure could undermine the ability to detect program impacts. For voluntary programs targeting a small group of youth (in after-school or community settings), contamination of the control group is not as likely as in school-wide programs. In such circumstances, random assignment of individuals is preferable because it allows for a more precise estimate of a program’s impact for a given sample size.
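
The school-level variant can be sketched in the same illustrative spirit (the school names and the simple unstratified draw below are assumptions; an actual design might match or stratify schools before assignment):

    import random

    def assign_schools(schools, n_program, seed=None):
        """Randomly select n_program whole schools for the program group;
        the remaining schools form the control group."""
        rng = random.Random(seed)
        program = set(rng.sample(schools, n_program))
        return {s: ("program" if s in program else "control") for s in schools}

    # Example: four schools, two assigned to each group.
    # assign_schools(["School A", "School B", "School C", "School D"], 2)

Assigning whole schools avoids contamination, but for a given number of youth it yields less statistical precision than individual assignment, which is why the individual-level design is preferred where contamination is unlikely.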

  23. Will random assignment keep kids from programs that they need?

The evaluation will not make youth worse off in terms of the program choices available to them. Where whole schools are randomly assigned to the program or control group (for example, in the case of a required health class), students in schools selected for the program group will receive a new curriculum, and students in control schools will continue to be offered their existing curriculum or services. Where individual youth are assigned to the program or control group (for example, in the case of a voluntary after-school or community program), a condition of the evaluation is that there must be more youth interested in the program than there are slots available. Interested youth will be chosen through a random process to participate in the program, and the same number of youth will be served as before the evaluation.

Only promising programs will be selected for the evaluation, but stronger evidence is still needed to help decision makers choose interventions that are likely to work. Given a lack of strong scientific evidence, conducting an evaluation involving program and control groups is the best path to expanding opportunities for youth to participate in proven programs.

  24. What information will be available on the details of the programs and how they operate?

The study implementation report will provide a detailed description of program characteristics such as program design and intent, key resources required to implement the program, key features and activities as planned and as implemented, completeness and quality of program delivery, and population targeted and served, as well as characteristics of communities in which they operate.

The Role of Programs and Sites in the Evaluation

  25. Will the developers of program curricula be involved in the evaluation?

To ensure objectivity and consistency across programs and sites, the data collection and analysis will be undertaken entirely by the evaluation team led by Mathematica Policy Research, and this team will not include program developers. However, the developers of program curricula may play other important roles in the project. For example, they may be in the best position to train program staff at evaluation sites in how to use their program materials and convey key messages to participating youth. The role of program developers will be discussed and determined on a site-by-site basis to ensure that the site gets the support it needs from the curriculum developer to implement the program it selects as well as possible.

  26. What will program sponsors and staff have to do in the evaluation?

School districts involved in the evaluation will be asked to provide overall information about their schools, so the schools can be randomly assigned to test the program curriculum or to be in the control group. Staff in the schools will be asked to provide information about class schedules and requirements for taking the pregnancy prevention class. School staff will obtain informed consent from parents for their children to be involved in the study (in both program and control schools), with support from the evaluation team. When it is time to conduct baseline or follow-up surveys in schools, Mathematica Policy Research will send its own staff to the schools to administer the questionnaires. However, survey staff will need the school’s cooperation in scheduling the survey sessions and assembling students for the survey (and possibly in locating students for follow-up). School district officials will be asked to provide administrative data on students in the sample, including transcript information on grades, course-taking, and attendance.


Where the site’s program enrolls youth continuously rather than only at the start of a school term, program staff will be asked to obtain consent from potential participants, arrange for completion of baseline questionnaires, and securely transfer completed questionnaires to the evaluation team. Staff will also be asked to help the evaluation team arrange for participants in the study to complete follow-up questionnaires.

  27. Will individual staff be assessed? Will these assessments be made public?

No. The purpose of the evaluation is to assess the impacts of the overall program on participants, and to document how the programs are delivered. No information will be published about the work or performance of individual staff. As part of the evaluation, the evaluation team will visit sites and talk to individual staff about how the programs work, but the comments of individual staff will be kept confidential.

  28. Will individual sites be compared with other sites? Will individual schools be compared with other schools? Will these comparisons be made public?

The evaluation is not designed to compare programs or sites against one another, and no comparisons will be made in this way. In sites where whole schools are randomly assigned to either a program group (that receives the program of interest) or a control group (that does not), program impacts are measured by the overall difference between these two groups. The evaluation will not make any comparisons between individual schools.

  29. Will there be any compensation to local programs for the cost and burden of participating in the evaluation?

Yes. The evaluation has funds to compensate local programs for the work they carry out to support the evaluation. The agreements that are negotiated with sites will include specification of the work to be performed and the amount of compensation.



  30. What will the youth involved in the evaluation be expected to do?

The individual youth who are enrolled in the evaluation, whether in the program or control group, will be asked to complete questionnaires. Our current plan is for three questionnaires: a baseline and two follow-up questionnaires. The first questionnaire will be completed at baseline (i.e., before the program begins), shortly after the youth or their parents provide consent. The first follow-up questionnaire will be administered about one year later, or at the end of the first school year. The second follow-up questionnaire will be administered two years after the first, or about three years after youth are enrolled in the study. The questionnaires will be paper-and-pencil forms that the youth will complete, in most cases, in group settings in schools. For youth who are not available for group survey sessions, trained interviewers from the evaluation team will administer the questionnaire by telephone.

Schedule for the Evaluation

  31. How long will the overall project take?

This is a multi-year project that started in September 2008 and will take eight years to complete. Sites will enter the study at various times, so enrollment of youth will occur over several years. Early sites may begin enrolling their samples in fall 2010, and later sites in fall 2011. Some sites may take two years to enroll participants, so study enrollment will most likely run through fall 2012. The evaluation team will administer follow-up surveys, which will similarly be spread over several years because of the staging of sites’ entry points into the study and enrollment of study samples. Thus, the follow-up surveys will be administered in the earliest sites beginning in spring 2011, but the final follow-up survey will not be completed until spring 2015.

  32. When will information about the implementation of the programs be available?

A report describing how the programs were implemented is scheduled to be available shortly after all of the sites have enrolled their samples or begun delivering the program.

  33. When will results about program impacts be available?

Two reports on program impacts will be prepared. The first will be completed in spring 2014, after the first follow-up survey has been completed. The final impact report will be completed in 2016, after the second follow-up survey is complete.

More Information about the Evaluation

  34. Who is conducting the evaluation?

The evaluation is being conducted by Mathematica Policy Research and its subcontractors: Child Trends, the National Campaign to Prevent Teen and Unplanned Pregnancy, Twin Peaks Partners LLC, Public Strategies, Inc., and the National Abstinence Education Association. The work is being performed under a contract awarded in fall 2008 by the U.S. Department of Health and Human Services, Administration for Children and Families.


Questions on the project—including sites or programs interested in participating—should be directed to the director or co-director of the evaluation at Mathematica Policy Research. The project director is Alan Hershey (telephone 609-275-2384, email [email protected]). The co-director is Chris Trenholm (telephone 609-936-2796, email [email protected]).

  35. Is there a website with more information about the evaluation?

A website where interested parties can track the progress of the evaluation is accessible at: www.pregnancypreventionapproaches.info.

  36. Is this the only study of approaches to preventing teen pregnancy that DHHS is sponsoring?

DHHS has a range of research and evaluation activities underway related to teen pregnancy prevention. For more information on other DHHS studies, contact Seth Chamberlain at 202-260-2242 or [email protected].

