Personal Responsibility Education Program (PREP) Multi-Component Evaluation

OMB: 0970-0398



U.S. Department of Health
and Human Services

Office of Planning, Research and Evaluation & Family and Youth Services Bureau,

Administration for Children and Families

7th Floor West, Aerospace Building

370 L'Enfant Promenade, SW

Washington, DC 20447

Project Officers: Clare DiSalvo, Dirk Butler



PART B: Justification for the Collection of Implementation Survey Data - Personal Responsibility Education Program (PREP) Multi-Component Evaluation

0970-0398

Draft

August 2014









CONTENTS

PART B: STATISTICAL METHODS FOR THE COLLECTION OF DATA

B.1. Respondent universe and sampling methods

B.2. Procedures for the collection of information

1. Data collection

2. Statistical methodology, estimation, and degree of accuracy

3. Unusual problems requiring specialized sampling procedures

4. Periodic data collection cycles to reduce burden

B.3. Methods to maximize response rates and deal with nonresponse

B.4. Tests of procedures to be undertaken

B.5. Individuals consulted on statistical aspects and individuals collecting and/or analyzing data




ATTACHMENTS

ATTACHMENT A: 60-Day Federal Register Notice

ATTACHMENT B: PREP Evaluation Description



INSTRUMENTS

INSTRUMENT #1: Implementation Survey Interview Topic Guide



PART B: STATISTICAL METHODS FOR THE COLLECTION OF DATA

The Family and Youth Services Bureau (FYSB) and the Office of Planning, Research and Evaluation (OPRE) within the Administration for Children and Families (ACF) in the U.S. Department of Health and Human Services (HHS) have contracted with Mathematica Policy Research and its subcontractors to conduct the Personal Responsibility Education Program Multi-Component Evaluation (PREP Evaluation). The purpose of the evaluation is to assess the implementation, outcomes, and impact of programs implemented as part of the Personal Responsibility Education Program (PREP). This package requests clearance for a second round of data collection for the evaluation’s Design and Implementation Study (DIS). For more information on statistical methods related to previously approved activities, see Information Collection Requests (ICRs) under OMB Control # 0970-0398.

B.1. Respondent universe and sampling methods

The study team will interview staff from four states for the Implementation Survey phase of the DIS. Through interviews with state grantee staff for the previous Design Survey phase of the DIS, we developed a broad, cross-state understanding of states’ plans to implement evidence-based programming under PREP. For the next phase of the DIS, we will focus more narrowly on providing an in-depth description of how a subset of states ensure that program providers implement high-quality programs with fidelity to their evidence-based designs. Therefore, rather than interviewing one respondent from each state that received a PREP grant (as was the case for the Design Survey), the study team will identify four states for in-depth analysis and interview multiple respondents within each state.

Selecting states. The study team will purposively select four states for participation in the Implementation Survey, using the key dimensions described below, to illustrate various structures and practices that may support PREP program implementation.

  1. State involvement in staff training, technical assistance, and program monitoring will be the study team’s primary consideration for state selection. During Design Survey interviews, some states reported that they directly oversee training, technical assistance, and monitoring activities, while others have contracted with organizational partners to undertake this work. By selecting states that vary in where this implementation support comes from, the study team expects to be able to detail the approaches states have taken to support program implementation and how the varying approaches may be perceived and experienced by program providers. As a first step in state selection, the study team will sort states into these two categories (those that directly oversee training and those that contract with third parties to do so), with the goal of selecting two states in each category.

To finalize the purposive selection of states, and to the extent possible, the study team will also look for variation among the states along the following additional dimensions:

  1. Proportion of grant funds devoted to supporting implementation. Using information from the 2011-2012 and 2012-2013 performance measures (collected as part of the previously approved Performance Analysis Study), the study team will calculate the proportion of grant funds each state PREP grantee devotes to supporting implementation, as a proxy for the relative importance grantees place on these efforts. Variation in this proportion is likely to be reflected in different support structures and in the intensity of practices. Within each of the two categories above, the study team will order states by the proportion of resources targeted to supporting implementation, with the intent of selecting states in each category that differ from each other in the share of the grant devoted to these efforts.

  2. Number of allowable program models. Some states require that all program providers implement the same program model, while others allow providers to choose the programs they prefer from among a list of evidence-based programs. The number of program models a state allows could influence the structures it puts in place to support program implementation, as well as its successes and challenges in sustaining effective support structures.

  3. Implementation setting. Most PREP programming is being implemented in schools, but in some states program delivery occurs exclusively in out-of-school settings. Support for program implementation may vary between states that primarily provide PREP programs during the school day and those that provide PREP at community-based organizations, clinics, and other congregate youth settings.

Until the study team examines states along these secondary dimensions, it cannot predict how these dimensions will add variation across the states within the two primary categories (defined by the state’s role in overseeing implementation). For example, the team cannot assume that, among the states using state staff to provide training and technical assistance and to monitor implementation, there will be one state that devoted a larger proportion of its grant to these efforts, allowed just one program model, and is implementing only in schools, and another that devoted a smaller proportion of its grant to these efforts, allowed all programs, and is implementing in a variety of settings. In applying the secondary considerations, the final goal will be to select four states that represent different combinations of the full set of dimensions described above, in order to examine implementation supports within different contexts to the extent possible.
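To make the selection logic concrete, the short Python sketch below illustrates one way the primary sort and the funding-proportion ordering described above could be represented. It is illustrative only and is not part of the approved methodology: the StateGrantee fields, state names, and numeric values are hypothetical, and the secondary dimensions (number of program models and implementation setting) would in practice be weighed qualitatively rather than mechanically.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class StateGrantee:
    """One state PREP grantee, described by the selection dimensions above.

    All field names and values are hypothetical; they stand in for data the
    study team would draw from the Design Survey and performance measures.
    """
    name: str
    directly_oversees_support: bool   # primary dimension: state staff vs. contracted partners
    support_share_of_grant: float     # proportion of grant funds devoted to implementation support
    allowable_program_models: int     # secondary dimension: number of allowable models
    school_based_only: bool           # secondary dimension: implementation setting


def select_states(states: List[StateGrantee]) -> List[StateGrantee]:
    """Sketch of the purposive selection: split states by the primary dimension,
    order each group by the share of grant funds devoted to implementation
    support, and take the states at the extremes of that ordering so the two
    states chosen in each category contrast on funding devoted to these efforts."""
    selected: List[StateGrantee] = []
    for oversees_directly in (True, False):
        group = sorted(
            (s for s in states if s.directly_oversees_support == oversees_directly),
            key=lambda s: s.support_share_of_grant,
        )
        if len(group) <= 2:
            selected.extend(group)
        else:
            selected.extend([group[0], group[-1]])  # lowest- and highest-share states
    return selected


if __name__ == "__main__":
    # Hypothetical states, for illustration only.
    candidates = [
        StateGrantee("State A", True, 0.05, 1, True),
        StateGrantee("State B", True, 0.20, 6, False),
        StateGrantee("State C", False, 0.08, 3, True),
        StateGrantee("State D", False, 0.25, 1, False),
    ]
    for s in select_states(candidates):
        print(s.name, s.directly_oversees_support, s.support_share_of_grant)
```

In practice, the study team would review a shortlist produced this way against the remaining dimensions before finalizing the four states.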

Selecting program providers. The study team will select four program providers from each state to participate in the Implementation Survey interviews. Interviewing four providers will help the study team capture variation in program experiences within states, in addition to capturing variation across them. For the Performance Analysis Study component of the PREP Evaluation, the study team is collecting data from providers about key implementation challenges and about how often they request technical assistance or implementation support from the state during the first two years of PREP program implementation. The study team will purposively select providers to capture a range in the implementation challenges and technical assistance needs they report. In making this selection, one additional criterion will also be considered. The study team collected systematic data on the number and types of allowable PREP program models from states during the Design Survey. In selecting the four program providers in each state, the study team will ensure inclusion of providers that together deliver a variety of PREP program models, although selection cannot ensure full coverage of all PREP models that may be implemented within each state. Ultimately, the four program providers selected within each study state will vary both in the implementation challenges and technical assistance requests they report and in the program models they deliver.


Identifying respondents. During Design Survey interviews, the study team collected information on who was overseeing the grant at the state level, and what other organizations were providing implementation support, quality assurance, and/or technical assistance and evaluation activities. During initial Implementation Survey discussions with states, the study team will ensure that these data are accurate and complete and will adjust each state’s respondent list for the training and technical assistance providers and program evaluators as necessary. The study team will work with the state grantee to obtain the proper contact for each of the selected program providers.

B.2. Procedures for the collection of information

1. Data collection

The study team will conduct semi-structured telephone interviews with staff involved in PREP program implementation and support at a variety of levels to capture multiple perspectives on the effectiveness of the structures and practices that are in place. This will also ensure that the study team members understand not only how service delivery and support processes are intended to work, but also how they actually work. Based on program structure and staffing information collected during Design Survey interviews, we expect that interview respondents will include: (1) state grantee lead staff, (2) training and technical assistance staff, (3) evaluator staff, and (4) program provider managers.

In total, we anticipate that the study team will interview an average of 8 respondents per state, for a total of 32 respondents across the 4 selected states. While the specific respondents in each state will likely vary, we expect the 8 respondents per state to include four state-level staff (one state grantee respondent, two training and technical assistance respondents, and one evaluator respondent) and one manager from each of the four program providers selected from around the state.

The data collection instrument attached to this submission will guide interview arrangement and execution (see Instrument 1). The study team will send the lead state grantee staff in each selected state information about: (1) the PREP Evaluation and the importance of their participation as a requirement of receiving the PREP grant, (2) the specifics of the Implementation Survey, and (3) proposed interview dates and times (see Attachment C). If the study team is unable to schedule the interview via email, team members will call the state grantee administrator to establish an interview date and time. During this phone call or during the interview itself, the study team will also ensure that the list of other relevant respondents from the state, identified as part of the Design Survey, is accurate and complete. The study team will also confirm these respondents’ contact information, as well as the contact information for the selected program providers. After the interview, the study team will send similar emails to pertinent technical assistance provider staff, evaluator staff, and program providers (see Attachment C). Attached to all initial contact emails will be a summary of the PREP Evaluation (see Attachment B).

Part A of this submission lists the topics that the study team will explore during the semi-structured telephone interviews. The specific questions asked by the study team will vary by respondent type, but all questions will remain within the scope of the constructs detailed in attached Instrument 1. Each interview will last for an average of one hour.

2. Statistical methodology, estimation, and degree of accuracy


The Implementation Survey does not require statistical methodology or estimation. The data collected from telephone interviews will be analyzed using qualitative and descriptive methods.

3. Unusual problems requiring specialized sampling procedures

There are no unusual problems requiring specialized sampling procedures.

4. Periodic data collection cycles to reduce burden

There will be only one cycle of data collection.

B.3. Methods to maximize response rates and deal with nonresponse

We expect to achieve a 100 percent response rate for the Implementation Survey phase of the Design and Implementation Study for several reasons. First, states agreed to participate in the PREP Evaluation as a requirement of receiving a PREP grant and should therefore respond positively to Implementation Survey interview requests. Second, the study achieved a 100 percent response rate for the Design Survey phase of the DIS, which had a respondent population and data collection mode similar to those planned for the Implementation Survey. Third, the respondents who will be interviewed for the Implementation Survey have a vested interest in the success of the PREP program and are thus likely to be motivated to participate in interviews about maintaining program quality and fidelity. Finally, interviews will be scheduled well in advance and around respondents’ schedules to ensure their availability.


B.4. Tests of procedures to be undertaken

The Implementation Survey topic guide will be very similar in length and style to the protocols used for the Design Survey, which were approved by OMB on March 7, 2012 (under OMB Control # 0970-0398) and achieved a 100 percent response rate. Nonetheless, to ensure that the Implementation Survey topic guide is used effectively, and that it yields comprehensive and comparable data across states, senior research team members will conduct pilot interviews (with fewer than nine respondents total) before any other interviews are conducted. The purpose of the pilot test is to ensure that the topic guide includes appropriate probes that assist interviewers in delving deeply into topics of interest, and that it does not omit relevant topics of inquiry. In addition, using the topic guide during a pilot interview will enable the research staff leading this task to assess whether the topic guide is practical, given the amount of data to be collected and the time allotted for each interview. Adjustments to the topic guide will be made as necessary, and updates will be submitted to OMB.

Based on the pilot experience, the study team will train all Implementation Survey interviewers on the topic guide to ensure (a) a common understanding of the key objectives and concepts and (b) fidelity to the instrument. The training session will cover topics such as the Implementation Survey’s purposes and research questions, the topic guide, procedures for scheduling and conducting interviews (including a review of interview techniques and procedures for protecting confidentiality), and post-interview documentation.

B.5. Individuals consulted on statistical aspects and individuals collecting and/or analyzing data

The names and contact information of the persons consulted on statistical aspects and individuals collecting and/or analyzing data for the Implementation Survey are:

Clare DiSalvo

Office of Planning, Research, and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services

370 L'Enfant Promenade, SW

7th Floor West

Washington, DC  20447

(202) 401-4537


Dirk Butler

Family and Youth Services Bureau

Division of Abstinence Education

U.S. Department of Health and Human Services

370 L’Enfant Promenade, SW

Washington, DC 20447

(202) 260-2242

Robert Wood

Mathematica Policy Research

P.O. Box 2393

Princeton, NJ 08543-2393

(609) 936-2776


Susan Zief

Mathematica Policy Research

P.O. Box 2393

Princeton, NJ 08543-2393

(609) 275-2291


Gretchen Kirby

Mathematica Policy Research

1100 1st Street, NE, 12th Floor

Washington, DC 20002-4221

(202) 484-3470

Jessica Ziegler

Mathematica Policy Research

P.O. Box 2393

Princeton, NJ 08543-2393

(609) 275-2291




