
Evaluation for the Partnerships for Success Program

OMB: 0930-0348


PARTNERSHIPS FOR SUCCESS PROGRAM EVALUATION FOR PREVENTION CONTRACT

SUPPORTING STATEMENT

A. JUSTIFICATION

A.1. Circumstances of Information Collection

The Substance Abuse and Mental Health Services Administration’s (SAMHSA) Center for Substance Abuse Prevention (CSAP) is requesting approval from the Office of Management and Budget (OMB) for new data collection activities for SAMHSA’s Program Evaluation for Prevention Contract (PEP-C) evaluation of the Strategic Prevention Framework (SPF) Partnerships for Success (PFS) programs. PEP-C is scheduled to conduct, through September 2018, a national cross-site evaluation of SPF-PFS, which aims to address two of SAMHSA’s top substance abuse prevention priorities: underage drinking (UAD; age 12 to 20) and prescription drug misuse and abuse (PDM; age 12 to 25). SAMHSA is requesting approval for data collection with the following three instruments:

  1. Grantee-Level Instrument–Revised (GLI-R), to collect information about grantee organizational structure, planning, data systems, workforce development, evaluation activities, sustainability efforts, and integration of cultural competencies into the prevention system (Attachment 1).

  2. Community-Level Instrument–Revised (CLI-R), to collect information about grantee subrecipients’ organization type, funding, cultural competence, assessments, capacity building, sustainability, strategic planning, prevention intervention implementation, evaluation, and contextual factors (Attachment 2).

  3. Grantee Project Director Interview (PD Interview), to provide more in-depth information about grantee subrecipient selection, criteria for intervention selection, continuation of SPF State Incentive Grant (SIG) activities, leveraging of funds, collaboration, evaluation activities, cultural competence policies, processes to impact health disparities, and challenges faced (Attachment 3).

The GLI and CLI are streamlined and modified versions of the instruments OMB approved for the SPF-SIG program (OMB No. 0930-0279). SAMHSA’s SPF-PFS program is authorized under Section 516 of the Public Health Service Act (Priority Substance Abuse Prevention Needs of Regional and National Significance), as amended, and addresses the Healthy People 2020 Substance Abuse objective. SPF-PFS also supports two of SAMHSA’s eight Strategic Initiatives: (1) Prevention of Substance Abuse and Mental Illness and (2) Data, Outcomes, and Quality. Finally, this effort supports and is aligned with the Surgeon General’s “Call to Action to Prevent and Reduce Underage Drinking” (2007), the Department of Health and Human Services’ National Prevention Strategy (2011), and the Office of National Drug Control Policy’s (ONDCP) “Epidemic: Responding to America’s Prescription Drug Abuse Crisis” (2011).

Scope of the Issue

Over the past decade, a large number of evaluation studies demonstrated that prevention interventions effectively reduce substance abuse, as well as delinquent behaviors; violence; and other mental, emotional, and behavioral health problems (e.g., Calear & Christensen, 2010; Lemstra et al., 2010; Ttofi & Farrington, 2011). Among 12- to 20-year-olds from 2002 to 2013, rates of current alcohol use decreased from 28.8% to 22.7%, rates of binge drinking declined from 19.3% to 14.2%, and heavy alcohol use declined from 6.2% to 3.7% (SAMHSA, 2014). Despite these successes, UAD continues to be a significant public health problem. The 2013 National Survey on Drug Use and Health (NSDUH) report estimates that 8.7 million persons aged 12 to 20 drank alcohol in the past month, 5.4 million binge drank, and 1.4 million drank alcohol heavily (SAMHSA, 2014). UAD causes serious harm to the adolescent drinker as well as to the community as a whole (Office of Juvenile Justice and Delinquency Prevention, 2012). Alcohol use by adolescents negatively affects brain development, results in other serious health consequences (e.g., alcohol poisoning, risky sexual behaviors, and addiction), and leads to safety consequences from driving under the influence, poisonings, and other injuries. UAD places youth at increased risk for violence perpetration and victimization along with social or emotional consequences (e.g., low self-esteem, depression, anxiety, lack of self-control, stigmatization by peers), academic consequences (e.g., poor academic performance, truancy, suspension or expulsion from school), and family consequences (e.g., poor relationships with parents).

Adolescent drinking can also impose economic consequences, ranging from personal costs (e.g., payment for alcohol treatment or medical services) to familial costs (e.g., parents taking time off of work to drive children to treatment) to community costs (e.g., providing enforcement, supervision, or treatment to underage drinkers). In 2001, the sale of alcohol to underage drinkers led to an estimated 3,170 deaths and almost 2.6 million injuries, which caused an estimated $61.9 billion in harm to society through medical spending, property losses, lost wages, and the loss of quality-adjusted life years (Miller, Levy, Spicer, & Taylor, 2006). Sacks et al. (2013) estimated that in 2006, UAD was responsible for $24.6 billion (11%) of the total cost to society of excessive alcohol consumption in the United States.

Prescription drug misuse (PDM) among young people is the fastest-growing drug problem in the United States, and prescription drugs are second only to marijuana as the drugs most abused by teens (National Institutes of Health, 2011). PDM refers to the use of licit drugs to treat pain, attention deficit disorder, or anxiety without a prescription; in a way other than prescribed; or for the feelings they may elicit (National Institutes of Health, 2011). NSDUH survey estimates from 2013 indicate that approximately 4.8% of respondents age 18 to 25 and about 2.2% of respondents age 12 to 17 report PDM (SAMHSA, 2014). The widespread consequences of PDM are consistent with those caused by UAD (i.e., violence perpetration and victimization along with health, safety, social, emotional, academic, familial, and economic consequences). In 2011, youth age 12 to 24 accounted for approximately 23% of emergency department visits involving nonmedical use of pharmaceuticals (SAMHSA, 2013). Since 2003, more deaths have been due to opioid analgesic overdoses than to heroin and cocaine combined (Centers for Disease Control and Prevention, 2012).

Strategic Prevention Framework Partnerships for Success Program

In 2004, SAMHSA began funding the SPF SIG, an infrastructure grant program designed to help States, jurisdictions/territories, and tribal organizations implement the SPF, with the goals of preventing the onset and reducing the progression of substance abuse, reducing problems related to substance abuse, and building capacity and infrastructure for prevention. Grantees used the SPF model, which consists of five steps: (1) needs assessment; (2) capacity building; (3) strategic planning; (4) implementation of programs, policies, and practices; and (5) evaluation. Grantees also considered cultural competence and sustainability at each step in the process. SAMHSA awarded 5-year SPF SIG grants to 49 States, 19 tribal organizations, 8 jurisdictions/territories, and the District of Columbia from 2004 to 2010. These grantees funded approximately 650 subrecipient communities within their regions to also adopt the SPF process, build their prevention capacity, and implement substance use prevention interventions.

In 2009, SAMHSA initiated the SPF-PFS program, the focus of this evaluation, to provide an opportunity for previously funded SPF SIG recipients to expand on their SPF efforts in their communities by promoting further alignment of resources and priorities and improvement of prevention infrastructure. The first SPF-PFS cohort, PFS I, included five grantees, each funded for 5 years (the PFS I cohort is not part of the current evaluation). PFS I grantees were required to identify, select, and target one statewide priority need that would lead to a lasting reduction in substance use. The subsequent cohorts, PFS II (n=15), PFS 2013 (n=16), and PFS 2014 (n=21), instead require recipients to select one or both of two national prevention priorities identified by SAMHSA: (1) UAD among persons age 12 to 20 and (2) PDM among persons age 12 to 25.

The overall goals for the SPF-PFS II, PFS 2013, and PFS 2014 cohorts are to:

  • Prevent the onset and reduce the progression of substance abuse, prioritizing UAD among persons age 12 to 20, PDM among persons age 12 to 25, or both;

  • Reduce substance abuse-related problems;

  • Strengthen prevention capacity and infrastructure at the State and community levels; and

  • Leverage, redirect, and align statewide funding streams and resources for prevention.

PFS II, PFS 2013, and PFS 2014 also place a new emphasis on community need: grant awardees (States, jurisdictions/territories, and/or tribal organizations) must identify and fund high-need, low-capacity subrecipient communities to implement evidence-based prevention programs, policies, and practices that address the selected prevention priorities. The PFS program builds on, but does not duplicate, the work and lessons learned from the SPF-SIG program and its evaluation; the focus of the PFS cross-site evaluation shifts to outcomes and the factors associated with those outcomes, including program and intervention costs. Communities will be able to apply the PFS cross-site evaluation findings when making practical decisions about how to build prevention infrastructure, which interventions to implement, and how to implement them.

SPF SIG National Cross-Site Findings

Two national SPF SIG cross-site evaluations were conducted: one for grantee Cohorts I and II and the other for grantee Cohorts III, IV, and V. Published results from the Cohorts I and II evaluation focus mainly on changes to prevention infrastructure over the course of SPF SIG funding. Orwin and colleagues (2014) found that, on average, grantees’ strategic planning, workforce development, and support of the implementation of evidence-based programs, policies, and practices increased during their grant period. Grantees showed continued improvement in these areas, as well as in evaluation and monitoring, one year after their SPF-SIG funding ended. Cohort I and II grantees also showed improvements in integration (cooperation across state agencies and across state, regional, and local levels) over the course of their grants (Orwin et al., 2014). Outcome data for Cohort III grantees demonstrated mixed findings: 30% of grantees and 32% of subrecipients showed improvement in past-30-day alcohol use, and 50% of grantees and 29% of subrecipients showed improvements in substance-related consequences (e.g., alcohol-involved motor vehicle fatalities, drug-related arrests; SAMHSA, 2012a). Currently, many grantees in SPF SIG Cohorts IV and V are still completing their grants, so available findings for those cohorts provide only baseline and basic descriptive information.

Thus far, examination of the impact of the type or mix of implemented interventions for the SPF SIG cohorts has been limited. Analyses have shown that implementing a greater proportion of population-based versus individual-based interventions is related to a community having other prevention funding sources and having law enforcement more involved in the planning process. However, a lack of more detailed information about intervention types, and heterogeneity in subrecipient communities’ descriptions of those interventions, limited the ability to examine other descriptors of intervention type or to determine more practical implications of the intervention-type findings. Similarly, the SPF SIG cross-site evaluation provided some descriptive analyses of funding and costs of interventions but did not tie leveraging of funding or intervention cost to outcomes.

SPF-PFS National Cross-Site Evaluation

Because the SPF SIG cross-site evaluations provided an extensive assessment of prevention infrastructure, the SPF-PFS cross-site evaluation will examine infrastructure in a more limited fashion. This infrastructure examination will primarily focus on monitoring grantees and community subrecipients to ensure they follow the SPF process, with a special emphasis on assessing capacity changes among the subrecipients, all of whom should be purposefully selected for their high need and low capacity. Another important aspect of the SPF-PFS infrastructure evaluation will be an examination of leveraged partner relationships, to assess their importance beyond the findings related to law enforcement partnerships from the SPF SIG cross-site evaluation. The more limited focus on infrastructure assessment will allow the SPF-PFS cross-site evaluation to instead collect more detailed data about the implemented interventions, to provide a more comprehensive typology of interventions, and to assess how various types and combinations affect outcomes. Typology factors to be examined include CSAP strategy type (e.g., prevention education versus environmental strategies versus information dissemination), IOM category (e.g., universal direct versus universal indirect), and ecological target (e.g., individual, family, friends, institutions, public policy).

The SPF-PFS cross-site evaluation will focus on grantee- and community-level substance use intervening variables and consumption and consequence outcomes, especially those related to UAD and PDM. The recent emergence of PDM as a serious public health issue also provides a unique opportunity for the SPF-PFS cross-site evaluation to examine the implementation and effectiveness of prevention interventions developed to target this issue. In addition, the SPF-PFS cross-site evaluation will more strongly emphasize an examination of economic issues, including associations between funding and outcomes and the cost-effectiveness of various intervention types and combinations. Both grantees and community subrecipients will provide information on leveraged funding, that is, all of the sources of funding that support both their overall substance abuse prevention efforts and their PFS-specific efforts. These sources of support can include other federal grants, as well as state, local, foundation, and corporate sources, along with donations. Through the CLI-R, community subrecipients will provide information on direct costs of personnel, supplies, training, and overhead for their interventions, along with in-kind donations of volunteer time and other intervention needs. This information will allow for a comprehensive look at the cost-effectiveness and cost-benefits of the interventions and the SPF-PFS program.

The SPF-PFS cross-site evaluation is expected to have numerous program and policy implications at the national, state, and community levels. It will provide valuable information to the prevention field about best practices in real-world settings, along with the types of adaptations community implementers make to evidence-based interventions to better fit their targeted populations and settings. SPF-PFS cross-site findings will provide guidance to governmental entities and communities about what types of interventions should be funded and implemented to reduce UAD and PDM. More specifically, this guidance will include information on which combinations or types of interventions work best. For example, do environmental strategies produce adequate outcomes for a community? Are those outcomes enhanced by adding more targeted education interventions? Does the mix of interventions needed differ between UAD and PDM? What role should cost play in decisions about funding these interventions? Beyond intervention type and cost, the SPF-PFS cross-site evaluation also will provide a valuable assessment of the importance of leveraged funding, as well as information about the process states, jurisdictions, tribes, and communities undergo to leverage funding. Information and guidance about leveraging from the SPF-PFS cross-site evaluation will allow the federal government, states, tribes, jurisdictions, and local communities to use their resources more effectively and efficiently and to sustain future prevention efforts.

PEP-C has been tasked with conducting this national cross-site evaluation to assess whether and how SAMHSA’s SPF-PFS cohorts II, 2013, and 2014 (and any future cohorts) advance SAMHSA’s mission of reducing the impact of substance abuse and mental health disorders on America’s communities, particularly focusing on UAD and PDM. The SPF-PFS evaluation will focus on the PFS II, PFS 2013, and PFS 2014 grantees and subrecipient communities, and any future cohorts (unknown at this time). The total number of current grantees and the total number of estimated subrecipient communities are provided in Exhibit 1. Subrecipient community estimates are based on the numbers reported by grantees in the first year (PFS 2013) or first two years (PFS II) of their grants and the number of subrecipients grantees indicated in their proposals.

Exhibit 1. Cohorts of the Strategic Prevention Framework Partnerships for Success (SPF-PFS) in the Program Evaluation for Prevention Contract (PEP-C)

Cohort | # of Grantees | # of Subrecipient Communities | Length of Grant | Start Date–End Date
PFS II | 15* | ~140 | 3 years | Oct. 2012–Sept. 2015
PFS 2013 | 16** | ~250 | 5 years | Oct. 2013–Sept. 2018
PFS 2014 | 21*** | ~220 | 5 years | Oct. 2014–Sept. 2019
Total | 52 | ~610 | |



* Includes 14 States and 1 territory.

** Includes 14 States and 2 territories.

*** Includes 12 States, 3 territories, 5 tribal organizations, and the District of Columbia.

The overall goal of the cross-site evaluation is to document and assess the factors that contribute to the effectiveness of the PFS approach to SAMHSA’s mission of reducing UAD and PDM, including costs, inputs, outputs, and contextual factors. To meet this goal, PEP-C will undertake activities that fall into three areas: (1) data collection, (2) analysis, and (3) dissemination.

Data Collection

To allow SAMHSA to monitor the outcomes of the SPF-PFS program, PEP-C will use already available annual grantee- and community-level secondary data as well as Government Performance and Results Act (GPRA) measures and National Outcomes Measures ([NOMs]; approved under OMB No. 0930-0230), such as past-30-day alcohol use or perception of parental or peer disapproval of substance use. In particular, the NOMs data generally consist of sampled survey data estimates (such as from the National Survey on Drug Use and Health, NSDUH, or CDC’s Youth Risk Behavior Survey, YRBS) or existing administrative records (e.g., arrest and crash data). As such, the outcomes consist of secondary data that fall outside the scope of this OMB application.

Additional data are needed to fully address the cross-site evaluation’s five core evaluation questions ([EQs]; see Exhibit 2) and to provide SAMHSA with a better understanding of both the context in which grantees operate and the prevention interventions that subrecipient communities implement. The primary sources of these data will be the GLI-R, the CLI-R, and the grantee PD Interview. Grantees will report on the GLI-R, and subrecipient communities on the CLI-R, through a PEP-C-developed online data collection system designed to gather information to meet Federal reporting requirements and to assess key programmatic components hypothesized to be associated with program effectiveness (e.g., leveraged funding, type of prevention intervention, costs). As noted above, the GLI and CLI are streamlined and modified versions of the instruments OMB approved for the SPF-SIG program (OMB No. 0930-0279). The new telephone-based PD Interview will provide rich contextual information for a better understanding of these same programmatic components.

Exhibit 2. Evaluation Questions for the Strategic Prevention Framework Partnerships for Success (SPF-PFS) in the Program Evaluation for Prevention Contract (PEP-C)

EQ1. Was the implementation of PFS programs associated with a reduction in underage drinking and/or prescription drug misuse and abuse?

EQ2. Did variability in the total level of funding from all sources relate to outcomes? Did variability in the total level of PFS funding relate to outcomes, above and beyond other funding available to communities?

EQ3. What intervention type, combinations of interventions, and dosages of interventions were related to outcomes at the grantee level? What intervention type, combinations of interventions, and dosages of interventions were related to outcomes at the community level?

EQ4. Were some types and combinations of interventions within communities more cost-effective than others?

EQ5. How does variability in factors (strategy selection and implementation, infrastructure, geography, demography, subrecipient selection, training and technical assistance, barriers to implementation) relate to outcomes across funded communities?


Analysis

The primary focus of the analyses will be to assess findings related to the five core EQs associated with the overall outcomes of the program (EQ1); funding and cost effectiveness (EQ2 and EQ4); and key programmatic and process components hypothesized to be associated with program effectiveness (EQ3 and EQ5).

Evaluation question 1 (EQ1) assesses the relation between implementation of the PFS program and grantee-level and community-level outcomes, with a particular focus on underage alcohol use among persons age 12 to 20 and PDM among persons age 12 to 25; intervening variables related to perceptions of parent attitudes, perceived risk, and family communication around substance use; and consequences such as substance use-related car accidents, crime, emergency room visits, and prescription drug poisonings. The outcomes examined under EQ1 will also serve as the outcomes for each of the other evaluation questions.

SAMHSA recognizes the need to answer core economic questions regarding the PFS program. Evaluation question 2 (EQ2) assesses the degree to which variation in outcome is associated with variation in funding, accounting for funding from other sources (i.e., leveraging). The question examines the resources allocated for the program, other accessible resource streams, and the effect that variation in resource levels may have on desired outcomes. EQ2 provides a standard measure of funding across State- and community-level grantees and defines funding as a sum of money and other resources that are set aside for a specific purpose (to support SPF-PFS intervention activities).

Evaluation question 4 (EQ4) assesses the cost-effectiveness of the interventions. Specifically, EQ4 assesses the resources used to achieve a given outcome and whether those resources are justified by their outcomes. To better understand how resources are used in the implementation of the PFS program, PEP-C will conduct cost-effectiveness analysis and benefit-cost analysis. Data to complete these analyses come from subrecipient responses on the CLI-R to questions that summarize categories used in the SAMHSA intervention cost template and that focus on personnel costs, program supplies, equipment, travel, and in-kind contributions.
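As a rough illustration of the EQ4 computations, the sketch below shows how a cost-effectiveness ratio and a benefit-cost ratio could be derived from CLI-R-style cost categories. All category names, dollar amounts, and the outcome measure here are hypothetical; the actual analyses will use the SAMHSA intervention cost template categories and the outcomes described under EQ1.

```python
# Illustrative sketch only: cost-effectiveness and benefit-cost ratios
# computed from hypothetical CLI-R-style cost categories.

def total_cost(costs: dict) -> float:
    """Sum direct and in-kind intervention costs across categories."""
    return sum(costs.values())

def cost_effectiveness_ratio(costs: dict, effect: float) -> float:
    """Cost per unit of outcome achieved (e.g., per percentage-point
    reduction in past-30-day use)."""
    return total_cost(costs) / effect

def benefit_cost_ratio(costs: dict, monetized_benefit: float) -> float:
    """Monetized benefit produced per dollar spent."""
    return monetized_benefit / total_cost(costs)

# Hypothetical subrecipient cost report (category names are assumptions)
costs = {
    "personnel": 42000.0,
    "supplies": 3500.0,
    "equipment": 1500.0,
    "travel": 2000.0,
    "in_kind": 6000.0,
}

# Hypothetical effect: a 2.5 percentage-point reduction in past-30-day UAD
print(cost_effectiveness_ratio(costs, 2.5))   # dollars per percentage point
print(benefit_cost_ratio(costs, 120000.0))    # benefit per dollar spent
```

In practice, the cross-site analyses would compare these ratios across intervention types and combinations rather than report them for a single subrecipient.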

As with the main outcome evaluation question, evaluation questions 3 (EQ3) and 5 (EQ5) address both grantee and subrecipient outcomes, with a particular focus on the mediators and moderators of those outcomes. The analyses for these evaluation questions will focus on the relations between the outcomes and relevant input factors (e.g., grantee infrastructure, baseline subrecipient capacity), activity output factors (e.g., subrecipient selection, training, capacity improvement, intervention selection), and participant output factors (e.g., type of selected communities, demographics of subrecipients). EQ3 will assess intervention type first by focusing on various separate factors and then on the mix of multiple types subrecipients implement, along with how dosage relates to this mix. Interventions may differ in that individual-level strategies work directly with members of the target population, whereas environmental-level approaches focus on the larger community. Intervention types also will be classified as universal, selective, or indicated and through a socioecological model (i.e., identified by targets and settings at the individual, peer or parent, school, and community levels). Dosage will be assessed via a combination of intensity and reach. EQ5 examines numerous facets of the program as they may or may not relate to outcomes, such as strategy selection, infrastructure, and barriers to implementation.

To assess the evaluation questions, PEP-C will use basic descriptive analyses along with more advanced analytic frameworks such as qualitative comparative analysis, cost-effectiveness models, multilevel latent growth models, propensity scoring approaches, latent class models, and advanced tests for mediation effects (see Section A.16 for a detailed description of analytic techniques).

Dissemination

The chief aim of dissemination is to deliver information to the intended audiences. The primary focus of PEP-C in the dissemination area will be to report on the SPF-PFS cross-site evaluation activities and findings, as well as to identify best practices and contribute to the formulation of future program and policy directions as described above. The main goal of dissemination will be to provide SAMHSA, grantees, the prevention field, and other stakeholders with timely and user-friendly cross-site evaluation findings.

A.2. Purpose and Use of Information

The SPF-PFS cross-site evaluation will use a pre/post design with matched comparison groups where relevant and possible (discussed further in Section A.16). It will combine qualitative and quantitative data and methodologies to fully address SAMHSA’s objectives for the cross-site evaluation and will include structure, process, outcome, and cost components and address questions at the grantee and subrecipient community levels.

The SPF-PFS evaluation design and measures have been informed by current and previous cross-site evaluation efforts for SAMHSA, drawing heavily from lessons learned through prior and currently OMB-approved SPF SIG evaluations (OMB No. 0930-0279). For example, the variability across SPF SIG grantees (which will also apply to SPF-PFS grantees) posed one of the main challenges of evaluation. To address this challenge, the SPF-PFS evaluation will focus on assessing the dimensions of variability and accounting for them in analysis (e.g., control variables in multivariate analysis, moderation analyses), which will allow for the accurate description of grantees and characterization of impact. In addition, to take advantage of lessons learned in developing the SPF SIG cross-site evaluation instruments, the SPF-PFS cross-site instrument development began with the SPF SIG versions of the GLI and CLI and revised those to develop the GLI-R, CLI-R, and PD Interview. SPF-PFS reduced reporting burden where possible by eliminating items that were thoroughly analyzed through the SPF SIG evaluation (especially items assessing constructs that appeared less influential on outcomes, e.g., coalition functioning) and items that are less relevant to the SPF-PFS program (e.g., items assessing the greater number of targeted outcomes possible for SPF SIG programs). The GLI-R, CLI-R, and PD Interview instrument development also solicited input from grantee-level SPF-PFS evaluators, SAMHSA, the PEP-C External Steering Committee (ESC), and other stakeholders (see Exhibit 10, the statistical consultants list, and Appendix 5, consultation outside the agency). After careful review, revisions were made to streamline the instruments, reduce burden, decrease verbosity, increase variation in response options, improve coherence of scales for summing, assess a baseline level of community capacity, create consistency in assessing infrastructure at the grantee and community levels, and address gaps such as those related to leveraged funding, costs, and intervention description.

Grantee-Level Instrument–Revised

The GLI-R is a web-based survey to be completed by the grantee Project Director. The survey will provide categorical, qualitative, and quantitative data related to coordination of State efforts, use of strategic plans, access to data sources, data management, workforce development, cultural competence, sharing of evaluation data, and sustainability. The GLI-R will be administered at the beginning of the grant and in the final year of the grant. Collecting baseline and follow-up data is necessary to assess the grantees’ progress and change over the course of the grant, which, in turn, is crucial to accurately interpreting outcomes. Depending on the timing of OMB approval and implementation of the instrument, the baseline data collection may require grantees to provide retrospective information (e.g., the PFS II grantees will likely be within the third year of their grants when they provide their baseline responses). This retrospective data collection process will incorporate procedures used successfully to obtain retrospective data for the SPF SIG cross-site evaluation. In addition, grantees should have much of the retrospective data readily available, as it will come from aggregating data in existing prevention-related data systems implemented by grantees and from archival records already maintained by grantees and subrecipients.

As described above, the items on the GLI-R were adapted from an OMB-approved GLI measure used for the SPF SIG evaluation (OMB No. 0930-0279). All efforts have been made to keep burden to a minimum, which is demonstrated in the reduced estimated time to complete the current GLI-R versus the SPF SIG version. The SPF SIG GLI burden estimate was 4.75 hours at baseline and 4.17 hours at follow-up. The current SPF-PFS evaluation GLI-R is estimated to take 1 hour at both baseline and follow-up.

Community-Level Instrument–Revised

The CLI-R is a web-based survey designed to be completed by subrecipient community Project Directors to assess grantee subrecipients’ progress through the SPF steps, prevention capacity, intervention implementation, and related funding and cost measures. The survey will provide process data related to leveraging of funding, in-kind services, organizational capacity, collaboration with community partners, data infrastructure, planned intervention targets, intervention implementation (categorization, costs, adaptation, timing, dosage, and reach), cultural competence, evaluation, contextual factors, training and technical assistance needs, and sustainability. The CLI-R will be collected semiannually; however, not all questions will be answered every time. For instance, PFS 2013 subrecipients will respond to items related to their organizational capacity only at baseline and final follow-up, whereas they will respond to intervention implementation items every 6 months (see Exhibit 3 below for timing of data collection of various items). The continual collection of these data is needed to (1) track the subrecipients’ progress and change over time; (2) allow SAMHSA and the grantees to monitor performance and ongoing implementation; and (3) meet new requirements regarding identifying and reducing disparities.

To minimize burden on subrecipient respondents, all reporting for the CLI-R will be done in a web-based entry form, and at each data collection point only the questions required at that time will appear. Responses will also trigger skip patterns for later questions in the instrument, so that subrecipients complete only the relevant sets of questions and do not see the others. For example, when subrecipients report an intervention’s CSAP strategy type, they are routed to the questions in the sub-form for that strategy type (e.g., prevention education or environmental strategy) and do not see the other six sub-forms at all, unless they need to fill one out for another reported intervention. In addition, once completed initially, many items will be automatically pre-populated on later CLI-R administrations. Subrecipients can keep those pre-populated responses intact or change them as needed. For example, once subrecipients provide information about the typology and targets of their interventions, they will see their prior responses in the future and will not need to respond to those items unless their responses change.
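The skip-pattern and pre-population behavior described above can be sketched as follows. The strategy names and question identifiers are placeholders rather than actual CLI-R item IDs; the sketch only illustrates the routing and carry-forward logic.

```python
# Illustrative sketch of CLI-R skip-pattern routing and pre-population.
# Strategy names and question IDs below are hypothetical placeholders.

SUBFORMS = {
    "prevention education": ["PE-1", "PE-2", "PE-3"],
    "environmental": ["ENV-1", "ENV-2"],
    "information dissemination": ["ID-1", "ID-2"],
}

def questions_to_show(reported_strategy: str) -> list:
    """Return only the question IDs in the sub-form for the reported
    CSAP strategy type; the other sub-forms are never displayed."""
    return SUBFORMS.get(reported_strategy.lower(), [])

def prepopulate(prior: dict, current: dict) -> dict:
    """Carry prior responses forward; any newly entered responses
    override the pre-populated values."""
    merged = dict(prior)
    merged.update(current)
    return merged
```

For example, a respondent who reports "prevention education" would see only that sub-form’s items, and on the next administration would see their earlier answers pre-filled unless they choose to change them.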

As described above, the items on the CLI-R were adapted from the previous OMB-approved CLI I and II instruments used for the SPF SIG evaluation (OMB No. 0930-0279). The CLI-R was developed with substantial input from grantee-level SPF-PFS evaluators, SAMHSA, the PEP-C External Steering Committee (ESC), and other stakeholders (see Exhibit 10 for the list of statistical consultants and Appendix 5 for consultation outside the agency). The CLI-R removes items on coalition capacity, some infrastructure items, and some items related to progress through the SPF steps; it retains or revises items related to cultural competence, use of data, planned intervention targets, community awareness activities, data infrastructure, evaluation, contextual factors, and sustainability, along with intervention implementation items on the categorization of implemented interventions, adaptation, timing, dosage, and reach. The CLI-R also includes new items that address SPF-PFS cross-site priorities, such as items on leveraging of funding, in-kind services, access to relevant data sources, organizational capacity, T/TA needs, collaboration with community partners, and intervention costs.

All efforts have been made to minimize respondent burden while retaining the information essential to answering the EQs. This is evident in the burden estimates: 3.1 hours per response for the SPF SIG CLI versus 2.6 hours for the current CLI-R.

Grantee Project Director Interview

The PD Interview is a semi-structured telephone interview with grantee Project Directors designed to collect more in-depth information on subrecipient selection, criteria for intervention selection, continuation of SPF SIG activities, leveraging of funds, collaboration, evaluation activities, cultural competence policies, processes to impact health disparities, and challenges faced. The PD Interview will be collected at the beginning of the grant and in the third and final years of the grant; collecting baseline and follow-up data is necessary to assess the grantees’ progress and change over the course of the grant. Depending on the timing of OMB approval and implementation of the interview, the baseline data collection may require grantees to provide retrospective information (e.g., the PFS II grantees will likely be within the third year of their grants when they provide their baseline responses). The SPF-PFS evaluation will require PFS II grantees to participate in the interview only at the beginning of their final year and at the close of their grant.

The PEP-C contractor’s prior experience as State-level SPF SIG evaluators demonstrated the utility of more in-depth interviews with Project Directors. The SPF SIG evaluation version of the OMB-approved GLI (OMB No. 0930-0279) provided essential grantee-level data, but the instrument limited the information respondents could provide. Supplemental interviews with Project Directors provided the context necessary to understand changes in infrastructure and outcomes over time. In fact, the SPF SIG GLI began as an in-depth, in-person interview for SPF SIG cohorts I and II and was revised into the “survey” version for SPF SIG cohorts III through V in order to save funds. The SPF-PFS cross-site evaluation retains some of the advantages of collecting data through a survey (i.e., ease and consistency of response) by keeping straightforward items with specific response choices on the GLI-R. However, shifting other items to the PD Interview and developing PFS-specific items for the interview allows for more in-depth discussion and follow-up on important contextual factors. With the PD Interview burden estimated at 1.4 hours, the combined burden for the PFS GLI-R and PD Interview (2.4 hours) will be less than the total burden of the SPF SIG GLI (over 4 hours). The PD Interview format will also allow grantees to become better acquainted with PEP-C staff, ask PEP-C staff questions about the cross-site evaluation, and pass along their concerns about any cross-site evaluation activities.

Items on the PD Interview were adapted from questions on the OMB-approved GLI (OMB No. 0930-0279) that were considered critical for more detailed, in-depth discussion. To minimize duplication and burden, the SPF-PFS evaluation has ensured that the data collected through the GLI-R and the PD Interview are complementary rather than duplicative. In some instances, items have been removed entirely from the current GLI-R and included only on the PD Interview. In other instances, the GLI-R and the PD Interview may address the same topic, but the information collected will not be duplicative, as the GLI-R is more quantitative in structure, whereas the PD Interview typically collects detailed, qualitative, contextual data.

The GLI-R, CLI-R, and PD Interview will be used to collect data to measure the main constructs of interest in order to answer the EQs. Exhibit 3 provides an overview of the evaluation’s main constructs of interest and the data sources and items on the GLI-R, CLI-R, and PD Interview that will be used to measure them. It also provides a description of the usual timing of data collection for specific sets of items.

Exhibit 3. Evaluation of the Strategic Prevention Framework Partnerships for Success (SPF-PFS) in the Program Evaluation for Prevention Contract (PEP-C) - Constructs and Data Sources

EQ1. Was the implementation of PFS programs associated with a reduction in underage drinking and/or prescription drug misuse and abuse? NOTE: These outcomes will also constitute the outcomes for each of the remaining EQs

Construct

Data Source

GLI-R/CLI-R/PD Interview Items

Timing


Intervening variables (e.g., perception of parental or peer disapproval, perceived risk or harm of use)

Publicly available secondary dataa and grantee- and community-level NOMs data

n/a

Yearly or as data become available


Substance use (e.g., 30-day alcohol use, 30-day prescription drug misuse)

Publicly available secondary dataa and grantee- and community-level NOMs data

n/a

Yearly or as data become available


Consequences (e.g., alcohol and/or drug-related car crashes and injuries, alcohol- and drug-related crime)

Publicly available secondary dataa and grantee- and community-level NOMs data

n/a

Yearly or as data become available


Note. CLI-R, Community-Level Instrument–Revised; GLI-R, Grantee-Level Instrument–Revised; NOMs, National Outcomes Measures; PD, Project Director; PFS, Partnerships for Success; SPF, Strategic Prevention Framework; TA, Technical Assistance.

a For example, the National Survey on Drug Use and Health, National Poison Data System, Uniform Crime Reports.


Exhibit 3. Evaluation of the Strategic Prevention Framework Partnerships for Success (SPF-PFS) in the Program Evaluation for Prevention Contract (PEP-C) - Constructs and Data Sources (cont.)

EQ2: Did variability in the total level of funding from all sources relate to outcomes? Did variability in the total level of PFS funding relate to outcomes, above and beyond other funding available to communities?

Construct

Data Source

GLI-R/CLI-R/PD Interview Items

Timing


Grantee-level funding

Grantee documents available from SAMHSA (funding award notice)

n/a

Beginning of grant



Project Director (PD) Interview

11a-11b, 13

Year 1, year 3, & final year of grant


Subrecipient-level funding

PD Interview

12

Year 1, year 3, & final year of grant



Community-Level Instrument-Revised (CLI-R)

34, 41a-41b, 42

34, 42 – annual; 41a-41b – twice each year


EQ3. What intervention type, combinations of interventions, and dosages of interventions were related to outcomes at the grantee level? What intervention type, combinations of interventions, and dosages of interventions were related to outcomes at the community level?

Construct

Data Source

GLI-R/CLI-R/PD Interview Items

Timing


Intervention type

CLI-R

33a-33g, 36a, 48-52, 63, 81, 87, 113-114, 116, 121, 123, 124a, 125-128a, 129a, 130a, 131, 132, 143-144, 147, 154a, 155a, 156a, 157a, 158a, 159a, 160, 161a, 162, 190

All twice each year except 190 which is annual


Combination category (i.e., multiple interventions delivered)

Composites will be created from “Intervention Type” variables

same as above



Intervention format

CLI-R

65, 83, 117

65, 83 – annual, 117 – twice each year


Timing

CLI-R

33d, 33h, 33i, 62, 64, 78, 82, 98, 112, 115a, 142

Twice each year


Dosage

CLI-R

44, 68a-68b, 85-86, 88, 118-119, 128b, 129b, 149a-149d, 150a-150d, 151a-151c, 152, 153, 155b, 156b, 157b, 158b, 161e, 195c, 196c

All twice each year except 195c and 196c which are annual


Reach and Numbers Served

CLI-R

39g, 45-47, 53a-53b, 67b-67c, 69a-69b, 80, 84, 89a-89b, 101, 103a-103b, 115b-115c, 120a-120b, 122a-122b, 124b, 130b, 133a-133b, 149e, 150e, 151d, 154b-154c, 159b, 161b-161c, 163a-163b

Twice each year


EQ4. Were some types and combinations of interventions within communities more cost effective than others?

Construct

Data Source

GLI-R/CLI-R/PD Interview Items

Timing


Intervention Costs

PD Interview

4

Year 1, year 3, & final year of grant



CLI-R

172-189

Annual


Note. CLI-R, Community-Level Instrument–Revised; GLI-R, Grantee-Level Instrument–Revised; NOMs, National Outcomes Measures; PD, Project Director; PFS, Partnerships for Success; SPF, Strategic Prevention Framework; TA, Technical Assistance.

a For example, the National Survey on Drug Use and Health, National Poison Data System, Uniform Crime Reports.



Exhibit 3. Evaluation of the Strategic Prevention Framework Partnerships for Success (SPF-PFS) in the Program Evaluation for Prevention Contract (PEP-C) - Constructs and Data Sources (cont.)

EQ5. How does variability in factors (infrastructure, subrecipient selection, strategy selection, implementation, geography, demography, training/technical assistance [T/TA], and barriers to implementation) relate to outcomes across funded communities?

Construct

Data Source

GLI-R/CLI-R/PD Interview Items

Timing

Infrastructure

GLI-R

1a-1e, 2, 9-13, 17-18

Baseline & final


PD Interview

10a-10f, 14, 15

10a-f - Year 1, 14 - year 3 & final year of grant, 15 - year 1, year 3, & final year of grant


CLI-R

1-3, 10-13, 20a-20i, 24-27, 40

1-3, 10-13, 20a-i, 26, 27 – baseline & final; 24 – annual; 25, 40 – twice each year

Subrecipient selection

PD Interview

1-3

Year 1


CLI-R

4, 5

Baseline

Strategy (intervention) selection

PD Interview

4-5, 6a

Year 1, year 3, & final year of grant


CLI-R

35a-35c, 36b, 37, 38

Twice each year

Implementation - adaptation/fidelity

CLI-R

191-195b, 196a-196b, 197-201, 204a-204b

Annual

Implementation - other

CLI-R

100, 102

100 – annual; 102 – Twice each year

Geography

CLI-R

19, 39a-39e

19 – annual; 39a-39e – twice each year

Demographics (gender, age, race, ethnicity, language preference, disabilities, military, military families)

GLI-R

19

Baseline & final

CLI-R

39f, 54-61, 66, 70-77, 79, 90-97, 99, 104-111, 134-141, 148, 149e, 150e, 151d, 164-171

66, 79, 99, 148 annual; all others twice each year

Training and technical assistance (TA)

CAPT & PEP-C TA reports

n/a



GLI-R

18c-e, 18g, 24a

Baseline & final


PD Interview

6, 10g-h, 19

6, 19 – year 1, year 3, & final year of grant; 10g-h – baseline


CLI-R

23a-23n

Annual

Barriers to implementation

GLI-R

22

Baseline & final


PD Interview

23-24

Year 3 & final year of grant


CLI-R

206a-206t

Annual

Note. CLI-R, Community-Level Instrument–Revised; GLI-R, Grantee-Level Instrument–Revised; NOMs, National Outcomes Measures; PD, Project Director; PFS, Partnerships for Success; SPF, Strategic Prevention Framework; TA, Technical Assistance.

a For example, the National Survey on Drug Use and Health, National Poison Data System, Uniform Crime Reports.



Exhibit 3. Evaluation of the Strategic Prevention Framework Partnerships for Success (SPF-PFS) in the Program Evaluation for Prevention Contract (PEP-C) - Constructs and Data Sources (cont.)

Additional Monitoring Measures

Construct

Data Source

GLI-R/CLI-R/PD Interview Items

Timing

Progress through SPF steps

GLI-R

3-8, 14-16, 20-21, 23-25

3-6 – baseline; 7-8, 14-16, 20-21, 23-25 – baseline & final


PD Interview

7-8, 16a-18

Year 1, year 3, & final year of grant


CLI-R

6, 8, 9, 14-18, 21-22, 28-32, 44-52, 145-146, 202-205

29 – baseline; 6 – baseline & final; 8, 14-18, 21-22, 28, 30-31, 202-205 – annual; 7, 32, 44-52, 145-146 – twice each year

Health disparities

PD Interview

20-22

Year 1, year 3, & final year of grant


CLI-R

7

Twice each year

Note. CLI-R, Community-Level Instrument–Revised; GLI-R, Grantee-Level Instrument–Revised; NOMs, National Outcomes Measures; PD, Project Director; PFS, Partnerships for Success; SPF, Strategic Prevention Framework; TA, Technical Assistance.

a For example, the National Survey on Drug Use and Health, National Poison Data System, Uniform Crime Reports.

    1. Use of Information Technology

Grantee-Level Instrument–Revised and Community-Level Instrument–Revised

The GLI-R and the CLI-R are self-administered, web-based surveys to be completed through the PEP-C online data collection system. Using a web instrument allows for automated data checks as well as skip procedures and fields pre-populated from prior responses, which will reduce respondent burden and the possibility of data entry error, thereby increasing the efficiency of data entry and improving data quality. The automated data checks will assess the consistency of responses between items (e.g., checking numbers reached against demographic breakdowns of those reached) and will ensure that responses follow the expected format (e.g., numbers or dates where those are expected). Responses will generate skip patterns for later questions in the instrument, so that subrecipients complete only the relevant sets of questions and do not see others (e.g., the reported CSAP strategy type of an intervention determines which intervention description sub-form a respondent sees). In addition, once completed initially, many items will be automatically pre-populated on later CLI-R administrations. Subrecipients can keep those pre-populated responses intact or change them as needed. For example, once subrecipients provide information about the typology and targets of their interventions, they will subsequently see their prior responses and need not respond to those items again unless their responses change.
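The two kinds of automated checks described above (format checks and cross-item consistency checks) can be illustrated with a minimal sketch; the field names here are hypothetical, not the actual CLI-R item labels.

```python
# Illustrative sketch of the automated data checks described above.
# Field names ("total_reached", "by_age") are hypothetical examples.
def check_format(value):
    """Format check: a count field must hold a non-negative integer."""
    return isinstance(value, int) and value >= 0

def check_consistency(total_reached, demographic_counts):
    """Cross-item check: the demographic breakdown must sum to the total reached."""
    return sum(demographic_counts.values()) == total_reached

submission = {"total_reached": 120,
              "by_age": {"12-14": 40, "15-17": 50, "18-20": 30}}
valid = (check_format(submission["total_reached"])
         and check_consistency(submission["total_reached"], submission["by_age"]))
print("accepted" if valid else "flag for correction")  # prints "accepted"
```

In the live system, a failed check would trigger the automated error email and one-time correction link described below rather than a console message.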

Using a web-based system will provide the capability to send automatic email reminders to grantees if surveys have not been completed. In addition, data will be verified on submission; if problems are discovered, the grantee or subrecipient community will receive an automated email indicating that there were errors in the submission and providing a one-time link to a specialized page that will allow the respondent to correct the errors.

A dashboard and other reports will also be available to SAMHSA and the PEP-C team, as well as the grantees and subrecipients who submit data, so that they can monitor the overall status of data collection. Grantees and subrecipients will have access to their own data and will also be able to review online descriptive findings.



Grantee Project Director Interview

The PD Interview is designed as a telephone-based interview. A PEP-C interviewer will read the questions to the respondent while a PEP-C note taker records each response. With respondent consent, interviews will be recorded as a backup to the note taker. After the interview, the interviewer and note taker will review the completed interview notes for accuracy; any areas of discrepancy will be validated against the recording. All notes will be analyzed using NVivo software. Once the interview responses are considered final, the recording will be deleted. Until then, the recordings will be kept by the note taker on a secure, password-protected computer.

    1. Effort to Identify Duplication

This evaluation is collecting information unique to SPF-PFS programs that is otherwise not available. A literature review prepared by the evaluation team in November 2013 confirmed that the information being collected by the GLI-R, CLI-R, and PD Interview cannot be obtained through other sources.

    1. Involvement of Small Entities

Participation in this evaluation will not impose a significant impact on small entities. SPF-PFS grantees and subrecipient communities may include State agencies, tribal organizations and other jurisdictions, and community service providers. Some subrecipients may be small entities; however, the GLI-R, CLI-R, and PD Interview are designed to include only the most pertinent information needed to carry out the evaluation effectively, and their impact will not be significant.

    1. Consequences If Information Collected Less Frequently

The multiple data collection points for the GLI-R, CLI-R, and PD Interview are necessary to track and evaluate grantees’ and subrecipient communities’ progress and change over time. In addition to serving the SPF-PFS evaluation, these data will allow SAMHSA to monitor grantee performance and allow grantee and subrecipient communities to track changes in their ongoing implementation. Less frequent reporting would compromise their ability to do so effectively. For example, federal requirements oblige SAMHSA to report on performance and Government Performance and Results Act (GPRA) measures once each year; related measures include items on leveraging funds and partnerships, numbers served or reached, and training or technical assistance provided or received. New health disparities priorities require reports of the demographic breakdowns of individuals reached or served twice each year. The SPF-PFS evaluation has made every effort to ensure that data are collected only when necessary and that extraneous collection will not be conducted.

    1. Consistency With the Guidelines in 5 CFR 1320.5(d)(2)

This information collection fully complies with the guidelines in 5 CFR 1320.5(d)(2).

    1. Consultation Outside the Agency

The notice required by 5 CFR 1320.8(d) was published in the Federal Register on October 28, 2014 (79 FR 64205). No comments were received.

The SPF-PFS cross-site evaluation used measures developed for the earlier SPF SIG cross-site evaluation as a base for the instrument development process of the GLI-R, CLI-R, and PD Interview, to take full advantage of the valuable input garnered through the previous evaluation. More recently, drafts of the PEP-C versions of the instruments were shared with internal SAMHSA staff, contractors associated with PEP-C, four PFS II grantee evaluators, and an External Steering Committee consisting of 10 experts in the field of substance use prevention research and implementation. Six of these experts provided feedback on each of the data collection instruments, and the instruments were revised based on that feedback. Revisions ranged from simplifying the original instructions, to using summary cost questions rather than highly detailed ones, to modifying response choices to better reflect SPF-PFS activities. See Attachment 4 for the list of individuals consulted throughout the development process of the instruments.

    1. Payment to Respondents

No incentives or gifts will be given to respondents.

    1. Assurance of Confidentiality

No individual-level or personal data will be collected by the SPF-PFS evaluation; project directors at the grantee and subrecipient levels will provide information about their organizations, PFS activities, and implemented interventions, rather than information about themselves personally. The GLI-R, CLI-R, and PD Interview collect programmatic data (i.e., information about the organizations and implemented interventions) at the grantee and community levels along with aggregated, nonidentifying participant-level data (e.g., the total number of participants, by demographic group, who received a prevention intervention). Sensitive respondent information, such as birthdates and Social Security numbers, will not be collected. To create profiles that allow grantee and subrecipient respondents to log in to the PEP-C online data collection system, the system will collect names, telephone numbers, mailing addresses, and email addresses of grantee and subrecipient staff. This identifying information will be accessible only to select PEP-C evaluation staff and State Project Officers at SAMHSA. No other personal information will be collected from respondents, as the focus of the data collection is on the programmatic characteristics of the SPF-PFS grantees and subrecipients.

The PEP-C systems development team takes responsibility for ensuring that the web and data system is properly maintained and monitored. Server staff will follow standard procedures for applying security patches and conducting routine maintenance for system updates. Data will be stored on a password-protected server, and access to data in the system will be handled by a hierarchy of user roles, with each role conferring only the minimum access to system data needed to perform the necessary functions of the role.

Although no individual-level data will be collected, evaluation staff are trained in the importance of privacy and the handling of sensitive data. In addition, the contractor is in the process of submitting the GLI-R, CLI-R, and PD Interview to the contractor’s Institutional Review Board (Federal Wide Assurance #3331). A Privacy Impact Assessment (PIA) and System of Records Notice (SORN) have also been approved by SAMHSA, and paperwork has been filed with HHS.



    1. Questions of a Sensitive Nature

No questions of a sensitive nature are being asked through the GLI-R, CLI-R, or PD Interview.

    1. Estimates of Annualized Hour Burden

The number of data collection respondents will vary by year because of varying grant lengths, data collection time points, and each cohort’s grant end date. As such, the burden and respondent cost will also vary by year. Exhibit 4 provides an overview of the annual number of responses per grantee, per instrument, broken out by cohort.

Exhibit 4. Annual Data Collection Responses by Cohort per Grantee


SPF-PFS EVALUATION YEARS


FY2015

FY2016

FY2017

FY2018b

Grantee-Level Instrument–Revised

PFS IIa

2

0

0

0

PFS 2013

1

0

0

1

PFS 2014

1

0

0

0

Community-Level Instrument–Revised

PFS IIa

2

0

0

0

PFS 2013

2

2

2

2

PFS 2014

1

2

2

2

Grantee Project Director Interview

PFS IIa

2

0

0

0

PFS 2013

1

1

0

1

PFS 2014

1

0

1

0

Note. OMB, Office of Management and Budget; PFS, Partnerships for Success; SPF, Strategic Prevention Framework.

a PFS II grants end in September 2015; therefore, PFS II grantees will not participate in data collection in the following years unless they receive no-cost extensions. Under that circumstance, their final round of the Grantee-Level Instrument–Revised and Grantee Project Director Interviews will occur in FY2016 rather than FY2015, and PFS II subrecipients will provide two rounds of Community-Level Instrument–Revised information in FY2016.

b FY2018 does not fall within the OMB 3-year approval period; therefore, data collection for that year is not included in the burden estimate.

Grantee-Level Instrument–Revised

All 52 grantees in the PFS II (n=15), PFS 2013 (n=16), and PFS 2014 (n=21) cohorts, and all future cohorts, are expected to complete one baseline and one follow-up GLI-R. Because the PFS II cohort will be ending in September 2015, those grantees will complete the GLI-R twice in FY2015. The GLI-R is estimated to take 1 hour to complete per response; this includes time to look up and compile information (0.5 hours) and time to complete the web survey (0.5 hours). The estimated burden time is based on paper-and-pencil surveys completed by evaluation staff members who have experience working with SPF-PFS grantees (see Section B.4 for more detail). There are no direct costs to respondents other than their time to complete the instrument. Exhibits 5–7 provide the details of the annual burden for each instrument for FY2015–FY2017, and Exhibit 8 presents the GLI-R annualized burden hours, 22, and the annualized respondent cost, $883 (total burden hours × the average hourly wage for State government managers, as reported in the 2012 Occupational Employment Statistics [OES] by the Bureau of Labor Statistics [BLS]).

Community-Level Instrument–Revised

All of the approximately 610 subrecipient communities in the PFS II (n ≈ 140), PFS 2013 (n ≈ 250), and PFS 2014 (n ≈ 220) cohorts, and all future cohorts, are expected to complete the CLI-R. The CLI-R is estimated to take 2.6 hours per response, with each subrecipient community completing it twice per year. Because the PFS II cohort ends in September 2015, the PFS II subrecipient communities will complete the CLI-R only during the first year of data collection. The estimated burden time is based on paper-and-pencil surveys completed by evaluation staff members who have experience working with SPF-PFS grantees (see Section B.4 for more detail); it is likely an overestimate of the time needed to complete the instrument online. There are no direct costs to respondents other than their time to complete the instrument. Exhibit 8 presents the CLI-R annualized burden hours, 2,496, and the annualized respondent cost, $53,090 (total burden hours × the average hourly wage for community and social service occupations, as reported in the 2012 OES by the BLS).

Grantee Project Director Interview

All 52 grantees in the PFS II (n=15), PFS 2013 (n=16), and PFS 2014 (n=21) cohorts, and all future cohorts, are expected to complete the PD Interview. The instrument is estimated to take 1.4 hours to complete per response. Because the PFS II cohort will be ending in September 2015, those grantees will complete the interview twice in FY2015. The estimated burden time is based on 3 pilot interviews with current grantee project directors (see Section B.4 for more detail). There are no direct costs to respondents other than their time to participate in the interview. Exhibit 8 presents the PD Interview annualized burden hours, 48.5, and the annualized respondent cost, $1,919 (total burden hours × the average hourly wage for State government managers, as reported in the 2012 OES by the BLS).
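As a worked check, averaging the yearly burden hours from Exhibits 5–7 over the three years and applying the stated 2012 OES hourly wages reproduces the annualized respondent costs cited above (a sketch of the arithmetic; the evaluation’s own rounding conventions may differ slightly).

```python
# Annualized respondent burden and cost over FY2015-FY2017, recomputed
# from the yearly totals in Exhibits 5-7 and the 2012 BLS OES wages.
yearly_hours = {
    "GLI-R": [67, 0, 0],                 # State government managers
    "CLI-R": [2600, 2444, 2444],         # community and social service occupations
    "PD Interview": [93.8, 22.4, 29.4],  # State government managers
}
hourly_wage = {"GLI-R": 39.54, "CLI-R": 21.27, "PD Interview": 39.54}

for instrument, hours in yearly_hours.items():
    annualized_hours = sum(hours) / 3            # three-year average
    annualized_cost = round(annualized_hours * hourly_wage[instrument])
    print(f"{instrument}: {annualized_hours:.1f} hours, ${annualized_cost:,}")
# GLI-R: 22.3 hours, $883
# CLI-R: 2496.0 hours, $53,090
# PD Interview: 48.5 hours, $1,919
```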



Exhibit 5. FY2015 Annual Burden

Instrument

Number of Respondents

Responses per Respondent

Total Number of Responses

Hours per Response

Total Burden Hours

Average Hourly Wage

Total Respondent Costa

GLI-R







PFS II cohortb

15

2

30

1

30

$39.54

$1,186

PFS 2013 cohort

16

1

16

1

16

$39.54

$633

PFS 2014 cohort

21

1

21

1

21

$39.54

$830

GLI-R FY2015 total

52


67


67


$2,649


CLI-R

PFS II cohortb

140

2

280

2.6

728

$21.27

$15,485

PFS 2013 cohort

250

2

500

2.6

1,300

$21.27

$27,651

PFS 2014 cohort

220

1

220

2.6

572

$21.27

$12,166

CLI-R FY2015 total

610


1,000


2,600


$55,302








Grantee PD Interview







PFS II cohortb

15

2

30

1.4

42

$39.54

$1,661

PFS 2013 cohort

16

1

16

1.4

22.4

$39.54

$886

PFS 2014 cohort

21

1

21

1.4

29.4

$39.54

$1,162

PD Interview FY2015 total

52


67


93.8


$3,709

FY2015 TOTAL

714


1,134


2,760.8


$61,660

Note. CLI-R, Community-Level Instrument–Revised; GLI-R, Grantee-Level Instrument–Revised; PD, Project Director; PFS, Partnerships for Success.

a Total respondent cost is calculated as total burden hours x average hourly wage.

b Because the PFS II cohort will be ending in September 2015, they will complete the GLI-R and PD Interview twice in FY2015. If those grantees receive a no-cost extension, then their second GLI-R and PD Interview will shift to FY2016 and up to two additional CLI-R instruments will be completed by their subrecipients that year.

Exhibit 6. FY2016 Annual Burden

Instrument

Number of Respondents

Responses per Respondent

Total Number of Responses

Hours per Response

Total Burden Hours

Average Hourly Wage

Total Respondent Costa

GLI-Rb

0

--

--

--

--

--

--

CLI-R

470

2

940

2.6

2,444

$21.27

$51,984

Grantee PD Interview

16

1

16

1.4

22.4

$39.54

$886

FY2016 TOTAL

486


956


2,466.4


$52,870

Note. CLI-R, Community-Level Instrument–Revised; GLI-R, Grantee-Level Instrument–Revised; PD, Project Director.

a Total respondent cost is calculated as total burden hours x average hourly wage.

b The GLI-R will not be collected in FY2016 unless PFS II grantees receive a no-cost extension.




Exhibit 7. FY2017 Annual Burden

Instrument

Number of Respondents

Responses per Respondent

Total Number of Responses

Hours per Response

Total Burden Hours

Average Hourly Wage

Total Respondent Costa

GLI-Rb

0

--

--

--

--

--

--

CLI-R

470

2

940

2.6

2,444

$21.27

$51,984

Grantee PD Interview

21

1

21

1.4

29.4

$39.54

$1,162

FY2017 TOTAL

491


961


2,473.4


$53,146

Note. CLI-R, Community-Level Instrument–Revised; GLI-R, Grantee-Level Instrument–Revised; PD, Project Director.

a Total respondent cost is calculated as total burden hours x average hourly wage.

b The GLI-R will not be collected in FY2017.

Exhibit 8. Annualized Data Collection Burdena

Instrument

Number of

Respondents

Responses per Respondent

Total Number of Responses

Hours per Response

Total

Burden Hours

GLI-Rb

17

1

17

1

17

CLI-R

517

2

1,034

2.6

2,688

Grantee PD

Interview

30

1

30

1.4

42

Annualized Total

564


1,081


2,747

Note. CLI-R, Community-Level Instrument–Revised; GLI-R, Grantee-Level Instrument–Revised; PD, Project Director.

a Annualized Data Collection Burden captures the average number of respondents and responses, burden hours, and respondent cost over the three years (FY2015 – 2017).

b Respondent cost is calculated as total burden hours x average hourly wage.

    1. Estimates of Annualized Cost Burden to Respondents

There are no respondent costs for capital or start-up or for operation or maintenance.

    1. Estimates of Annualized Cost to the Government

The total estimated five-year cost to the government for the data collection is $3,895,775. This includes approximately $3,205,094 for developing the instruments; programming and maintaining the online data collection system; providing data collection training to grantees and subrecipients; contractor labor for conducting PD Interviews; processing, cleaning, and housing data; and analyzing and reporting data. Approximately $138,136 per year represents SAMHSA’s cost to manage and administer the data collection and analysis, covering 50% of the time of two employees (GS-14, step 10; $138,136 annual salary). The annualized cost is approximately $779,155 ($641,019 + $138,136).
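The annualized figure follows from simple arithmetic over the five-year contractor total plus the yearly SAMHSA staff cost (a sketch of the calculation, not an official cost model):

```python
# Arithmetic behind the annualized government cost cited above.
contractor_five_year = 3_205_094   # contractor data collection cost over 5 years
samhsa_annual = 138_136            # 50% time of two GS-14, step 10 employees

annualized = contractor_five_year / 5 + samhsa_annual
print(round(annualized))  # 779155
```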

    1. Changes in Burden

This is a new data collection.

    1. Time Schedule, Publications, and Analysis Plan

Time Schedule

Exhibit 9 outlines the key time points for the study and for the collection of information.

Exhibit 9. Time Schedule for Data Collection

Activity | Time Schedule
Prepare for data collection, including programming web system | November 2013–May 2015
Obtain OMB approval for data collection | January 2015
Collect data | February 2015–May 2018
Collect first round of Grantee PD Interviews (all cohorts) | February 2015–April 2015
Collect first round of GLI-R (all cohorts) | February 2015–April 2015
Collect first round of CLI-R (PFS II and PFS 2013) | February 2015–April 2015
Collect CLI-R every May & November | May 2015–May 2018
Collect final round of PFS II GLI-R | August 2015–September 2015
Collect final round of PFS II Grantee PD Interviews | August 2015–September 2015
Collect second round of PFS 2013 Grantee PD Interviews | February 2016–April 2016
Collect second round of PFS 2014 Grantee PD Interviews | February 2017–April 2017
Collect final round of PFS 2013 GLI-R (possibly also PFS 2014) | June 2018–July 2018
Collect final round of PFS 2013 Grantee PD Interviews (possibly also PFS 2014) | June 2018–July 2018
Analyze data | April 2015–September 2018
Disseminate findings (interim reports, presentations, manuscripts, final report) | April 2015–September 2018

Note. CLI-R, Community-Level Instrument–Revised; GLI-R, Grantee-Level Instrument–Revised; OMB, Office of Management and Budget; PD, Project Director; PFS, Partnerships for Success.

Publications

The SPF-PFS evaluation will help SAMHSA reach its diverse stakeholders through targeted products and innovative dissemination venues. The evaluation’s objective for all reports and dissemination products is to provide user-friendly documents and presentations that help SAMHSA successfully disseminate and explain the findings. The dissemination plan includes products in a variety of formats for a variety of target audiences. Audiences for these reports will include Congress, the Office of National Drug Control Policy (ONDCP), SAMHSA Centers, the evaluation’s SAMHSA Contracting Officer’s Representatives (CORs), SPF-PFS grantees, and the broader substance abuse prevention field (e.g., academia, researchers, policymakers, providers). The SPF-PFS evaluation recognizes that different audiences are best reached by different report formats. For example, reports to Congress and ONDCP will require materials that are concise but offer policy-relevant recommendations. Reports created for SAMHSA Centers and the CORs will require more in-depth information, such as substantive background and discussion sections, to supplement the analytic approach. Reports created for SPF-PFS grantees will be concise handouts with helpful, easy-to-read graphics on performance data rather than lengthy text. The SPF-PFS evaluation will develop an assortment of dissemination products, including short and long analytic reports, congressional briefings, annual evaluation reports, research and policy briefs, ad hoc analytic reports, journal articles, best practice summaries, and conference or other presentations.

Analysis

The SPF-PFS evaluation uses a series of interdependent analysis frameworks selected to maximize coverage of the key EQs posed for assessing the objectives of SPF-PFS: preventing the onset, and reducing the progression, of UAD and PDM and their consequences. The analysis plan proposes a series of analyses that move from basic descriptive analyses of GPRA measures, grantee performance measures, and NOMs measures (e.g., means, frequencies, percentages) to sophisticated qualitative analysis techniques and multiple analytic frameworks that reflect the complexities anticipated in data collected by the PEP-C.

Matched Comparison Groups

The SPF-PFS evaluation will use a pre/post design with matched comparison groups where relevant and possible. The PEP-C team plans to obtain relevant baseline census, archival, and survey estimates to select comparison counties (or communities) for SPF-PFS subrecipients. For some grantees, many of the required estimates will be available through standard (public) reporting; for others, the PEP-C team will need to collaborate with grantee-level evaluators to obtain them. In no case will new data collection be required for the matching process, and the follow-up outcomes data for the matched comparison groups will come from the same data sources as those used for matching.

Matched comparison communities will not be completing the GLI-R, CLI-R, or PD Interview.

Qualitative Analyses

Qualitative analyses for the SPF-PFS evaluation focus primarily on the PD Interview; however, techniques similar to those described in this paragraph will be applied to any open-ended responses on the GLI-R and CLI-R. Upon completion of each interview, the interview note taker, using the recordings for verification, will review and clean the notes. All notes will have a unique identifier that includes fields indicating the key informant’s State and role. PEP-C staff will upload the interview data into a qualitative research software program, NVivo, for coding. Preparation for coding will include developing a dictionary, or codebook, in which codes are carefully defined and logged so that coders can follow their meaning and know when to apply them to text within an interview. Codes will reflect prominent themes relevant to interpreting evaluation findings. To ensure reliability in the coding process, coders, who will also have served as interviewers or note takers, will be assigned to work independently and concurrently on a subset of interviews. A kappa coefficient of .8 or higher will be maintained on all codes, and any discrepancies will be resolved between coders to ensure consistent application of codes. Upon completion of coding, findings will be compiled on the basis of the prominence of codes (or themes) and organized around the major research questions and constructs. Analyses will focus on the characteristics reported by grantee Project Directors relating to the subrecipient selection process, criteria for intervention selection, grantee infrastructure, capacity building, activities to address health disparities, and leveraging of resources. The findings that emerge will be used to examine how variations in SPF-PFS outcomes relate to grantee-level characteristics and processes.
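As an illustration of the intercoder reliability check described above, Cohen’s kappa for two coders’ code assignments can be computed as follows. This is a sketch only: the code labels and segment data are hypothetical, and in practice NVivo or similar software would perform the calculation.

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders' code assignments on the same segments."""
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Chance agreement from each coder's marginal code frequencies.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes applied by two coders to ten interview segments:
coder_1 = ["capacity", "capacity", "selection", "selection", "funding",
           "funding", "capacity", "selection", "funding", "capacity"]
coder_2 = ["capacity", "capacity", "selection", "funding", "funding",
           "funding", "capacity", "selection", "funding", "selection"]

kappa = cohens_kappa(coder_1, coder_2)
print(round(kappa, 2))  # 0.7, below the .8 threshold, so coding would be reconciled
```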

Qualitative comparative analysis. The SPF-PFS evaluation plans to use qualitative comparative analysis (QCA) specifically to address EQs 3 and 5. QCA is a case-oriented approach that examines relationships between conditions (similar to explanatory variables in regression models) and an outcome using set theory, a form of logic that deals with the nature and relations of sets. Few methodological approaches can accommodate the small number of grantees, but QCA is designed for studies with small to intermediate numbers of cases (i.e., 10 to 50), although it can also be applied successfully to large samples. QCA examines what conditions, alone or in combination with other conditions, are necessary or sufficient to produce an outcome; in contrast, regression analyses identify “what factor, holding all other factors constant at each factor’s average, will increase (or decrease) the likelihood of an outcome.” Because of the intermediate number of grantees (n=52 total PFS II, PFS 2013, and PFS 2014 grantees), QCA will allow us to explore EQs for intermediate-number populations when probabilistic analysis may not be possible.

We plan to use data from the GLI-R, CLI-R, and PD Interviews to operationalize “condition sets” (similar to independent variables in regression) and the publicly available outcomes data described under EQ1 in Exhibit 3 to operationalize “outcome sets” (similar to dependent variables). We will abstract the relevant values from appropriate data sources and create a Stata 13 data set. We will follow conventional QCA practices, which include identifying individual necessary and sufficient conditions, examining combinations of conditions (i.e., sufficient causal pathways), and assessing QCA parameters of fit (i.e., consistency and coverage).
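The QCA parameters of fit mentioned above can be illustrated with a minimal crisp-set example (the condition and outcome values below are hypothetical; the actual analysis will follow conventional QCA practice in Stata 13):

```python
# Crisp-set illustration of QCA sufficiency parameters of fit. Each tuple is a
# hypothetical case: (condition X present?, outcome Y present?).
cases = [(1, 1), (1, 1), (1, 0), (0, 1), (0, 0), (1, 1)]

n_condition = sum(c for c, _ in cases)   # cases in the condition set
n_outcome = sum(o for _, o in cases)     # cases in the outcome set
n_both = sum(c and o for c, o in cases)  # cases in both sets

# Consistency: of cases with the condition, the share also showing the outcome.
consistency = n_both / n_condition
# Coverage: of cases with the outcome, the share "explained" by the condition.
coverage = n_both / n_outcome

print(consistency, coverage)  # 0.75 0.75
```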

Quantitative Analyses

Several features of the evaluation design and EQs guided the selection of the analysis frameworks, including:

  • Repeated outcomes;

  • Data from subrecipient communities nested within grantees;

  • Nonrandomized comparison communities within grantee States; and

  • Nonrandom selection of intervention types that often occur in combination.

Each of these features led to the selection of the complex analysis frameworks the SPF-PFS evaluation has proposed to use or adapt. Below is an overview of the more advanced analytic frameworks that will be used in the SPF-PFS evaluation, which include:

  • multilevel latent growth models (MLLGMs, with parallel and lagged processes)

  • integrative data analysis/item response theory

  • meta-regression

  • propensity score weighting

  • latent class analysis (LCA)

  • advanced mediation analysis

  • cost analysis models

Multilevel latent growth models. One of the primary analysis frameworks that will be used is the MLLGM. The basic linear MLLGM (Muthén, 1997) is constructed to account for variability in changes over time on outcomes, with sources of variability at the grantee and subrecipient levels. Where possible, a multiple baseline strategy will be employed whereby trends over time on outcomes at the grantee and subrecipient levels prior to PFS implementation are compared with post-implementation trends (similar to an interrupted time series approach). In addition, these analyses will focus on predictors of post-implementation changes in outcomes over time, such as the type and dosage of interventions supported under SPF-PFS and variation in outcomes across SPF-PFS cohorts (i.e., PFS II, PFS 2013, PFS 2014). However, several limitations may arise in these analyses, including (1) small sample sizes at the grantee level, (2) nonrandom assignment of PFS interventions, and (3) variation in how GPRA and NOMs measures may be reported within and across grantees. As a result, the SPF-PFS evaluation will incorporate alternative or complementary analysis frameworks, or both, in addition to MLLGM.

Integrative data analysis. To address concerns about the potential variability in measures across grantees, the SPF-PFS evaluation will employ integrative data analysis (Curran et al., 2008; Curran & Hussong, 2009) to harmonize different measures of UAD and PDM (as well as risk and consequences measures) across grantees and subrecipients. The harmonization process involves (1) creating a common measure for questions that are worded slightly differently from each other but are comparable and (2) using response scales (e.g., Likert-type scales, ordered categories) that can be condensed to their least common denominator (e.g., ever used/never used). For single-item constructs and measures, the harmonization process is the only step necessary. For constructs that reflect multiple-item scales, confirmatory factor analysis models will be employed to assess which items load on which factors and derive factor and scale scores via item response theory models, which weight each item according to how common (or rare) a response is and how correlated the item is with other items making up the factor. Note that this step may be more difficult at the grantee level, where sample sizes are small.
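Step (2) of the harmonization process, collapsing differently scaled items to a least common denominator such as ever used/never used, can be sketched as follows (the response scales shown are hypothetical):

```python
# Collapsing two grantees' differently scaled substance-use items to a common
# ever-used/never-used indicator. Scale wordings here are invented examples.

def harmonize_grantee_a(response: str) -> int:
    # Grantee A asks: "Never", "Once or twice", "Monthly", "Weekly", "Daily"
    return 0 if response == "Never" else 1

def harmonize_grantee_b(days_used: int) -> int:
    # Grantee B asks for a count of days used in the past 30 days (0-30)
    return 0 if days_used == 0 else 1

print(harmonize_grantee_a("Weekly"))  # 1
print(harmonize_grantee_b(0))         # 0
```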

Meta-regression. A second strategy that can be employed if sample sizes are too small to estimate MLLGMs or to estimate scale scores under integrative data analysis is meta-regression (Hox, 2010). Meta-regression uses effect sizes in place of raw data (as in meta-analysis, where effect sizes are extracted from journal articles). Unlike MLLGMs, meta-regression does not require that the outcome measure be exactly the same across all analysis units; effect sizes for changes over time from disparate measures of the same construct within a grantee (for grantee-level analyses) are sufficient for analysis. In addition to effect sizes, the standard errors of the effect sizes are used to calculate meta-regression weights in a manner similar to that of standard meta-analysis models. Key predictors can then be used to account for variability in effect sizes, as in a standard meta-analysis.

Propensity scoring approaches. Propensity scoring is a statistical approach used to balance measured covariates that influence the probability of selection into two or more non-experimental groups and also influence treatment outcomes (Rosenbaum & Rubin, 1983; Shadish, Cook, & Campbell, 2002; West, Biesanz, & Pitts, 2000); more recent work has extended propensity scoring to continuous measures of treatment (Imai & van Dyk, 2004). The propensity score (when treatment assignment is categorical) is the predicted probability of assignment to a treatment condition given the key covariates of interest (estimated from a regression model—ordinary least squares for continuous treatment or logistic for categorical treatment), with the resulting probability used as either a sample stratifier or a weight in subsequent outcome analyses. After the propensity score weight is controlled for, covariate distributions should be equal across conditions, which will mimic random assignment to the conditions of interest in the particular EQ. These scores can then be used to weight outcome analyses (e.g., MLLGMs) to produce unbiased estimates of the treatment effect (Harder, Stuart, & Anthony, 2010; McCaffrey, Ridgeway, & Morral, 2004; Rosenbaum & Rubin, 1983; Shadish, 2010).
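A minimal sketch of the weighting step follows, assuming propensity scores have already been estimated from a logistic model of treatment status on the covariates of interest (all values are hypothetical):

```python
# Turning estimated propensity scores into inverse-probability weights for
# subsequent outcome analyses. Scores here are invented; in practice they come
# from a logistic regression of treatment status on the covariates of interest.
communities = [
    # (treated?, estimated propensity score)
    (True, 0.8), (True, 0.6), (False, 0.3), (False, 0.5),
]

# Weight treated units by 1/p and comparison units by 1/(1 - p).
weights = [1 / p if treated else 1 / (1 - p) for treated, p in communities]
print([round(w, 2) for w in weights])  # [1.25, 1.67, 1.43, 2.0]
```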

Latent class analysis/finite mixture models. LCA models are used to model unobserved heterogeneity (i.e., “hidden” groups) among individuals who would cluster into subpopulations. Class membership is a latent variable that distinguishes groups on the basis of estimated differences in either the probability of endorsement of an item (for categorical measures) or mean differences on continuous measures. For example, among multiple prevention domains, LCA can be used to determine a small number of common patterns of types of prevention activities. Extensions of LCA can be structured so that different hidden subpopulations of SPF-PFS subrecipient communities can be defined by differences across continuous latent variables (i.e., factor mixture models) or different trajectory classes (i.e., latent class growth analysis, growth mixture models). LCA analyses will be used when the interest is in understanding the effects of different combinations of intervention types.

Advanced mediation analysis. For the assessment of linkages between various parts of the SPF-PFS logic model, a series of longitudinal mediation analyses (e.g., Cheong, MacKinnon, & Khoo, 2003; Jagers, Morgan-Lopez, Flay et al., 2009) will also be used. The SPF-PFS evaluation will assess the linkages between intervention components and changes in long-term outcomes as mediated through parallel changes over time in SPF-PFS intervention components.

Cost analysis. The GLI-R and CLI-R also include resource use and cost questions to collect data to estimate the costs and perform economic evaluations of the SPF-PFS grants at the grantee and subrecipient levels. For this evaluation, costs will be estimated separately for start-up activities and ongoing implementation activities. The cost analysis will provide both dollar estimates and estimates of the amount of resources used so that the results can be applied to different circumstances and prices. The economic evaluation will also identify the key drivers of cost, allowing decision makers to identify critical cost components of the intervention. The detailed economic study will also facilitate sensitivity analysis, which assesses the degree to which conclusions are robust to changes in key assumptions.

Grantee and subrecipient respondents will be asked to report overall resource use and funding for the SPF-PFS program (grantee level) and for each intervention being implemented (subrecipient level) for a specified period. At the intervention level, these data will be reported by resource category (e.g., labor, building space, equipment, supplies and materials). Both grantee and subrecipient respondents will also be asked to report the quantity (and monetary value, if known) of any volunteer or in-kind resources used during that period and to report these in-kind resources by resource category (e.g., labor, building space). The SPF-PFS evaluation cost analyses will follow an ingredients-based approach that will allow for the derivation of estimates for labor and non-labor resources separately. The total cost for a given program or strategy will be the sum of staff labor costs (e.g., time spent performing strategy activities), costs of building space, costs of any equipment, costs of any supplies or materials, and costs of any other miscellaneous resources used in the strategy.
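The ingredients-based roll-up described above, summing direct and in-kind resources by category, can be sketched as follows (all dollar figures are invented for illustration):

```python
# Ingredients-based cost roll-up for one hypothetical intervention, using the
# resource categories named in the instruments; dollar amounts are invented.
resources = {
    "labor":                  42_000,  # staff time on strategy activities
    "building space":          6_500,
    "equipment":               2_200,
    "supplies and materials":  1_800,
    "other":                     500,
}
in_kind = {"labor": 4_000, "building space": 1_200}  # volunteer/donated, valued

direct_total = sum(resources.values())
total_with_in_kind = direct_total + sum(in_kind.values())
print(direct_total, total_with_in_kind)  # 53000 58200
```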

The cost-effectiveness method follows the approach described in the literature (e.g., Drummond et al., 2005; Gold et al., 1996). This method entails tabulating the costs and effectiveness measures for each model under study in increasing order of cost (or effectiveness). Starting with the model with the smallest cost (or effectiveness), cost-effectiveness ratios are then computed for each model relative to the next most expensive option after eliminating options that are dominated by other models (Drummond et al., 2005). A model may be dominated in either a simple sense (higher cost and lower effectiveness than another option) or an extended sense (a higher cost-effectiveness ratio than a more effective option). In addition to the cost-effectiveness analysis, we will also perform a limited benefit-cost analysis to examine the monetized benefits relative to costs for selected PFS models and for the PFS grant program overall. The key economic consequences on which these benefit-cost studies will focus are alcohol- and drug-related car crashes and injuries, alcohol- and drug-related crime, and alcohol- and prescription drug-related emergency department visits. These outcomes will be operationalized through the publicly available outcomes data described under EQ1 in Exhibit 3.
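The tabulation-and-dominance procedure described above can be sketched as follows (model names, costs, and effectiveness values are hypothetical). In this example, option C is eliminated by simple dominance and option B by extended dominance:

```python
def icers(frontier):
    """Incremental cost-effectiveness ratio of each option vs. the next cheaper one."""
    return [None] + [
        (frontier[i][1] - frontier[i - 1][1]) / (frontier[i][2] - frontier[i - 1][2])
        for i in range(1, len(frontier))
    ]

def cost_effectiveness_frontier(models):
    """models: (name, cost, effectiveness) tuples. Applies simple and extended
    dominance, returning surviving options with their incremental ratios."""
    ordered = sorted(models, key=lambda m: m[1])  # increasing cost
    frontier = []
    for m in ordered:                             # simple dominance: drop any
        if frontier and m[2] <= frontier[-1][2]:  # costlier, no-more-effective option
            continue
        frontier.append(m)
    changed = True
    while changed:                                # extended dominance: drop options
        changed = False                           # whose ICER exceeds that of a
        ratios = icers(frontier)                  # more effective option
        for i in range(1, len(frontier) - 1):
            if ratios[i] > ratios[i + 1]:
                del frontier[i]
                changed = True
                break
    return [(m[0], r) for m, r in zip(frontier, icers(frontier))]

# Hypothetical prevention models: (name, cost, effectiveness units)
models = [("A", 100, 10), ("B", 180, 12), ("C", 150, 10), ("D", 250, 20)]
print(cost_effectiveness_frontier(models))  # [('A', None), ('D', 15.0)]
```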

    1. Display of Expiration Date

OMB approval expiration dates will be displayed.

    1. Exceptions to Certification for Statement

There are no exceptions to the certification statement. The certifications are included in this submission.
