Mental Health Block Grant Ten Percent Set Aside Evaluation

OMB: 0930-0376



THE SUBSTANCE ABUSE AND MENTAL HEALTH SERVICES ADMINISTRATION (SAMHSA) MENTAL HEALTH BLOCK GRANT TEN PERCENT SET ASIDE EVALUATION

SUPPORTING STATEMENT PART B

B. Collections of Information Employing Statistical Methods

B.1 Potential Respondent Universe and Respondent Selection Method

B.1.1 Respondent Universe

The respondent universe for the proposed evaluation includes 250 SAMHSA grantee sites across the country implementing Coordinated Specialty Care (CSC) programming through MHBG 10% set-aside funding. Up to 32 of these grantee sites implementing a CSC program will participate in the full evaluation. The sampling approaches for the data collection activities are:


(1) Site Survey: A one-time online survey will be conducted with site directors of all CSC sites using MHBG 10% set-aside funding for CSC programming in FY 2017. One grantee director from each CSC site across the country will be invited to respond to the online survey.

(2) Agency Director/Administrator Interview: There will be one-day visits to all 32 sites annually in Years 1 and 2. The Agency Director will be selected purposively using organizational charts and information on each employee’s role at the host organization and its partner organizations.

(3) Coordinated Specialty Care (CSC) Staff Interview: There will be one-day visits to all 32 sites annually in Years 1 and 2. The CSC staff will be selected purposively using organizational charts and information on each employee’s role at the host organization and its partner organizations. Up to 3 CSC Program Staff will be interviewed at each site.


(4) Coordinated Specialty Care (CSC) Participant Interview: During the annual site visits, interviews will be conducted with up to 2 program participants at each of the 32 sites. The project will work with each site representative to purposively select up to two program participants for interviews at each site.


(5) State Mental Health Authority Interviews: There will be one-time key informant interviews with up to 32 State Mental Health Directors (or their representatives). The interviews will be conducted over the telephone.

(6) Fidelity Interview: Program fidelity assessments will be conducted annually in Years 1 and 2 at 32 sites. Up to four staff members from each agency will be selected to respond to the fidelity questions. The fidelity assessments will be conducted through telephone interviews.

(7) Possible Administrative Data Elements: Each site is already collecting data on participants as a part of their treatment process. The project will work with each site to obtain possible administrative data on client demographics and outcomes for analysis in the evaluation. The administrative data will include information on clients who were enrolled within 180 days prior to evaluation start (as soon as OMB approval is obtained) and future clients who will be enrolled within 180 days after evaluation start. There will be no additional interview or data collection effort with site clients as a part of this evaluation.

Attachments 1-7 present the site survey, Agency Director Interview, Coordinated Specialty Care (CSC) Staff Interview, Coordinated Specialty Care (CSC) Participant Interview, State Mental Health Authority Interview, Fidelity Interview, and Possible Administrative Data Elements.



B.1.2 Sampling Methods

Site Selection

Selection of the 32 sites will be purposeful. The selection criteria will include factors such as the availability of administrative data on the outcome measures identified for this evaluation; HHS regional representation; program model/type; CSC implementation and fidelity status; use of set-aside funding; variation in technical assistance received at program start-up; and urban/rural representation. There is no systematic sampling of sites from a universe.


Power Estimates

The evaluation is designed to identify differences in participant outcomes (assessed through administrative data) across sites implementing CSC programs with varying levels of fidelity. For the outcomes evaluation, assuming three groups of sites with high, medium, and low fidelity to the CSC model, it is critical to have a sample size sufficient to detect differences in key outcomes across the three groups. A recent evaluation1 funded by the National Institute of Mental Health that included a population with first-episode psychosis (FEP) used several key outcomes, including the Quality of Life Scale (QLS, which is one of the outcomes in the current evaluation), the Positive and Negative Syndrome Scale (PANSS), and the Calgary Depression Scale for Schizophrenia (CDSS). Using data from that evaluation, we estimate that for a power of 80 percent, the total sample size needed across the 32 sites is at most 348, or about 11 evaluation subjects per site on average.



Table 1. Sampling power analysis

Outcome variable    Desired power    Total sample size    Per-site sample size
QLS                 0.8              198                  6.0
PANSS               0.8              189                  5.7
CDSS                0.8              348                  10.5

Note: Intra-class correlation within site is assumed to be 0.1, alpha = 0.05, and power was set to 0.7 or 0.8. The number of sites is assumed to be 32.


For the high-fidelity group, the estimated effect sizes for the QLS, PANSS, and CDSS are the differences between the baseline measure and the follow-up measure. As presented in Table 2, these are 5.90, 4.32, and 0.78, respectively, which can be detected with a sample size of 348 and a power of 0.8.


Table 2. The estimated effect sizes for the high-fidelity group

Outcome variable    Baseline    Follow-up (high-fidelity group)    Estimated effect size
QLS                 54.77       60.67                              5.90
PANSS               74.54       70.22                              4.32
CDSS                4.66        3.88                               0.78
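The cluster-adjusted sample-size logic behind Tables 1 and 2 (an intra-class correlation of 0.1, alpha of 0.05, and power of 0.8) can be sketched as follows. The standard deviation, cluster size, and function name below are illustrative assumptions for a two-group mean comparison, not values or software used by the evaluation:

```python
import math

from scipy.stats import norm


def cluster_adjusted_n(effect_size, sd, icc, cluster_size,
                       alpha=0.05, power=0.8):
    """Total sample size to detect a mean difference of `effect_size`
    between two groups, inflated by the design effect for within-site
    clustering. `sd` and `cluster_size` are illustrative inputs."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided test
    z_power = norm.ppf(power)
    # Required N under simple random sampling (two-group comparison)
    n_srs = 2 * (sd / effect_size) ** 2 * (z_alpha + z_power) ** 2
    # Design effect for clustering of participants within sites
    deff = 1 + (cluster_size - 1) * icc
    return math.ceil(n_srs * deff)


# Example using the QLS effect size from Table 2 and a hypothetical SD of 15
print(cluster_adjusted_n(effect_size=5.90, sd=15, icc=0.1, cluster_size=6))
```

The design-effect term shows why the per-site counts in Table 1 matter: larger clusters or a higher intra-class correlation inflate the total sample size needed relative to simple random sampling.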







In-depth Interviews with Key Informants

Key informant interviews will be conducted with up to 32 State Mental Health Directors across the country. Interviewees will be contacted by phone and will represent the states where the 32 CSC sites are located.


There will be one site visit per site in each of Years 1 and 2. During the site visits, in-person in-depth interviews will be conducted with the site director/administrator and up to 3 program staff providing CSC services. Working with each site, Westat will identify a convenience sample of 2 program participants at each site for in-person in-depth interviews. Potential in-depth interviewees will be contacted by phone and offered $25 to participate in an interview at the CSC center, which will focus on their experiences as program participants. Respondents will be informed that the interview will last approximately 60 minutes. Agreement to be interviewed will be obtained by phone and followed by written consent in person.



B.2 Information Collection Procedures

Administrative data that include client outcomes are collected by the 32 study sites as a part of their routine operations. The sites will submit de-identified client-level data to the Evaluation Team. For the key informant interviews and fidelity assessments, the Evaluation Team will conduct all data collection activities. The Evaluation Team also will provide evaluation-related technical assistance and training (TAT) to the 32 study sites to build consensus on an approach to the evaluation, its measures, and data collection procedures. The National Association of State Mental Health Program Directors (NASMHPD), which is a member of the Evaluation Team for this evaluation, already has regular contact with the CSC sites through its TAT-related activities, and these established connections will help facilitate the evaluation-related TAT.


Site Survey

The site survey will be administered to all CSC programs that are funded through the MHBG 10% set-aside funds in FY2017. This survey will help address research questions regarding the context within which the sites operate. The site survey will include items that will assess the following:


  • Program model(s) used

  • Target population and demographics of clients

  • Process for identifying and recruiting participants

  • Array of treatment services and supports offered through the program’s CSC model

  • Fidelity process used by site

  • Peer involvement

  • Duration of care

  • Outcome measures used by site

  • Use of MHBG funds for FEP

  • Use of other funds for FEP

  • Sources of insurance that programs accept for payment for services

  • Information technology/system availability

  • Sustainability of the services

One key contact (in most cases the center director) at each site will be contacted by email and asked to participate in the survey. A secure link to the survey will be provided within the email. The survey will take approximately 12 minutes to complete.


Agency Director/Administrator Interview, Coordinated Specialty Care (CSC) Staff Interview, CSC Participant Interview, and State Mental Health Authority Interview

The site visit discussion guides are designed to coordinate the collection of information on how the components of the CSC model are delivered at each site. The Evaluation Team will conduct interviews with state representatives for MHBG funding, CSC program site directors, site staff, and program participants. The questions will examine the recruitment of participants with FEP, education and training of program staff, and referral activities. These questions will be mainly open-ended, as there will be variation in responses and the intent is to capture that variation rather than to use predetermined response codes.


The interview data collection protocol is designed with overlapping content areas focused on eliciting information about operations and perspectives on CSC service delivery at each site. Site directors will be asked to recount their MHBG experiences and to offer informed perspectives on the primary objective of the FEP program, participant recruitment, and staff training. Program staff will be asked questions about provision of individual components of CSC, service delivery challenges and solutions, and participant outcomes. Program participants will be asked questions about their experiences at the site, challenges and facilitators that are important in their treatment plan, and outcomes observed through participation in the program. State mental health directors will be asked about the state's role in making programmatic decisions for CSC sites, such as which CSC program model is implemented, what type of technical assistance sites receive, and how set-aside funds are allocated and utilized for service delivery. Questions will focus on the challenges experienced by the programs and lessons learned, fidelity to the CSC service package, funding for different components of the service package, staff and participant experiences, program outcomes, and barriers/facilitators to successful implementation of CSC services. The collection and analysis of qualitative case evaluation data are crucial for answering the research questions.



Fidelity Interview

Information gathered through fidelity assessment interviews (conducted over the phone with site staff) and reviews of de-identified health records and supplementary material from each site will be used to rate the program’s fidelity to the CSC model. Ratings of fidelity will be made using the First Episode Psychosis Services Fidelity Scale (FEPS-FS)2. The FEPS-FS was developed using formal knowledge synthesis processes, which included systematic reviews, international expert consensus, and pilot testing in two countries. The scale is designed to assess fidelity across differing health systems and different program models. The FEPS-FS contains 31 program-specific items. The scale has been developed to measure the adequacy of implementation of CSC.


The data collection procedures that support the fidelity review include health record reviews and semi-structured interviews with team leaders. For items that require health record reviews for rating, the intent is to use records selected at random from clients who have participated in the program for at least 1 year.
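As a rough illustration of how item-level fidelity ratings could be rolled up into the high/medium/low fidelity groups used in the outcomes analysis, consider the sketch below. The 1-5 rating range and the group cut-offs are assumptions for illustration only, not the published FEPS-FS scoring rules:

```python
def fidelity_group(item_ratings, high_cutoff=4.0, medium_cutoff=3.0):
    """Classify a site as high/medium/low fidelity from the mean of its
    item ratings (assumed to range 1-5). Cut-offs are illustrative only,
    not the FEPS-FS published thresholds."""
    mean_rating = sum(item_ratings) / len(item_ratings)
    if mean_rating >= high_cutoff:
        return "high"
    if mean_rating >= medium_cutoff:
        return "medium"
    return "low"


# A site rated 4 on all 31 items would fall in the high-fidelity group
print(fidelity_group([4] * 31))
```

Grouping on a mean item rating is one simple convention; any operational definition the evaluation adopts would need to be documented alongside the FEPS-FS ratings themselves.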



Possible Administrative Data Elements

To minimize burden and maximize the number of sites reporting participant-level data, the Evaluation Team will seek sites that are already collecting the measures identified for this evaluation. The Evaluation Team will develop procedures to assess the quality of any data that sites are already collecting to ensure its adequacy for the evaluation, and it utilized information from partners about data sites already collect to help select data elements, thereby reducing site burden. Sites will be asked to provide participant data at baseline and every 6 months thereafter for up to 18 months. To reduce burden on sites, the project will follow these procedures:


  • All of the required items will be ones that providers regularly collect as a part of their routine practice. Centers will not be required to make any changes to their current practices. A site staff member familiar with the client will be able to complete most of the questions quickly.

  • Most items will require only a “yes” or “no” response.

  • In some cases, sites will be asked to include certain information only if it is already being collected at the site. No changes to the routine of the clinic are required for these items.


B.3 Methods to Maximize Response Rates

Several steps will be taken to maximize response rates and reduce nonresponse bias for all data collection efforts. The Evaluation Team will lead and/or be available to support each data collection process, providing ongoing TAT, answering sites’ questions, and providing clarification and guidance whenever needed. For most data collection activities, the Evaluation Team will collect data from participants involved in the planning and implementation of CSC models. Efforts to maximize response rates are presented here by type of data collection method.


  • Requesting documents. Document requests will be combined across all evaluation components to minimize the number of requests and avoid duplicate requests.

  • Identifying respondents among participants. The Evaluation Team will work with each site’s project director to identify the appropriate people to interview. All respondents will be partners in the planning and implementation of the CSC models and will participate in the evaluation as part of the performance of their roles.

  • Scheduling interviews. The Evaluation Team will be flexible in scheduling interviews, provide a copy of the interview schedule ahead of time, and respect the specified time limits. To make the best use of informants’ time, the Evaluation Team will review available documents and conduct web searches to collect publicly available information prior to the interview.

  • Site liaison model. Individual Evaluation Team members will serve as site liaisons to each participating site to facilitate communication in ways that the Evaluation Team anticipates will enhance response rates, data quality, and site motivation. In addition, the site liaison model will enable the Evaluation Team to understand the sites more comprehensively, which will be of value when interpreting findings.

B.4 Test of Procedures

The data collection instruments, including the site survey and key informant interviews, were reviewed by internal and external technical advisers, who provided feedback on measurement quality, potential burden, and ease of administration. Additionally, as part of previous university-based research that was not conducted for SAMHSA, the CSC fidelity instrument (the FEPS-FS) was pilot tested in two countries. Results from this pilot testing indicated the feasibility and reliability of the fidelity instrument (intraclass correlation coefficient for interrater reliability = .842; 95% confidence interval = .795-.882). Content validity was supported by comparisons with three existing fidelity scales, as the FEPS-FS had the highest proportion of components common to all scales. Results indicated that the fidelity scale was feasible for sites to use, reliable, and a valid measure of adherence to evidence-based practices for first-episode psychosis services.3


B.5 Statistical Consultants

Name                Affiliation                       Telephone Number    E-mail
Abram Rosenblatt    Principal Investigator, Westat    301-517-4065        [email protected]
Preethy George      Project Director, Westat          301-738-3553        [email protected]
Hyunshik Lee        Senior Statistician, Westat       301-610-5112        [email protected]



List of Attachments

Attachment 1: Site Survey

Attachment 2: Agency Director/Administrator Interview

Attachment 3: Coordinated Specialty Care (CSC) Program Staff Interview

Attachment 4: Coordinated Specialty Care (CSC) Participant Interview

Attachment 5: State Mental Health Authority Interview

Attachment 6: Fidelity Interview

Attachment 7: Possible Administrative Data Elements


1 Kane, J.M., Robinson, D.G., Schooler, N.R., Mueser, K.T., Penn, D.L., Rosenheck, R.A., Addington, J., … Heinssen, R.K. (2016). Comprehensive versus usual community care for first-episode psychosis: 2-year outcomes from the NIMH RAISE Early Treatment Program. American Journal of Psychiatry, 173(4), 362-372.

2 Addington, D., Norman, R., Bond, G., Sale, T., Melton, R., McKenzie, E., & Wang, J. (2016). Development and testing of the First Episode Fidelity Scale. Psychiatric Services. Advance online publication.

3 Addington, D., Norman, R., Bond, G., & Wang, J. (2016). Development and testing of the First-Episode Psychosis Services Scale. Psychiatric Services, 67(9), 1023-1025.



Author: Sonji Hogan