





Supporting Statement










Phase Four of the National Evaluation

of the Comprehensive Community Mental Health Services

for Children and Their Families Program










Child, Adolescent and Family Branch

Division of Service and Systems Improvement

Center for Mental Health Services

Substance Abuse and Mental Health Services Administration


Summary



The purpose of this request is to obtain approval of the revised data collection associated with Phase IV of the National Evaluation of the Comprehensive Community Mental Health Services for Children and Their Families Program (OMB No. 0930-0257), which expires on April 30, 2007. The current request builds on experience garnered during all phases of the evaluation and enhances the design, data collection procedures, and instruments.


Serious emotional disturbance affects more than 4.5 million children and their families in the United States. There is consensus that an integrated, coordinated, and comprehensive system of care is the best approach for meeting the needs of this population. The Comprehensive Community Mental Health Services for Children and Their Families Program, which is administered by the Center for Mental Health Services (CMHS) within the Substance Abuse and Mental Health Services Administration (SAMHSA), provides funds to support a broad array of community-based and family-centered services delivered through the system of care model. Under this program, CMHS has funded 5- and 6-year grants to States and locales to expand the array and capacity of services for children with serious emotional disturbance. Starting in 2002, CMHS began awarding 6-year cooperative agreements to provide these services. To date, this CMHS program has funded 126 such sites through these grants and cooperative agreements.


The data collection effort proposed here relates closely to the completed evaluations of Phase I grantees (OMB No. 0930-0171) and Phase II grantees (OMB No. 0930-0192), and to the evaluation of Phase III grantees (OMB No. 0930-0209), which is expected to be completed in March 2007. Phase IV of the National Evaluation of the Comprehensive Community Mental Health Services for Children and Their Families Program, for which revised clearance and re-approval is now being sought, expands data collection to the 25 sites awarded cooperative agreements in FY02 and FY03, and an additional 4 sites funded in FY04. Phase IV of the evaluation will continue for the duration of the 6-year award period, ending in September 2010.


The Phase IV evaluation has eight core study components that are conducted with all grantees, including a sub-study conducted in select grantee communities. These study components collect information on a major nationwide initiative serving thousands of children and their families. These data are used for the national evaluation as well as for local evaluations by the grantees. The Phase IV studies include: 1) the System of Care Assessment, which collects data through site visits conducted every 18 months to document the development of systems of care; 2) the Services and Costs Study, which analyzes data from sites’ fiscal management information systems (MISs); 3) the Cross-Sectional Descriptive Study, which collects descriptive data on all the children and families who enter the CMHS-funded systems of care throughout the funding period; 4) the Child and Family Outcome Study, which collects data longitudinally on child clinical and functional status, and on family life, from a sample of children and families; 5) the Service Experience Study, which collects data on child and family experience and satisfaction with services in the overall system; 6) the Sustainability Study, which gathers data on system of care characteristics and factors related to sustainability of infrastructure during the life of the award and after the Federal funding cycle is complete; 7) the Culturally Competent Practices Study, which assesses the characteristics of culturally competent practices, the extent to which service providers engage in them, the relationship between providers’ level of involvement in a system of care and their degree of culturally competent practice, and the relationship between child and family outcomes and the overall cultural competency of care in their community; and 8) the Treatment Effectiveness Study, which assesses the effectiveness of evidence-based treatments on clinical outcomes among a selected group of children with disruptive behavior who are at risk for developing substance use disorders and are served within two CMHS-funded systems of care.


As with the other phases of this project, Phase IV has been structured to capture the linkages between an enhanced system of care and the outcomes and experiences of children and families over time.


The new study components for this clearance are as follows:


  • The Primary Care Provider Study, which will investigate the role of primary health care practitioners (PCPs) in systems of care to learn how PCPs identify and treat children and youth with mental health needs, and to learn more about the factors that facilitate and interfere with communication and interaction between PCPs and mental health providers;

  • The Family Education and Support Study, which will examine the impact of community-based interventions (i.e., family education and support) on child and family outcomes within select CMHS-funded systems of care.





A. Justification




1. Circumstances of Information Collection




a. Background



The understanding of child and adolescent mental health disorders has improved significantly over the last two decades. As a result, the field is in a much better position today to estimate the extent to which mental health disorders occur in the population of children and adolescents at large, although it is likely that many children in need go undetected. Over the past several years, the Report of the President’s New Freedom Commission on Mental Health (2003) has set the agenda for transforming mental health care in America. The Commission evaluated the mental health service delivery system in the United States and advised the President on approaches to improve the system so that adults and children with serious mental health problems could participate fully in their communities. The recommendations outlined in the Commission Report yielded an unprecedented collaboration among Federal departments, agencies, and offices. The Substance Abuse and Mental Health Services Administration (SAMHSA) led the collaborative effort of the Departments of Health and Human Services, Education, Housing and Urban Development, Justice, Labor, and Veterans Affairs, and the Social Security Administration to produce the Federal Mental Health Action Agenda (SAMHSA, 2005a), which outlines first steps that can be taken to yield immediate results in system improvement. The Federal Action Agenda makes clear that the mental health service delivery system must focus its efforts toward achieving its primary goal of helping children with serious emotional disturbance “achieve recovery to live, work, learn, and participate fully in their communities” (p. 78).


The Commission’s Subcommittee on Children and Families created a vision for children’s mental health based on the system of care approach of the Children’s Mental Health Initiative (CMHI): that “our communities, states, and nation provide access to comprehensive, home and community-based, family-centered services and supports for children with mental health disorders and their families, while at the same time creating conditions that promote positive mental health and emotional well-being and prevent the onset of emotional problems in all children.” The vision aligns with the CMHI program and is consistent with SAMHSA’s vision of “A Life in the Community for Everyone” and mission of “Building Resilience & Facilitating Recovery.” Steps for implementing this vision, as outlined by the subcommittee, align with steps taken by CMHI-funded communities in their efforts to transform mental health service systems (Huang et al., 2005). The CMHI models the five principles of the Federal Action Agenda (SAMHSA, 2005a): focusing on outcomes; focusing on community-level models of care; maximizing resources; using research findings; and ensuring innovation, flexibility, accountability, and respect for constitutional roles. The CMHI thus clearly supports the agency’s priorities and has the full support of the Administration and SAMHSA’s leadership.


Children and adolescents with serious emotional disturbance face challenges in many aspects of their daily lives. They are at greater risk for substance abuse disorders, and youth with less severe emotional disturbance are vulnerable to increased emotional problems as a result of substance use (CMHS, 2002; Holden, 2003; Holden et al., 2003; Liao, Manteuffel, Paulic, & Sondheimer, 2001; Substance Abuse and Mental Health Services Administration, 2002). Youth with serious emotional disturbance are at greater risk for negative encounters with the juvenile justice system and have a high rate of criminal involvement compared to all students with disabilities (CMHS, 2002; Davis & Vander Stoep, 1997). Youth within the juvenile justice system display an exceptionally high rate of mental health and substance abuse disorders (Heffron, Pumariega, Fallon, & Carter, 2003). Students with emotional disturbance fail more courses, earn lower grade point averages, miss more days of school, are retained in grade more often than students with other disabilities, and have high dropout rates (Epstein, Nelson, Trout, & Mooney, in press; U.S. Department of Education [DOE], 2001).


Research supports assertions that people with mental illness during childhood have higher use of health care services in adulthood than other adults (Knapp, McCrone, Fombonne, Beecham, & Wostear, 2002), and may have poor employment opportunities and experience periods of poverty in adulthood (National Advisory Mental Health Council Workgroup on Child and Adolescent Mental Health Intervention and Deployment, 2001). There is also the increased risk that youth with mental illness will not reach adulthood, as these youth are more likely to commit suicide than youth without mental illness. Suicide is the fourth leading cause of death among youth age 10 to 14, and the third leading cause of death among those age 15 to 24 (Centers for Disease Control and Prevention [CDC], 2001). Many of these suicide victims have undiagnosed or untreated mental illness (Institute of Medicine [IOM], 2002). During 2006, SAMHSA revisited its Matrix of Priorities to add suicide prevention and workforce development program/issue areas; to include disaster readiness and response as a cross-cutting principle, in line with the agency’s central role in responding to mental health needs following Hurricanes Katrina and Rita; and to clarify that collaboration also includes international partners. These changes in SAMHSA’s matrix reflect the agency’s continuing efforts to address the call for transformation of mental health services. These priority areas are directly relevant to CMHI-funded communities as they address the impact of hurricanes and disaster response, confront youth suicide with a limited trained mental health workforce, and grow their programs.


Advances in the knowledge base over the last decade have served to illuminate continuing challenges in delivering services and meeting needs for this population, and have thrust the issue of children’s mental health into the public spotlight. Despite these advances, service capacity has not kept pace with need (Friedman, 2002; Stroul, Pires, & Armstrong, 2001); it is estimated that only 1 in 5 children with serious emotional disturbance receives the specialty services they need (Burns et al., 1995; DHHS, 1999; Shaffer et al., 1996), and youth with co-occurring mental health and substance abuse disorders rarely receive appropriate and timely services (Federation of Families for Children’s Mental Health and Keys for Networking, Inc., 2001). Unfortunately, the prevalence of serious emotional disturbance and its accompanying impairment are only likely to grow in the future.


There has been much debate about the best method to serve these children and their families. In 1969, the Joint Commission on the Mental Health of Children published a landmark study showing these children were typically not served or were served inappropriately in excessively restrictive settings. Later, the Commission’s findings were substantiated by numerous other studies, task forces, commissions, and reports. These studies concurred that community-based, family-centered, coordinated systems of care providing a range of services are necessary to effectively serve these children and their families.


In 1984, in response to these findings, the National Institute of Mental Health (NIMH) initiated the Child and Adolescent Service System Program (CASSP). Later administered by the Center for Mental Health Services (CMHS) within the Substance Abuse and Mental Health Services Administration (SAMHSA), CASSP provided funds to promote the development of comprehensive and integrated service delivery systems for children with serious emotional disturbance through a system of care approach. More recent publications (e.g., the Federal Mental Health Action Agenda and the President’s New Freedom Commission Report) document the progress that has been made and the resources devoted to transforming the nature of service delivery for children with serious emotional disturbance and their families. These reports advocate for mental health care to be provided in communities with treatments integrated across agencies and designed to meet the needs of individuals and their families.


The system of care program theory model proposes a comprehensive spectrum of mental health and other necessary services that are organized into a coordinated network to meet the multiple and changing needs of children and adolescents with serious emotional disturbance. In this model, agencies in various child-serving sectors, such as education, juvenile justice, mental health and child welfare, work together to provide the wide array of services needed by children with serious emotional disturbance and their families. Built upon the CASSP philosophy that calls for services to be child-centered, family-focused, community-based, and culturally competent, the model emphasizes the need to: 1) broaden the range of non-residential community-based services, 2) strengthen case planning across child-serving sectors, and 3) increase case management capacity to ensure that services work together across sectors and providers.


In spite of the progress made through CASSP efforts to develop an infrastructure for systems of care, a deficit of appropriate, less restrictive treatment services remains. Studies indicate rising costs of residential services and increasing rates of child placement in residential facilities and in out-of-home care. These findings are reasons for continued concern that children are served in overly restrictive settings.


b. The Comprehensive Community Mental Health Services For Children and Their Families Program (CMHI)



While the system of care model has provided a conceptual framework to meet the needs of children with serious emotional disturbance, funding to provide services at the local level has been either sporadic or missing. In 1992, the Federal government addressed this gap with the passage of the Children’s and Communities Mental Health Services Improvement Act (CMHI), which is part of the Alcohol, Drug Abuse and Mental Health Administration Reorganization Act (Public Law 102-321, Section 520). CMHI provides support through grants and cooperative agreements to states, political subdivisions of states, tribal communities, and territories to improve and expand systems of care that coordinate and integrate services across mental health, health, child welfare, education, juvenile justice, substance abuse treatment and other agencies, as appropriate, to meet more fully the needs of children with serious emotional disturbance and their families. The CMHI is the largest Federal commitment to children’s mental health to date, and through FY 2005 has provided over $957 million to support system development in 126 communities. The program is fully described in the grant Guidance for Applicants (see Attachment 1, Guidance for Applicants No. SM–02–002, Comprehensive Community Mental Health Services for Children and Their Families Program, CMHS, SAMHSA, DHHS).


The goals of the CMHS program are to:


  • develop a service delivery system that consolidates existing fragmented, categorical service funding streams so that accountability for effective care can be clearly attributed;

  • develop accountable delivery systems directly responsive to public health and welfare authorities with effective governance structures that manage processes for service planning and delivery and fully involve children and their families;

  • ensure full family-professional partnership in the planning, development, implementation, management, and evaluation of the local service system, and the care of their children and adolescents;

  • create a single system of care that is financially sustained through collaborative and integrated funding investments from State or community-based child- and family-serving agencies, including, but not limited to, child welfare, juvenile courts, education, health care and mental health, and specialty services (e.g., substance abuse treatment);

  • establish interagency involvement in the project’s structure and process, and demonstrate the extent of involvement in the interagency structure by representatives from the major child-serving agencies with interagency agreements relevant to the system of care; and

  • create organizational relationships between the State and local mental health agency and other State and local health and human service agencies as they pertain to the proposed project.


The goals of the CMHS program correspond with those outlined in Achieving the Promise: Transforming Mental Health Care in America (2005). Systems of care work to promote recovery and reduce stigma through the provision of youth-guided and family-driven care that is culturally and linguistically responsive. Services are informed by research, and evidence-based practices are used to treat children and youth, including those with co-occurring disorders. Finally, Federal, State, and local partnerships are encouraged across child- and youth-serving systems.


c. The Need for Evaluation



Title V Section 565. [290FF-4](c), General Provisions of the Public Health Service Act mandates annual evaluation activities. A basic requirement is documentation of the characteristics of the children and families served by the system of care initiative, the type and amount of services they receive, and the cost to serve them. Equally important is the need to assess whether the program was implemented, and services were experienced, as intended. It is also critical to assess whether the children served by the program experience improvement in clinical and functional outcomes, whether family life is improved, and whether improvements endure over time. Finally, policymakers and service providers need to know whether those outcomes can be reasonably attributed to the system of care initiative.


A government contractor (referred to as the National Evaluator throughout this document) coordinates data collection for the national evaluation and provides support for local-level evaluations. Each grantee is required by the cooperative agreement to hire a minimum of two evaluation staff (or their full-time equivalents) to ensure that data collection is systematic and can be sustained through the funding period. In this partnership between the National Evaluator and local evaluators, the National Evaluator provides training and technical assistance regarding data collection and research design. In addition, the National Evaluator receives data from all grantees, monitors data quality, and provides feedback to grantees. The grantees help shape data collection procedures and provide feedback to the National Evaluator regarding successful approaches. This evaluation will incorporate the data into both a grantee-specific and a national assessment of the program.


Previously Approved Clearance


The previously submitted OMB clearance request was approved for the first 3 years of the 6-year data collection effort for Phase IV of the National Evaluation of the Comprehensive Community Mental Health Services for Children and Families Program.


The national evaluation is driven by the system of care program theory model. This program theory asserts that to serve children with serious emotional disturbance, service delivery systems need to offer a wide array of accessible, community-based service options that center on children’s individual needs, include the family in treatment planning and delivery, and are provided in a culturally competent manner. An emphasis is placed on serving children in the least restrictive setting that is clinically appropriate. In addition, because many children with serious emotional disturbance use a variety of services and have contact with several child-serving agencies, service coordination and interagency collaboration are critical. The program theory holds that if services are provided in this manner, outcomes for children and families will be better than can be achieved in traditional service delivery systems.


To examine the system of care theory, the national evaluation is designed to answer the following overarching questions:


Who are the children and families served by the program and by the funded communities? Does the served population change over time as systems of care mature?

How do systems of care develop according to system of care principles (e.g., family and youth involvement, cultural competence, interagency collaboration) over time? In what ways does funding accelerate system development?

To what extent do children’s clinical and functional outcomes improve over time? How are family outcomes affected? How are changes in child, family, and system outcomes associated with efforts to implement and develop systems of care?

What are the service utilization patterns (specific services, treatments, and supports) for children and families in systems of care and what are the associated costs? How cost-effective are systems of care over time?

To what extent are children’s and families’ experiences consistent with the system of care philosophy? How satisfied are children and families with the services they receive? To what extent are family members and youth involved in systems of care?

To what extent do specific evidence-based interventions enhance positive outcomes among children and families, including prevention of substance abuse among children receiving services in systems of care?

Are there subgroups of children and families for whom a system of care is more effective?

To what extent are systems of care able to sustain themselves after Federal funding has ended? What factors facilitate or impede sustainability?

How competently do systems of care provide an array of services, treatments, and supports in the cultural and linguistic context of the child and family?

What is the role of primary care providers in systems of care? What are barriers to communication and interaction with mental health services?


These evaluation questions evolved over the last 9 years through development of the CMHI and feedback from system of care personnel and other partners and extend those mandated by the CMHI authorizing legislation. The legislation requires funded communities to participate in a national evaluation that assesses the number of children served, child and family characteristics, child and family outcomes, service utilization patterns, and system characteristics.


The evaluation design includes eight study components, including special studies that employ both qualitative and quantitative methods to comprehensively examine the impact of CMHI funding. This evaluation provides the opportunity to advance the assessment of evidence-based treatments within systems of care, and to examine in greater detail specific efforts and goals of the CMHI. The study components are:


System of Care Assessment (Attachment 4.A.). This component examines whether programs have been implemented in accordance with system of care program theory and documents how systems develop over time to meet the needs of the children and families they serve. A particular interest is whether services are delivered in an individualized, family-focused, coordinated manner, and whether the system involves multiple child-serving agencies. For Phase IV, site visits for each system of care community are conducted at 18- to 24-month intervals across the 6 years of funding, beginning in the second year of project funding and repeated in the fourth and sixth years. Information is being collected through a combination of document reviews, review of randomly selected case records, semi-structured quantitative and qualitative interviews, observations made on site, follow-up telephone interviews to clarify information, and the administration of selected domains of the Interagency Collaboration Scale (IACS) (Greenbaum et al., 2003). Categories of interview respondents include project directors, core child-serving agency representatives, representatives from family organizations, care coordinators, direct service providers, and caregivers of children being served by the system of care. The System of Care Assessment has included a formative youth component to evaluate youth involvement and experiences in system of care communities. These questions are asked of youth and youth coordinators. The IACS, which quantifies collaboration between child-serving agencies in system of care communities, is administered to project directors, core child-serving agency representatives, representatives from family organizations, care coordinators, and direct service providers.


Services and Costs Study. This study describes the types of services used by children and families, their utilization patterns, and the associated costs. The relationship between service use and outcomes is also explored. These data are maintained continually by grantees in their fiscal (e.g., charge, billing) management information systems (MISs) and transmitted to the National Evaluator at regular intervals. Of interest are the types of services, the combination of services, continuity or gaps in care, and the length of treatment. Additional site-level data are compiled from annual budget summaries provided to the State and other funders. Although clearance was not requested for data extracted from existing MISs or the annual administrative budgets (as they constitute no additional burden for site staff or families), they are mentioned here in order to describe the full scope of the evaluation.


Cross-Sectional Descriptive Study (Attachment 4.B.). This study describes child and family characteristics of all children entering CMHS-funded systems of care. Data are obtained primarily through in-person interviews with caregivers; an abbreviated set of items administered at intake is entered directly into a Web-based database by intake personnel to facilitate capture of basic descriptive characteristics of children served. Data are collected upon entry for all children and families who enter the system of care throughout the program’s funding period. In addition, for the children and families who participate in the Child and Family Outcome Study (see below), the descriptive data elements that may have changed over time (e.g., family income, legal custody) are collected again at follow-up data collection points. Because sites routinely maintain these data for their own administrative purposes, only the descriptive data collected on families at follow-up in the Child and Family Outcome Study sample create additional respondent burden.

Child and Family Outcome Study (Attachment 4.C.). This study, conducted among a sample of children in each site, examines how the system affects child clinical and functional status and family life. Outcome data on child clinical and functional status are used to assess change over time in the following areas: symptomatology, diagnosis, social functioning, substance use, school attendance and performance, delinquency and juvenile justice involvement, and stability of living arrangements. Family life is assessed in the areas of family functioning, caregiver strain, and material resources. These data are collected at all system of care sites within 30 days of the child’s entry into services and at 6-month intervals for the length of the evaluation.
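The timing rules above reduce to a simple schedule: a baseline interview within 30 days of service entry, then follow-up waves every 6 months. The sketch below is purely illustrative; the function name and the assumed ±30-day completion window around each follow-up due date are not part of the evaluation protocol.

```python
from datetime import date, timedelta

def outcome_study_windows(entry_date: date, waves: int = 2) -> list[tuple[date, date]]:
    """Illustrative scheduling helper: baseline is due within 30 days of
    service entry; each follow-up wave is due ~6 months (183 days) after
    entry, shown here with an assumed +/-30-day completion window."""
    windows = [(entry_date, entry_date + timedelta(days=30))]  # baseline window
    for wave in range(1, waves + 1):
        due = entry_date + timedelta(days=183 * wave)  # 6-month interval
        windows.append((due - timedelta(days=30), due + timedelta(days=30)))
    return windows

# Example: a child entering services on March 1, 2007
for start, end in outcome_study_windows(date(2007, 3, 1)):
    print(start.isoformat(), "to", end.isoformat())
```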

Service Experience Study (Attachment 4.D.). This study, conducted among the sample of children participating in the Child and Family Outcome Study, investigates the extent to which system of care principles are experienced by families, and considers experiences from the perspectives of caregivers and youth. Data are used to assess intervention fidelity, satisfaction with services, cultural competence, accessibility and coordination of services, perceived helpfulness of services, and impact of services on ability of family members to work outside the home. Data collection occurs at follow-up from those families who have received services in the previous 6 months. To assess service use, data are collected using 1) the Multi-Sector Service Contacts (MSSC), which reports services used in multiple child-serving sectors and records when, where, and how much of each service was received, and the caregiver’s perception of each service’s usefulness, 2) the Youth Services Survey (youth and family versions), which looks at satisfaction and experience with services, perceived outcomes of services, and compatibility of services with system of care principles, and 3) the Cultural Competence and Service Provision Questionnaire that assesses caregivers’ perceptions of whether they receive culturally-appropriate services.


Treatment Effectiveness Study (Attachment 4.E.). This study uses a randomized clinical trial design to assess the effectiveness of an evidence-based intervention, Brief Strategic Family Therapy, within two systems of care (i.e., Cleveland, OH and Oklahoma City, OK) on clinical outcomes by comparing outcomes among children who received the standard system of care services plus the evidence-based treatment to outcomes among children who received only the standard system of care services. Treatment fidelity measures (i.e., the Therapeutic Alliance Scale, caregiver and youth versions, and the Therapy Adherence Form), outcome measures specific to treatment goals (i.e., the Family Assessment Measure, Conflict Behavior Questionnaire, and Ohio Scales), and attitude measures (i.e., the Evidence-Based Practice Provider Survey) are administered to service providers, caregivers, and youth. These data are collected from 2 system of care sites, with the study including only children with the specific diagnostic characteristics for which the evidence-based treatment was designed (i.e., disruptive behavior disorder and at risk for substance use). Outcomes for children receiving the evidence-based treatment are compared to outcomes for a control group of matched children from the same system of care site.


Sustainability Study (Attachment 4.G.). Using a Web survey, this study gathers data on system of care characteristics and factors related to sustainability of infrastructure during the life of the award and after the Federal funding cycle is completed. The survey questions cover the following topic areas: (a) availability of specific services in the system of care, (b) mechanisms used to implement system of care principles, (c) factors affecting sustainability (whether each factor has played a role in the development or maintenance of the system of care, and, if so, the extent to which each has impacted the system of care), (d) success with objectives for implementing systems of care, and (e) strategies for sustaining systems of care. The Web survey is conducted with representatives from all sites in years 2, 4, and 5 of the evaluation.


Culturally Competent Practices Study (Attachment 4.I.). The purpose of this special study is to obtain information from service providers, administrators, family members, and youth in system of care communities on characteristics of culturally competent practices and on the extent to which service providers engage in culturally competent practices, as well as to examine the relationship between providers’ level of involvement in a system of care and their degree of culturally competent practice, and between child and family outcomes and the overall cultural competency of care in their community. This study builds upon the assessment of cultural competence across the service system in the System of Care Assessment protocol by obtaining information from a spectrum of mental health service providers across each community, and allows for more in-depth assessment of issues associated with engaging in culturally competent practice within the context of the system, the population served, and the practices of individual providers. In year 3, a one-time Web survey was conducted with approximately 30 mental health clinicians at each site. Data from the survey will be used to assess communities for consistency in use of culturally competent practices. One community rated high in these practices and one rated low or variable were selected for qualitative follow-up. Following analysis of survey data, separate focus groups with service providers, administrators, caregivers, and youth will be conducted in these communities.


Exhibit 1: Summary of Major Components in Phase IV

Note: Years refer to evaluation year



d. Clearance Request



SAMHSA is requesting approval for revisions to the previously approved national evaluation Phase IV package. Changes requested are described below:


      1. The number of Phase IV communities for which burden was calculated was reduced from 29 to 27 sites because funding for two of the 18 sites funded in FY 2002 was withdrawn. These sites were Sacramento, CA, and Washington, DC.


      2. Minor modifications were made to instruments in previously approved study components (see Attachment 4). In addition, modifications were made to the youth and youth coordinator instruments of the System of Care Assessment. These instruments are presented in Attachment 4.A.5.


      3. The addition of a Primary Care Provider Study. This study seeks to investigate the role of primary health care practitioners (PCPs) in systems of care, to learn how PCPs identify and treat children and youth with mental health needs, and to learn more about the factors that facilitate and interfere with communication and interaction between PCPs and mental health providers. The study is the final stage in a three-stage process, which was guided throughout by a team of stakeholders including representatives from youth, caregivers, service providers, project directors, and primary care personnel.


Part 1 involved collecting descriptive data on participating children’s health status, care, and financing, and continues through the 6 years of the national evaluation as a part of the longitudinal Child and Family Outcome Study. Part 2, conducted during year 2 of the evaluation, utilized qualitative evaluation methods to understand the role of primary care practitioners within systems of care. These data were obtained from discussion groups with various stakeholder groups involving nine or fewer participants and will be used to develop a model of the factors influencing the role of primary care practitioners in systems of care. Themes addressed included: perspectives on access to health care, the role of primary care providers in screening for mental health disorders, the role of primary care providers in providing ongoing mental health care, the role of primary care providers in prescribing medication, integration of health care services into systems of care, family partnerships with primary care, youth partnerships with primary care, collaboration between service providers and primary care, programmatic barriers to the integration of primary care into systems of care, and health care disparities and primary care services. Part 3 will be conducted in the upcoming year of the evaluation. Findings from the qualitative evaluation were used to develop a survey instrument that will be administered to primary care practitioners, specifically pediatricians, in communities funded by the CMHI.


      4. The addition of the Family Education and Support (FES) Study. The FES Study will examine the impact of community-based interventions (i.e., family education and support) on child and family outcomes within select system of care sites. This study focuses on a community-based practice that has not accumulated research evidence but that, through community-based implementation, has accumulated practice-based evidence. The addition of this study is important because there is little information about the effectiveness of community-level mental health care. Community-based care that is coordinated at the community level is necessary for effective and accessible treatment of mental illness. While the notion of family education and support is widely used in communities, many communities implement it differently, and it is an understudied area. The study uses a three-tier research design in which the activities at each tier both inform the field directly and shape the implementation of the subsequent tier’s activities.


Tier 1 involves developing a critical elements assessment by using existing data sources and extant literature to identify critical domains and elements for family education and support. During this phase of the study, secondary data analyses will be conducted using existing national evaluation data to describe family education and support services and examine outcome variables that may be associated with receipt of these services. The results from the secondary data analyses, along with a review of the literature and the analyses of other data sources from the national evaluation (e.g., System of Care Assessment, Cross-Sectional Descriptive Study, and Child and Family Outcome Study), will be used to develop a matrix of family education and support (i.e., the critical elements assessment) that will subsequently be used in Tier 2 (described below) to assess communities and generate community-unique profiles. Specific clearance is not requested for Tier 1; however, these activities are described in order to provide information about the full scope of this study component.


Tier 2 involves conducting a critical elements assessment with selected system of care communities. This assessment will be conducted with Phase IV communities in which at least 25 families are receiving family education and support services and in which that number represents at least 25 percent of the local service population. An initial review of the data indicates that up to six sites meet these criteria; a sketch of this eligibility rule follows this paragraph. The critical elements assessment will be used to determine community-based practices of family education and support within these sites. The critical elements assessment will be conducted via site visits at each site with administrative, provider, and consumer respondents. One-on-one interviews will be conducted with the Project Director and/or Clinical Supervisor to provide general background information about the family education and support model being implemented at the site, such as the number of families receiving this service, the number of providers delivering it, and any associated training activities. In addition, two focus groups will be conducted at each site, one with providers and one with caregiver consumers, to provide more detailed information about the family education and support model being implemented at the site, as well as consumer satisfaction with these models.
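The two Tier 2 eligibility thresholds (at least 25 families receiving family education and support, and those families constituting at least 25 percent of the local service population) amount to a simple conjunctive filter. In the sketch below, the site records and field names are hypothetical; only the two thresholds come from the study design.

```python
# Hypothetical site records; only the 25-family and 25-percent thresholds
# come from the Tier 2 design described above.
sites = [
    {"site": "Site A", "fes_families": 60, "total_served": 200},
    {"site": "Site B", "fes_families": 30, "total_served": 150},
    {"site": "Site C", "fes_families": 20, "total_served": 60},
]

def tier2_eligible(site: dict) -> bool:
    """A site qualifies if >= 25 families receive family education and
    support AND those families are >= 25% of the local service population."""
    return (site["fes_families"] >= 25
            and site["fes_families"] / site["total_served"] >= 0.25)

print([s["site"] for s in sites if tier2_eligible(s)])
# Site A: 60/200 = 30% -> eligible; Site B: 30/150 = 20% -> not eligible;
# Site C: only 20 families -> not eligible.
```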


Tier 3 is an outcomes study that will be conducted to examine family level outcomes and their association to family education and support intervention critical elements. Up to six sites will be selected from those targeted in Tier 2. Our experience with other phases of the national evaluation indicates that there is considerable variability in the ways sites operate and implement their programs. It is anticipated that multiple family education and support models may emerge from the findings in Tier 2. The critical elements assessment will identify key components of these models at each site. In order to allow for an examination of the impact that variation in local implementation may have on outcomes, site selection would be based on the variability of family education and support implementation characteristics and the variability in the experience of critical elements within sites.


Up to 50 families per site will be recruited to participate in the study. Families will be interviewed three times at 6-month intervals: at baseline, 6 months, and 12 months. Families will be assessed on their critical elements experience through information obtained from the ongoing service management process (e.g., child and family team and service planning meetings) and case record reviews. There is no additional burden associated with obtaining information related to participants’ service experience, as it is already collected as part of a program’s ongoing service provision process. However, study participants will be administered instruments to assess five main domain areas: caregiver social support; caregiver functioning/stress; mental health services self-efficacy; parenting skills and involvement; and parent use of mental health services. The majority of the information is collected through the national evaluation Outcome Study. Measures to be added to the national evaluation instruments to address these domain areas include: the Duke Social Support Scale (Landerman, George, & Campbell, 1989); Beck Depression Inventory (Beck, Steer, & Brown, 1996); Vanderbilt Mental Health Services Self-Efficacy Questionnaire (Reich, Bickman, & Heflinger, 2004); Caregiver Strain Questionnaire (Brannan, Heflinger, & Bickman, 1997); Alabama Parenting Questionnaire (Frick, Christian, & Wootton, 1999); and the Parenting Sense of Competence Scale (Gibaud-Wallston & Wandersman, 1978). Families with an identified need for family education and support services, as described in a plan of care upon entry into the system of care program, will be recruited for participation in the study. In order to maximize recruitment and study participation, site-specific implementation factors, such as the timing of the receipt of family education and support services after enrollment and the eligibility criteria provided by sites for receipt of these services, will be taken into account in site selection. The relationship between the experienced family education and support critical elements and child and family outcomes will be explored.


2. Purpose and Use of the Information



This evaluation will serve several purposes. It will: 1) describe who is being served by the CMHS-funded systems of care; 2) show whether there are observable differences in child and family outcomes that can be plausibly linked to a faithful implementation of the system of care approach; 3) describe how children and families experience the service system and how they use services and supports (i.e., utilization patterns); 4) estimate the cost of serving children in systems of care; 5) illustrate the development of systems of care as they move toward offering integrated and comprehensive services; 6) determine if a specific evidence-based treatment (Brief Strategic Family Therapy) enhances positive outcomes for children and families participating in a system of care; 7) determine the utility and impact of family education and support on child and family outcomes; 8) assess the role of primary care providers in systems of care and how they can better serve children in these programs; 9) describe how providers and system of care stakeholders conceptualize culturally competent care and determine whether providers in systems of care are providing culturally competent services, from the perspectives of both providers and recipients of services; 10) support technical assistance activities to help CMHS best meet program goals; 11) support CMHS in its efforts to establish standards for measuring its performance and effectiveness as required under the 1993 Government Performance and Results Act (GPRA); and 12) provide data for the National Outcome Measures (NOMs) established by SAMHSA for mental health programs. NOMs data will be collected through the Transformation Accountability (TRAC) system implemented by CMHS once OMB approval of the NOMs measures is obtained. The evaluation will phase in NOMs data collection through the TRAC system using an electronic data delivery (EDD) mechanism for annual performance reporting.


The data collected in Phase IV have been useful to CMHS and its partners, other Federal agencies, the grantees, individual children and their families, and the research field. Findings from the Phase I, II, and III evaluations have been used to describe the children and families served by the funded systems of care, to assess whether the children in the samples have experienced improved outcomes, to measure service experiences and system development, and to request additional funding from local and state agencies to sustain system of care services. In addition to contributing further information on topics covered in prior phases, Phase IV continues to add to the knowledge base through assessment of the fidelity and outcomes of evidence-based treatments, system of care communities’ readiness to sustain themselves and the barriers and facilitators to sustainability, the prevalence of culturally competent services provided within systems of care, primary care providers’ role in systems of care, and the efficacy of community-based practices. As in previous phases of the evaluation, the design allows for the exploration of the relationships between service use and outcomes and the study of the long-term impact of the program.


CMHS will continue to use the results from Phase IV to develop policies and provide guidance regarding the development of systems of care. Specific findings on the successes and challenges that agencies have experienced in developing collaborative, coordinated, and comprehensive systems are used to tailor technical assistance to grantees. Information and findings from the evaluation will help CMHS plan and implement other efforts related to systems of care, including building the evidence for effective community-based practices. Findings from the evaluation can also enhance other CMHS programs that support system development (e.g., Projects for Assistance in Transition from Homelessness, Block Grants for Community Mental Health Services, and the Community Support Program). In addition, the many partners that work in collaboration with CMHS, including the Federation of Families for Children’s Mental Health and the National Mental Health Association, among others, will be able to use the results in their national efforts to help build systems of care to meet the needs of children and families. Finally, CMHS will also use the findings from the evaluation to provide objective measures of its progress toward meeting targets of key performance indicators put forward in its annual performance plans as required by law under the GPRA. Globally, these measures for children include the number of children served in the CMHS program, cross-agency treatment planning, usage of inpatient hospitalization, school attendance, juvenile justice contacts, and past-month use of alcohol or illegal drugs. Specific measures from the Phase IV instrumentation corresponding to these global measures include case record review for number of children served and cross-agency treatment planning, the Living Situations Questionnaire (LSQ) for assessing usage of inpatient hospitalization, the Education Questionnaire (EQ) and the Delinquency Scale (DS) for assessing school attendance and juvenile justice contacts, and the Substance Use Scale (SUS) for assessing past-month use of alcohol and illegal drugs. These instruments are described in detail in Attachment 4.C.


Findings from the evaluation will be useful to policymakers, planners, and analysts in other Federal agencies involved in programs for this target population. The service program is being coordinated with relevant Federal agencies, such as the National Institute of Mental Health and the Administration for Children and Families in the Department of Health and Human Services, the Office of Juvenile Justice and Delinquency Prevention in the Department of Justice, and the Institute of Education Sciences and the Office of Special Education Programs under the Office of Special Education and Rehabilitative Services in the Department of Education. CMHS has held several meetings with representatives from these and other Federal agencies since the inception of this program. The involvement of staff from related agencies and programs ensures that the effort is coordinated at the Federal level and that results of the evaluation are useful to a wider audience. See Attachment 2.A. for a list of participants in the Federal-National Partnership for Children’s Mental Health.


Findings from the evaluation will continue to be used by grantees to improve the implementation of their systems of care and achieve the goals of the CMHI. Demographic and outcome data on a sample of children and families who participate in the system of care will aid grantees in identifying the program elements that help children and families function better, that promote family involvement, and that lead to client satisfaction. Grantees can use the information to better identify their target populations, improve their services, and bolster their efforts to obtain the required matching funds and sustain their system of care after the CMHI funding has ended. Indeed, several grantees have used data collected for the Phase I, II, and III studies to request additional funding from their State legislatures. The same is expected for the remaining years of Phase IV. Intervention-level data will continue to provide useful feedback to grantees on whether families experience services as the grantees intended and identify their programs’ strengths and weaknesses. This information helps grantees plan culturally competent services and supports that families report as useful and that are associated with improved child and family outcomes. System of Care Assessments provide useful feedback on how to refine the system by identifying gaps in system development and barriers to collaboration, which helps the grantees more effectively allocate personnel and funding and prioritize activities. Grantees will continue to learn what barriers children and their families perceive and be able to work to eliminate such barriers. Clinicians have been able to use the data collected with standardized objective measures to guide treatment.


The research community, particularly the field of children’s mental health services research, will continue to benefit in a number of ways. First, evaluation of the CMHI adds significantly to the developing research base about systems of care. Second, the focus on child, family, and system outcomes allows researchers to examine and understand the specific ways children improve, how services can be enhanced, and the importance of adherence to service plans. As a result, the relationship among these variables is better understood. Finally, the analysis of evaluation data helps researchers formulate new questions about systems of care and specific services, and helps both service providers and researchers improve the delivery of children’s mental health services. The information obtained from the Outcome Study and the Treatment Effectiveness Study is of particular importance in addressing these research goals.


If these data are not collected, policymakers and program planners at the Federal and local levels will not have the necessary information to determine the extent to which children with serious emotional disturbance and their families experience grant-funded services as they were intended. Without this evaluation, they will not know whether these systems have had any positive impact on the lives of the people they serve.


3. Use of Improved Information Technology

The majority of the child and family descriptive, outcome, intervention-level, and treatment effectiveness data are collected through interviews with children and families using standard instruments. The data collection is conducted by grantee site staff (or by National Evaluation staff in the case of the Treatment Effectiveness Study, the Family Education and Support Study, and the System of Care Assessment). Every effort has been made to reduce the burden on children and families participating in the study, including offering to conduct the interviews in their homes or at other locations most convenient for them. Previous experience has shown that sites differ in their access to hardware and software. Requiring special hardware or software for this evaluation would be disruptive and would increase rather than reduce burden, especially since grantees must be capable of administering the instruments in a variety of settings. However, the National Evaluator has provided software for computer-assisted personal interviewing (CAPI) for those grantee sites that have access to the necessary hardware, across all study components. Approximately 50 percent of total responses are obtained electronically by computer-assisted personal interview or Web survey.


Because the collection of System of Care Assessment data is primarily qualitative in nature and does not lend itself to the use of special technology, these data are collected by the National Evaluator during site visits. Data from the Cross-Sectional Descriptive, the Child and Family Outcome, the Service Experience, and the Treatment Effectiveness studies are managed using an integrated Internet-based data input, management and dissemination system—the interactive-collaborative network (ICN). The ICN, which was introduced in Phase III of the National Evaluation, reduces evaluation burden for the sites and allows real-time access to data for site personnel and national evaluation team members. The system serves as a mechanism for communicating about evaluation activities and results.


The ICN was designed as a three-part system that allows systematic data input, immediate validation to identify data input flaws, and monitoring of data entry and evaluation in real time. It reduces processing time and provides the capability of creating interactive reports. The ICN maintains privacy through the provision of different levels of password-protected access to site and national data. The three software subsystems, described below, are followed by an illustrative sketch of the entry-validation-upload pattern:


Data Input. Data entry software allows rapid data entry offline, and the Internet is used to transfer data from local sites to the national database. The offline data entry feature of the ICN allows those sites with available laptop computers the option of CAPI interviewing by entering the participant’s responses directly into the data entry package during the interview. Software improvements developed by the National Evaluator for Phase IV allow specific descriptive information on study participants to be entered directly to the ICN Web site. This software is designed to be used by intake workers or case managers often located at various agencies rather than at a central evaluation office. The primary goal of the addition of this Web-based software is to maximize the capture of descriptive information on all children served in system of care programs while reducing burden associated with the Cross-Sectional Descriptive Study.


Data Monitoring and Management. Software allows the National Evaluator and CMHS to monitor the status of each site’s data submissions in real time and permits sites to check the status of their own data submissions.


Data Dissemination. Reporting features support sites’ ability to use their data for quality assurance monitoring and system improvement. Basic validations are completed during the data entry process. More complex validations that require comparing data across instruments and across time are performed on the ICN after data are uploaded to and stored in the central repository. Additional reports created on the ICN provide a vehicle for the review of aggregate data that CMHS has approved for public release.
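To make the two validation layers concrete, the following minimal sketch illustrates entry-time checks and post-upload cross-wave checks. The field names, value ranges, and record structure are hypothetical and are not drawn from the ICN’s actual schema.

    from datetime import date

    def validate_at_entry(record):
        """Basic checks run while data are keyed in (first validation layer)."""
        errors = []
        if not record.get("child_id"):
            errors.append("child_id is required")
        age = record.get("age")
        if age is not None and not 0 <= age <= 21:
            errors.append("age %s outside expected range 0-21" % age)
        return errors

    def validate_across_waves(baseline, followup):
        """Cross-instrument, cross-time checks run after upload (second layer)."""
        errors = []
        if baseline.get("child_id") != followup.get("child_id"):
            errors.append("follow-up record does not match baseline child_id")
        if followup["interview_date"] <= baseline["interview_date"]:
            errors.append("follow-up interview dated before baseline")
        return errors

    baseline = {"child_id": "A001", "age": 12, "interview_date": date(2006, 1, 10)}
    followup = {"child_id": "A001", "age": 12, "interview_date": date(2006, 7, 12)}
    print(validate_at_entry(baseline))                 # []
    print(validate_across_waves(baseline, followup))   # []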


The National Evaluator provides training and direct evaluation technical assistance support to sites to facilitate the implementation of the evaluation protocol and the use of evaluation results at the site level. Site personnel are trained to utilize the ICN at national training meetings and during evaluation technical assistance visits to the sites.


Sustainability Survey. This study is conducted as a Web survey. Because it is necessary to link the responses of individuals who completed System of Care Assessment interviews to their Sustainability Survey responses, procedures to maintain anonymity are not employed. Respondents enter a Web address and password into their Web browsers to open and complete the survey. Because the names and contact information of respondents in Phase IV communities are maintained by the National Evaluator, e-mail contacts are available. A letter describing the survey, with instructions for logging onto the Web survey, is sent to respondents by either e-mail or mail. For those who cannot complete the survey on the Web, the option to complete a paper-and-pencil survey is provided. Survey completion can be monitored through each login to assess response rates and to implement targeted follow-up mailings and phone calls to non-respondents.
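As an illustration of this login-based monitoring, the sketch below computes a response rate and a follow-up list from a respondent roster. The roster, status flags, and contact details are invented for the example.

    roster = {
        "resp01": {"email": "a@example.org", "completed": True},
        "resp02": {"email": "b@example.org", "completed": False},
        "resp03": {"email": "c@example.org", "completed": False},
    }

    def response_rate(roster):
        """Share of respondents whose logins show a completed survey."""
        completed = sum(1 for r in roster.values() if r["completed"])
        return completed / len(roster)

    def follow_up_targets(roster):
        """Respondents whose logins have not been used, for targeted reminders."""
        return [rid for rid, r in roster.items() if not r["completed"]]

    print("response rate: %.0f%%" % (100 * response_rate(roster)))
    print("send reminders to:", follow_up_targets(roster))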

Culturally Competent Practices Survey. The Culturally Competent Practices Survey was implemented as a Web survey. Respondents entered a Web address and password into their Web browsers to open and complete the survey. A letter describing the survey, with instructions for logging onto the Web survey, was sent by mail. Because completion rules were embedded in the electronic survey, respondents completed it more quickly, and their responses were generally more complete than would be expected with a paper-and-pencil survey. Previous experience with similar projects has shown that Web-based survey completion yields data of better quality.


To maintain the anonymity of responses to the Web-based survey, two databases have been used: (1) an identifier database containing the respondent’s name, user ID, and password, and (2) a database holding the survey responses. The two databases are not linked. Respondents are asked to log in using an assigned ID and password. Once they log in, they are asked to create their own user ID and password, and a blank record is created in the responses database. The respondent-generated user ID and password are stored in that blank record so that a respondent is able to log in and complete the survey at a later time, if needed.


After the blank record is generated, the identifier database can be flagged to show that the subject has responded to the survey, and the assigned ID and password can be deleted. This serves two purposes: first, it allows a response rate to be generated quickly; second, it prevents a respondent from filling out the survey twice. Respondents are not able to generate a new blank response record because they can no longer log in using their assigned ID and password (which no longer exist). However, they are able to log in using their self-generated ID and password to modify their responses or to complete or view the survey. Because subjects create their own user ID and password that only they know, the National Evaluator is not able to link respondents to responses but will know who has not responded because their login has not been used.
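The following sketch walks through this two-database scheme in simplified form. The in-memory dictionaries stand in for what would be separate, unlinked databases, and all names and credentials are invented.

    import secrets

    identifier_db = {}   # name -> assigned credentials; cleared once survey begun
    responses_db = {}    # respondent-chosen user ID -> responses (holds no name)

    def register(name):
        """Issue assigned credentials, stored only in the identifier database."""
        user_id = "id-" + secrets.token_hex(4)
        password = secrets.token_hex(8)
        identifier_db[name] = {"user_id": user_id, "password": password}
        return user_id, password

    def first_login(assigned_id, assigned_pw, chosen_id, chosen_pw):
        """Create a blank response record keyed by respondent-chosen credentials,
        flag the person as having responded, and delete the assigned credentials,
        so that names can never be joined to answers."""
        for name, rec in identifier_db.items():
            if rec.get("user_id") == assigned_id and rec.get("password") == assigned_pw:
                responses_db[chosen_id] = {"password": chosen_pw, "answers": {}}
                identifier_db[name] = {"responded": True}   # credentials deleted
                return True
        return False   # unknown or already-used assigned credentials

    aid, apw = register("Provider A")
    print(first_login(aid, apw, "mypick", "s3cret"))   # True
    print(first_login(aid, apw, "again", "x"))         # False: second use blocked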


Primary Care Provider Study. The Primary Care Provider Study will be implemented as both a mail and a Web survey. Respondents will have the option of choosing the method that is most convenient for them. For the Web-based survey, respondents will enter a Web address into their Web browsers to open the survey. They will then enter an assigned user ID and password to complete the survey. Because the names and contact information of potential respondents will be obtained from mailing lists of providers in each community, these lists are unlikely to contain e-mail addresses. Consequently, a letter describing the survey, with instructions for logging onto the Web survey, will be sent by mail. Respondents can complete a Web survey more quickly, and because completion rules can be embedded in an electronic survey, responses are generally more complete than with a paper-and-pencil survey. For those who cannot complete the survey on the Web, the option to complete a paper-and-pencil survey will be provided. Previous experience with similar projects has shown that Web-based survey completion yields data of better quality.


To maintain the anonymity of responses to the Web-based survey, two databases will be used: (1) an identifier database containing the respondent’s name, user ID, and password, and (2) a database holding the survey responses.


Respondents will be asked to log in using an assigned ID and password. Once they log in, they will be asked to create their own user ID and password, and a blank record will be created in the responses database. The respondent-generated user ID and password will be stored in that blank record so that a respondent will be able to log in and complete the survey at a later time, if needed. After the blank record is generated, the identifier database can be flagged to show that the subject has responded to the survey, and the assigned ID and password can be deleted. This serves two purposes: first, a response rate can be generated quickly; second, a respondent is prevented from filling out the survey twice. Because respondents will not be able to log in again using their assigned ID and password (which no longer exist), they will not be able to generate a new blank response record. However, they will be able to log in using their self-generated ID and password to modify their responses or to complete or view the survey.


Because subjects create their own user ID and password that only they know, Macro will not be able to link respondents to responses but will know who has not responded because their login has not been used.


4. Efforts to Identify Duplication



In an invited address at the 12th Annual Systems of Care Research and Training Conference in 1999, Abram Rosenblatt reviewed the status of research on the effectiveness of systems of care. His review suggested that progress has been made in amassing evidence regarding the characteristics of children in systems of care, but that controlled research on the effectiveness of systems of care lags well behind other forms of research evidence. Continued emphasis on defining the independent variable (e.g., the system of care approach and its specific service delivery components) is needed to understand community-level effectiveness. In addition, longer follow-up studies of the natural course of serious emotional disorders, and of outcomes for children and adolescents who may experience repeated episodes of treatment or services, are necessary to clarify the effectiveness of systems of care.

The issue of real-world effectiveness has also become a growing concern for those who have supported efficacy studies of treatments for specific child disorders. A conceptual model and strategic plan for improving the relationship between the results of efficacy trials and effectiveness research for both children and adults with mental illness was released by the National Institute of Mental Health in 1998. At this critical juncture, the Phase IV evaluation offers a unique opportunity to address the overlapping needs to understand the effectiveness of systems of care and to implement and measure evidence-based treatments in community contexts.

The development of designs to address these needs within the national evaluation has generally followed questions emerging from the children’s mental health services field. Although many questions remain about the effectiveness of systems of care at the clinical outcome level (Burns & Hoagwood, 2002; Surgeon General’s Report, 1999), data exist to support continued work on implementing the approach within community settings, and the President’s New Freedom Commission (2003) calls for community services with programs integrated across levels of government and agencies. Strong consumer advocacy for altering traditional mental health services approaches for children with serious emotional disturbance and their families has been an important driving factor in sustaining Federal- and State-level efforts. The President’s New Freedom Commission Report (2003) and the evidence-based treatment movement within children’s mental health (Burns & Hoagwood, 2002) are more recent developments that have affected the evolution of research questions and the direction of the evaluation. Systems of care remain an area in need of further study, especially with respect to the integration of evidence-based interventions within these community-based programs. The most important question for the field is how to integrate evidence-based interventions effectively within the system of care philosophy; the underlying hypothesis is that the effects of these interventions will be maintained and generalized more effectively within the context of coordinated, community-based service systems.


The National Evaluator also conducted an extensive literature search to identify existing evaluation research on systems of care and children’s mental health services. The search included a review of published literature, unpublished papers, works in progress, and working papers and documents. During the implementation of the Phase I–III evaluations, the National Evaluator has kept abreast of the literature in children’s mental health services research and has been in close contact with the original grantees, allowing the team to keep up with advances in practice and research. In addition, the Services Evaluation Committee has helped keep the evaluation apprised of innovations in the field. These efforts yielded a broad list of useful references. While some of the research identified shares features with the planned evaluation, the scope of these research projects varies considerably and is driven by the particular research interests of each investigator. The Phase IV evaluation offers unique contributions to the field not available in these other studies. The nature of these studies and the unique contributions of the Phase IV evaluation are summarized below.


“Systems of Care for Children and Adolescents with Serious Emotional Disturbances: What Are the Results?”, published by Beth Stroul in 1993, contains a comprehensive review of studies of local systems of care. Stroul concluded that while there is a growing body of evidence to support the contention that systems of care provide high-quality and more appropriate care, continuing commitments to research and evaluation are needed. Further, attention should be directed beyond the assessment of short-term outcomes. She called for the development of a common set of outcome indicators that would provide a framework for more systematic studies and multi-site analyses. The evaluations for all four phases of the project address these concerns because they cover multiple sites and share standard instrumentation, and Phases I and II included comparison sites. In addition, Phase IV will collect data from children and families after the completion of services in order to examine long-term outcomes.


The Alternatives to Residential Treatment Study (ARTS) project, which started in the early 1990s, was conducted by the Research and Training Center for Children’s Mental Health of the Florida Mental Health Institute to study the effectiveness of five innovative programs (Duchnowski, Hall, & Kutash, 1998; Duchnowski, Hall, Kutash, & Friedman, 1998). Components of this study included descriptions of the children and families served, the interventions employed, program costs, and outcomes for children over time. This study contributed to the field by documenting the experiences of individuals affected by changes in service delivery systems. However, the ARTS project sample was relatively small (87 children), so generalizable conclusions about the effectiveness of the system of care approach cannot be drawn from it. With a larger sample and more sites, Phase IV offers an opportunity to produce generalizable findings for the elements covered in ARTS. In addition, unlike ARTS, Phase IV will address the effect of system of care and service-level factors on outcomes.


The National Adolescent and Child Treatment Study (NACTS) was a 7-year longitudinal study conducted at 121 sites in 6 States by the Research and Training Center for Children’s Mental Health of the Florida Mental Health Institute. It assessed the treatment provided to children with serious emotional disturbance in residential mental health facilities and in community-based special education programs (Greenbaum, Dedrick, Friedman, Kutash, Brown, Lardieri, & Paugh, 1996). Although the NACTS project studied children in residential treatment and community-based special education programs, it focused on describing the children rather than the services they received; it was descriptive rather than evaluative in nature. In addition to describing children receiving services in a community-based system of care, the Phase IV evaluation also assesses outcomes, service delivery, and service use.


The Robert Wood Johnson Foundation (RWJF) Mental Health Services Program for Youth, conceived in 1988, funded eight community programs that were evaluated by Brandeis University (Cole & Poe, 1993; Cole, 1996; Saxe & Cross, 1997). The evaluation of that program focused on changing financing policies and refining new treatment strategies; it did not aim to assess client outcomes over time. While not mandated by the evaluation, some sites collected child and family outcome data. However, their findings were limited because differences in instrumentation compromised the ability to compare results across sites. The National Evaluation systematically evaluates child and family outcomes using a standard set of instruments, thus allowing for comparison across sites and, when appropriate, aggregation of data.


Another evaluation of the RWJF program, in North Carolina, was started in 1992 and conducted by researchers at Duke University (Burns, Farmer, Angold, Costello, & Behar, 1996; Angold, Burns, Costello, & Behar, 1998). For this study, children were randomly assigned to one of two models of case management to determine the impact of each model on mental health outcomes. Unlike Phase IV, this study did not evaluate the effectiveness of the full continuum of service options or study the roles of multiple child-serving sectors (e.g., juvenile justice, education, child welfare).


The Center for Mental Health Policy at Vanderbilt University evaluated the Fort Bragg Child and Adolescent Mental Health Demonstration Project. The evaluation of this project, which served children of military personnel in the Fort Bragg area, had four components. First, it described how the demonstration project was implemented and highlighted key process indicators (e.g., linkages among providers, extent of family involvement). Second, it examined whether the quality of services provided was sufficient to produce the predicted effect on outcomes. Third, it studied the cost of providing services and patterns of service use. Finally, it assessed the mental health outcomes of the children using a quasi-experimental design that included two comparison sites (Bickman, Guthrie, Foster, Lambert, Summerfelt, Breda, & Heflinger, 1995). Several of these general areas of inquiry overlap with the Phase IV evaluation. However, the Fort Bragg study focused on services in the mental health sector, ignoring other child-serving sectors. The evaluation indicated that services delivered through a continuum of care did not produce significantly better clinical outcomes than regular CHAMPUS-funded services for military dependents. Access to services was greater in the demonstration site, with resulting increases in costs. A subsequent investigation utilized a randomized control group design to evaluate the effectiveness of system of care services for children with serious emotional disturbance and their families seeking services in Stark County, Ohio. This latter effort also found no significant clinical and functional differences between children served in a system of care and those who received treatment as usual, although the children enrolled in this trial may have been minimally functionally impaired, and the number of participants limited the power to detect significant differences (Bickman, Summerfelt, Firth, & Douglas, 1997).


The Phase IV evaluation has a broader population scope than the Fort Bragg study since it is not limited to the children of military personnel. It is notable that more than half of the children in grant communities funded between 1997 and 2000 lived in poverty, and fewer than 25 percent lived in households with both of their biological parents. Phase IV grantees are serving similar populations, and, as such, findings from Phase IV are more likely to generalize to the children and families served by public agencies.


The 1999 Mental Health: A Report of the Surgeon General included a review of the effectiveness of systems of care. The report concluded that while findings are encouraging, the effectiveness of systems of care has not been demonstrated conclusively, and that the findings of the Fort Bragg study, in particular, indicate the importance of evaluating the impact of changes at the system level on practice. The report’s findings indicate that further research needs to focus on practice-level issues, and examine the relationship between changes at the system level and changes at the practice level in order to demonstrate that services delivered within a system of care result in improved clinical outcomes relative to services delivered within traditional systems.


As explained above, Phase IV does not duplicate extant studies, but instead enhances the existing knowledge base. In addition, Phase IV provides information that is specific to this service program. As required by the legislation, data must be collected from the communities in which the program has been funded.


As described above in section A.1.d., advances in the field of children’s mental health have emphasized the importance of assessing the impact of implementing evidence-based treatments in systems of care and the co-occurrence of substance use and mental illness among children and adolescents. Emphasis has also been placed on integrating primary health and mental health care, as well as utilizing effective community-based practices. Consequently, Phase IV addresses these issues through a Treatment Effectiveness Study that focuses on evidence-based treatments targeted at substance use prevention among children who have disruptive behavior disorders and are at risk for substance abuse. This study will enhance understanding of the effectiveness and fidelity of evidence-based treatments in general, and of their impact on preventing substance use in this target population. Furthermore, with the addition of the Primary Care Provider Study and the Family Education and Support Study, Phase IV seeks to examine the role of primary health care providers in the mental health care of consumers and the efficacy of community-based practices.


5. Involvement of Small Entities



Some of the data for this evaluation are collected from mental health, juvenile justice, education, and child welfare agencies. While most data are collected from public agencies, it is possible that some organizations providing services to the target population, such as community-based organizations, not-for-profit agencies, private providers, schools, or parent groups, would qualify as small entities. The site visit interview guides used in the System of Care Assessment; the Sustainability Survey; the Culturally Competent Practices Survey and focus groups; the Primary Care Provider Survey; the Treatment Effectiveness Study’s treatment adherence measures; and the Family Education and Support Study’s focus group and interview guides are the only instruments that will be administered to the staff of small entities. The information requested is the minimum required to meet the study objectives. There is no significant impact on small entities.


6. Consequences if Information Is Collected Less Frequently



System of Care Assessment. Data for this component are collected every 18–24 months across the 6 years of system of care community funding, documenting how the program has led to system enhancement. This information is key to examining whether improved outcomes for the children served by the system can be plausibly linked to this initiative. Because systems of care change slowly, collection of system data every 18–24 months is sufficient to provide information on system implementation, organizational involvement, and relationships. If these data were collected less frequently, important interim changes would not be documented. The System of Care Assessment data collected during the evaluations in Phases I, II, and III have been valuable to CMHS and the system of care communities in mapping progress and making decisions about program resources and strategies, and have been useful in identifying interim technical assistance needs. In Phase IV, continued efforts have been made to apply System of Care Assessment results to CMHS technical assistance efforts.


Services and Costs Study. There is no burden associated with this data collection since data are obtained from the communities’ Management Information Systems and annual budget.

Cross-Sectional Descriptive Study. Data for this component are collected when children and families first access the system of care. As part of their normal operations, grantees collect data on children and families including demographics, service use, status, treatment plans, and other information. These and other data elements are maintained by the grantees for their own administrative purposes; hence their collection creates no additional respondent burden. For families participating in the Child and Family Outcome Study, however, the descriptive information that may have changed over time (e.g., family income, caregiver’s marital status) is collected at each follow-up data collection point. Failure to collect these few data elements at follow-up would preclude the detection of key changes in the child’s environment that could have an important impact on the child’s clinical outcomes, service use, or family functioning. Data from the grantee sites are submitted to the National Evaluator continuously using the ICN, resulting in a minimal burden to site staff.


Child and Family Outcome Study. For this component, data are collected at intake and every 6 months for the length of the evaluation, up to 36 months. Clinicians who work with this population of children suggest that once children enter services, they are likely to experience detectable improvements within the first 6 months of services. However, it is important to demonstrate whether improvement is sustained. Assessing outcomes every 6 months allows for the study of the course of improvement over time so that interventions can be planned for the times that are likely to yield the greatest gains. Waiting 12 months to collect outcome data would thus miss important changes that are likely to occur in children who are still developing. On the other hand, it was the judgment of the Services Evaluation Committee and prior grantees that quarterly data collection would be too burdensome.


The data collection schedule calls for collecting data on all the children and families in the longitudinal Child and Family Outcome Study for the duration of the evaluation. It is important to follow children as long as possible in order to capture changes that occur as children enter new developmental stages, especially adolescence and young adulthood. Of particular interest are functional outcomes that indicate whether a child is developing into a productive member of society, such as completing high school, obtaining a job, and abstaining from criminal behavior. However, because some children enter services (and therefore the study) later than others, the children recruited into the study in the first year of data collection are followed for 36 months, while the children recruited in the fourth year of data collection will be followed for only 18 months.


Service Experience Study. Data for this study component are collected 6 months after intake into the evaluation and at subsequent 6-month intervals in conjunction with the Child and Family Outcome Study data collection. At each data collection point, a screening question indicates whether any services have been received during the previous 6-month period. If so, questions for the Multi-Sector Service Contacts survey (MSSC), the Youth Services Survey (family and youth versions), and the Cultural Competence and Service Provision Questionnaire are asked. If not, these sets of questions are skipped. This provides youth and caregiver perspectives at various stages of treatment as their needs and services change (e.g., during intensive involvement, while transitioning to less intensive services, and after formal discharge from mental health services). If these data were collected less frequently, the National Evaluator would not be able to track the service changes that may be linked to changes in outcomes.
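As a simple illustration of this skip logic, the sketch below returns the instruments to administer at a given data collection point. The gating on the screening question follows the text; the function itself is purely illustrative.

    def service_experience_battery(received_services):
        """Instruments administered at a 6-month point, gated on the screener."""
        if not received_services:
            return []   # all service experience questions are skipped
        return [
            "Multi-Sector Service Contacts survey (MSSC)",
            "Youth Services Survey (family version)",
            "Youth Services Survey (youth version)",
            "Cultural Competence and Service Provision Questionnaire",
        ]

    print(service_experience_battery(True))    # full battery administered
    print(service_experience_battery(False))   # []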

Treatment Effectiveness Study. The Treatment Effectiveness Study evaluates the effectiveness of an evidence-based treatment, Brief Strategic Family Therapy (BSFT), implemented within two system of care communities (Cleveland, OH and Oklahoma City, OK), compared with system of care services-as-usual. The majority of the measures used in this study are included in the Child and Family Outcome Study; however, additional measures specific to the goals of the study are added and are described below.


In order to determine which children have the diagnostic characteristics that are compatible with the selected evidence-based treatment, children who seem likely to have these characteristics are recruited and administered a one-time diagnostic measure (the DISC Predictive Scales [DPS]) to determine eligibility. Limiting the collection of these data would prevent the identification of appropriate children for inclusion in the study. Treatment fidelity measures (the Therapeutic Alliance Scale [TAS] and the Therapy Adherence Form [TAF]) are used to assess the extent to which treatment components are implemented and experienced as intended. The TAS is administered to caregivers at 1, 2, and 3 months after the start of treatment, and the TAF at 3 months; 3 months corresponds to the estimated end of treatment for the BSFT intervention group (i.e., post-test). Failure to collect these data would preclude assessing the extent to which the treatment was administered and how the extent of treatment administration influences child outcomes.


In order to determine the effect of BSFT on outcomes, the Family Assessment Measure (FAM), the Conflict Behavior Questionnaire (CBQ), and the Ohio Scales are administered before BSFT begins and at the termination of treatment; the FAM and CBQ are administered at intake and every 6 months through 18 months, and the Ohio Scales at intake and at the 3-month follow-up. Limiting the collection of these data would prevent determining the impact of BSFT on clinical outcomes both during and after treatment.


To assess the attitudes of service providers, no more than 50 service providers (25 from each site) for children enrolled in the Treatment Effectiveness Study will be administered a survey of attitudes toward evidence-based treatment (the Evidence-Based Practice Provider Attitudes Scale). This measure will be administered only once, at the point of the child’s assignment to a therapist, and each therapist will be assessed only once. Failure to collect this information would limit the knowledge base on provider attitudes toward evidence-based treatments and practices.


Sustainability Study. Data on sustainability are collected from representatives of all award communities in years 2, 4, and 5 of the evaluation. It is necessary to collect these data at multiple points during the latter half of programs’ funding cycle in order to assess the progress being made toward sustaining funding for continued operation during the funding period and toward sustaining programs after the funding cycle ends. Evaluation of sustainability over time is needed because the amount of non-Federal funds required increases each year, as does the developmental stage of the systems of care. This makes each evaluation point distinct from previous points and will yield important information on the process of becoming increasingly independent of Federal support, the critical stages in efforts toward sustainability, and where in the process potential barriers to sustainability are most likely to arise. Assessing sustainability only at the end of the funding cycle would show whether a site had achieved sustainability but would provide no insight into the process of becoming sustainable or the barriers and facilitators along the way. The final survey administration and at least one of the other administrations will occur in the same year as programs’ System of Care Assessment, and having these complementary data from the same points in time will permit a more comprehensive understanding of sustainability efforts at each site.


Culturally Competent Practices Study. This study has two components: a Web survey that was conducted with mental health clinicians in each of the funded communities, and focus groups that will be held in two system of care communities with service providers, caregivers, and youth. Survey respondents completed a one-time Web survey, and focus group participants will be involved in a single focus group session. It is necessary to collect these data in order to examine the extent to which culturally competent care is provided by clinicians in grantee communities.


Primary Care Provider Study. This study has three parts. Part 1 involved collecting descriptive data on participating children’s health status, care, and financing, and continues through the 5 years of the national evaluation. Part 2, conducted during year 2 of the evaluation, utilized qualitative methods to understand the role of primary care practitioners within systems of care. These data were obtained from discussion groups with various stakeholder groups, each involving nine or fewer participants, and will be used to develop a model of the factors influencing the role of primary care practitioners in systems of care. Part 3 will be conducted in the upcoming year of the evaluation, using findings from the qualitative evaluation to develop a survey that will be administered to primary care practitioners, specifically pediatricians, in communities funded by the CMHI. It is necessary to have all of these study components because they collect complementary but distinct information. Limiting data collection in this area would prevent increased knowledge of the role primary care providers play in systems of care.


Family Education and Support Study. This study examines the relative impact of community-based practices (i.e., family education and support services) within system of care communities using a multi-tiered approach. The study will identify the critical elements of a family education and support intervention and examine the effectiveness of the intervention and its effect on outcomes. Tier 1 includes secondary analysis of existing data sources to develop a critical elements assessment. Tier 2 includes a critical elements assessment through a one-time data collection effort, with key informant interviews and focus groups, to assess community-based practices of family education and support. Key informant interviews with project directors and clinical supervisors, as well as focus groups with service providers and caregivers, will be conducted in up to six sites, depending on whether there is a critical mass of families receiving family education and support services. Tier 3 includes an outcomes study in up to six sites, depending on the variability of critical elements experienced within sites and other site-specific implementation characteristics, to examine family-level outcomes and their association with the critical elements of family education and support. Families will be interviewed at the time of enrollment into the study and at 6 and 12 months following enrollment. The majority of data collected for this component are already being collected for the larger Child and Family Outcome Study, and some additional instruments will be added to the national evaluation instruments. Each tier of the design informs the field and the activities of subsequent tiers. Limiting data collection in Tiers 2 and 3 would prevent the identification and examination of the critical elements of family education and support that lead to better outcomes for children and families. Accordingly, failure to collect these data would result in a lack of knowledge about community-based interventions that have accumulated practice-based evidence within systems of care.


7. Consistency with Guidelines in 5 CFR 1320.5(d)(2)



The data collection fully complies with the requirements of 5 CFR 1320.5(d)(2).



8. Consultation Outside the Agency



SAMHSA published a notice in the Federal Register on Friday, December 15, 2006 (71 FR 75568), soliciting public comment on this study. SAMHSA received no comments on the planned data collection.


Consultation on the design, instrumentation, data availability and products, and statistical aspects of the evaluation occurred continually throughout the implementation of Phases I, II and III. In order to capitalize on the experience and knowledge gained, the development of Phase IV was based, in part, on this consultation. Since the beginning of this initiative, consultations have been sought from the following:


• Federal representatives working in related program areas

• Experts in the area of child mental health services research

• CMHS grantees

• Families caring for children with emotional and behavioral disorders

• Representatives of national organizations for children, families, and providers in the field (e.g., National Association of Residential Treatment Centers, National Center for Children in Poverty, the Federation of Families for Children’s Mental Health, National Alliance for the Mentally Ill, State Mental Health Representatives for Children and Youth)

• Experts in program evaluation, measurement, and statistical analysis

• Experts in mental health service systems for Native American children


These consultations had several purposes: (1) to ensure continued coordination of related activities, especially at the Federal level; (2) to ensure the rigor of the evaluation design, the proper implementation of the design, and the technical soundness of study results; (3) to verify the relevance and accessibility of the data to be collected; and (4) to minimize respondent burden.


a. Federal Consultation



Input from representatives of Federal agencies involved in children’s mental health issues has been elicited throughout all phases of the National Evaluation. CMHS received input about its children’s services program from Federal offices including, but not limited to, the following: the Office of Special Education Programs, Department of Education; the National Institute on Early Childhood Development, Department of Education; the Office of Juvenile Justice Delinquency Prevention Programs, Department of Justice; and the National Center for Child Abuse and Neglect of the Children’s Bureau, Administration for Children and Families. See Attachment 2.A. for a list of the participants in the Federal/National Partnership for Children’s Mental Health and their affiliations and telephone numbers.


These offices are involved in a public-private interagency partnership group to ensure that services for children with serious emotional disturbance and their families are coordinated at the Federal level and that evaluation results are useful to a wide audience. Specifically, representatives from the listed Federal agencies have convened to develop strategies for coordinated training, technical assistance, and culturally competent services to communities across the country.


In addition, SAMHSA, the parent agency of CMHS, requires that its other two constituent centers, the Center for Substance Abuse Treatment and the Center for Substance Abuse Prevention, conduct an internal review of the Annual Report to Congress on the Evaluation of the Comprehensive Community Mental Health Services for Children and Their Families Program. Evaluation specialists at the Centers for Disease Control and Prevention (CDC), the National Institute of Mental Health (NIMH), and the Office of the Assistant Secretary for Planning and Evaluation (ASPE) of the Department of Health and Human Services (DHHS) have also reviewed and provided comments on the National Evaluation. Furthermore, Dr. Kimberly Hoagwood represented NIMH and participated in the Services Evaluation Committee of the national evaluation until 2001, when Dr. Heather Ringeisen of NIMH assumed this role (see Attachment 2.B.). In 2006, Dr. Ringeisen was replaced in this role by Carmen Moten of NIMH.

Collaboration with NIMH led to NIMH’s release of a program announcement (PA-00-135; Effectiveness, Practice, and Implementation in CMHS’ Children’s Service Sites) on September 21, 2000, for the conduct of research studies on services delivered to children, adolescents, and their families in currently or previously CMHS-funded system of care sites. This mechanism encourages studies examining the nature and impact of routine clinical practice and the factors related to successful implementation of treatments or services. This PA addresses recommendations set forth in the NIMH report, “Bridging Science and Service: A Report by the National Advisory Mental Health Council’s Clinical Treatment and Services Research Workgroup,” and in the NIMH Child and Adolescent Services Research Strategic Planning Report. A revised program announcement (PA-04-019; Effectiveness, Practice, and Implementation in CMHS’ Children’s Service Sites) was released by NIMH on November 10, 2003; its scope was broadened to include research in communities with Safe Schools Healthy Students grants. On August 14, 2006, a reissued program announcement (PA-06-526; Effectiveness, Practice, and Implementation in CMHS’ Comprehensive Community Mental Health Services Program for Children and their Families Service Sites) was released, and its R01 component (PA-07-091) was reissued on November 27, 2006. The scope of this PA was revised to address secondary analysis of National Evaluation data.


b. Expert Consultation



The Services Evaluation Committee of the National Evaluation, a workgroup of expert consultants, was organized to provide technical guidance and review for Phase I of the evaluation. The Services Evaluation Committee continued to have input regarding the enhanced design and instrumentation for Phase II and Phase III, and recommendations made by this group have influenced changes applied to the Phase IV instrumentation. Services Evaluation Committee members have combined expertise in children’s mental health, the delivery of children’s mental health services, and the evaluation of systems of care (see Attachment 2.B. for a list of Services Evaluation Committee members). Most of the individuals invited to provide consultation were chosen because of their involvement in past or current studies of children’s mental health service systems. During previous phases, input has also been received from the National Association of State Mental Health Directors and the State Mental Health Representatives for Children and Youth.


Experts also reviewed the instrumentation for Phase IV, and the changes made to the Phase IV instrument package were based on guidance received at the Measures Review Meeting held on October 29, 2003, with a team of experts in children’s mental health, evaluators, family members, and project directors from Phase II, III, and IV sites. The meeting participants included individuals representing a range of community perspectives, including communities situated within specific cultures or with specific cultural or ethnic service populations (e.g., Puerto Rico, American Indian), and individuals with expertise in evaluation with specific cultural groups. This spectrum of participants provided valuable guidance for the cultural competence of evaluation instrumentation and the assessment process (see Attachment 2.C. for a list of meeting participants). Meeting participants provided feedback on the constructs to be assessed in the evaluation and recommended the measures best suited to evaluate these constructs.


These individuals and others who are experts in the field of children’s mental health services and in the assessment of child and family outcomes were also consulted regarding the content of the Phase IV instrument package. They received the Measures Review Meeting report and provided input on the recommendations made by the group, or were consulted individually regarding proposed instrumentation and assessment.


Experts have also contributed to the development of the sustainability survey developed in Phase II and to the Treatment Effectiveness Studies as these were developed in Phases II and III. A special group of experts was created to guide the Treatment Effectiveness Study for Phase IV. This Evidence-Based Treatment Advisory Committee, formed during Phases II and III, includes experts in clinical research and research methods. The committee helped to identify evidence-based interventions that could be candidates for the study and to formulate the research design, and it continues to be available for consultation to the Treatment Effectiveness Studies in Phase IV. A similar group was developed to guide the Family Education and Support (FES) and Primary Care Provider (PCP) Studies. For the FES study, the committee helped to identify the community-based practice to examine and to develop the study design. For the PCP study, the committee has provided consultation on survey design and sampling methods and will provide guidance on data analysis and interpretation. Both committees will provide ongoing consultation. See Attachments 2.D., 2.E., and 2.F. for lists of committee members.


c. Grantee Consultation



Previously funded grantees have been key providers of input for all phases of the evaluation design. For the design of Phase IV, grantee input was used in the development of the instrument package. Project directors and evaluators from Phase II, III, and IV sites participated in the Measures Review Meeting. These participants helped determine the instruments most appropriate for each component of the evaluation. Evaluators from all Phase IV sites were contacted to assess evaluation needs, and this information was used in creating the instrument package. In addition, evaluators and project directors from all sites were given the opportunity to provide input on the recommendations made at the Measures Review Meeting. Modifications to the Phase IV instrument package also reflect ongoing input received by the National Evaluator from Phase II and III grantees through conference calls, site visits, and semi-annual workshops and evaluator meetings. Additional grantee feedback was received during close-out site visits conducted with Phase II communities funded in 1997 and 1998 and Phase III communities funded in 1999 and 2000, in which evaluation processes and data utilization were reviewed.


Several representatives from the grantee sites also participate in the Services Evaluation Committee of the national evaluation and these members offer the grantee site perspective on how research goals can be achieved at the sites with the least disruption.


In January and February 2002, CMHS initiated an annual consumer survey of the Phase II and Phase III grantee sites to assess satisfaction with implementation of the national evaluation and the role of the National Evaluator in this implementation (OMB Control # 0930-0197). The survey also asked for feedback from grantee site evaluators regarding desired changes in study design. CMHS received feedback from evaluators in almost all grantee communities (except those with recent staff turnover) and has since synthesized these data for use in quality improvement efforts. This survey was repeated in spring 2003 (OMB Control # 0930-0197).


Grantee feedback was also sought during the development of the Family Education and Support Study design. Grantees were given an opportunity to participate in a Web-based survey and provided insight into community-based interventions. Grantee feedback informed the selection of family education and support as the intervention to examine in this study.


d. Family Consultation



Critical to the CASSP principles is the role of family caregivers as active stakeholders in the system of care. That philosophy has been extended to all phases of the evaluation design in several ways. Caregivers participate on the Services Evaluation Committee and gave early input to the overall design. Caregivers also reviewed the instrumentation and key features of the evaluation design to ensure sensitivity to parent issues and concerns, to maximize clarity of meaning, and to assess the feasibility of administering the questionnaires. Input from family members participating in assessment interviews indicated a need to reduce the length of the interview, and this recommendation is reflected in the Phase IV instrument package. The revised package was pilot-tested with caregivers and youth and found to be acceptable in terms of length and content. Grantee sites systematically solicit feedback from family members; hence, the family perspective is also included in comments and consultation from grantee sites. The evaluation team has a formal relationship with the Federation of Families for Children’s Mental Health to facilitate systematic and ongoing input to the evaluation. Family members also participate in the Treatment Effectiveness Review Committee and the Primary Care Provider Study, providing a family perspective on research goals and activities.


9. Payment to Respondents



As with previous phases, Phase IV of the national evaluation uses a research-based approach to evaluation and, as such, requires participation of children and families beyond their receipt of services in their system of care programs. Consequently, remuneration is essential to ensure good response rates across all study components.


Remuneration levels in the Child and Family Outcome Study, System of Care Assessment, Treatment Effectiveness Study, Culturally Competent Practices Study, and Sustainability Study are the same as those currently approved for Phase IV.


System of Care Assessment. Four caregivers of children who are receiving services in each system of care community and two youth are interviewed during each System of Care Assessment site visit. The national evaluation will provide a payment of $25 to caregivers and $15 to youth at the time of their interviews in compensation for the additional burden and potential inconvenience of these interviews.


Child and Family Outcome Study. The National Evaluator strongly recommends that grantees remunerate caregivers and youth who participate in the Child and Family Outcome Study $20 each. Remuneration is standard practice in this type of longitudinal research to acknowledge participants’ value to the study. It is essential for maximizing participation rates, particularly given the additional time being asked of families who already face multiple challenges and demands on their time in caring for children with serious emotional disturbance. Caregivers and children who participate in the Child and Family Outcome Study are asked to complete more assessments than would ordinarily be required in the course of receiving services. Completing the instruments at the time of entry to services and at subsequent follow-up points requires evaluation participants to spend time away from other activities. The combination of the number of instruments and their periodicity creates a burden on caregivers and children that exceeds what would ordinarily be placed on them if they were seeking services not associated with this evaluation.


Treatment Effectiveness Study. Participants in the Treatment Effectiveness Study also receive remuneration. Caregivers who participate in the study are asked to complete additional assessments beyond the core set for the Child and Family Outcome Study. Subcontractors at the local sites participating in the TES collect the data and distribute incentives. For children who meet the initial screening criteria, the National Evaluator recommends remunerating caregivers an additional $10 for the one-time completion of the Diagnostic Interview Schedule for Children, Predictive Scales (DPS), which is used to confirm that the child has the diagnosis required for the selected evidence-based treatment. Caregivers whose children are enrolled in the Treatment Effectiveness Study are asked to complete additional fidelity and treatment outcome measures; the National Evaluator recommends remunerating these caregivers $15 at each data collection point. The Therapeutic Alliance Scale (TAS) and the Therapy Adherence Form (TAF) are administered to caregivers as treatment fidelity measures, the TAS at 1, 2, and 3 months after the start of treatment and the TAF at 3 months; 3 months corresponds to the estimated end of treatment for the BSFT intervention group (i.e., post-test). The National Evaluator recommends that children administered the TAS be remunerated $5 at each data collection point for completing this instrument. The local sites participating in the TES have reviewed the National Evaluator’s remuneration recommendations and have tailored them to local expectations; however, sites were not permitted to set remuneration at a level lower than that recommended by the National Evaluator.


Providers of services to children enrolled in the Treatment Effectiveness Study are asked to complete the Evidence-Based Practices Provider Attitudes Survey (EBPAS). The EBPAS is completed by providers in Cleveland and Oklahoma City. Providers will receive a one-time $50 remuneration for their participation in this data collection in order to compensate them for the potential inconvenience and effort of completing the measure.


The subcontractors hired as local evaluators for each community are responsible for paying families and providers for participating in this study because they conduct this data collection effort. Remuneration is necessary to maximize participation in follow-up data collection that extends beyond the treatment period because participants tend to withdraw from research studies when they are no longer receiving the treatment for which they were recruited. Payment serves as an incentive for respondents to continue participating in the study.


Culturally Competent Practices Study. Service providers who completed the Culturally Competent Practices Survey were mailed a $10 gift card with the survey as a token incentive to encourage survey completion. This is consistent with the Dillman method for recruiting respondents for mail and Internet surveys. Caregivers, youth, and service providers who participate in the focus groups associated with the Culturally Competent Practices Study will each receive remuneration to compensate them for their time and participation. Youth will receive $50, caregivers will receive $75, and service providers will receive $100. There will also be a $25 early-bird raffle held 15 minutes prior to the start of each focus group to encourage participants to arrive on time for the session. These payments for focus group participants are consistent with payment levels currently used for focus group data collection by Macro International Inc. and professional focus group companies. Administrators who participate in the focus groups will not receive remuneration because participation in the evaluation studies is part of their function as program administrators.


Sustainability Study. As with the Phase II and III Sustainability Survey, individuals asked to complete the Phase IV Sustainability Survey receive a token incentive (e.g., a refrigerator magnet) to encourage survey completion when they are informed about the survey.


Family Education and Support Study. Participants in the FES study also receive remuneration. Caregivers who participate in the study are asked to complete additional assessments beyond the core set for the Child and Family Outcome Study. The National Evaluator will remunerate participating caregivers $15 at each data collection point. The outcome measure will be administered at baseline and at 6 and 12 months after enrollment into system of care services, once eligibility for the study has been determined.


Service providers and family members affiliated with the sites selected for the FES study will each participate in one focus group during Tier 2 of the design, when a critical elements assessment is conducted on-site. We are requesting that caregivers and providers each be paid $50 for their participation in the focus groups. Project directors and clinical supervisors who participate in one-on-one interviews will not be remunerated for their time. As paid grant staff, they agree to participate in all activities related to the grant program (including national evaluation activities) without any compensation other than for their paid position. The grant program may offer a gift incentive to staff respondents as a way of showing gratitude for volunteering to be part of a research study.


The National Evaluator will be responsible for paying families and providers for participating in this study because the National Evaluator will conduct this data collection effort. Remuneration is necessary to maximize participation in follow-up data collection that extends beyond the treatment period because participants tend to withdraw from research studies when they are no longer receiving the treatment for which they were recruited. Payment of respondents serves as an incentive to continue to participate in the study.


Primary Care Provider Study. Pediatricians who are asked to complete the survey will receive an incentive in the amount of $50 with the letter describing the survey to thank them in advance for assisting with the survey. When conducting a mail survey, providing incentives in advance increases response rates, thereby making the data collected more representative of the sample. Incentives combined with a multiple contact strategy have yielded good response rates (greater than 80 percent) in studies using surveys of physicians.


10. Assurance of Confidentiality



Phase IV requires collecting descriptive and clinical data from children and families. In all grantee sites, data are collected by site staff. These staff are responsible for developing procedures to protect participants’ privacy in the collection, storage, and reporting of all information obtained through data collection activities. These procedures include limiting the number of individuals who have access to identifying information, using locked files to store hard-copy forms, assigning unique code numbers to each participant to ensure anonymity, and implementing guidelines pertaining to data reporting and dissemination.


Because of the sensitivity of the information that is collected, CMHS has required that all grantees establish a system whereby data are gathered, stored, and accessed in a manner that protects individuals' privacy to the greatest extent possible. The National Evaluator provides each grantee with a list of codes that are linked to respondents at each site, and trains staff responsible for data collection on the process of linking the codes to individual respondents. The list linking the assigned codes to respondent names is kept in a locked cabinet, and only the on-site data collection staff have access to it. The list is maintained for the duration of the CMHS program so that the data can be linked back to the identified child and family throughout the data collection process. When the project is completed, the lists will be destroyed. This coding system was developed to facilitate the tracking of children during their involvement with the evaluation and to ensure that no personal identifying information from the grantee sites would need to be made available to either the National Evaluator or CMHS.


The security and privacy of data entered and managed on the Internet-based ICN also are assured. Access to the ICN is password protected, and the ICN uses data encryption to further enhance security and the protection of the information accessed.


Each grantee has implemented an active consent procedure that informs participants of the purpose of the evaluation, describes what their participation entails, and addresses the maintenance of privacy protection as described above. Informed assent is obtained from participating older children and adolescents (ages 11 through 17), and informed consent is obtained from adolescents who have reached the age of 18 at follow-up data collection. Written informed consent/assent is obtained from children and families at the point of entry into services. Each grantee has obtained local Institutional Review Board (IRB) approval for the informed consent procedures used in this evaluation. To further protect evaluation participants, all grantees have also been asked to obtain a Federal Certificate of Confidentiality, authorized by Section 301(d) of the Public Health Service Act, to provide additional protection of participant information from civil and criminal subpoena. Each grantee is independently responsible for the protection of human subjects. To date, 12 communities have elected to obtain this certificate and have received approval, and 2 have applied and are awaiting approval.


As in previous phases of the national evaluation, to further protect study participants for Phase IV, the National Evaluator has obtained a Federal Certificate of Confidentiality, authorized by Section 301(d) of the Public Health Service Act. This certificate provides additional protections of the data from civil and criminal subpoena. Additionally, the National Evaluator has conformed to all requirements of the Privacy Act of 1974, under the System of Records: Alcohol, Drug, and Mental Health Epidemiological, and Biometric Research Data, DHHS, #09-30-0036; the most recent publication in the Federal Register occurred on January 19, 1999 (64 FR 2914). Client records at the sites are also covered under this Privacy Act System of Records.


Treatment Effectiveness Study. In both Cleveland, OH, and Oklahoma City, OK, children entering systems of care who are enrolled into the Child and Family Outcome Study and eligible for BSFT are recruited for the study. At the time of recruitment, caregivers complete the DISC Predictive Scales (DPS) to determine whether their child meets criteria for any of the targeted DSM-IV diagnoses. It is explained that there may not be enough slots for all the eligible families who seek the treatment and that selection for the treatment is made using a random assignment procedure. Children who are not selected through the random assignment procedure are placed on a waiting list and are eligible for the service upon completion of their participation in the study. Children not assigned to BSFT continue to receive services through the system of care. It is explained that because BSFT is available only as part of the Treatment Effectiveness Study, completion of the required instrumentation is necessary for children assigned to either the waiting list or the evidence-based treatment condition.


The entire procedure is explained to the caregiver and child in a standard informed consent process that meets established ethical guidelines for treatment and research and that has been developed by the selected communities. The consent form includes a clear explanation that (1) the child must meet criteria for BSFT; (2) if eligibility criteria are met, assignment to the evidence-based treatment is determined using a random assignment process; (3) if the child is not assigned to the evidence-based treatment condition, s/he will continue to receive services through the system of care and is eligible to receive the evidence-based treatment at the completion of the study; and (4) families in both treatment conditions are asked to participate in data collection and are paid for their participation. It is made clear that family participation in the Treatment Effectiveness Study is entirely voluntary and that families who choose not to participate will continue to receive services through the system of care. Their decision about participation in the Treatment Effectiveness Study does not affect, in any way, the other services they receive through the system of care. In addition, the information provided by respondents is closely protected and held in the strictest confidence. All this, as well as the possible risks and benefits of participation, is explained in the informed consent form. Potential participants have an opportunity to ask questions, and those questions are answered. After receiving all the information, individuals choosing to participate sign the consent form indicating that they understand the procedures, risks, benefits, and their rights and have elected to participate.


Sustainability Survey. Data collection for the Sustainability Survey occurs by Web. Because respondents' identities are known, an active informed consent process is in place to ensure that participants' rights are protected. Specific participants (i.e., project director, family organization representative, agency representatives) completing System of Care Assessment interviews are told that they may be asked to answer additional questions about sustainability using a Web survey. Following the System of Care Assessment interview with Phase IV respondents, and determination of appropriate respondents, initial telephone calls are made to respondents to explain the study, solicit willingness to participate, and obtain accurate contact information, including e-mail addresses. Following these telephone calls, a letter is sent by e-mail, when possible, providing instructions for completing the Web survey. The letter explains the survey, including the voluntary nature of survey completion, the privacy protection of responses, and the risks, benefits, and rights of respondents, and advises recipients that they will be asked to indicate, by checking a box on the Web survey (or by signing their name on a copy of the letter and returning it), that they agree to participate in the study before they complete and return the survey. Information about the study and participant rights is presented in the Web survey before the check box indicating consent to participate. The letter and the Web survey also provide contact information in case the survey recipient has questions or desires clarification prior to participation. If the individual does not have e-mail access, a packet is sent by regular mail containing a cover letter, an informed consent form, a survey, and a return envelope (see Attachments 4.G.3. and 4.G.4.); the cover letter indicates that the respondent is to return both the informed consent form and the survey. If the respondent has an e-mail address, an e-mail with a link to the Web survey is sent after 1 week to prompt completion of the survey (see Attachment 4.G.5. for Web screen shots of the survey). A postcard reminder is sent to those without e-mail access.


Culturally Competent Practices Study. Data collection for the Culturally Competent Practices Survey occurred by Internet. Passive consent was obtained for this one-time survey. The cover letter explained the survey, including the voluntary nature of survey completion, the anonymity of responses, and the risks, benefits, and rights of respondents, and advised the recipient that completion and return of the survey indicates consent to participate. The letter provided contact information in case the survey recipient had questions or desired clarification prior to participation (see Attachment 4.I.1.).


To maintain the anonymity of responses to the Internet survey, as described in item A.3, two databases have been set up: (1) an identifier database containing the respondent name, user ID, and password, and (2) a database holding the survey responses. Because subjects create their own user ID and password that only they know, the National Evaluator is not able to link respondents to responses but is able to know who has not responded, because their login has not been used.


For the focus groups, all participants will be required to sign a consent form before the session begins. The consent form will include information that describes who is administering the focus group, what the information will be used for, who has access to the information provided, and what steps will be taken to protect the privacy of respondents. Respondents will be informed that participation is voluntary and may be stopped at any time without penalty. The consent form will also state that by signing the consent form participants are agreeing to keep information about other participants private.


In order to protect the privacy of participants in the focus groups, respondents will be told at the beginning of the focus group sessions that they should introduce themselves by first name only. They will also be reminded that the National Evaluator will keep all identifying information private and that participants have agreed to the same, as indicated in their consent forms. All data will be reported in aggregate and identifiers will not be used in the transcripts or the final report. Transcripts of the discussions will not track responses of individual participants.


Confidentiality Procedures for New Evaluation Components


All new components to the national evaluation will comply with established guidelines for the ethical treatment of human subjects in the research process, and will be approved by the Institutional Review Board of Macro prior to implementation.


Family Education and Support Study. In collaboration with the selected sites in Tier 2 of the design, up to 15 families will be recruited to participate in a focus group at each targeted site. They will be informed that the purpose of the focus group is to gather information about the types of family education and support services or interventions they receive and their level of satisfaction with those services or interventions. Service providers and grant staff participating in the interviews and focus groups will be informed through a standard informed consent procedure that the purpose of the study is to conduct a critical elements assessment to determine critical domains within family education and support. All participants in Tier 2 will be told that their participation is completely voluntary and that the information they provide will be protected. After receiving all the information, individuals choosing to participate will sign the informed consent form indicating that they understand the procedures, risks, benefits, and their rights and have elected to participate.


Caregivers enrolling in Tier 3 of the study will be informed about the purpose of the study, that they will continue to receive services through the system of care, and that they will be required to complete additional instrumentation as a result of their participation in this study. The entire procedure will be explained to the caregiver in a standard informed consent process that meets established ethical guidelines for treatment and research and that will be developed in collaboration with the selected communities. The consent form will include a clear explanation that eligibility is determined by whether the caregiver receives family education and support services; these services will be described so that it can be determined whether the caregiver is indeed receiving them. Families will be asked to participate in data collection and will be paid for their participation. It will also be made clear that family participation in the Family Education and Support Study is entirely voluntary and that families who choose not to participate will continue to receive services through the system of care. Their decision about participation will not affect, in any way, the other services they receive through the system of care. In addition, the information provided by respondents will be closely protected and held in the strictest confidence. All this, as well as the possible risks and benefits of participation, will be explained in the informed consent form. Potential participants will have an opportunity to ask questions, and those questions will be answered. After receiving all the information, individuals choosing to participate will sign the consent form indicating that they understand the procedures, risks, benefits, and their rights and have elected to participate.


Primary Care Provider Survey. Data collection for the Primary Care Provider Survey will occur by mail or by Web-based survey; the Web-based option is offered as an alternative to the mail survey, and participants can choose the method they prefer. To ensure that participants' rights are protected, passive consent will be obtained. The survey cover letter will describe privacy and informed consent procedures and will explain that return of the completed survey signifies consent to participate in the study; for participants who choose to complete the survey online, the cover letter will explain that logging in with the provided username and password signifies consent. A pre-survey letter explaining that the recipient will be asked to participate in a survey will be sent to selected pediatricians in each community. One week later, a cover letter will be sent containing a $50 incentive, the survey instrument, and directions for completing and returning the survey by mail or completing it online. A reminder postcard will be sent 1 week later, and 1 week after that, another letter containing a hard copy of the survey will be sent to all providers who have not returned the survey. A final reminder letter and copy of the survey will be sent by FedEx to non-respondents 2 weeks later. Respondents will be identified by an anonymous code used for mailing purposes only, which will not be linked to their survey responses; for respondents who complete the online survey, the username and password will be maintained for tracking purposes and will not be linked to their survey responses. Study team members will develop a number of procedures to protect the privacy of the information provided by study participants, including limiting the number of study staff who have access to identifying information, using locked files to store hard copy forms, assigning unique identifiers to each participant to ensure anonymity, and implementing guidelines pertaining to data reporting and dissemination.


To maintain anonymity of responses to the survey, two databases will be used: (1) an identifier database containing the respondent name, contact information, user id, and password, and (2) a database holding the survey responses. The two databases will not be linked.


11. Questions of a Sensitive Nature



Because this project concerns services to children with serious emotional disturbance and their families, it is necessary to ask questions that are potentially sensitive. Questions address dimensions such as child emotions, behavior, social functioning, school performance, and involvement in unlawful activities. The answers to these questions are aggregated and used to determine baseline status and to measure changes in these areas experienced after entering the system of care. Because each grantee must keep data on child and family status and service use, as well as treatment plans and other information, the data collection required for the national evaluation does not introduce new, sensitive domains of inquiry but parallels standard procedures in the field of children's mental health.


Although the inclusion of substance use data is sensitive in nature, it does not represent a new domain of inquiry. The frequent co-morbidity of substance use and serious emotional disturbance among adolescent populations, and the increased ability to record dual diagnoses, are cited in the case management and mental health literatures. Because of the increased risk of substance use by children with mental illness, Phase IV system of care communities are increasing their focus on children with co-morbid substance use. Consequently, it is necessary to collect data about substance use from the children in order to determine the prevalence of this co-morbidity and to track changes in substance use after entry into a system of care. Additionally, the Treatment Effectiveness Study focuses on treatments targeted at substance use prevention, making it essential to collect data on substance use in order to identify children who are eligible for this study and to track outcomes. Respondents in the Treatment Effectiveness Study provide additional information about child behavior specifically related to disruptive behavior disorders and the behavior change expected from the implementation of the evidence-based treatment. These additional questions are necessary to determine whether children are eligible for the study and to compare the results of this study with those of other studies of the selected evidence-based treatments.


In addition to information on child clinical status and social function, other questions of a sensitive nature are asked of families. These include questions related to family functioning, perceived adequacy of material resources, and caregiver strain. These questions are included in response to growing evidence of the powerful role families play in shaping children's use of services and their related outcomes. This is particularly important in systems of care, where a basic tenet is to involve families in treatment planning and service delivery. Moreover, representatives of family organizations who consulted with the National Evaluator during Phase III identified a lack of information on family life as a weakness in previous studies.


Before collecting data, each grantee or the National Evaluator's staff obtains active consent from caregivers. In addition, child assent is obtained. In that process, respondents are made aware that the information they provide is protected and maintained in the strictest confidence and that they can withdraw their participation at any time. Similarly, respondents can freely choose to refrain from answering any questions they find objectionable.


Questions of a Sensitive Nature for the New Evaluation Components


Questions of a sensitive nature are asked of families who will be enrolled in the Family Education and Support Study. These questions relate to family functioning, social support, parenting skills, parental involvement, and caregivers' use of mental health services. They are included in response to growing evidence of the powerful role families play in shaping children's use of services and their related outcomes. This is particularly important in systems of care, where a basic tenet is to involve families in treatment planning and service delivery. While family education and support services are used widely in systems of care, these services are implemented differently across communities. Collecting this information is essential for gaining a better understanding of the impact of these services.


Questions of a sensitive nature are also asked of primary care providers who agree to participate in the Primary Care Provider Study. These questions pertain to perspectives on access to health care, including the role of primary care providers in screening for mental health disorders, providing ongoing mental health care, and prescribing medication. Respondents will also be asked about the integration of health care services into systems of care, family and youth partnerships with primary care, programmatic barriers to the integration of primary care into systems of care, and health care disparities and primary care services. These questions are important for understanding effective collaboration between mental health service providers and primary health care providers.


12. Estimates of Annualized Hour Burden



In accordance with the evaluation design, the descriptive, outcome, intervention, and service information collection for the 27 communities in Phase IV of the national evaluation covers a period of 6½ years, beginning in April 2004 and ending in September 2010. Burden estimates for this revised clearance were calculated for the final 3-year period starting in May 2007.


Table 1 shows the burden associated with the Phase IV evaluation of 27 grantees. For measures that were previously cleared by OMB, burden estimates are based on information supplied by grantees in previous phases of the evaluation. Burden estimates for measures that are new or revised in Phase IV are based on averages obtained from the National Evaluator's piloting of these instruments or on the instrument developers' reports.


The national evaluation will also have access to service and cost data extracted from grantee sites' fiscal MISs for the children who use services in all sites. Because these data are compiled directly from grantees' MISs, no burden results to respondents. Clearance for collection of these data is, therefore, not being requested, and these data are not discussed further in this section.


Table 1

Estimate of Respondent Burden

Note: Total burden is annualized over a 3-year period. Bracketed numbers refer to the footnotes following the table.

Instrument | Respondent | Number of Respondents | Average Number of Responses per Respondent | Hours per Response | Total Burden Hours (3 Years) | Average Annual Burden Hours | Hourly Wage Rate ($) | Total Cost per Year ($)

System of Care Assessment
Interview Guides and Data Collection Forms | Key site informants | 648 [1] | 2 | 1.000 | 1,296 | 432 | 19.30 [2] | 8,338
Interagency Collaboration Scale (IACS) | Key site informants | 648 | 2 | 0.133 | 173 | 58 | 19.30 | 1,112

Child and Family Outcome Study
Caregiver Information Questionnaire (CIQ-IC) | Caregiver | 5,922 [3] | 1 | 0.283 | 1,676 | 559 | 9.30 [4] | 5,195
Caregiver Information Questionnaire Follow-up (CIQ-FC) | Caregiver | 5,922 | 3 | 0.200 | 3,553 | 1,184 | 9.30 | 11,015
Caregiver Strain Questionnaire (CGSQ) | Caregiver | 5,922 | 4 [5] | 0.167 | 3,956 | 1,319 | 9.30 | 12,263
Child Behavior Checklist (CBCL)/Child Behavior Checklist 1½–5 (CBCL 1½–5) | Caregiver | 5,922 | 4 | 0.333 | 7,888 | 2,629 | 9.30 | 24,453
Education Questionnaire—Revised (EQ-R) | Caregiver | 5,922 | 4 | 0.100 | 2,369 | 790 | 9.30 | 7,343
Living Situations Questionnaire (LSQ) | Caregiver | 5,922 | 4 | 0.083 | 1,966 | 655 | 9.30 | 6,095
Family Life Questionnaire (FLQ) | Caregiver | 5,922 | 4 | 0.050 | 1,184 | 395 | 9.30 | 3,672
Behavioral and Emotional Rating Scale—Second Edition, Parent Rating Scale (BERS-2C) | Caregiver | 5,626 [6] | 4 | 0.167 | 3,758 | 1,253 | 9.30 | 11,650
Columbia Impairment Scale (CIS) | Caregiver | 5,922 | 4 | 0.083 | 1,966 | 655 | 9.30 | 6,095
Vineland Screener (VS) | Caregiver | 2,369 [7] | 4 | 0.250 | 2,369 | 790 | 9.30 | 7,343
Delinquency Survey—Revised (DS) | Youth | 3,553 [8] | 4 | 0.167 | 2,374 | 791 | 5.15 [9] | 4,075
Behavioral and Emotional Rating Scale—Second Edition, Youth Rating Scale (BERS-2) | Youth | 3,553 | 4 | 0.167 | 2,374 | 791 | 5.15 | 4,075
GAIN Quick-R: Substance Problem Scale (GAIN) | Youth | 3,553 | 4 | 0.083 | 1,180 | 393 | 5.15 | 2,025
Substance Use Survey—Revised (SUS) | Youth | 3,553 | 4 | 0.100 | 1,421 | 474 | 5.15 | 2,440
Revised Children's Manifest Anxiety Scales (RCMAS) | Youth | 3,553 | 4 | 0.050 | 711 | 237 | 5.15 | 1,220
Reynolds Adolescent Depression Scale—Second Edition (RADS-2) | Youth | 3,553 | 4 | 0.050 | 711 | 237 | 5.15 | 1,220
Youth Information Questionnaire—Baseline (YIQ-I) | Youth | 3,553 | 1 | 0.167 | 593 | 198 | 5.15 | 1,019
Youth Information Questionnaire—Follow-up (YIQ-F) | Youth | 3,553 | 3 | 0.167 | 1,780 | 593 | 5.15 | 3,056

Service Experience Study
Multi-Sector Service Contacts—Revised (MSSC-RC) | Caregiver | 5,922 | 3 [10] | 0.250 | 4,442 | 1,481 | 9.30 | 13,769
Cultural Competence and Service Provision Questionnaire (CCSP) | Caregiver | 5,922 | 3 | 0.167 | 2,967 | 989 | 9.30 | 9,197
Youth Services Survey (YSS-F) | Caregiver | 5,922 | 3 | 0.117 | 2,079 | 693 | 9.30 | 6,444
Culturally Competent Practices Study (Focus Groups—Caregiver) | Caregiver | 36 | 1 | 1.500 | 54 | 18 | 9.30 | 167
Youth Services Survey (YSS-Y) | Youth | 3,553 | 4 | 0.083 | 1,180 | 393 | 5.15 | 2,025
Culturally Competent Practices Study (Focus Groups—Youth) | Youth | 36 | 1 | 1.500 | 54 | 18 | 5.15 | 93
Culturally Competent Practices Study (Focus Groups—Provider) | Provider | 60 | 1 | 1.500 | 90 | 30 | 15.00 [11] | 450

Treatment Effectiveness Study
Diagnostic Interview Schedule for Children (DISC Predictive Scales) | Caregiver | 262 | 1 | 1.000 | 262 | 87 | 9.30 | 812
Conflict Behavior Questionnaire (CBQ) | Caregiver | 240 | 4 | 0.167 | 160 | 53 | 9.30 | 497
Family Assessment Measure (FAM) | Caregiver | 240 | 4 | 0.250 | 240 | 80 | 9.30 | 744
Therapeutic Alliance Scale—Caregiver (TAS) | Caregiver | 240 | 3 | 0.167 | 120 | 40 | 9.30 | 373
Ohio Scales—Caregiver | Caregiver | 240 | 4 | 0.250 | 240 | 80 | 9.30 | 744
Therapy Adherence Form—Revised | Caregiver | 240 | 1 | 0.167 | 40 | 13 | 9.30 | 124
Therapeutic Alliance Scale—Youth (TAS-Y) | Youth | 192 | 4 | 0.167 | 128 | 43 | 5.15 | 220
Ohio Scales—Youth | Youth | 192 | 4 | 0.250 | 192 | 64 | 5.15 | 330
Evidence-Based Practices Provider Attitudes Scale | Provider | 50 | 1 | 0.083 | 4 | 1 | 15.00 | 21

Family Education and Support Study
Beck Depression Inventory (BDI) | Caregiver | 300 | 3 | 0.117 | 105 | 35 | 9.30 | 326
Parenting Sense of Competence Scale (PSOC) | Caregiver | 300 | 3 | 0.167 | 150 | 50 | 9.30 | 466
Alabama Parenting Questionnaire (APQ) | Caregiver | 300 | 3 | 0.117 | 105 | 35 | 9.30 | 326
Duke Social Support Scale | Caregiver | 300 | 3 | 0.067 | 60 | 20 | 9.30 | 187
Vanderbilt Mental Health Services Self-Efficacy Questionnaire | Caregiver | 300 | 3 | 0.050 | 45 | 15 | 9.30 | 140
FES Focus Groups | Caregiver | 54 | 1 | 1.500 | 81 | 27 | 9.30 | 251
FES Focus Groups | Provider | 54 | 1 | 1.500 | 81 | 27 | 15.00 | 405
FES Interview | Provider/Administrator | 12 | 1 | 1.000 | 12 | 4 | 19.30 | 77

Primary Care Provider Study
Primary Care Provider Survey | Provider | 540 | 1 | 0.500 | 270 | 90 | 15.00 | 1,350

Sustainability Study
Sustainability Survey—Caregiver | Caregiver [12] | 27 | 2 | 0.500 | 27 | 9 | 9.30 | 84
Sustainability Survey—Provider | Provider/Administrator [12] | 81 | 2 | 0.500 | 81 | 27 | 19.30 | 521

Summary of Annualized Burden Estimates for 3 Years

Respondent Group | Number of Distinct Respondents | Average Number of Responses per Respondent | Average 3-Year Burden per Response (hours) | Total Burden (hours) | Cost ($)
Caregivers | 5,922 | 1.13 | 2.08 | 13,954 | 129,776
Youth | 3,553 | 1.19 | 1.00 | 4,220 | 21,735
Provider/Administrators | 648 | 0.542 | 1.90 | 669 | 12,273
Total Summary | 10,123 | | | 18,844 | 163,785
Total Annual Average Summary | 3,374 | | | 6,281 | 54,595


  1. An average of 24 stakeholders in up to 27 grantee sites will complete the System of Care Assessment interview. These stakeholders will include site administrative staff, providers, agency representatives, family representatives, youth and youth coordinators.

  2. Assuming the average annual income across all types of staff/service providers/administrators is $40,000, the wage rate was estimated using the following formula: $40,000 (annual income)/2080 (hours worked per year) = $19.25 (dollars per hour).

  3. Number of respondents across 27 grantees. Average based on a 5 percent attrition rate at each data collection point. These data are collected as part of the grantees’ routine intake processes. Hence, burden is calculated only for the subset of the Cross-Sectional Descriptive Study sample that also participates in the Child and Family Outcome Study.

  4. Given that 65 percent of the families in the Phase III evaluation sample fall at or below the 2005 DHHS National Poverty Level of $19,350 (based on a family of four), the wage rate was estimated using the following formula: $19,350 (annual family income)/2080 (hours worked per year) = $9.30 (dollars per hour).

  5. Average number of responses per respondent based on 6 data collection points for children recruited in year 3, 4 for children recruited in year 4, 2 for children recruited in year 5 (of grantee funding).

  6. Estimated number of caregivers with children over age 5, based on Phase IV preliminary needs-assessment that 95 percent of children served will be over age 5.

  7. Estimated number of caregivers with children under age 12, based on Phase IV preliminary needs-assessment that 40 percent of children served will be under age 12.

  8. Based on Phase III finding that approximately 60 percent of the children in the evaluation were 11 years old or older.

  9. Based on the Federal minimum wage rate of $5.15 per hour.

  10. Respondents only complete Service Experience Study measures at follow-up points. Average number of follow-up responses per respondent based on 6 follow-up data collection points for children recruited in year 3, 4 for children recruited in year 4, and 2 for children recruited in year 5 (of grantee funding).

  11. Assuming the average annual income across all types of staff/service providers is $31,200, the wage rate was estimated using the following formula: $31,200 (annual income)/2080 (hours worked per year) = $15.00 (dollars per hour).

  12. 25 percent of respondents will be caregivers and 75 percent will be administrators/providers.


As indicated in Table 1, the average total annual burden for data collection is estimated at 6,281 hours. This estimate is derived by calculating the burden for each measure, dividing those numbers by 3 (years of data collection in the national evaluation), and summing.
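To illustrate the calculation with the first row of Table 1: 648 respondents × 2 responses per respondent × 1.000 hour per response = 1,296 total burden hours over the 3-year period; 1,296 hours ÷ 3 years = 432 annual burden hours; and 432 hours × $19.30 (hourly wage rate) = $8,338 in annualized respondent cost.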


13. Estimates of Annualized Cost Burden to Respondents



Grantees are collecting the majority of the required data elements as part of their normal operations, and maintain this information for their own service planning, quality improvement, and reporting purposes. The additional cost of this data collection is minimal. The costs for operation and maintenance of materials necessary for ongoing data collection are similarly minimal.


Other costs related to this effort, such as the cost of obtaining copyrighted instruments, are costs to the Federal government. Each grantee has been funded, as part of the overall cooperative agreement award, to support two staff positions (or the full-time equivalent) to assist in the evaluation. Therefore, no cost burden is imposed on the grantee by this information collection effort.



14. Estimates of Annualized Cost to the Government



CMHS has planned and allocated resources for the management, processing, and use of the collected information in a manner that enhances its utility to agencies and the public. Including the Federal contribution to local grantee evaluation efforts, the contract with the National Evaluator, and the government staff who oversee the evaluation, the annualized cost to the government is estimated at $4,093,139. These costs are described below.


Each grantee is expected to hire two full-time equivalents to recruit families into the evaluation, collect information, manage and clean data, and conduct analyses at the local level. Assuming an average annual salary of $30,000, 27 funded grantees, and an average Federal contribution (not including State matching funds) of 73 percent, the annual cost for Phase IV at the grantee level is estimated at $1,182,600. These monies are included in the cooperative agreement awards.
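This estimate is the product of the stated assumptions: 2 positions × $30,000 (average annual salary) × 27 grantees × 0.73 (average Federal contribution) = $1,182,600 per year.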


The national evaluation contract has been awarded to Macro International Inc. for evaluation of the 27 grantees in Phase IV. The national evaluation contract provides for 1 base year of $2,036,106 with an option to renew for 4 more years. The estimated average annual cost of the contract will be $2,832,843. Included in these costs are the expenses related to developing and monitoring the national evaluation including, but not limited to, the following activities: development of the design, instrument package (including acquisition of copyrighted instruments), data manual, and training materials; monitoring of and technical assistance to sites; travel to sites and relevant meetings; and data analysis and dissemination activities. In addition, these funds will support staff positions at the Treatment Effectiveness Study and Family Education and Support study sites to assist in the evaluation, and will cover other data collection costs at those sites. Cost for acquisition of copyrighted instrumentation is projected to be $22,851 per year. This cost is included in the total contract award.


It is estimated that CMHS will allocate 75 percent of a full-time equivalent each year for government oversight of the evaluation. Assuming an annual salary of $103,594, these government costs will be $77,696 per year.
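That is, 0.75 full-time equivalent × $103,594 (annual salary) = $77,696 per year (rounded). The three components together account for the total annualized cost to the government: $1,182,600 (grantee-level evaluation) + $2,832,843 (national evaluation contract) + $77,696 (Federal oversight) = $4,093,139.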

15. Changes in Burden



Currently, there are 25,262 hours in the OMB inventory. SAMHSA is requesting 6,281 hours for this submission, a decrease of 18,981 hours. This revision responds to a variety of program changes that explain the decrease in hours: (1) a reduction in the number of funded sites for which burden was estimated; (2) a reduction in the number of instruments for the Treatment Effectiveness Study and the Culturally Competent Practices Study; (3) the addition of the Family Education and Support Study in up to six sites; and (4) the addition of a Primary Care Provider Study survey.
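The requested total follows directly from the inventory figures: 25,262 hours (current OMB inventory) − 18,981 hours (net decrease) = 6,281 hours requested.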


a. Program Changes


(1) The previously approved clearance included a calculation of burden based on 32 grantees because this was the anticipated number of sites to be funded in the Phase IV cohort (18 in FY 2002; 7 in FY 2003; 7 in FY 2004). However, the actual number of funded sites in the Phase IV cohort was 29 grantees (18 in FY 2002; 7 in FY 2003; 4 in FY 2004). This number was further reduced to 27 grantees because the funding for two grantees funded in FY 2002 was withdrawn. As a result of this reduction, the number of respondents (i.e., caregivers, youth, and providers/administrators) was reduced by 3,669 respondents. This represents a reduction of 17,458 hours.


(2) The first OMB submission included burden estimates for the System of Care Practice Review (SOCPR) instruments to be used in the Treatment Effectiveness Study; these were later dropped. After initial analyses from previous studies using the SOCPR revealed minimal association with the treatment selected, it was determined that the instruments would not add additional information. The elimination of these instruments reduced the burden by 480 hours. Additionally, the Web survey associated with the Culturally Competent Practices Study was completed; the completion of this survey reduced the burden by 240 hours.


(3) A Family Education and Support Study will be conducted in up to six sites using a three-tier design. As described previously, Tier 1 will involve secondary analysis of existing data from the longitudinal Child and Family Outcome Study in previously funded communities. Based on the results of these analyses, critical domains and elements of family education and support will be identified and used to inform data collection in Tier 2. For Tier 3, up to six sites will be selected. Site selection may be determined by the timing of the introduction of family education and support services subsequent to enrollment, by how eligibility to receive such services is ascertained, and by other site-specific implementation factors. The burden calculations took into account the addition of this study, which represents an additional 639 hours.


(4) A Primary Care Provider Study seeks to investigate the role of primary health care practitioners in systems of care, to learn how PCPs identify and treat children and youth with mental health needs, and to learn more about the factors that facilitate and interfere with communication and interaction between PCPs and mental health providers. The study is the final stage in a three-stage process and will be conducted in the upcoming year using a mail-out or Web survey administered to primary care practitioners, specifically pediatricians, in communities funded by the CMHI. The burden associated with the addition of this study was calculated and represents only about 164 additional hours, based on an estimate included in the first submission.








16. Time Schedule, Publication, and Analysis Plans



a. Time Schedule


The time schedule for implementing the remainder of the Phase IV evaluation is summarized in Table 2.

Table 2

Time Schedule

Activity | Date
Receive revised OMB clearance for study | June 2007
Data collection completed for 16 sites funded in FY02 | September 2008
Data collection completed for 7 sites funded in FY03 | September 2009
Data collection completed for 4 sites funded in FY04 | September 2010
Process and analyze data | Ongoing
Produce annual reports | Annually in August
Produce public use database | September 2010
Produce final report | September 2010



b. Publication Plans



Applications of the system of care model have increased in number and funding over the past several years. Thus, the publication of evaluation results is of great interest at the Federal, state, and local levels, all of which have been involved in promoting the system of care model. Interim reports are prepared for CMHS annually in October. A final report will be prepared at the completion of the evaluation for internal use by CMHS and for wider distribution beyond CMHS.


Because of the importance of this evaluation to the field of children's mental health and the expansion of the system of care model, we publish the results of the national evaluation in relevant professional journals to inform the research community as well as the decision making of policymakers and program administrators. For the remaining years of the contract, a minimum of six publications is planned. Possible publications include manuscripts reporting results from the Culturally Competent Practices Study and the Treatment Effectiveness Study, and preliminary results from the Primary Care Provider and Family Education and Support Studies. Specific publications, such as special editions or monographs, have been developed to disseminate this unique information more broadly. Additional publications may include articles on the development of community-based systems of care, the effectiveness of services for targeted groups, the cost effectiveness of treatment components, and the implications of system development approaches for sustainability, among others. All publications are submitted in draft form to the Government Project Officer (GPO) and an expert panel designated by the GPO for review and approval prior to submission to the selected journal.


The cross-agency, interagency, collaborative perspective represented by the system of care model involves multiple audiences, including those involved in mental health, child welfare, juvenile justice, and education. Policymakers, program administrators, and researchers in each of these service sectors have been interested in the findings from this evaluation and will continue to serve as the potential audience for publications.


Examples of journals that are considered as vehicles for publication include the following:


• American Journal of Public Health
• American Psychologist
• Child Development
• Children Today
• Evaluation Review
• Evaluation Quarterly
• Journal of the American Academy of Child and Adolescent Psychiatry
• Journal of Behavioral Health Services & Research
• Journal of Child and Family Studies
• Journal of Clinical Child and Adolescent Psychology
• Journal of Consulting and Clinical Psychology
• Journal of Emotional and Behavioral Disorders
• Journal of Health and Social Behavior
• Journal of Mental Health Administration
• Milbank Memorial Fund Quarterly
• Social Services Review


Besides audiences associated with specific service sectors, results of the project have been of interest to state legislators. It is this group that often makes decisions about how to configure the service delivery system for children with serious emotional and behavioral disorders and determines the matching funds required for this program. The National Conference of State Legislatures can help identify the best strategies for reaching this group with evaluation findings.


Two special issues of journals have been published with findings from the national evaluation. One issue of the Journal of Emotional and Behavioral Disorders, published in spring 2001, was dedicated to the national evaluation and reported on findings from Phase I of the National Evaluation. The first issue of 2002 of Children’s Services: Social Policy, Research, and Practice was dedicated to the evaluation and presented findings and policy implications.


The National Evaluation has published 42 articles in peer-reviewed journals, 9 book chapters, and 72 proceedings papers. See Attachment 6 for a list of articles published by the national evaluation.


Beginning in October 1999, a monthly System-of-Care Evaluation Brief presenting findings from various aspects of the national evaluation in a family-friendly format has been published. System-of-Care Evaluation Briefs have been distributed to grantee sites through annual grantee meetings and other meetings attended by representatives from grantee sites. These publications are also available electronically to registered grantees using the ICN for data transmittal in Phases III and IV.


The above publications are highlights of publication efforts. Evaluation findings also have been presented in numerous conference presentations, and at various federal meetings. A list of selected conference presentations and meetings from the national evaluation can be found in Attachment 6.


In addition to publications based on aggregated data, data reports are made to CMHS and to grantee sites three times each year, continuous quality improvement data reports are made to CMHS and to grantee sites four times per year, and narrative and quantitative reports of system-level assessments are provided to grantee sites and to CMHS following each site visit to assist sites with program development.


c. Data Analysis Plan



All of the data collection and analytic strategies detailed in this package are linked to the evaluation questions. These linkages are shown in Table 3. Note that the majority of these data are collected at intake and at each data collection point. Exceptions include: (1) descriptive data elements that are not expected to change over time (e.g., gender, race), which are asked only at intake; (2) service and cost data from grantee MISs, which are collected on an ongoing basis; (3) system of care data, which are collected every 18 months; (4) treatment data for the Treatment Effectiveness Study, which are collected once at the beginning of the evidence-based treatment (i.e., BSFT) and subsequent to treatment, and treatment fidelity data, which are collected at the termination of the evidence-based treatment; (5) sustainability data, which are collected in years 2, 4, and 5 of the evaluation; (6) culturally competent practices data, which are collected in years 2 and 4 of the evaluation, with data from year 2 used to determine the data that will be collected in year 4; (7) outcome data for the Treatment Effectiveness Study, which will be collected at 6-month intervals from baseline to 12 months; and (8) data on the role of primary care providers, which will be collected once in year 4. Analyses are conducted to assess the reliability and validity of selected measures as sufficient data to conduct these analyses are obtained in the early stages of the studies. These analyses include, but are not limited to, calculation of reliability using Cronbach's coefficient alpha to determine the internal consistency of ordinal-level and interval-level measures, calculation of the Kuder-Richardson formula 20 (KR-20) to determine the internal consistency of dichotomous measures, and confirmatory factor analysis to determine the latent variable structure and content of multi-component scales.
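For reference, the two internal-consistency coefficients named above take their standard textbook forms (supplied here for the reader's convenience, not drawn from the evaluation materials). For a scale with k items, where s_i^2 is the variance of item i and s_T^2 is the variance of the total score, Cronbach's coefficient alpha is: alpha = (k/(k - 1)) × (1 - (sum of s_i^2)/s_T^2). For dichotomous items, where p_i is the proportion of respondents endorsing item i and q_i = 1 - p_i, the Kuder-Richardson formula 20 is: KR-20 = (k/(k - 1)) × (1 - (sum of p_i q_i)/s_T^2). Higher values of either coefficient indicate greater internal consistency.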


Table 3

Evaluation Questions, Indicators,

Data Sources, and Analysis Techniques



Evaluation Questions


Indicators


Data Sources


Data Analysis


System of Care Assessment


Does the system maximize interagency collaboration?


$ Core agencies participate in a collaborative way

$ Integration of staff, resources, functions, and funds

$ Co-location of services of multiple agencies

$ Interagency service planning

$ Shared vision and goals


$ Site Visit

  • IACS



$ Univariate/ Multivariate Analysis


Are the various service components of the system coordinated?


$ Co-location of services of multiple agencies

$ Availability of case management/care coordination services

$ Case manager/care coordinator has broad responsibilities and active referral role

$ Integration and consistency in case management/care coordination across systems/agencies


$ Site Visit


$ Univariate/ Multivariate Analysis


Are services and the system accessible?


$ Proportion of eligible population provided services

$ Time between identification of need and entry to system

$ Waiting lists for entry to system

$ Waiting lists for delivery of key services

$ Active outreach

$ Logistics and supports that encourage access


$ Site Visit



$ Univariate Analysis



Is the service array comprehensive?


$ Availability of broad array of residential, intermediate, outpatient, and wraparound services


$ Site Visit

$ MIS


$ Univariate Analysis


Are services and the system culturally competent?


$ Cultural diversity of the child and family population

$ Cultural diversity of provider population

$ Agency commitment to cultural competency

$ Equitable treatment of all children and families


$ Site Visit

$ CCSP

$ YSS, YSS-F


$ Univariate Analysis


Are services and the system family focused?


$ System and services involve caregivers in developing individual child and family service plans

$ System and services involve caregivers in overall system of care planning activities

$ System and services involve caregivers in service delivery

$ System and services address needs of caregivers and families for support


$ Site Visit

$ YSS, YSS-F




$ Univariate/ Multivariate Analysis



Are services individualized?


$ Active individualized service planning process

$ Frequency of monitoring of ISP by case manager


$ Site Visit

$ YSS, YSS-F


$ Univariate/ Multivariate Analysis


Are services community based?


$ Availability of services within the community

$ Extent of reliance on out-of-county and out-of-state placements


$ Site Visit

$ MIS


$ Univariate/ Multivariate Analysis


Do systems mature over time?


$ Development of infrastructure

$ Development of service delivery capacity


$ Site Visit


$ Multivariate Analysis


Are services provided in the least restrictive setting that is appropriate?


$ Processes to ensure that children Astep down@ to lower levels of care when appropriate

$ Extent of use of intermediate and outpatient placements

$ Extent of use of wraparound services

$ Stability and duration of placements

$ Level of use of mental health services in normative settings (e.g., home, school)


$ Site Visit

$ MIS

$ LSQ


$ Univariate/ Multivariate Analysis


What systems exist at the comparison sites? How do they compare to the CMHS-funded systems of care?


$ Description of system structure and service delivery process

$ Comparison of funded systems of care and non-funded systems


$ Site Visit

$ MIS

$ LSQ


$ Univariate/ Multivariate Analysis


Can differences in child and family outcomes across sites be attributed to varying levels of system development?










$ Description of system structure and service delivery process

$ Comparison of funded systems of care and non-funded systems




$ System of Care Assessment

$ Child and family outcome data





$ Univariate/ Multivariate


Services and Costs Study


What services do children and families receive and what are their service utilization patterns?


$ Previous service history

$ Service setting and type

$ Level of restrictiveness

$ Mix of services

$ Amount and duration

$ Continuity of care


$ MIS

$ LSQ


$ Univariate/

Multivariate Analysis


How do service use patterns relate to child behavioral and functional outcomes?


$ Comparison of service use for children who enter the system at varying levels of challenge

$ Comparison of change in outcomes over time for children in different utilization pattern groups


$ CBCL

$ CIS

$ BERS

$ BERS-2

$ MIS

$ MSSC


$ Univariate/

Multivariate Analysis


How do service use patterns differ across subgroups within a site? Across system of care and comparison sites?


$ Comparisons of types of services used

$ Comparisons of level of restrictiveness

$ Comparisons of service mix

$ Comparison of amount and duration

$ Comparison of continuity of care


$ MIS

$ LSQ

$ MSSC



$ Univariate/

Multivariate Analysis


What costs are associated with services at the aggregate and child/family levels?


$ Total costs of services for individual children and families

$ Average costs per child/family

$ Average cost per service type


$ MIS


$ Univariate/ Bivariate Analysis


Cross-Sectional Descriptive Study


Who are the children and families served in systems of care? What are children and families like?


$ Gender

$ Race

$ Age

$ Educational level and placement

$ Socioeconomic status

$ Parents= employment status

$ Living arrangement

$ Presenting problem(s)

$ Diagnosis at intake

$ Intake/referral source

$ Risk factors for family and child

$ Case status


  • EDIF

  • CIQ – IC



$ Univariate/ Bivariate Analysis


Child and Family Outcome Study


Are there differences between the children served in the systems and those who participated in the Child and Family Outcome Study?


$ Gender

$ Race

$ Age

$ Educational level and placement

$ Socioeconomic status

$ Parents= employment status

$ Living arrangement

$ Presenting problem(s)

$ Diagnosis at intake

$ Intake/referral source

$ Risk factors for family and child

$ Case status


  • EDIF

  • CIQ – FC


$ Univariate/

Bivariate Analysis


Has there been a reduction in children=s negative behaviors?


$ Number of problem behaviors


$ CBCL

$ CIS


$ Univariate/ Multivariate Analysis


Has there been an increase in the level of child=s overall functioning?


$ Child=s ability to accomplish activities of daily living

$ Quality of family relationships

$ Quality of peer relationships


$ CBCL

$ BERS

$ BERS-2

$ CIS

$ FLQ


$ Univariate/ Multivariate Analysis


Has there been improvement in child functioning in the educational environment?


$ School attendance

$ Expulsions, dropouts, suspensions

$ Academic performance

$ BERS

$ BERS-2

$ EQ


$ Univariate/ Multivariate Analysis


Has there been improvement in child regarding involvement with law enforcement?


$ Violations

$ Number of contacts with law enforcement

$ Number of incarcerations


$ DS


$ Univariate/ Multivariate Analysis


Do families experience improvements in family life?


$ Family functioning

$ Caregiver strain (burden of care)

$ Material resources


$ FLQ

$ CGSQ


$ Univariate/ Multivariate Analysis


Are there differences in family outcomes across systems of care?


$ Family functioning

$ Caregiver strain (burden of care)

$ Material resources


$ FLQ

$ CGSQ


$ Univariate/ Multivariate Analysis


Service Experiences Study


How do children and families experience services?


$ Ratings of specific services

$ Ratings of the overall system

$ Provider attitudes and practices


$ YSS

$ YSS-F

$ CCPS


$ Univariate/

Multivariate Analysis


Are there differences in service experiences across system of care and comparison sites? Are differences, if any, associated with differential outcomes?


$ Comparison of ratings of specific services

$ Comparison of ratings of the overall system

$ Comparison of provider attitudes and practices

$ Relationship to child outcomes


$ YSS

$ YSS-F

$ CCSP

$ CBCL

$ CIS


$ Univariate/ Multivariate Analysis

Treatment Effectiveness Study


To what extent do specific evidence-based treatments enhance positive outcomes for children and families participating in systems of care?


$ Comparison of children and families in systems of care who receive EBT to children and families in systems of care who do not receive EBT

  • FAM

  • CBQ

  • TAS – youth & caregiver

  • TAF-R

  • Ohio Scales – youth & caregiver

  • Child & family outcomes study


$ Multivariate Analysis

$ Hierarchical linear modeling



Do these differences, if any, endure over time?



$ Comparison of improvements between EBT and non-EBT children and families in systems-of-care


$ Reliable change index scores based on the outcome measures


$ Univariate/ Multivariate Analysis





To what extent do specific evidence-based treatments impact the service array of children and families?


$ Comparison of children and families in systems of care who receive EBT to children and families in systems of care who do not receive EBT


$ MSSC


$ Bivariate and Multivariate Analyses





To what extent does the implementation of evidence-based treatment impact provider attitudes of evidence-based treatment?



$ Comparison of provider attitudes about evidence-based treatment


$ Evidence-based Practice Provider Attitudes Scale



$ Bivariate and Multivariate Analyses

Are improvements in child and family outcomes associated with EBT in a system of care related to the fidelity with which the EBT is implemented?



$ Improved outcomes for children and families is related to fidelity of the implementation of the EBT


$ TAS – youth & caregiver

  • TAF-R

$ EBP Provider Survey - provider

$ FAM

$ CBQ

$ Ohio Scales


$ Instrumental variables estimation procedure


Culturally Competent Practices Study


Do system of care service providers provide culturally competent care?

  • Percent of providers who meet the criteria for culturally competent service

  • Percent of providers who have a plan for providing culturally competent services

  • Youth and caregiver satisfaction with cultural competence of their services

  • Caregiver, youth and service provider focus groups

  • Culturally competent practices survey

  • Univariate/ Multivariate/ Factor Analysis

  • Thematic analyses

Do recipients and providers of services define culturally competent care in similar ways?




  • Comparison of providers’, caregivers’ and youths’ perceptions of culturally competent services.


  • Caregiver, youth and service provider focus groups

  • Culturally competent practices survey

  • Multivariate Analysis

  • Thematic Analyses

Are children and family outcomes affected by the cultural competence of the services they receive?

  • Comparison of outcomes for children and families in systems of care with high versus low degree of culturally competent practices

  • Culturally competent practices survey

  • Child and family outcome data

  • Hierarchical linear modeling

Do service providers’ level of involvement in a system of care affect their level of culturally competent practices?

  • Relationship between scores for culturally competent practices and involvement in a system of care

  • Culturally competent practices survey

  • Multivariate Analysis

Family Education and Support Study

What are the critical elements of family education and support (FES)?

  • Indicators: types and frequency of family education and support

  • Instruments: MSSC-R; interview guides; focus groups

  • Analyses: descriptive statistics at entry; thematic analyses

What are the characteristics of children and families receiving family education and support services?

  • Indicators: number of children and families receiving FES services; living arrangement; presenting problem(s); diagnosis at intake; intake/referral source; risk factors for family and child; family functioning; caregiver strain (burden of care); material resources

  • Instruments: CIQ-IC; FLQ; CGSQ

  • Analyses: univariate and bivariate analyses

What is the satisfaction level of families receiving family education and support?

  • Indicators: rating of satisfaction with FES services

  • Instruments: YSS; YSS-F; focus groups

  • Analyses: bivariate and multivariate analyses; thematic analyses

How does receipt of the critical elements of family education and support services affect family-level outcomes?

  • Indicators: caregiver social support; caregiver functioning/stress; mental health services self-efficacy; parenting skills and involvement; parent use of mental health services

  • Instruments: DSSS; CGSQ; Vanderbilt; APQ; PSOC; BDI

  • Analyses: multivariate analysis; logistic regression

Primary Care Provider Study

What is the physical health status, health care utilization, and health care financing status of children with serious emotional disturbance participating in the program?

  • Indicators: health care financing; health care utilization; primary health care provider; physical health status

  • Instruments: CIQ at services entry

  • Analyses: descriptive statistics at services entry; chi-square; analysis of variance

How do the physical health status, health care utilization, and health care financing status of participating children with serious emotional disturbance vary over time and affect child and family outcomes?

  • Indicators: health care financing; health care utilization; primary health care provider; physical health status

  • Instruments: CIQ at entry and every 6 months thereafter

  • Analyses: repeated measures analysis of variance; hierarchical linear modeling

What factors influence primary health care providers' active participation in the care of children with serious emotional disturbance who are being served within systems of care?

  • Indicators: parent responses; youth responses; project director responses; service provider responses; primary health care provider responses

  • Instruments: discussion groups

  • Analyses: thematic analyses

How does the health care provided by primary health care providers influence child and family outcomes?

  • Indicators: health care provision; overall approach and screening for mental health disorders; diagnostic and treatment approaches for mental health disorders; familiarity and collaboration with the local system of care; organizational and financing factors affecting the provision of mental health care

  • Instruments: primary health care provider survey

  • Analyses: hierarchical linear modeling

Analyses planned for each of the study components are described below. These analyses are possible in grantee sites that implement the evaluation as designed, including collection of cross-sectional descriptive data on the census of children and families who enter the system, recruitment of an adequately sized sample, minimal missing data within and across data collection points, retention of families over time, and adherence to prescribed data collection procedures. In sites with constraints (e.g., an insufficiently large target population), analyses are tailored to the needs of the individual site. The sample table shells presented in Attachment 5 provide examples of how data can be summarized.


Essentially, the objectives of the data analysis center on the overall goal of understanding the effects of the system of care approach. The analysis plan focuses on description, explanation, and prediction. The data analyzed in Phase IV include both discrete and continuous variables. The scales on which these variables are measured have important implications for the choice of statistical procedures. Some of the variables used in this evaluation are nominal (e.g., race and ethnicity) or ordinal (e.g., services ranked in order of restrictiveness); these measurement scales require the use of nonparametric statistics. Nonparametric statistics offer less power than parametric tests, but they are more robust to violations of distributional assumptions; parametric tests are more powerful but rest on more restrictive assumptions. For this reason, research questions measured with ordered discrete variables that approach a continuous scale (such as the ratings of system and service performance) are tested using parametric statistics. The analysis plan employed across the current study components and the plan to be used with the new study components are described below.


System of Care Assessment. In this evaluation component, Phase IV seeks to determine whether a system of care has been implemented in accordance with the system of care program theory and to document the maturation of the system over time. This study component includes both qualitative and quantitative analyses and both are based on a standard framework. Qualitative analyses are used to describe the infrastructure and the direct service delivery processes of system of care communities. The standard framework ensures that all system of care communities are characterized on similar system operations (e.g., management, client entry into the system of care, service planning and coordination processes) but the qualitative approach provides for the individual and unique features of each system of care community to be portrayed.


Qualitative data obtained through individual interviews at each system of care community and from document reviews are synthesized into a site-specific narrative report that is returned to each system of care community for review and correction. When the reports for each community are finalized after site comment, they are entered into a qualitative database software program (Atlas.ti) that allows meta-analyses across system of care communities and across time.


The quantitative analyses are based on scores given to each system of care community that measure the extent to which it has achieved the program theory's overarching principles (e.g., individualized care, family focus, cultural competence, coordination) within the system operations described in the qualitative analysis and from quantitative interview questions (e.g., percentage of children who receive an individualized service plan, number of child-serving agencies that attend governing body meetings). This approach allows systems of care to be assessed across principles (e.g., how well system operations incorporate a family-focused approach) and across operations (e.g., how well the overall management of the system of care reflects the principles as a whole). The relationships among service and system experiences, child and family characteristics, and outcomes over time are explored using correlational, regression, and path analyses.


Information from the IACS is analyzed quantitatively to assess the level of interagency collaboration in system of care communities and to better understand the multidimensional structure of the collaboration construct. The general linear model (GLM) repeated measures analysis allows the National Evaluator to test whether changes over time are significant and whether some groups experience more improvement than others. Responses to the IACS are analyzed using GLM to determine the extent to which interagency collaboration factors of Beliefs/Values, Activities/Behavior, and Knowledge change over time. In addition, system-level characteristics are used to group communities to assess the impact of these characteristics on interagency collaboration scores.
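
To illustrate, the repeated measures test described above can be sketched in a few lines of Python using the statsmodels library. The file and column names below are hypothetical stand-ins for the IACS data, and this simple form assumes each respondent has complete data at every wave.

```python
# Sketch of a repeated measures GLM on an IACS factor score.
# File and column names are hypothetical; AnovaRM requires complete
# (balanced) data for each respondent across waves.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

iacs = pd.read_csv("iacs_scores.csv")  # one row per respondent per wave

# Test whether the Beliefs/Values factor changes across site visits.
result = AnovaRM(
    data=iacs,
    depvar="beliefs_score",
    subject="respondent_id",
    within=["wave"],
).fit()
print(result)
```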


Services and Costs Study. For this component, analyses focus primarily on utilization patterns (e.g., types, combinations, and amounts of services used) and the factors that influence use. Analyses are conducted at the aggregate and individual child and family levels. On the aggregate level, the distribution of service use across the client population is described. At the individual child and family level, service utilization patterns are described (e.g., distribution of children using various combinations of services, mean and median amounts of services used).


Latent class analysis and other case-grouping techniques are used to group children who experience similar utilization patterns, based on combinations and amounts of services. Multinomial logistic regression analyses are employed to predict service utilization patterns from child clinical and family life variables measured at intake. The longitudinal outcomes of children in various service utilization groups are compared to see whether some utilization patterns are associated with greater gains and, if so, for which groups of children.
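
A minimal sketch of this two-step approach follows. A Gaussian mixture model stands in for one of the "other case-grouping techniques" (a true latent class analysis would use dedicated software), and multinomial logistic regression predicts group membership; all file and column names are hypothetical.

```python
# Step 1: group children by service utilization profile.
# Step 2: predict utilization group from intake characteristics.
# All names are hypothetical; a Gaussian mixture is used here only as
# a stand-in case-grouping technique.
import pandas as pd
import statsmodels.api as sm
from sklearn.mixture import GaussianMixture

use = pd.read_csv("service_use.csv")
profile_cols = ["outpatient_hrs", "case_mgmt_hrs", "respite_hrs"]

gm = GaussianMixture(n_components=3, random_state=0)
use["util_group"] = gm.fit_predict(use[profile_cols])

# Multinomial logistic regression: intake clinical and family-life
# variables predicting utilization group membership.
X = sm.add_constant(use[["cbcl_total_intake", "caregiver_strain_intake"]])
mnl = sm.MNLogit(use["util_group"], X).fit()
print(mnl.summary())
```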


Cross-Sectional Descriptive Study. This component profiles all of the children and families who enter the CMHS-funded systems of care. Analyses planned for this study are primarily descriptive. The distribution of demographic variables (e.g., age, gender, race, ethnicity) are analyzed using frequencies, proportions, and univariate descriptive statistics (e.g., means, medians). Clinical and functioning variables analyzed this way include diagnosis, school attendance and academic performance, previous mental health history, and prior involvement with juvenile justice. Descriptive profiles are reported for each system of care site and for all sites combined.


Differences among children who enter the systems in different years are tested to assess whether the types of children who enter the system change over time as the system matures (e.g., more challenged children served in year 3 compared to year 2). Differences within and among sites are tested for statistical significance. When the predictor variables are dichotomous or categorical (e.g., gender, race, diagnosis), chi-square tests are used; when the predictor variables are continuous (e.g., age), t-tests are used.
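
For concreteness, a short sketch of these two tests in Python (scipy), with hypothetical file and variable names:

```python
# Chi-square for a categorical predictor, t-test for a continuous one.
# Column names are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency, ttest_ind

intake = pd.read_csv("descriptive_study.csv")

# Does the distribution of gender differ across enrollment years?
table = pd.crosstab(intake["enroll_year"], intake["gender"])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")

# Does mean age at entry differ between year 2 and year 3 enrollees?
year2 = intake.loc[intake["enroll_year"] == 2, "age"]
year3 = intake.loc[intake["enroll_year"] == 3, "age"]
t, p = ttest_ind(year2, year3)
print(f"t = {t:.2f}, p = {p:.4f}")
```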


Child and Family Outcome Study. For this evaluation component, data collected at intake are analyzed to describe the sample in terms of intake demographic characteristics, symptomatology (i.e., CBCL scores), social functioning (i.e., peer relations, DS, and SUS scores), and stability of living arrangements (i.e., LSQ). Families are described in terms of their intake demographic features, functioning (i.e., FLQ scores), and level of caregiver strain (i.e., CGSQ scores). Univariate descriptive analyses are performed to characterize the families participating in this evaluation, including score ranges, means, and medians. These analyses are reported for each system of care site as well as for all grantees combined.


Change in child and family outcomes over time is tested using a variety of techniques. Repeated measures analysis of variance (ANOVA) is used to test the significance of change over time within and between groups at each site. In addition, repeated measures analysis of covariance (ANCOVA) is used to control for differences present at intake, which is prudent even when those differences are not statistically significant.
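
The ANCOVA idea can be sketched as an ordinary least squares model of follow-up scores with the intake score entered as a covariate; all names below are hypothetical.

```python
# ANCOVA sketch: test a group effect on 6-month CBCL scores while
# controlling for intake scores. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("outcomes.csv")
ancova = smf.ols("cbcl_6mo ~ cbcl_intake + C(group)", data=df).fit()
print(ancova.summary())
```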


HLM provides improved estimation of individual effects, an opportunity to model cross-level effects (i.e., individuals within systems, over time), and greater precision in partitioning components of effects across multiple levels. The following illustrates how HLM will be used in the evaluation. The children and families in the longitudinal study are located (or "nested") within systems of care. We assume that children experience an intervention and that, as a result of that intervention, they experience change. We know from the evaluation of the 22 grant communities originally funded in 1993 and 1994 that systems of care vary in terms of their overall development (Brannan et al., 2002; Vinson et al., 2001). We expect that differential system development (approximated with system-level assessment scores) will mediate child and family outcomes. HLM allows us to estimate growth curves (e.g., changes in the level of symptomatology) based on repeated observations. These repeated measures are "nested" within the individual child. Using this three-level design, HLM permits us to estimate how much of the variance found at the first level (e.g., changes in symptoms) is due to the second (e.g., the individual receiving treatment), and how much can be attributed to the third level (e.g., the degree of system-of-care development).
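
A minimal growth-curve sketch of this idea, using the mixed-effects module in Python's statsmodels, follows. It shows only the lower two levels (repeated observations nested within children); the evaluation's full model adds the system-of-care level, and all file and column names are hypothetical.

```python
# Growth-curve sketch for the HLM approach: repeated CBCL observations
# nested within children, with a random intercept and slope per child.
# The full three-level model would add a system-of-care level; this
# two-level form only illustrates the idea. Names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

long = pd.read_csv("outcomes_long.csv")  # child_id, months, cbcl_total

fit = smf.mixedlm(
    "cbcl_total ~ months",       # fixed effect: average symptom trajectory
    data=long,
    groups=long["child_id"],     # level 2: the individual child
    re_formula="~months",        # random intercept and slope per child
).fit()
print(fit.summary())
```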


GLM repeated measures analysis is likewise used to test whether changes over time are significant and whether some groups experience more improvement than others. Within a community, these techniques will be used to explore whether certain service utilization patterns yield better outcomes. Path analysis and other structural equation modeling techniques will be used to investigate the direct and indirect effects of causal variables (such as ratings of system performance and adherence to service plans) on dependent outcome measures (such as clinical assessments, restrictiveness of care, and family functioning). The National Evaluator does not view path analysis as a method of causal discovery, but rather as a method of confirming appropriate models derived from empirical and theoretical considerations.


Service Experience Study. In this component of the Phase IV evaluation, analyses assess the extent to which children and families receive services as intended; that is, consistent with the system of care program model. As with data collected in the Services and Costs Study, the distribution of self-reported service use across the client population is described (i.e., MSSC), as are service utilization patterns. HLM or ANOVA have been performed to examine: 1) change in service utilization patterns of children and their families; 2) whether groups of children in the system of care sites who receive an evidence-based treatment differ from those who do not in terms of client satisfaction, as measured by the abbreviated satisfaction questionnaires (i.e., YSS-F and YSS), and ratings of the cultural competence of services, as measured by the CCSP; 3) whether children and families stay in services longer on average in sites with higher average service and system of care ratings; and 4) whether, within sites, caregivers of children who received fewer services in the previous 6 months (as measured by the MSSC) also reported being less satisfied or rated their services and systems lower.


Treatment Effectiveness Study. Preliminary analyses of the provider attitudes about evidence-based treatment (i.e., Evidence-Based Practice Provider Attitudes Scale), treatment fidelity (i.e., TAS-caregiver, TAS-youth, and Therapy Adherence Form), and treatment outcome (i.e., CBQ, FAM, and Ohio Scales) data have been conducted to assess the reliability and validity of the selected measures. These analyses include, but are not limited to, calculation of Cronbach's coefficient alpha to determine the internal consistency of ordinal-level and interval-level measures, calculation of the Kuder-Richardson formula 20 to determine the internal consistency of dichotomous measures, and confirmatory factor analysis to determine the latent variable structure and content of multi-component scales.
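
Both internal-consistency statistics follow the same basic formula; a minimal sketch in Python (numpy), applicable to any item-response matrix, is shown below.

```python
# Internal-consistency calculations named above. `items` is an
# (n respondents x k items) matrix; for KR-20, items must be scored 0/1.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def kr20(items: np.ndarray) -> float:
    """Kuder-Richardson formula 20: alpha specialized to dichotomous items."""
    k = items.shape[1]
    p = items.mean(axis=0)                 # proportion scoring 1 on each item
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - (p * (1 - p)).sum() / total_var)
```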


This study uses a randomized clinical trial design to assess the effectiveness of Brief Strategic Family Therapy within systems of care by comparing clinical outcomes among children who received the standard system-of-care services plus the evidence-based treatment to outcomes among children who received only the standard system-of-care services. The study assesses treatment fidelity and uses outcome measures that are specific to the treatment goals. Measures are administered to service providers, caregivers, and youth from two system-of-care sites (i.e., Oklahoma City, OK and Cleveland, OH). The study includes only children with the specific diagnostic characteristics for which the evidence-based treatment is designed (i.e., disruptive behavior disorder and at risk for substance use). Outcomes for children receiving the evidence-based treatment will be compared to outcomes for a control group of matched children from the same system-of-care site. Descriptive statistics are employed to summarize the characteristics of the research sample at each community and across the two communities for the two groups. Subgroup analyses are performed to assess potential differences among the groups on descriptive variables and pretest measures. Analyses assess the relationship between treatment fidelity and treatment outcome among children assigned to the evidence-based treatment groups.


Repeated measures multivariate analysis of variance will be performed to compare clinical and functional outcomes across children. At the end of data collection, if it is possible to combine data across the two communities, HLM will be employed to evaluate differential trajectories of change depending on treatment group assignment. The goal of these analyses will be to show the extent to which participation in the evidence-based treatment results in improvements in child and family functioning. Potential dependent outcome variables that will be tested with these models include the child measures and indicators included in the Child and Family Outcome Study.


Sustainability Study. For the Sustainability Survey, the analysis plan includes both quantitative and qualitative components. Web survey data are aggregated and analyzed quantitatively and qualitatively. Quantitative data on factors related to sustainability are assessed for reliability and compared to system characteristics. To examine factors in relation to system development, survey data pertaining to system features are compared to responses related to factors contributing to sustainability. In addition, survey data are combined with data from final System of Care Assessment site visits, including assessment scores from those visits, to create a more robust picture of the status and process of sustainability in each community. Quantitative data about system features and factors affecting sustainability are tallied for each site and across all sites, yielding cross-site information on the extent to which specific system of care features are in place in Phase IV sites during various stages of their funding, positive and negative factors affecting sustainability, and the effectiveness of strategies implemented to sustain systems of care. Quantitative ratings are assigned to each site across the various assessment areas and are ranked according to their importance. Quantitative comparisons of these features are made across sites where appropriate.


Culturally Competent Practices Study. This component of the evaluation assesses how providers, administrators, caregivers, and youth perceive culturally competent services (i.e., what does it mean to provide or receive culturally competent service?), whether culturally competent services are being provided, how outcomes are affected by culturally competent practices, and whether providers' involvement in a system of care affects their level of cultural competence. To address these issues, data from the Culturally Competent Practices Survey are analyzed qualitatively and quantitatively, in conjunction with qualitative analyses of focus group data. Descriptive statistics on quantitative data (e.g., means, standard deviations, percentages) are used to profile the extent to which clinicians are aware of and use culturally competent practices. Differences in the use of culturally competent practices across types of providers (e.g., clinical social worker, psychiatrist, psychologist) are tested using analysis of variance (for continuous variables) and chi-square tests (for categorical data). Analyses are performed to predict use of culturally competent practices using information about the providers gleaned from the survey, as well as information collected in other evaluation components (e.g., the System of Care Assessment and the Cross-Sectional Descriptive Study), including types of training offered at the systems of care and population served. Because the majority of the independent variables used in these predictive analyses are system-level data, it is important to address the intra-class correlation associated with system membership. Hence, HLM is used to model the factors that contribute to the use of culturally competent practices at the provider and system levels. The relationship between child and family outcomes and culturally competent care is analyzed using outcome measures from the Child and Family Outcome Study (e.g., CBCL, CGSQ). Because providers are anonymous and cannot be linked to individual children or families, data on outcomes and cultural competence are aggregated to the site level and analyzed using HLM.


Qualitative data from the survey and focus groups are summarized to identify common themes and to shed light on the findings from the quantitative analyses. Focus group discussions have been taped and transcribed. The next step is to code and analyze the text within Atlas.ti (i.e., software for text analysis). Queries will be performed on the coded text to compare themes across respondent types in order to understand differences and similarities in perceptions and needs with respect to providing and receiving culturally competent care.


Family Education and Support Study. This study examines the relative impact of a community-based intervention (i.e., Family Education and Support; FES) on child and family outcomes. The analysis plan will include both qualitative and quantitative components. Qualitative data will be obtained from one-on-one interviews conducted with project directors and clinical supervisors, and from focus groups conducted with providers and family members. A thematic analysis will be conducted with these data to assess critical elements of FES services. The transcriptions of the interviews and focus groups will be entered into Atlas.ti (i.e., a qualitative software package). A coding scheme will be developed both inductively and deductively from recurring themes and patterns in the data and from extant literature. Atlas.ti will be used to examine relationships among coding categories and compare responses and dialogue across interviews and focus groups. These analyses will help determine the critical elements of family education and support.


Quantitative analyses will be conducted using existing national evaluation data to assess types and frequency of FES services, as well as child and family-level outcomes associated with FES service receipt. These secondary data analyses will include descriptive statistics such as frequencies, proportions, and univariate analysis (e.g., means and medians). Differences within sites with respect to demographic variables associated with receipt of FES will be tested for statistical significance using chi-squares (for categorical data) and ANOVA (for continuous variables). To assess child and family level variables associated with FES, logistic regression analyses will be conducted to determine factors related to receiving or not receiving FES services.
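
A minimal sketch of such a logistic regression in Python (statsmodels) follows; the file and predictor names are hypothetical, not the evaluation's actual variables.

```python
# Logistic regression sketch: modeling receipt of FES services from
# child and family intake characteristics. Names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("fes_study.csv")  # received_fes coded 0/1

X = sm.add_constant(df[["child_age", "caregiver_strain", "family_income"]])
logit = sm.Logit(df["received_fes"], X).fit()
print(logit.summary())
print(np.exp(logit.params))  # odds ratios for easier interpretation
```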


Quantitative data will also be collected from family-level outcome measures assessing caregiver social support (i.e., DSSS), caregiver functioning (i.e., CGSQ), mental health services self-efficacy (i.e., Vanderbilt), parenting skills (i.e., APQ and PSOC), and parent use of mental health services (i.e., BDI). An assessment of the natural variation in critical elements will be conducted within each site. A multiple regression model will be used to assess variability in the critical elements experience within each site, with family-level outcomes as the dependent variables and critical elements as the predictors. General linear model (GLM) repeated measures analysis will be conducted to examine whether changes over time are significant and whether some groups experience more critical elements than others. Within each site, these techniques will be used to explore whether receipt of the critical elements of FES yields better outcomes.


Primary Care Provider Study. The Primary Care Provider Study assesses the role of primary care providers in systems of care and how they can better serve children in these programs. Qualitative analyses of focus group data were conducted, and the findings were used to develop the PCPS survey. Descriptive statistics on quantitative data (e.g., means, standard deviations, percentages) will be used to profile the physical health status, health care utilization, and health care financing status of children. Factors that influence primary health care providers' active participation in the care of children being served within systems of care will be tested using HLM or ANOVA (for continuous variables) and chi-square tests (for categorical data). Analyses will be performed to examine the extent to which primary health care providers interact with mental health providers (including system of care providers), their medication prescribing practices for mental health problems, their rates of referral to mental health providers, and their attitudes toward mental health issues. Providers cannot be linked to individual children or families.


17. Display of Expiration Date



All data collection instruments will display the expiration date.


18. Exceptions to the Certification Statement



Certifications are included in this clearance.



B. Statistical Methods




  1. Respondent Universe and Sampling Methods



System of Care Assessment. System of Care Assessment respondents are selected based on their affiliation with the system of care community and must be serving in specific roles. To determine the respondents, the National Evaluator sends a site informant list to each community 8 weeks prior to its site visit. The site informant list identifies categories of respondents who offer a variety of perspectives about each community’s system of care. The document outlines the specific positions and roles, specialized functions, number of interviewees and estimated interview time for each respondent category. The system of care community selects potential respondents that meet the requirements outlined in the list. System of care communities e-mail the completed list to the National Evaluator at least 4 weeks prior to the scheduled visit so that the list of projected interviewees can be reviewed to ensure that each category of respondent is adequately represented. The respondent categories include representatives of core child-serving agencies, project directors, family representatives and representatives of family advocacy organizations, quality monitoring participants, intake workers, care coordinators and case managers, direct service providers, case review participants, caregivers, youth, youth coordinator, and managed care directors. For each system of care community, there are approximately 21 respondents per site visit. Site visits are conducted in all system of care communities. Based on previous experience, we expect a response rate for this study component of approximately 84 percent.


The universe for the Phase IV Cross-Sectional Descriptive Study, the Child and Family Outcome Study, and the Service Experience Study consists of the children served by the CMHS program in the 27 grantee sites.


Cross-Sectional Descriptive Study. For this evaluation component, data are collected on children and families at intake into services. Descriptive data are collected on the census of all children and their families who are being served by the CMHS program. To be included in this study component, children must: 1) meet the site's service program eligibility criteria; and 2) receive services at that site. Because these data are routinely collected at the sites for internal purposes, descriptive data on all the children and families who receive services are generally available. Follow-up descriptive data are collected only on the families participating in the Child and Family Outcome Study.


Child and Family Outcome Study. To gather data that can be meaningfully interpreted without creating an overwhelming burden for some grantees, a sample of families is selected for participation in this component. Recall that this is a longitudinal study; for ease of discussion, samples are discussed as longitudinal and cross-sectional samples.


The Child and Family Outcome Study sample is selected from the pool of children and their families entering the Phase IV funded systems of care. Although each site is funded for 6 years, the first year is committed to initial system development with data collection occurring in the last 5 years of their funding. Hence, recruitment of family participants occurs in years 2, 3, and 4 of the national evaluation.


As systems of care develop differentially over the length of the project, it is important to consider the growth of the system of care. If the entire sample is recruited in the first year, the opportunity is lost to assess whether changes in the client population occurred as the system matured (e.g., increasingly serving children with more severe problems or children referred through the juvenile justice system). For that reason, recruitment is spread across 3 years and the number of children and families recruited each year is standard across sites.


It is important that we draw a large enough sample in each grantee site to ensure that the evaluation detects the impact of the system of care initiative on child and family outcomes. If the samples are too small, significant differences of an important magnitude might go undetected. The effect sizes of the phenomena of interest form the basis of determining the minimum sample size needed through a statistical power analysis. Briefly, the power of a statistical test is generally defined as the probability of rejecting a false null hypothesis. In other words, power gives an indication of the probability that a study design will detect an effect of a given magnitude that, in fact, really exists in the population. The power analysis does not indicate that a design will actually produce an effect of a given magnitude. The magnitude of an effect, as represented by the population parameter, exists independent of the study and is dependent on the relationship among the independent and the dependent variables in question. The probability of detecting an effect from sample data, on the other hand, depends on three factors: 1) the level of significance used, 2) the size of the treatment effect in the population, and 3) sample size.
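
These three factors are exactly what a power calculation manipulates. As a simple illustration (not the latent variable model underlying Table 4, which yields larger required samples), a two-group comparison can be sized with Python's statsmodels:

```python
# Simple two-group power illustration. Table 4's figures come from a
# latent variable model and are therefore larger than this produces.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.30,  # small-to-medium standardized difference
    alpha=0.05,        # significance level
    power=0.80,        # probability of detecting a true effect
)
print(f"required n per group: {n_per_group:.0f}")
```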


For the Child and Family Outcome Study in the grantee sites, the longitudinal design assesses whether individual children and families experience meaningful improvements in outcomes between the time they enter the systems of care and subsequent data collection points. Comparisons of outcomes among different groups within a site and across sites are also made. Previous research has indicated that comparisons of served population groups yield small to medium effect sizes (.27 to .33). Table 4 shows the power calculations used to determine the sample size required to detect effect sizes of various magnitudes for the comparison of outcomes between groups. For example, detecting a difference between two groups with a small to medium effect size at power of .80 would require a total sample size of 553. Thus, each site will be expected to recruit 277 children (i.e., 553 / 2 = 276.5, rounded up) into the Child and Family Outcome Study. This will ensure that sufficient power will be achieved for the longitudinal analysis within the systems of care over time, between different groups within sites, as well as between sites.


Table 4
Total Sample Size Required, by Power and Effect Size (Latent Variable Model)

Power     Small (.20)     Small to Medium (.30)     Medium (.50)
 .80          690                  553                   330
 .85          810                  625                   420
 .90          930                  700                   510

The estimate of the number of children and families that need to be recruited in the Child and Family Outcome Study incorporates an anticipated attrition rate of 5 percent at each data collection point, which results in approximately 86 percent retention at the end of data collection. That is, to end up with follow-up data on at least 237 families after 4 data collection points, a larger number of families will need to be recruited. In addition, to study the longitudinal impact of the program on functional development (e.g., advance to college, work), sites will continue to follow children and families for the duration of the evaluation. Follow-up data collection continues into the last year of the grantees' funding, allowing the children and families recruited in the first year of data collection to be followed for 36 months, those recruited the second year of data collection to be followed for 30 months, and those recruited in the third year to be followed for 18 months. Table 5 shows the data collection schedule for the 3 years of recruitment and 5 years of data collection. While past experience with this study component has indicated that some sites will have difficulty maintaining an attrition rate of 5 percent at each data collection point, a majority of sites in Phase III of the evaluation have retention rates above 80 percent at 6 months, with one fourth retaining over 90 percent of study participants at 6 months. Overall, retention rates at 12 months are above 70 percent. The National Evaluator has established a number of strategies and techniques for maximizing recruitment and retention (see Section B.3.) and will work closely with all communities to determine the best methods for recruiting and retaining study participants.


To reach these numbers, some grantee sites will need to recruit all willing families into the Child and Family Outcome Study sample. For these sites, the cross-sectional descriptive and the longitudinal samples will be identical. Other sites will need to employ a sampling strategy to randomly select a sufficient number of families from the pool of children who enter the system of care. At these sites, a systematic sampling approach will be used. A random starting point between 1 and the nearest integer to the sampling ratio (n/N) will be selected using a table of random numbers. Children will then be systematically selected for inclusion at intervals of the nearest integer to the sampling ratio. For example, every tenth child (after the random starting point) would be sampled in a site serving 2770 children (n/N = 2770/277 = 10), and every fifth child would be sampled in a site serving half that number, or 1385 children (n/N = 1385/277 = 5), where n is the number of children in the population and N is the number of children to be recruited into the sample.
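
The mechanics of this selection procedure can be sketched in a few lines of Python; the interval and random start follow the rule described above, and the identifiers are hypothetical.

```python
# Systematic sampling sketch: random start between 1 and the interval,
# then every interval-th child thereafter.
import random

def systematic_sample(population_ids, n_to_recruit):
    interval = round(len(population_ids) / n_to_recruit)  # e.g., 2770/277 -> 10
    start = random.randint(1, interval)                   # random starting point
    return population_ids[start - 1 :: interval]

# Example: a site expecting 2770 intakes, recruiting 277.
ids = list(range(1, 2771))
sample = systematic_sample(ids, 277)
print(len(sample), sample[:5])
```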


Table 5
Data Collection Schedule for the Child and Family Outcome Study
(Data collection occurs in FY04 through FY10; entries show each data collection point and the expected number of children and families remaining.)

Recruited in Year 2¹

  Funded FY02 (16 sites)²: (intake) 1472³; (6 mos) 1398; (12 mos) 1328; (18 mos) 1262; (24 mos) 1199; (30 mos) 1139; (36 mos) 1082; data collection in remaining years only if necessary

  Funded FY03 (7 sites): (intake) 644; (6 mos) 611; (12 mos) 580; (18 mos) 551; (24 mos) 523; (30 mos) 497; (36 mos) 472

  Funded FY04 (4 sites): (intake) 368; (6 mos) 350; (12 mos) 332; (18 mos) 316; (24 mos) 300; (30 mos) 285; (36 mos) 271

Recruited in Year 3

  Funded FY02 (16 sites): (intake) 1472; (6 mos) 1398; (12 mos) 1328; (18 mos) 1262; (24 mos) 1199; (30 mos) 1139; data collection in remaining years only if necessary

  Funded FY03 (7 sites): (intake) 644; (6 mos) 611; (12 mos) 580; (18 mos) 551; (24 mos) 523; (30 mos) 497

  Funded FY04 (4 sites): (intake) 368; (6 mos) 350; (12 mos) 332; (18 mos) 316; (24 mos) 300; (30 mos) 285; (36 mos) 271

Recruited in Year 4

  Funded FY02 (16 sites): (intake) 1472; (6 mos) 1398; (12 mos) 1328; (18 mos) 1262; data collection in remaining years only if necessary

  Funded FY03 (7 sites): (intake) 644; (6 mos) 611; (12 mos) 580; (18 mos) 551

  Funded FY04 (4 sites): (intake) 368; (6 mos) 350; (12 mos) 332; (18 mos) 316; (24 mos) 300

Year 5: All 27 sites complete data collection if data collection goals have not been met.

Year 6: All 27 sites complete data collection if data collection goals have not been met.

  1. Refers to the year of the national evaluation in which the family was recruited into the study. Across all sites, the national evaluation spans 6½ years. Although data collection will occur in years 2 through 6, recruitment ends in year 5, with follow-up data collection continuing in year 6. Any sites that have not met their participant recruitment goals will be allowed to continue data collection into the final service funding year (year 6) and for 6 months into the no-cost extension period, if applicable.

  2. Sites were funded across 3 years.

  3. Assumes 92 children and families recruited per site and 5 percent attrition at each data collection point. Expected counts apply 5 percent attrition at each successive data collection point, rounding to the nearest integer (e.g., in the first row, 1472 × 0.95 = 1398, 1398 × 0.95 = 1328, 1328 × 0.95 = 1262, and so on).
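
The expected counts in Table 5 follow directly from the rule in footnote 3; a few lines of Python reproduce the first row.

```python
# Reproduce footnote 3's arithmetic: 5 percent attrition at each
# 6-month data collection point, rounded to the nearest integer.
def retention_chain(recruited, waves, attrition=0.05):
    sizes = [recruited]
    for _ in range(waves):
        sizes.append(round(sizes[-1] * (1 - attrition)))
    return sizes

# 16 FY02-funded sites x 92 recruits = 1472 at intake, 36-month follow-up.
print(retention_chain(1472, waves=6))
# -> [1472, 1398, 1328, 1262, 1199, 1139, 1082]
```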


The purpose of the sampling strategy described above is to maximize the chance that the children who participate in the Child and Family Outcome Study are representative of the universe of children who enter the systems of care. If this is achieved, findings from the randomly selected sample are more likely to generalize to the overall client pool. Every effort will be made to recruit and follow the children who are randomly selected into the Child and Family Outcome Study. However, some of the families approached about entering the study can be expected to refuse to participate. When a family refuses, the next family that meets the selection criteria will be selected. Past experience indicates that sites vary in their ability to recruit Descriptive Study sample members into the Child and Family Outcome Study, with the majority of sites recruiting over 60 percent of the Descriptive Study sample into the Child and Family Outcome Study sample. To estimate the effect of refusals on the representativeness of the sample, families who refuse will be compared to the participating sample on, at minimum, demographic characteristics (see the Data Analysis Plan section above). Recall that descriptive data will be collected on all families that enter the system of care; these data provide the basis for such comparisons.


Experience from previous phases of the national evaluation has shown that, although sites can make estimates, it is difficult to predict precisely how many children will be served by the grantee systems of care. In addition, the number of children who enter the systems of care may increase over time as grantees expand their service capacity and enhance outreach efforts. For that reason, sampling strategies will have to remain flexible during the recruitment period and will be monitored closely by the National Evaluator. The sampling strategies are based on the sampling ratio approach to random selection described above. In the first year of their funding, grantees typically monitor the number of children that enter their systems of care. A sampling ratio was developed based on the first year of enrollment into the system of care. This sampling ratio has been tested in the first 3 months of data collection and is monitored throughout the recruitment period to ensure that it remains on target.


The actual process of recruitment differs across sites. This is necessary because children and families enter services differently across sites. For example, in one site, the primary portals of entry might be the schools, while in another it might be the court system. It is also likely that sites will have a variety of portals of entry (e.g., mental health centers, schools, and courts). Every effort has been made to ensure that the recruitment process is as standardized as possible across sites and at the various portals of entry. The rudiments of sample selection and recruitment have been documented in the national evaluation procedures manual, with additional guidelines developed specifically for each site. Training has been conducted at each site. Whether a family is to be recruited into the Child and Family Outcome Study (i.e., whether they are selected for inclusion in the sample) is determined as soon as it is known whether they meet the eligibility criteria. Intake workers, regardless of their location, training or service sector affiliation, are trained to conduct the consent to contact process in a uniform manner. Scripts are used to make sure that each potential participant receives the same information before agreeing to be contacted by the evaluation staff (see Attachment 3.B.). Similarly, evaluation staff have been trained to conduct the informed consent process uniformly. Standard forms are used to document refusals to be contacted or to participate in the study. These are established procedures in field research, and the National Evaluator continues to closely monitor them.


Service Experience Study. The sampling and recruitment procedures for the Multi-Sector Service Contacts, the Family and Youth Services Surveys, and the Cultural Competence and Service Provision Questionnaire are identical to those of the Child and Family Outcome Study; that is, they use the same randomly selected sample of children and families being served in all system of care sites. Thus, anticipated response and retention rates are the same as for the Child and Family Outcome Study.



Treatment Effectiveness Study. The Treatment Effectiveness Study is conducted in Oklahoma City, Oklahoma and Cleveland, Ohio, which were selected based on the selection criteria described and approved in the original OMB submission. As described and approved in the original proposal, screening criteria are used to identify a specific subpopulation of children and families who are appropriate candidates for Brief Strategic Family Therapy. These criteria include children 9 to 17 years of age with disruptive behavior disorders who are at risk for substance use. The DPS is used to identify this pool of candidates for consideration for the study.


The effect sizes of the phenomena of interest form the basis for determining the minimum sample size through a statistical power analysis. Children and families who enter the study will be randomly assigned to one of the two treatment conditions. To detect the small to medium effect size expected, at least 120 children and families at each of the two systems of care will need to be enrolled in this aspect of the study, based on the power analysis described in the original OMB submission. For a completely balanced design, this will require that 42 children and families in each condition complete data collection at 2 years post enrollment. Based on previous experience, we anticipate a retention rate of approximately 85 percent for the Treatment Effectiveness Study.


Sustainability Study. For each site, four site-level respondents (i.e., project director, key mental health representative, family organization representative, agency representative) are asked to complete the Web survey. The project director, the director of the local family organization and the two agency representatives who are asked to complete the survey are individuals interviewed for System of Care Assessments. Data collected from the sustainability survey will be integrated with data from their System of Care Assessment interviews. Previous experience indicates that the response rate for the sustainability study should be approximately 97 percent.


Culturally Competent Practices Study. Respondents for the Culturally Competent Practices Survey are mental health service providers from each of the 27 CMHI-funded communities. To identify these providers, a modified snowball sampling procedure was employed in an attempt to identify a comprehensive list of mental health service providers for each community, with efforts made to sample the same types of mental health providers (e.g., psychologists, psychiatrists, social workers) in each locale. A two-stage structured process was used to identify the comprehensive list of mental health service providers. The first stage involved structured telephone contact with the community project director, during which he or she was asked to identify all agencies and organizations that provide mental health services to children eligible for or enrolled in system of care services. The second stage involved contact with the agencies or organizations identified at stage one and a request for a list (including names and addresses) of their mental health clinicians. In addition, the second-stage contact requested identification of other agencies and organizations in the area that provide services to these same children.


After creating the provider lists, contact was made with a sample of providers in each community in order to recruit them to complete the survey. Assuming a 70 percent response rate, it was necessary to contact 43 providers in each community in order to ensure at least 30 respondents per community. Recruitment was conducted using the Dillman method for mail and Internet surveys. This method involves mailing out a pre-survey notification letter to selected providers that explains that the recipient will be asked to participate in a survey, followed 1 week later by an invitation letter containing an incentive and directions for logging onto a Web site to complete the Web-enabled survey.


Focus groups on culturally competent practices will be conducted during the final 3-year period. Recruitment for the focus groups varies by respondent type. Focus groups will be held in only two of the communities that were surveyed, with the choice of communities determined by the findings from the survey. Within these two communities, service providers will be sampled from the comprehensive list of providers created for these communities during the survey stage of the study. Providers who were contacted for the survey will be removed from the list before this second round of sampling occurs. Caregivers and youth will be selected from the families who participate in the Child and Family Outcome Study. Caregivers will be randomly selected from all caregivers in the Child and Family Outcome Study in the focus group communities. Selection of youth will be limited to those aged 11 and older, and decisions about which of these youth to contact will be made in conjunction with system of care staff so that only youth who will be able to function in a focus group setting are contacted for recruitment. In each focus group community, providers, caregivers, and youth will be contacted until 18 participants of each respondent type have been successfully recruited. This number permits two focus groups of 9 people each to be conducted for each respondent type, which allows a broad range of opinions to be voiced while keeping the groups small enough that everyone has an opportunity to speak. The administrator focus groups will involve administrators from multiple system of care communities and will be conducted at a system of care meeting. Using the results from the survey, communities will be divided into those that provide a high level of culturally competent services and those that provide a low or inconsistent level of culturally competent services. From each of these groupings, project directors will be randomly selected until 6 participants have been successfully recruited into each of the high and low/inconsistent culturally competent practices focus groups. Administrator focus groups will be smaller than the other groups because the pool of potential participants is smaller and because the administrators come from different communities, so more diversity is captured with fewer people.


Once potential focus group participants have been selected, they will be recruited by the National Evaluator using a screener developed by Macro International Inc. (see Attachment 4.I.2.). The screener will include a script informing potential participants that the discussion during the focus group will focus on the cultural competence of services provided by mental health practitioners, and they will be told how they were selected to be contacted. Potential participants will also be informed of the financial incentive for participation in the groups. At the end of the screening call, potential participants will be asked if they are interested and available to participate in the focus group. For those who do want to participate, a confirmation letter will be mailed to them and they will receive a reminder phone call the night before their session.


Family Education and Support Study. The Family Education and Support Study will be conducted using a three-tier design. Tier 1 will involve secondary analysis of existing data from the longitudinal Child and Family Outcome Study in previously funded communities. Based on the results of these analyses, critical domains and elements of family education and support will be identified and used to inform data collection in Tier 2.


For the focus groups conducted within up to six identified communities in Tier 2, service providers will be sampled from a comprehensive list of providers that will be created for these communities through contact with all involved child-serving agencies within the system of care. Caregivers will be randomly selected from all caregivers in the Child and Family Outcome Study in the focus group communities. In each focus group community, providers and caregivers will be contacted until 15 participants of each respondent type have been successfully recruited. This number permits one focus group of 9 people to be conducted for each respondent type, which allows a broad range of opinions to be voiced while keeping the groups small enough that everyone has an opportunity to speak.


For Tier 3, up to six sites will be selected from those targeted in Tier 2. As noted previously, our experience with other phases of the national evaluation indicates that there is considerable variability in the ways sites operate and implement their programs. It is anticipated that multiple family education and support models may emerge from the findings in Tier 2. The critical elements assessment will identify key components of these models at each site. To allow for an examination of the impact that variation in local implementation may have on outcomes, sites will be selected based on the variability of the family education and support implementation characteristics and the variability of the critical elements experience within sites. Within a site, only families with an identified need for family education and support services, as described in a plan of care, will be recruited into the study. Families will be assessed on their critical elements experience through data collected from the ongoing service management process (e.g., service planning meetings and case record reviews) that is part of a community's service provision process. Site selection may be determined by the timing of introduction of family education and support services subsequent to enrollment and how eligibility to receive such services is ascertained. For analyses within each site to examine the impact of variability in experience of critical elements on outcomes, a sample size of 50 in a site will achieve 80 percent power to detect an R-squared of 0.14 attributed to one independent variable with an alpha of 0.05.
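
The stated power figure can be checked with the noncentral F distribution, using Cohen's f-squared effect size; the sketch below is an approximation for illustration, not the evaluation's original power software.

```python
# Check the stated power: n = 50, one predictor, R-squared = 0.14,
# alpha = 0.05, in a multiple regression framework.
from scipy.stats import f, ncf

n, n_pred, r2, alpha = 50, 1, 0.14, 0.05
df_num, df_den = n_pred, n - n_pred - 1
f2 = r2 / (1 - r2)                 # Cohen's f-squared
ncp = f2 * n                       # noncentrality parameter
crit = f.ppf(1 - alpha, df_num, df_den)
power = 1 - ncf.cdf(crit, df_num, df_den, ncp)
print(f"power = {power:.2f}")      # approximately 0.80
```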


Primary Care Provider Study. Primary health care providers will be responding to a survey and will be recruited for participation using a list obtained from InfoUSA, the Macro International Inc. parent company. This survey will be administered at one time only in each community, which is the minimum needed to understand the pool of pediatricians and their perspectives. The list will be derived from ZIP Codes of children participating in the Phase IV funded sites. ZIP Code data of children will be pulled from the existing database. Once the list is obtained, a random stratified sample will be selected. The targeted yearly sample total will be 540. We anticipate a response rate of approximately 80 percent.


2. Information Collection Procedures



System of Care Assessment. The National Evaluator continues to collect data for this component during periodic site visits. Data collection includes semi-structured interviews with key informants, review of documents and randomly selected case records, and observations. To document changes in system of care development that occur over time, all system of care communities are visited three times during the 5 years of data collection (every 18–24 months), beginning in the second year of project funding and repeated in the fourth and sixth years of project funding. Data collection site visits are scheduled according to the relative development of the individual programs so that more advanced communities are scheduled first followed by all others until all have completed the data collection process within the timeframe allotted.


In previous phases of the evaluation, the System of Care Assessment protocol yielded an average of 21 individual interviews and 6 case record reviews per data collection site visit. The respondent category has been revised to include youth and youth coordinators. The interview guides were piloted and revised and can be found in Attachment 4.A.5. In addition to these informants, other key informants include the local project director, core child-serving agency representatives, representatives of family organizations, care coordinators, direct service providers, and caregivers of children who are receiving services through the system of care. The average time to obtain the required information from each person is about one hour. Prior to the site visit the National Evaluator sends out tables to be completed by the system of care community. These tables collect information on: 1) the structure and participants of the governing body, 2) trainings that have been provided on system of care principles, 3) demographics of program staff, 4) services provided in the system of care community’s service array, 5) amounts, sources, and types of funding, and 6) participants on the case review team. These completed tables are e-mailed to the National Evaluator approximately 4 weeks prior to the site visit. See Attachments 4.A.1. through 4.A.5. for System of Care Assessment protocols.


The IACS is administered to approximately 14 respondents per site visit, including project directors, core child-serving agency representatives, representatives from family organizations, care coordinators, and direct service providers. The System of Care Assessment interview guides, the IACS and the protocol for arranging for site visits and identifying potential respondents are presented in Attachment 4.A.6.


Services and Costs Study. For this evaluation component, data are compiled from existing records continually maintained in sites' fiscal Management Information Systems (MIS) for all children and families who receive services through the systems of care as part of routine operating procedures. Those data files are transmitted to the National Evaluator at regular intervals. The National Evaluator has become intimately familiar with how the data are collected and maintained, and the sites' purpose for and use of the data. This understanding is important for shaping how the data are analyzed and for ensuring that interpretations of findings are warranted.


Cross-Sectional Descriptive Study. Data for the Cross-Sectional Descriptive Study are collected at entry into services for all children and families in the grantee sites. Data for this component are collected by sites' intake staff, who are trained by the National Evaluator to ensure standard collection of these data. To standardize the collection of these data across sites, the National Evaluator has developed the Enrollment and Demographic Information Form (EDIF) and the Child Information Update Form (CIUF) (see Attachments 4.B.1. and 4.B.2.). The information can be collected from case records or from interviews conducted at intake. The National Evaluator strongly recommends that all grantees incorporate these items into their intake process. These data can be directly entered into a Web-based database by intake personnel to facilitate capture of basic descriptive characteristics of children served. There is no burden associated with the Enrollment and Demographic Information Form (EDIF) or Child Information Update Form (CIUF). The information collected in the Enrollment and Demographic Information Form (EDIF) includes elements required in the Guidance for Applicants (listed below) plus a few additional elements specific to the evaluation. The required descriptive information includes the following:


  • The number of children served by the CMHS service program

  • Demographic characteristics of the children and families

  • Diagnostic information on the child


For families participating in the Child and Family Outcome Study, the descriptive information that may change over time (e.g., diagnosis, insurance status) will also be collected at each follow-up data collection point using the Child Information Update Form (CIUF). Evaluation staff will collect these follow-up descriptive data elements in conjunction with other follow-up data collection for the Child and Family Outcome Study (see below). Again, the information collected in the Cross-Sectional Descriptive Study creates no additional respondent burden.


Child and Family Outcome Study. Data collection for this evaluation component begins in the second year of the grantees' funding. Because respondents' reading levels vary, the instruments are administered in interview format. This approach was successfully implemented in Phases II and III and continues to be successful in Phase IV. These data are collected at intake and at follow-up data collection points. In Phase IV, child and family outcome data are collected from a sample of children, their caregivers, and their service providers (instruments are provided in Attachment 4.C.). The CMHS program's Guidance for Applicants requires grantees to collect the following information on child and family outcomes:


  • Standardized assessments of child symptoms and social functioning

  • Functional indicators including school performance and contacts with law enforcement

  • Restrictiveness of child's service placements

  • Family functioning


Following children and families as long as possible allows the assessment of the long-term impact of the system and permits important functional outcomes to be assessed as children develop toward maturity (e.g., completion of high school). Thus, children and families who enter the study in the first year are followed for 36 months, those who enter in the second year are followed for 30 months, and those who enter in the third year are followed for 18 months.
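The cohort rule above implies a fixed set of 6-month data collection points for each family. The following minimal sketch (in Python, purely illustrative and not part of the evaluation's actual data collection tools) shows the schedule implied by this design, assuming follow-up interviews fall at 6-month intervals through the end of each cohort's window, consistent with the periodicity shown in Table 6:

# Follow-up window (in months) by year of study entry, per the design above.
FOLLOWUP_MONTHS = {1: 36, 2: 30, 3: 18}

def followup_points(entry_year):
    """Return the follow-up data collection points, in months after intake,
    for a family entering the study in the given year (1, 2, or 3)."""
    window = FOLLOWUP_MONTHS[entry_year]
    return list(range(6, window + 1, 6))

print(followup_points(1))  # [6, 12, 18, 24, 30, 36]
print(followup_points(3))  # [6, 12, 18]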


Seven measures are completed by youth 11 years of age and older: the Youth Services Survey, the Delinquency Survey, the Substance Use Scale, the GAIN Quick–R: Substance Problem Scale, the Youth Information Questionnaire, the Revised Children's Manifest Anxiety Scales, and the Reynolds Adolescent Depression Scale–Second Edition.


The following measures of child mental health and family outcomes were previously cleared by OMB for use during the first years of the project:


  • Information regarding the residential status of children is collected from caregivers using the Living Situations Questionnaire (LSQ). The LSQ replaces the Restrictiveness of Living Environments and Placement Stability Scale (ROLES), but the modifications affect only scoring and do not change respondent burden (see Attachment 4.C.1.).


  • To measure child clinical symptomatology, caregivers of children age 6 and older complete the Child Behavior Checklist (CBCL 6–18) (see Attachment 4.C.2.a.). The CBCL has been widely used in children's mental health services research to assess social competence, behaviors, and feelings.


  • The Caregiver Strain Questionnaire (CGSQ) (see Attachment 4.C.3.) is used to measure how families are affected by the special demands associated with caring for a child with serious emotional disturbance.


  • To identify the emotional and behavioral strengths of children, caregivers of children over age 5 complete the Behavioral and Emotional Rating Scale–Second Edition, Parent Rating Scale (BERS–2C). The BERS–2C is a strengths-based measure of social competence (see Attachment 4.C.4.).


    • To measure child clinical symptomatology in young children, caregivers of children under age 6 complete the Child Behavior Checklist 1½–5 (CBCL 1½–5) (see Attachment 4.C.2.b.). The CBCL has been widely used in children's mental health services research to assess social competence, behaviors, and feelings.


    • To measure children’s functioning in school environments, caregivers complete the Education Questionnaire (EQ) (see Attachment 4.C.5.).


  • The Family Life Questionnaire (FLQ) is used to assess how families interact and communicate (see Attachment 4.C.6.).


  • Youth complete the Delinquency Survey (DS). This measure identifies delinquent or risky behaviors, for which youth with mental illnesses may be at high risk (see Attachment 4.C.7.).


  • The GAIN Quick–R: Substance Problem Scale (GAIN) measures substance use, abuse, and dependence and is administered to youth (see Attachment 4.C.8.).


  • The Substance Use Scale (SUS) is administered to youth to determine alcohol, tobacco and drug use during the previous 30 days and 6 months (see Attachment 4.C.9.).


  • To determine if youth are experiencing anxiety, they are administered the Revised Children’s Manifest Anxiety Scales (RCMAS) (see Attachment 4.C.10.).


  • Youth are administered the Reynolds Adolescent Depression Scale–Second Edition (RADS–2) to assess if they are experiencing depression (see Attachment 4.C.11.).


  • A wide array of information about youth's experiences, perceptions, and symptoms is important to capture. The Youth Information Questionnaire (YIQ) is a compilation of questions, answered by youth, on a range of topics including coercion, acculturation, symptomatology, peer relations, employment status, suicidality, and neighborhood safety (see Attachment 4.C.12.).


  • To identify the emotional and behavioral strengths of children from their own perspective, youth complete the Behavioral and Emotional Rating Scale–Second Edition, Youth Rating Scale (BERS–2Y) (see Attachment 4.C.13.).


  • The Columbia Impairment Scale (CIS) is completed by caregivers of children over age 5 to measure children’s general level of functioning (see Attachment 4.C.14.).


  • The Vineland Screener (VS), which assesses development in young children, is completed by caregivers of children age 5 and younger (see Attachment 4.C.15.).


On-site data collectors, hired and managed by the sites, collect data in the funded systems of care. Who collects the data depends on the resources and needs of each site. For example, some sites may choose to hire two full-time staff to manage the local evaluation and collect all the data, while others might hire one full-time evaluator to manage the evaluation and collect data with flexible part-time staff.


The National Evaluator documents and monitors data collection procedures in the system of care sites to ensure the greatest possible uniformity in data collection across sites. In addition, evaluation staff and data collectors are trained using standard materials developed by the National Evaluator.


Service Experience Study. Data for the Service Experience Study are collected along with data for the Child and Family Outcome Study and include: 1) service contacts recorded on the Multi-Sector Service Contacts Questionnaire (MSSC) (see Attachment 4.D.1.); 2) assessments of service experience, satisfaction, and perceived outcomes with the Family and Youth Services Surveys (YSS–F and YSS) (see Attachments 4.D.2. and 4.D.3.); and 3) caregiver reports on the cultural competence of services provided, using the Cultural Competence and Service Provision Questionnaire (CCSP) (see Attachment 4.D.4.). The Service Experience Study also examines the congruence between the program's original design and what clients actually experience during implementation of that design. The Youth Services Surveys focus on whether the overall service system experienced by youth and their caregivers reflects the key principles of the system of care model. Caregivers and youth report their perceptions of whether the services they received were accessible, well coordinated, family focused, culturally competent, helpful in meeting therapeutic goals, and matched to the individual needs of the child and family.


This corresponds to the Guidance for Applicants (see Attachment 1), which requires sites to collect data on:


  • collaboration and coordination of system components,

  • family involvement in services, and

  • family and youth satisfaction with services.


Data for the Service Experience Study are collected in all system of care sites. These measures are completed at follow-up by families who have received services (as indicated by the gate question) and are participating in the Child and Family Outcome Study. On average, children and families complete 5 follow-up points.


Treatment Effectiveness Study. The Treatment Effectiveness Study assesses the effectiveness of Brief Strategic Family Therapy (BSFT), integrated into the system of care approach, versus system of care services as usual, on clinical outcomes; it also tracks provider attitudes about evidence-based treatment (measured with the EBP Provider Attitudes Survey). In addition to the provider measure and the Child and Family Outcome Study measures, the following measures are collected:


    • Approximately 262 caregivers whose children, through initial screening, appear to have the diagnostic characteristics for BSFT are recruited for the Treatment Effectiveness Study. Caregivers complete selected modules of the DISC Predictive Scale (DPS) (see Attachment 4.E.2.). Because provider diagnoses can be unreliable, the DPS modules are administered to caregivers to ensure that children in the study meet the diagnostic criteria for inclusion in the Treatment Effectiveness Study. It is expected that the DPS will confirm that approximately 240 of the 262 children (about 92 percent) have the diagnosis required for receiving the selected evidence-based treatment. The modules used in this study include the disruptive behavior disorders module and the substance abuse module, but may also include other modules depending on the specific evidence-based treatment implemented at a selected system of care community.


    • Approximately 240 caregivers of children enrolled in the Treatment Effectiveness Study are expected to complete two fidelity measures that focus on the treatment process, the Therapeutic Alliance Scale–Caregiver version and the Revised Therapy Procedures Checklist (see Attachment 4.E.5.a.). The Therapeutic Alliance Scale is administered at 1, 2, and 3 months after treatment begins; 3 months corresponds to the estimated end of treatment for the BSFT intervention group (i.e., post-test). The Therapy Adherence Form (TAF–R) (see Attachment 4.E.6.) is administered only at 3 months, which corresponds to the end of treatment for the BSFT intervention group (i.e., post-test).


    • Approximately 240 youth enrolled in the Treatment Effectiveness Study are expected to complete one fidelity measure that focuses on the treatment process, the Therapeutic Alliance Scale–Youth version (see Attachment 4.E.5.b.). This measure is administered at 1, 2, and 3 months after treatment begins; 3 months corresponds to the estimated end of treatment for the BSFT intervention group (i.e., post-test).


    • Approximately 240 caregivers of children enrolled in the Treatment Effectiveness Study are expected to be administered two family outcome measures, the Family Assessment Measure General Scale and the Conflict Behavior Questionnaire (see Attachment 4.E.1.). These measures are administered at baseline, 3 months, 6 months, and every 6 months thereafter up to 18 months. The 3-month follow-up assessment corresponds to the estimated end of treatment for the BSFT intervention group (i.e., post-test). In addition, a clinical outcome measure, the Ohio Scales (see Attachment 4.E.4.), is administered to approximately 240 caregivers and 240 youth age 9 or older at baseline and at the 3-month follow-up, which corresponds to the estimated end of treatment for the BSFT intervention group. Given that BSFT is a family therapy intervention, this measure obtains information from multiple perspectives in the family. (The full assessment calendar implied by these bullets is sketched after this list.)


    • No more than 50 service providers serving children enrolled in the Treatment Effectiveness Study are administered the Evidence-Based Practices Provider Attitudes Survey to track changes in attitudes over time (see Attachment 4.E.7.).
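Taken together, the bullets above define a per-instrument assessment calendar for the Treatment Effectiveness Study. The following minimal sketch (in Python; the shorthand instrument labels and the month-zero baseline convention are illustrative assumptions, not official study terminology) summarizes that calendar:

# Months (from baseline/start of treatment) at which each Treatment
# Effectiveness Study instrument is administered, per the bullets above.
TE_SCHEDULE = {
    "DPS (caregiver)": [0],                           # diagnostic screening at entry
    "TAS (caregiver and youth versions)": [1, 2, 3],  # 3 months ~ BSFT post-test
    "TAF-R (caregiver)": [3],
    "FAM and CBQ (caregiver)": [0, 3, 6, 12, 18],
    "Ohio Scales (caregiver; youth age 9+)": [0, 3],
}

def instruments_due(month):
    """List the instruments scheduled at a given month of the calendar."""
    return [name for name, months in TE_SCHEDULE.items() if month in months]

print(instruments_due(3))  # all instruments marking the BSFT post-test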


Sustainability Study. The Sustainability Study involves collecting data in each grantee community via a Web-based survey. This study gathers data on system of care characteristics and factors related to sustainability, and monitors and evaluates sites' success in sustaining their systems of care post-funding. The Sustainability Survey is completed by four selected staff (i.e., project director, family organization representative, agency representative, key mental health representative) from each grantee site in years 2, 4, and 5 of the evaluation (see Attachments 4.G.1. and 4.G.2.).


Following recruitment activities and verification of contact information, the survey is distributed by e-mail or mail. The National Evaluator will continue to implement this Web-based survey, adhering to accepted methods for mail and Internet surveys. After initial solicitation of participation by a key individual in each site and identification of appropriate survey participants, a pre-survey letter explaining that the recipient will be asked to participate in a survey is sent to the selected staff in each community, followed 1 week later by a letter containing a token incentive and directions for logging onto a Web site to complete the Internet survey. Instructions are also provided for obtaining a hard copy of the survey if desired. A follow-up reminder postcard is sent 1 week later, and 1 week after that, a letter containing a hard copy of the survey is sent to all selected staff who have not completed the Web survey. Two weeks later, another copy of the survey is sent by registered mail to all non-respondents, and telephone reminder calls are made to any remaining non-respondents. Respondents are then contacted to schedule an appointment for their telephone follow-up interview, and receive a letter explaining the interview and an informed consent form to sign and return prior to the interview. These data collection instruments and procedures are the same as those previously approved by OMB for Phases II, III, and IV of the national evaluation.
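The contact sequence described above follows a fixed cadence of offsets from the pre-survey letter. A minimal sketch of that cadence (in Python; the start date is an illustrative assumption, and the step labels paraphrase the mailing steps above):

from datetime import date, timedelta

# Contact steps and their offsets, in weeks, from the pre-survey letter.
MAILING_STEPS = [
    (0, "Pre-survey letter"),
    (1, "Letter with token incentive and Web survey log-in directions"),
    (2, "Reminder postcard"),
    (3, "Hard-copy survey to those who have not completed the Web survey"),
    (5, "Registered-mail copy of the survey to remaining non-respondents"),
]

def mailing_schedule(start):
    """Return (date, step) pairs for one survey wave; telephone reminders
    to any remaining non-respondents follow the final mailing."""
    return [(start + timedelta(weeks=w), step) for w, step in MAILING_STEPS]

for when, step in mailing_schedule(date(2007, 4, 2)):
    print(when.isoformat(), "-", step)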


Data collected for this component correspond to the Guidance for Applicants (see Attachment 1), which requires sites to collect data on their progress toward becoming increasingly sustainable over the life of the award, with the amount of program funding from non-award sources increasing incrementally in each year of the award.


Culturally Competent Practices Study. Phase 1 of data collection for this component, a Web-based survey, has been completed. Phase 2 consists of focus groups that will be held in year 4 of the evaluation. Two communities will be selected for qualitative focus groups: one that ranked high in cultural competence on the Web survey and one that ranked low or variable on the survey. In each of the two communities there will be two focus groups each with service providers, caregivers, and youth (i.e., six focus groups per community). Each focus group will have nine participants, for a total of 108 in-community participants. Additionally, administrator focus groups will be held at a System of Care Meeting. There will be two administrator focus groups of six administrators each, one drawn from communities that ranked high in culturally competent care in the provider survey and one from communities that ranked low or inconsistent in the survey. Moderator guides for the focus groups are attached (see Attachment 4.I.3.). The National Evaluator will collect the data in Phase 2.


Family Education and Support Study. This study examines the relative impact of receiving the critical elements of family education and support on child and family outcomes. In addition to the Child and Family Outcome Study measures, the following measures will be collected for Tier 3:


  • Approximately 300 caregivers will be administered the Duke Social Support Scale (DSSS) to assess the levels of social support that families receive. The DSSS will be administered three times at 6-month intervals: baseline, 6 months, and 12 months.


  • Approximately 300 caregivers will complete one measure to assess caregiver functioning and stress, the Beck Depression Inventory (BDI), a self-report measure of depression for adults ages 17 to 80. The BDI assesses cognitive symptoms such as hopelessness, irritability, and feelings of guilt. Questions will be added to the BDI to assess parents' use of mental health services. The BDI will be administered three times at 6-month intervals: baseline, 6 months, and 12 months.


  • Approximately 300 caregivers will be administered the Vanderbilt Mental Health Services Self-Efficacy Questionnaire to measure a parent's sense of self-efficacy in accessing mental health services for his or her child. The Vanderbilt will be administered three times at 6-month intervals: baseline, 6 months, and 12 months.


  • Approximately 300 caregivers will complete two measures to assess parenting skills and parental involvement: the Alabama Parenting Questionnaire (APQ) and the Parenting Sense of Competence Scale (PSOC), which examine the relation between parenting practices and child behavior problems. Both the APQ and the PSOC will be administered three times at 6-month intervals: baseline, 6 months, and 12 months.


Primary Care Provider Study. Data for the Primary Care Provider Study come from descriptive information on participating children’s health status, care and financing that will be collected continually throughout the national evaluation as part of the Child and Family Outcomes Study. In addition, participating primary care providers from each of the funded sites will complete a one-time survey. This instrument is attached (see Attachment 4.G.2.).


Table 6 summarizes the respondent, data collection procedure, and periodicity for each measure.


Table 6

Instrumentation, Respondents, and Periodicity



System of Care Assessment (all sites)

Measure: System of Care Assessment Tool (Interview Guides and Data Collection Forms)
Indicators: Family focus; individualized services; cultural competence; interagency collaboration; service coordination; service array; system and service accessibility; community-based services; least restrictive service provision
Data Source(s): Project staff, core agency representatives, service providers, family members, youth, youth coordinators, other constituents
Method: Document review; interview
When Collected: Every 18–24 months

Measure: Interagency Collaboration Scale (IACS)
Indicators: Interagency collaboration
Data Source(s): IACS completed by project staff, core agency representatives, service providers, family members, other constituents
Method: Survey
When Collected: Every 18–24 months

Services and Costs Study (all sites; caregivers: all enrolled in the Child and Family Outcome Study)

Measure: Management Information Systems (MIS)
Indicators: Previous service history; service setting and type; level of restrictiveness; mix of services; amount and duration; continuity of care; service costs; funding sources and third-party reimbursements
Data Source(s): MIS systems maintained by State and local agencies
Method: Data abstraction
When Collected: Continuously; data transmitted at regular intervals

Child and Family Outcome Study (a sample of children and families enrolled in the system of care)

Measure: Caregiver Information Questionnaire (CIQ)
Indicators: Age; educational level and placement; socioeconomic status; race/ethnicity; parents' employment status; living arrangement; presenting problem(s); intake/referral source; risk factors for family and child; child and family physical health; coercion for services; service use
Data Source(s): Caregiver
Method: Interview
When Collected: Intake, 6 months, and every 6 months thereafter

Measure: Living Situations Questionnaire (LSQ)
Indicators: Living situations; number of placements; restrictiveness of placements
Data Source(s): Caregiver
Method: Interview
When Collected: Intake, 6 months, and every 6 months thereafter

Measure: Behavioral and Emotional Rating Scale (BERS)
Indicators: Strengths; social competence
Data Source(s): Caregiver of children age 6 and older
Method: Interview
When Collected: Intake, 6 months, and every 6 months thereafter

Measure: Child Behavior Checklist (CBCL) and Child Behavior Checklist 1½–5 (CBCL 1½–5)
Indicators: Symptomatology; social competence
Data Source(s): Caregiver
Method: Interview
When Collected: Intake, 6 months, and every 6 months thereafter

Measure: Education Questionnaire (EQ)
Indicators: Functioning in school environments
Data Source(s): Caregiver
Method: Interview
When Collected: Intake, 6 months, and every 6 months thereafter

Measure: Family Life Questionnaire (FLQ)
Indicators: Family interaction and communication
Data Source(s): Caregiver
Method: Interview
When Collected: Intake, 6 months, and every 6 months thereafter

Measure: Vineland Screener (VS)
Indicators: Development; personal and social sufficiency
Data Source(s): Caregiver of children age 5 and younger
Method: Interview
When Collected: Intake, 6 months, and every 6 months thereafter

Measure: Columbia Impairment Scale (CIS)
Indicators: General functioning
Data Source(s): Caregiver of children age 6 and older
Method: Interview
When Collected: Intake, 6 months, and every 6 months thereafter

Measure: Caregiver Strain Questionnaire (CGSQ)
Indicators: Caregiver strain
Data Source(s): Caregiver
Method: Interview
When Collected: Intake, 6 months, and every 6 months thereafter

Measure: Behavioral and Emotional Rating Scale–Second Edition, Youth Rating Scale (BERS–2Y)
Indicators: Strengths; social competence
Data Source(s): Youth
Method: Interview
When Collected: Intake, 6 months, and every 6 months thereafter

Measure: Delinquency Survey (DS)
Indicators: Delinquent or risky behaviors
Data Source(s): Youth 11 years and older
Method: Interview
When Collected: Intake, 6 months, and every 6 months thereafter

Measure: GAIN Quick–R: Substance Problem Scale (GAIN)
Indicators: Substance use, abuse, and dependence
Data Source(s): Youth 11 years and older
Method: Interview
When Collected: Intake, 6 months, and every 6 months thereafter

Measure: Substance Use Scale (SUS)
Indicators: Alcohol, tobacco, and drug use
Data Source(s): Youth 11 years and older
Method: Interview
When Collected: Intake, 6 months, and every 6 months thereafter

Measure: Revised Children's Manifest Anxiety Scales (RCMAS)
Indicators: Child anxiety
Data Source(s): Youth 11 years and older
Method: Interview
When Collected: Intake, 6 months, and every 6 months thereafter

Measure: Reynolds Adolescent Depression Scale–Second Edition (RADS–2)
Indicators: Child depression
Data Source(s): Youth 11 years and older
Method: Interview
When Collected: Intake, 6 months, and every 6 months thereafter

Measure: Youth Information Questionnaire (YIQ)
Indicators: Acculturation; coercion; peer relations; symptomatology; suicidality; neighborhood safety; presenting problems; employment status
Data Source(s): Youth 11 years and older
Method: Interview
When Collected: Intake, 6 months, and every 6 months thereafter

Service Experience Study

Measure: Multi-Sector Service Contacts (MSSC)
Indicators: Type of service; amount of service; location of service
Data Source(s): Caregiver
Method: Interview
When Collected: Every 6 months after intake if services received

Measure: Youth Services Survey–Families (YSS–F)
Indicators: Service experience; client satisfaction; perceived outcomes
Data Source(s): Caregiver
Method: Interview
When Collected: Every 6 months after intake if services received

Measure: Youth Services Survey (YSS)
Indicators: Service experience; client satisfaction; perceived outcomes
Data Source(s): Youth 11 years and older
Method: Interview
When Collected: Every 6 months after intake if services received

Measure: Cultural Competence and Service Provision Questionnaire (CCSP)
Indicators: Cultural competence
Data Source(s): Caregiver
Method: Interview
When Collected: Every 6 months after intake if services received

Treatment Effectiveness Study

Measure: Diagnostic Interview Schedule for Children–Predictive Scales (DPS)
Indicators: DSM-IV diagnosis
Data Source(s): Caregiver
Method: Interview
When Collected: Entry into treatment, or within 6 months of enrollment in system of care

Measure: Therapeutic Alliance Scale (TAS)–Caregiver; Therapeutic Alliance Scale (TAS)–Youth; Therapy Adherence Form–Revised (TAF–R)
Indicators: Adherence to evidence-based treatment; system of care service experience
Data Source(s): Youth 9 years and older or caregivers
Method: Interview
When Collected: TAS at 1, 2, and 3 months after treatment begins; TAF–R at 3 months

Measure: Evidence-Based Practices Provider Attitudes Survey
Indicators: Attitudes about the implementation and practice of evidence-based treatment
Data Source(s): Clinician/case manager
Method: Interview
When Collected: Once per provider, at the time the first study family is added to the provider's caseload

Measure: Conflict Behavior Questionnaire (CBQ); Family Assessment Measure (FAM); Ohio Scales–Caregiver; Ohio Scales–Youth
Indicators: Treatment-specific outcomes
Data Source(s): Caregiver and youth 9 years and older
Method: Interview
When Collected: At intake and every 6 months up to 18 months (pre- and post-evidence-based treatment for the Ohio Scales)

Family Education and Support Study

Measure: Duke Social Support Scale (DSSS)
Indicators: Social support
Data Source(s): Caregiver
Method: Interview
When Collected: Baseline, 6, and 12 months

Measure: Beck Depression Inventory (BDI)
Indicators: Depression symptoms; loneliness; feelings of guilt; parent use of mental health services; caregiver functioning
Data Source(s): Caregiver
Method: Self-report
When Collected: Baseline, 6, and 12 months

Measure: Alabama Parenting Questionnaire (APQ)
Indicators: Parenting skills; parental involvement
Data Source(s): Caregiver
Method: Interview
When Collected: Baseline, 6, and 12 months

Measure: Vanderbilt Mental Health Services Self-Efficacy Questionnaire
Indicators: Mental health services self-efficacy
Data Source(s): Caregiver
Method: Interview
When Collected: Baseline, 6, and 12 months

Measure: Parenting Sense of Competence Scale (PSOC)
Indicators: Parenting skills; parental involvement
Data Source(s): Caregiver
Method: Interview
When Collected: Baseline, 6, and 12 months

Primary Care Provider Study

Measure: Primary Care Provider Survey
Indicators: Health care provision; overall approach and screening for mental health disorders; diagnostic and treatment approaches for mental health disorders; familiarity and collaboration with local system of care; organizational and financing factors affecting the provision of mental health care
Data Source(s): Primary health care provider
Method: Survey
When Collected: Once in year 4

Culturally Competent Practices Study

Measure: Focus Groups
Indicators: Caregiver and youth attitudes, perceptions, and needs; provider practices, perceptions, and attitudes
Data Source(s): Caregivers, youth, administrators, service providers
Method: Focus groups
When Collected: Once in year 4




3. Methods to Maximize Response Rates



To maximize the response rate for all data collection efforts, a number of steps are taken:


The National Evaluator continues to take an active role in providing technical assistance and support to the grantee sites by providing: 1) a detailed Data Collection Procedures Manual, 2) an initial training on evaluation protocols, 3) evaluation workshops at semi-annual national meetings, 4) one-on-one contact with national evaluation liaisons, 5) regular teleconferences and site visits throughout the evaluation period, 6) forums for cross-community facilitated discussions, 7) reading materials, and 8) additional guidance and information as questions arise. In addition, resources to ensure that site evaluators know when an interview is due are provided in the form of a tracking system, built in Microsoft Access specifically for this evaluation, and reminder e-mails generated by the Internet-based data collection system; together these eliminate the need for sites to duplicate effort and expense in designing local tracking materials.


Additionally, the National Evaluator provides mechanisms for sites to communicate with the National Evaluator and with other sites, including an Internet-based listserv that facilitates communication about training and technical assistance regarding evaluation implementation and utilization. The listserv allows site evaluators to communicate with the National Evaluator and each other through group e-mail: any message sent to the listserv is automatically distributed to all site evaluators, at no cost to them. In addition, a computer bulletin board has been established to provide a safe avenue for exchanging electronic copies of documents, such as evaluation reports and research instruments, for training and technical assistance purposes.


Special training efforts are conducted in communities with smaller service populations to ensure that as many people as possible from the target population are enrolled and that site staff are familiar with methods for maximizing response rates. The National Evaluator encourages these sites to keep in frequent contact with study participants in order to update telephone numbers and addresses and to create an identifier for the site to engage families. The National Evaluator also provides these sites with contact information for staff from other sites that have had high response rates and assists them in applying strategies that have been used successfully in other communities.


To help ensure that data are collected regularly and in keeping with national evaluation standards, the data collection staff at the local sites work closely with local providers, staff from various agencies, and evaluation staff. These contacts keep the evaluation and its data collection procedures in focus and surface any questions or concerns of the participating providers or agencies. Local parent groups have also been enlisted to encourage the cooperation of families in providing child and family information.


Following from the national evaluation standards, information is collected from participants in the longitudinal Child and Family Outcome Study to facilitate contacting them in the future. This includes the names, phone numbers, and addresses of close friends and family members who are likely to know where the participants are if they move. At the time of follow-up data collection, staff attempt to contact respondents at different times of the day and week using a variety of methods (e.g., phone calls, mailed postcards). This process continues until the determination is made that a family has refused further participation or cannot be found. Efforts to contact respondents for follow-up data collection begin by 1 month before the follow-up interview is due (a sketch of this contact window follows the list below). Other efforts to increase the response rate include:


  • providing an incentive payment for completing follow-up interviews;

  • administering the instruments to children and their parents/caregivers at times and settings of their choice, and administering multiple instruments at one time;

  • developing a close working relationship between the data collection staff and providers at each site to facilitate tracking;

  • conducting follow-up and informational mailings throughout the study period to maintain contact with study participants;

  • using a centralized data collection and tracking system involving trained interviewers and at least one person dedicated to tracking study participants over time to keep study attrition to a minimum;

  • employing proven tracking techniques (e.g., requesting address corrections from the post office for forwarded mail, using CD-ROM software with names and addresses, employing locator services to search for respondents);

  • obtaining permission from caregivers for evaluators to contact other agencies to get new addresses and phone numbers if the family has moved since the last interview; and

  • providing sites with useful feedback on data obtained through the evaluation activities that will assist them in planning and service delivery.
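As noted before the list, contact attempts begin by 1 month before a follow-up interview is due. A minimal sketch of that contact window (in Python; treating a month as 30 days is a simplifying assumption, since an operational tracking system would use calendar months):

from datetime import date, timedelta

def contact_window(intake, followup_month):
    """Return the date by which contact attempts should begin (1 month
    before the due date) and the approximate follow-up due date."""
    due = intake + timedelta(days=30 * followup_month)
    return due - timedelta(days=30), due

# Example: for a family with a January 15, 2007 intake, contact attempts
# for the 6-month follow-up begin about a month before the due date.
start, due = contact_window(date(2007, 1, 15), 6)
print(start.isoformat(), due.isoformat())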


Recognized strategies to maximize response rates for mail and Internet surveys have been employed for the Sustainability Survey and will be employed for the Primary Care Provider Study. These methods include pre-survey notification mailings, survey mailings with explanatory cover letters and incentives, and follow-up postcards, letters, and phone calls.


Focus group participants for the Culturally Competent Practices Study and the Family Education and Support Study will be sampled from comprehensive lists of respondent categories (e.g., caregivers and providers) created for each community: providers will be identified through contact with all involved child-serving agencies within the system of care, and caregivers will be recruited from the Child and Family Outcome Study. Respondents will be randomly selected in the focus group communities and contacted until 15–18 participants of each respondent type have been successfully recruited. Recruiting this many is needed to maximize participation and seat nine participants per group, enough to allow a broad range of opinions to be voiced while keeping each group small enough that everyone has an opportunity to speak. (A sketch of this recruit-until-target logic follows.)
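A minimal sketch of the recruit-until-target logic (in Python; the frame size and consent rate are placeholder assumptions for illustration, not study estimates):

import random

def recruit(frame, target=15, consent_rate=0.6, seed=0):
    """Contact randomly ordered members of a sampling frame until the
    recruitment target is reached or the frame is exhausted; consent_rate
    is a placeholder standing in for real contact outcomes."""
    rng = random.Random(seed)
    recruited = []
    for person in rng.sample(frame, k=len(frame)):
        if rng.random() < consent_rate:
            recruited.append(person)
        if len(recruited) >= target:
            break
    return recruited

# Example: recruit 15 caregivers from a hypothetical 60-person frame so that
# roughly nine can be expected to attend the focus group.
frame = ["caregiver_%d" % i for i in range(60)]
print(len(recruit(frame)))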






4. Tests of Procedures




Many instruments used in Phase IV are standardized instruments that have been tested through use in children's mental health services research and practice. These include the Child Behavior Checklist, the Behavioral and Emotional Rating Scale, the GAIN Quick–R: Substance Problem Scale, the Youth Services Surveys, the Revised Children's Manifest Anxiety Scales, the Reynolds Adolescent Depression Scale–Second Edition, and the Interagency Collaboration Scale. Selection of measures was based on expert panel reviews and an assessment of measurement quality as reported in the literature. (Information on the reliability and validity of the measures and other supporting materials appears along with the instruments in Attachment 4.) Decisions about Phase IV instrumentation were made in conjunction with expert reviewers, site representatives, and family members. These consultants are listed in Attachment 2.


In addition to providing input into the selection of standardized instruments, the team of consultants suggested measures to be removed from the evaluation and specific items to include (which have been incorporated into the new and revised measures). New and revised measures have been administered to determine burden estimates. Additionally, the measures were tested with fewer than 10 caregivers and youth for face validity, understandability, and additional burden estimates. The following new and revised measures are used in Phase IV:


  • The Family Life Questionnaire

  • Cultural Competence and Service Provision Questionnaire

  • Youth Services Survey (youth and caregiver versions)

  • Interagency Collaboration Scale

  • GAIN Quick–R: Substance Problem Scale

  • Revised Children’s Manifest Anxiety Scales

  • Reynolds Adolescent Depression Scale–Second Edition

  • Caregiver Information Questionnaire (Baseline and follow-up versions)

  • Education Questionnaire

  • Living Situations Questionnaire

  • Multi-Sector Service Contacts

  • Delinquency Survey

  • Substance Use Scale

  • Child Behavior Checklist

  • Behavioral and Emotional Rating Scale

  • Caregiver Strain Questionnaire


In addition to the measures listed above, 14 protocols have been developed for the System of Care Assessment, to be used with a variety of interview respondents, including agency representatives, project directors, family and youth representatives, evaluation team members, case review team members, service providers, caregivers, and youth.


The majority of measures used in the Treatment Effectiveness Study are included in the Child and Family Outcome Study and have been used in prior phases of the national evaluation. Of the measures unique to the Treatment Effectiveness Study, the DISC Predictive Scales have been used in prior phases of the national evaluation, as has the Evidence-Based Practices Provider Attitudes Survey, which gathers evidence-based treatment attitude information at baseline. The Therapeutic Alliance Scale and the Therapy Adherence Form are the treatment fidelity measures; they assess the therapist/client relationship and the therapeutic orientation of the clinician from the perspectives of the caregiver and the youth. The Family Assessment Measure General Scale (FAM), the Conflict Behavior Questionnaire (CBQ), and the Ohio Scales assess the effect of Brief Strategic Family Therapy on child and family outcomes; the Ohio Scales gather functioning, symptomatology, and satisfaction information. These measures have been piloted within each community and shown to have good psychometric properties. The Treatment Effectiveness Study measures were approved on March 16, 2005 through an OMB desk review.


Moderator's guides for the focus groups to be held in year 4 of the evaluation were developed based on initial findings of the Web survey for the Culturally Competent Practices Study. As described above, two communities will be selected for qualitative focus groups: one that ranked high in cultural competence on the Web survey and one that ranked low or variable. In each of the two communities there will be two focus groups each with service providers, caregivers, and youth (i.e., six focus groups per community), and each focus group will have nine participants, for a total of 108 in-community participants. Additionally, two administrator focus groups will be held at a System of Care Meeting, each consisting of six administrators, one group drawn from communities that ranked high in culturally competent care in the provider survey and one from communities that ranked low or inconsistent in the survey.


Tests of Procedures for New Study Components


The majority of measures to be used in the Family Education and Support Study, including the Caregiver Strain Questionnaire, are included in the Child and Family Outcome Study. The new measures unique to the FES study include: the Duke Social Support Scale, to assess social support; the Alabama Parenting Questionnaire and the Parenting Sense of Competence Scale, to assess parenting skills and parental involvement; the Beck Depression Inventory, to assess caregiver functioning; and the Vanderbilt Mental Health Services Self-Efficacy Questionnaire, to assess self-efficacy in accessing mental health services. These measures are standardized instruments that have been tested in mental health services research and practice.


The Primary Care Provider Survey was developed based on the qualitative data obtained from the focus groups during year 2. The survey has been pilot tested to ensure the face validity and understandability of the items.

All the measures for Phase IV have been translated into Spanish. The reliability and validity of the Spanish CBCL have been reported in the literature. Translation of measures has been conducted using established procedures, as in earlier phases: first, experienced bilingual translation consultants translated the measures from English to Spanish; then, to maximize the accuracy of the translation, selected sections of each measure were back-translated from Spanish to English by other translators.


5. Statistical Consultants



The National Evaluator has full responsibility for the development of the overall statistical design, and assumes oversight responsibility for data collection and analysis for Phase IV. Training, technical assistance, and monitoring of data collection will continue to be provided by the National Evaluator. The individual responsible for overseeing data collection and analysis is:


Brigitte Manteuffel, Ph.D.

Macro International Inc.

3 Corporate Square, Suite 370

Atlanta, GA 30329

(404) 321-3211


The following individuals will serve as statistical consultants to this project:


Michael Foster, Ph.D.

University of North Carolina at Chapel Hill

School of Public Health

Department of Maternal and Child Health

CB 7445

Chapel Hill, NC 27599

(814) 865-1923


Paul Greenbaum, Ph.D.

Florida Mental Health Institute

University of South Florida

13301 Bruce B. Downs Boulevard

Tampa, FL 33612

(813) 974-4552


Anna Krivelyova, M.S.

Macro International Inc.

3 Corporate Square, Suite 370

Atlanta, GA 30329

(404) 321-3211


Robert Stephens, Ph.D., M.P.H.

Macro International Inc.

3 Corporate Square, Suite 370

Atlanta, GA 30329

(404) 321-3211


The agency staff person responsible for receiving and approving contract deliverables is:


Sylvia Kay Fisher, Ph.D.

Program Director for Evaluation

Child, Adolescent and Family Branch

Center for Mental Health Services

Substance Abuse and Mental Health Services Administration

1 Choke Cherry Road, Room 6-1047

Rockville, Maryland  20857

(240) 276-1923



