
Cross-Site Evaluation of the National Child Traumatic Stress Initiative (NCTSI)

OMB: 0930-0276


Supporting Statement





Cross-site Evaluation of the National Child Traumatic Stress Initiative

















Center for Mental Health Services

Substance Abuse and Mental Health Services Administration






Supporting Statement for the Cross-Site Evaluation of the National Child Traumatic Stress Initiative

A. JUSTIFICATION

A1. Circumstances of Information Collection

Background

The Center for Mental Health Services (CMHS), Substance Abuse and Mental Health Services Administration (SAMHSA), is requesting a revision from OMB to approve data collection associated with the cross-site evaluation of the National Child Traumatic Stress Initiative (NCTSI). The current approval is under OMB No. 0930-0276, which expires on 4/30/2009.

The prevalence of traumatic events is high in the United States, for children and adults alike, and especially for adults who experienced traumas in childhood, as findings from the National Comorbidity Survey (NCS) and other surveys and studies indicate. Conducted between 1990 and 1992, the NCS found that the lifetime prevalence of experiencing a traumatic event severe enough to cause psychopathology, such as posttraumatic stress disorder (PTSD), is more than 50%. Approximately 20% to 25% of individuals who experience a traumatic event go on to develop PTSD, and the lifetime prevalence of PTSD is estimated to be nearly 8% in the general population. Qualifying events for PTSD and other trauma-related disorders were common, with many respondents reporting several such events during their lifetimes (Kessler, Sonnega, Bromet, Hughes, & Nelson, 1995). Moreover, many of the traumas reported by respondents in this sample of Americans aged 15–54 were actually experienced in childhood.

Children’s experience of trauma and trauma-related disorders occurs as a result of child maltreatment; witnessing or experiencing community or domestic violence; accidents; injury; terrorist acts or events due to war; witnessing or experiencing natural disasters, such as floods, hurricanes, and fires; experiencing loss of family or friends; displacement and refugee trauma; and medical trauma—all of which can have devastating effects on children and their families. The current climate of war and heightened risk of terrorism in the United States only increase the potential that children will experience trauma. Even for those who have not experienced war and terrorism firsthand, the media have made such events readily accessible. A number of recent studies have reported an association between televised violence and diagnosable PTSD (Brener, Simon, Anderson, Barrios, & Small, 2002; Ozmert, Toyran, & Yurdakok, 2002; Pfefferbaum, 1999; Terr et al., 1999). Recent research continues to document that trauma often leads to a wide range of psychopathologies capable of having lifelong effects and intergenerational impact (Breslau, Davis, Andreski, & Peterson, 1991; Hubbard, Realmuto, Northwood, & Masten, 1995; National Advisory Mental Health Council, 2001).

Beyond violence in the local and global community, many children incur just as much risk, if not more, of traumatic experience in their own homes or at the hands of someone close to them. In 2002, an estimated 896,000 children were victims of child abuse or neglect in the United States, and more than 60% were neglected by their parents or other caregivers (U.S. Department of Health and Human Services, 2002). For children surviving maltreatment, the negative impact on their psychological health can be significant and enduring. Studies of children in violent homes have shown increased dissociation and other trauma-related symptoms relative to children in nonviolent homes (Rossman, 1999), as well as lower self-esteem, lower levels of social functioning, and higher levels of depression, anxiety, behavioral problems, and aggressive behavior (Fantuzzo et al., 1991; Graham-Bermann, 1998). Long-term impairments in adulthood have included sexual disturbances, anxiety and fear, depression, low self-esteem, aggressive behavior, PTSD, and interpersonal problems (Silverman, Reinherz, & Gianconia, 1996).

Many studies have documented the long-term negative effect on children of a range of different trauma exposures (Honig, Grace, Lindy, Newman, & Titchener, 1993; Hubbard et al., 1995; Išpanovic-Radojkovic, 1993; Jones & Ribbe, 1991; Widom, 1989). Decades after her landmark child trauma study, researcher Lenore Terr described how childhood psychic trauma appears to be a crucial etiological factor in the development of a number of serious disorders in both childhood and adulthood (Terr et al., 1999). Davis (2000), who comprehensively reviewed the literature, noted that PTSD appears to be a potentially serious disorder in children and adolescents, not only because of the intense suffering it wreaks on young people, but because of its adverse effect on biological, psychological, and social development. Additionally, Tyano and colleagues (1996) found that children with more severe PTSD symptoms were more likely to experience longer-term maladjustment.

Research has shown that appropriate intervention at the appropriate time can drastically affect whether and to what extent children recover from exposure to trauma. For example, counseling children very soon after a catastrophic event has been shown to reduce some symptomatology (Chemtob, Nakashima, & Carlson, 2002; Goenjian, Karayan, & Pynoos, 1997). Studies also have shown that because a parent’s reaction to the event strongly influences children’s ability to recover, children tend to suffer fewer effects when parents receive emotional support or counseling following a traumatic stressor (Cohen, Mannarino, & Knudsen, 2004; De Clercq, 1995; Keren & Tyano, 2000). Although the evidence base for treatment and intervention is growing, a comprehensive report from the National Advisory Mental Health Council’s Behavioral Science Workgroup indicates that even in the case of treatments found to be effective, the protocols are not widely understood by clinicians in the field and are not being translated into practice often enough (National Advisory Mental Health Council, 2001).

The National Child Traumatic Stress Network

As research and recent national reports have suggested, without a coordinated and sustained effort to address the gaps in children’s mental health research and treatment, many children will miss an opportunity for care and recovery from traumatic experiences, as well as a chance “to live, work, learn, and participate fully in their communities” (New Freedom Commission on Mental Health, 2003, p. 1). In building a bridge between science and services—and between services and future research—the NCTSI has the potential to simultaneously fulfill many priority needs identified by a consensus of experts, including the need for implementation of evidence-based interventions and for research studies to inform the development of refined and increasingly effective treatment interventions.

Organized into three interlinking tiers, the National Child Traumatic Stress Network (NCTSN) comprises a nationwide network of over 80 current and previously funded grantees involved in diverse ways with improving access to care and raising the standard of care for children exposed to trauma. The current grantees include 37 Community Treatment and Services Programs (CTSs), 13 Treatment and Service Adaptation Centers (TSAs), and 2 grantees that together compose the National Center for Child Traumatic Stress (NCCTS). The CTSs provide services in community settings, collect clinical data on traumatized children receiving treatment, provide leadership and training on child trauma for service providers in the community, and in some instances serve as community “laboratories” for effectiveness studies of trauma-specific interventions and treatments (NCTSN, 2008). The TSAs identify, support, improve, and develop treatment and service approaches for different types of child and adolescent traumatic events (NCTSN, 2008). TSAs also may provide direct services to children who experienced traumatic events and their families, but they are usually involved in the development and testing of trauma-specific interventions and treatments, as well as the dissemination of best practices to mental health professionals. TSAs funded before 2005 were called Intervention and Development Evaluation Centers (IDEs), but throughout this statement, the term TSA will be used to refer to both TSAs and IDEs. The NCCTS provides leadership and guidance in coordinating activities to the network of grantees. In addition, the NCCTS develops and disseminates evaluation, treatment, and public mental health strategies for children, families, and communities affected by traumatic events (NCTSN, 2008). Through its multitiered structure, the NCTSN draws on the strengths of academic centers, hospitals, community-based agencies, schools, and other entities to respond to children’s immediate clinical needs, while establishing lasting partnerships and relationships needed to lay the foundation for a network of services organized along a continuum of care.

The characteristics of individual NCTSN grantees reflect attention to national priorities regarding the quality and type of services and research needed in the field. Grantees serve geographically, demographically, and clinically diverse populations and provide a range of treatments, addressing the need for a broad array of services to meet eclectic needs. Moreover, many grantees have responded to the call for building interagency relationships among entities serving children and adolescents at the local level. Many grantees are working to involve families in the planning and delivery of their services. Some are particularly focused on the development and evaluation of interventions that are developmentally appropriate, and others evaluate and disseminate interventions that respond to the needs of culturally diverse populations.

The Need for Evaluation

In 2004, SAMHSA issued a request for task order proposals (RFTP) for the cross-site evaluation, which had the following objectives:

  • Assess the evaluation activities conducted by grantee sites

  • Report evaluation results and lessons learned on the basis of a cross-site analysis of existing evaluation data

  • Develop logic models that delineate the relationship between center collaboration and improved outcomes for children who have experienced traumatic stress

  • Develop a comprehensive data collection package

  • Coordinate with the NCCTS to ensure cross-site evaluation activities build on the work of grantee sites

  • Create materials and training plans to assist grantees in meeting future evaluation requirements

The issuance of this RFTP can be attributed to several factors, including the initial congressional evaluation requirements for the NCTSI in 2000 (P.L. 106–310), goals and recommendations stemming from the 2003 report of the President’s New Freedom Commission on Mental Health (specifically, Goal 5), and Federal program requirements as outlined in the Government Performance and Results Act of 1993 (GPRA) and the Office of Management and Budget’s (OMB’s) Program Assessment Rating Tool (PART). PART requirements for the NCTSI include regularly scheduled, objective, independent evaluations that examine how well the program is accomplishing its mission and meeting its long-term goals. These evaluations should be conducted by nonbiased parties with no conflict of interest and should include recommendations for improving program performance.

The evaluation has been and will continue to be focused on the organization, collaborative efforts, function, and impacts of the NCTSI as a whole rather than designed to assess the effectiveness of specific programs or interventions. Cross-site evaluation data will be used to

  • determine the extent to which the NCTSI, through the NCTSN, has been able to achieve its goal of improving mental health services and access to care for children and adolescents, while improving the evidence base;

  • assist the NCTSN in better meeting its goals;

  • focus technical assistance and support; and

  • ensure accountability to stakeholders, including Federal agencies and the children and families served by the NCTSN, by informing them of progress made by the NCTSN nationwide.

Evaluation data provide the information necessary for shaping and influencing program and policy development through the systematic analysis and aggregation of information across the components of large-scale initiatives, thus contributing to an understanding of overall program effectiveness. Moreover, however challenging the evaluation of large-scale, multisite initiatives like the NCTSI may be, without comprehensive evaluation information the implementation of programs cannot be monitored effectively, and their expected outcomes and large-scale product dissemination may be difficult to identify.

Previously Approved Clearance

The previously submitted OMB clearance request was approved for the first 3 years of the cross-site evaluation of the NCTSI.

The goals of the cross-site evaluation are to describe the children and families served by the NCTSN and their outcomes, assess the development and dissemination of effective treatments and services, assess intra-Network collaboration, and assess the Network’s impact beyond the NCTSN. The evaluation addresses the following overarching research questions in order to attain these goals:

  • Who are the families and children being served by the NCTSN centers, and to what extent do their outcomes improve over time?

  • What type and amount of services do children and families receive?

  • How satisfied are children and families with the services they have received?

  • What impact has the NCTSN had on affiliated providers’ knowledge and practice of trauma-informed services?

  • What products/innovations have been developed and disseminated within the Network, and what factors influence product/innovation development and dissemination?

  • What Network-generated products/innovations have been adopted by Network centers and affiliated providers, and what factors are associated with adoption?

  • What is the level of collaboration among Network members, and how does collaboration influence center development and outcomes?

  • What impact has the NCTSI had on the knowledge and delivery of trauma-informed services beyond the NCTSN?

  • What evidence-based practices have been developed and are currently being disseminated by the NCTSN through registration in the National Registry of Evidence-based Programs and Practices?

The cross-site evaluation’s design includes participation in one or more of eight study components that employ both qualitative and quantitative methods to comprehensively examine the impact of NCTSI funding. This evaluation provides the opportunity to advance the understanding of clinical outcomes among children served in the NCTSN, systematically assess the development and dissemination of evidence-based treatments, and examine in greater detail specific efforts and goals of the NCTSI. The eight study components are as follows.

Descriptive and Clinical Outcomes. To address the GPRA goals of increasing access to services and improving outcomes, the cross-site evaluation utilizes descriptive and clinical outcome data to describe the characteristics of children in formal treatment at NCTSN-funded centers, monitor the type and amount of services that they receive, and assess whether children’s outcomes improve over time. The focus is on children and families receiving intensive treatment for trauma exposure. The cross-site evaluation builds on the Core Data Set collected by the NCCTS, with the evaluation team providing guidance, support, and monitoring for that data collection activity. The Core Data Set includes instruments specifically designed for this initiative as well as standardized checklists from the field. The characteristics of clients are summarized using descriptive statistics; understanding the different characteristics and needs of clients is accomplished through multivariate analyses, such as cluster analysis and latent class analysis; client change through treatment is assessed with Reliable Change Index scores; and individual and center effects are measured with hierarchical linear modeling. Data are being collected from children and caregivers at entry into services and at 3-month intervals for up to 1 year.
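
To make the Reliable Change Index concrete, the following minimal sketch shows the standard Jacobson-Truax calculation in Python. The score values, baseline standard deviation, and reliability coefficient are illustrative assumptions for exposition, not values drawn from the Core Data Set.

    import math

    def reliable_change_index(pre, post, sd_baseline, reliability):
        """Jacobson-Truax Reliable Change Index for one pre/post score pair.

        sd_baseline: standard deviation of the measure at baseline
        reliability: test-retest reliability of the measure
        """
        sem = sd_baseline * math.sqrt(1 - reliability)  # standard error of measurement
        s_diff = math.sqrt(2 * sem ** 2)                # standard error of the difference
        return (post - pre) / s_diff

    # Illustrative (hypothetical) values: a symptom score falling from 70 to 61
    # on a measure with baseline SD of 10 and reliability of .90.
    rci = reliable_change_index(pre=70, post=61, sd_baseline=10, reliability=0.90)
    print(f"RCI = {rci:.2f}")  # |RCI| > 1.96 suggests reliable change at the .05 level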

Satisfaction Study. The cross-site evaluation is assessing the Network’s goal of increasing access to and the capacity of trauma services for children and their families through an examination of service satisfaction among clients receiving direct clinical mental health services. This survey is administered to caregivers of children who have received direct mental health services from a Network center at the close of treatment or at 6 months into treatment, whichever occurs first. The survey is administered to each caregiver one time, using mixed modes of telephone interview and hard-copy mailout. All caregivers who have consented to participate in the Descriptive and Clinical Outcomes Study are invited to participate. These data are analyzed with descriptive statistics and compared by demographics of the family consumers, funding cohort, target population and maturity level of the center, and historical and clinical characteristics of the client.

Knowledge and Use of Trauma-informed Services. This study component assesses the extent to which funded Network centers enhance service providers’ knowledge and use of trauma-informed services through training and outreach activities. Centers participate in the Trauma-informed Services (TIS) Survey by administering it to human service providers after training and outreach events. The data are analyzed with descriptive and inferential statistics and compared by provider demographics, funding cohort, and center target population. In addition, change in the knowledge base and use of trauma-informed services will be analyzed longitudinally as more data are collected.

Product/Innovation Development and Dissemination. This component of the cross-site evaluation identifies and describes the products developed and disseminated to Network and non-Network partners. Three methods of data collection are used in this study component: the Product/Innovation Development and Dissemination Survey (PDDS), telephone interviews with existing NCTSN collaborative workgroup/taskforce coordinators (chairpersons), and case studies. The PDDS is included and completed as part of centers’ quarterly progress reports and the combined fourth quarter/annual report. These reports are completed by project directors or staff from both TSAs and CTSs. Coordinators (chairpersons) of active collaborative workgroups participate in telephone interviews conducted every other year. Five case studies focusing on the development and dissemination of specific Network products/innovations are also conducted every other year, in years alternating with the collaborative workgroup coordinator interviews. Five telephone or in-person interviews are conducted for each product selected as a case study. Respondents include key informants who are knowledgeable about the development and dissemination of each of these products.

The PDDS is incorporated into the Network’s current quarterly and annual reporting process and provides an updated list of all Network products and innovations developed each year. The survey provides general descriptive information regarding the development and dissemination process. The workgroup coordinator telephone interviews examine the role and impact of the Network’s collaborative workgroups in the development and dissemination of products and innovations. Information obtained from the PDDS and the workgroup coordinator telephone interviews is used to inform case studies conducted in the subsequent year of the evaluation. These case studies provide descriptions of the development and dissemination of Network products and identify best practices in this area. The case studies include telephone and face-to-face interviews with individuals identified as playing key roles in the development and dissemination of Network products.

Both descriptive statistics and thematic qualitative analysis are used to analyze all data collected on product/innovation development and dissemination. These data may be analyzed longitudinally, as well as included in hierarchical models to test for variation by center type and by descriptive and outcome characteristics. Qualitative analysis of case studies focuses on identifying best practices for the development and dissemination of products/innovations.

Adoption of Methods and Practice. This component of the cross-site evaluation is designed to evaluate the extent to which trauma-related practices, knowledge, methods, and products, particularly products created or disseminated by the NCTSN, are being adopted by Network centers and non-Network partners. The information obtained through this study enhances understanding of the pathways through which adoption and implementation occur, common barriers, and best practices leading to successful adoption and implementation. The study design consists of a two-stage data collection effort: (1) an annual Web-based survey of all NCTSN centers to determine the types of trauma-related products that are in the process of being adopted, as well as the factors affecting their adoption and implementation, and (2) subsequent telephone interviews with a subset of centers to collect additional in-depth, qualitative information about the factors that hinder or support the process of adoption and implementation.

Network Collaboration. This component measures the extent and nature of collaboration among centers by examining how collaboration is used as a conduit for sharing and transferring knowledge, resources, and technology to achieve NCTSI goals. Data are collected from key personnel at each funded center in alternating years of the evaluation via a Web-based survey about the extent to which they interact with every other center on select key activities, such as governance and decision making; information and resource sharing; coordinating activities; product/innovation development; professional training; consumer and client training; and increasing public awareness. Another Web-based survey is administered in the off years to quantify the activities and impact of formal collaboration structures in the Network.

National Impact. This component of the cross-site evaluation examines the extent to which the existence of the NCTSN has impacted trauma-informed services information, knowledge, policy, and practices among mental health and non–mental health child-serving agencies external to the Network. The Web-based National Impact Survey collects data about these agencies’ knowledge and awareness of childhood trauma and practices, about their knowledge and connections to the NCTSN centers, and about their policies, practices, and programs targeted to children and adolescents who have been exposed to traumatic experiences. Findings from this component assist in assessing the role of the NCTSN in diffusing trauma-informed care beyond NCTSN communities.

National Registry of Evidence-based Programs and Practices (NREPP) Review. The NREPP was created by SAMHSA’s Center for Substance Abuse Prevention as part of an effort to help policymakers, consumers, and providers learn more about science-based prevention programs and as a mechanism for disseminating such programs to the field. For the cross-site evaluation, the progress of grantees in submitting practices for NREPP review, as well as the technical assistance provided by the centers, is tracked. In addition, grantees are monitored in the field and through NREPP to confirm that they have submitted evidence-based programs developed by or through the NCTSN to NREPP for review and possible inclusion in the registry, or are working toward such a submission. The information about NREPP submission and inclusion is organized and warehoused. Although clearance is not requested for data associated with this study component (as they constitute no additional burden for staff or families), the study is mentioned here in order to describe the full scope of the cross-site evaluation.

Clearance Request

SAMHSA is requesting approval for revisions to the previously approved cross-site evaluation package. Changes requested are described below.

  • The original OMB clearance was requested and approved for the first 3 years of the evaluation. Respondent burden for the revised clearance is calculated for the next 3 years of data collection.

  • The number of centers for which burden was calculated remains at 44. However, this is an estimate: the number of active centers changes from year to year as new grants are awarded. For the first year of this approval, there will be 51 active centers. After the first year, in September 2009, the 27 grantees funded in 2005 will reach the end of their data collection. At that point, additional centers may be newly funded or refunded. Because of this variability, the estimate of 44 centers is used, as included in the original OMB package.

  • Modifications were made to the TIS Survey after it was approved as an amendment to the original OMB approval. In response to feedback from centers that had administered the survey, the survey was shortened significantly by removing several questions, resulting in a reduction in estimated burden. The revised instrument is presented in Attachment 3.G.

  • Changes in the Product Development and Dissemination component include a change in the schedule and method of completing the PDDS. The survey was originally intended to be administered annually as a stand-alone instrument. It was modified to be included as part of the quarterly progress reports and the combined fourth quarter/annual report completed by centers. The numbers relating to respondent burden have been changed accordingly. Also, as a result of the restructuring of the collaborative workgroups in fiscal year 2005, the number of active workgroups is fewer than 35—the number that was originally anticipated. Estimates in this package are based on the expectation of 15 active workgroups.

  • Changes in the Adoption of Methods and Practices Study involve a new method of recruitment for service providers. Originally, providers were recruited through the distribution of postcards following center-sponsored training and outreach events. Providers would then self-identify online following the postcard instructions. Now, TIS Survey respondents can indicate a willingness to participate in additional cross-site evaluation surveys. This increased pool of identified service providers will serve as potential respondents for the adoption study survey.

A2. Purpose and Use of Information

This evaluation serves several purposes: (1) to collect and analyze descriptive, outcome, and service experience information about the children and families served by the NCTSN; (2) to assess NCTSN research, training, and dissemination activities and their impact; (3) to assess Network collaboration; and (4) to assess the Network’s broader impact. Exhibit 1 illustrates the relationship between the evaluation goals, what is evaluated to assess each goal, and the purpose and utility of evaluation findings. The exhibit also highlights the enhanced evaluation capacity that results from the cross-site evaluation. By providing centers with data and with assistance on how those data can be used to meet local and Network needs, and by increasing centers’ appreciation for evaluation and their ability to conduct evaluation activities, the cross-site evaluation helps ensure that findings are used effectively and that evaluation activities are sustained. The collected data also are useful to SAMHSA, other Federal agencies, individual children and their families, and the research field.

EXHIBIT 1. National Child Traumatic Stress Initiative Evaluation Goals and Utility



SAMHSA can use the results from the evaluation to develop policies and provide guidance regarding the development of the NCTSN. Information and findings from the evaluation can help SAMHSA plan and implement other efforts related to trauma services. SAMHSA also can use the findings from the evaluation to provide objective measures of its progress toward meeting targets of key performance indicators put forward in its annual performance plans as required by law under GPRA. The GPRA indicators that are required to be tracked by the cross-site evaluation are the number of children and adolescents reached by improved services (i.e., whether this number is increasing) and children’s outcomes (i.e., whether outcomes are improving). The measures used in the Descriptive and Clinical Outcomes Study of the evaluation address the GPRA indicators. Specifically, these measures include the Child Behavior Checklist 1.5-5/6-18 (CBCL 1.5-5/6-18); various Core Clinical Characteristics Forms (Baseline Assessment Form, Follow-up Assessment Form, General Trauma Information Form, and Trauma Detail Form); the Trauma Symptom Checklist for Children—Abbreviated (TSCC-A); and the UCLA-PTSD Index (UCLA-PTSD).

Findings from the evaluation can be used by grantees to improve the services, processes, and functions of their centers. Demographic and outcome data on a sample of children and families who participate in the Network aid grantees in identifying the program elements that help children and families function better and that lead to client satisfaction. Grantees can use the information gathered to better identify their target populations and improve their services. Evidence-based treatment development, dissemination, and adoption data can help grantees understand the program processes being implemented, factors that facilitate and hinder these processes, and approaches that can be used to modify existing approaches. Data on Network collaboration can be used to strengthen network relationships by identifying cooperating entities. National impact data can be used to assess the impact of the Network on the broader communities and to determine if goals related to increased access to, and provision of, evidence-based trauma services are being met.

The research community, particularly the field of children’s mental health services research, will continue to profit in a number of ways from the information gathered. First, evaluation of the NCTSI adds significantly to the developing research base about the use of trauma-informed services. Second, the focus on child and family outcomes allows researchers to examine and understand who is being treated for trauma-related problems and the outcomes of that treatment. Third, assessment of the process by which evidence-based trauma services and processes are developed, disseminated, and adopted contributes to understanding the barriers and facilitators that affect this process. Finally, the analysis of evaluation data aids researchers in formulating new questions about the Network and helps both service providers and researchers improve the delivery of children’s trauma services.

If these data collection activities are not continued, policymakers and program planners at the Federal and local levels will not have the necessary information to determine whether centers within the network are working collaboratively and whether they are meeting their objective of developing, disseminating, and implementing evidence-based treatments and processes. As well, they will not have detailed knowledge of who is receiving trauma services and the outcomes of these services.



A3. Use of Information Technology

Web-based surveys are used for data collection in the Adoption of Methods and Practices, Network Collaboration, and the National Impact studies. (In the original approval package, the TIS Survey was proposed as a Web-enabled survey, but this methodology was changed in an amendment in 2007). The use of Web-based surveys decreases respondent burden, as compared to that required for alternative methods, such as a paper format, by allowing for direct transmission of the instrument. In addition, the data entry and quality control mechanisms built into the Web-based survey reduce errors that might otherwise require follow-up, thus reducing burden, as compared to that required for a hard-copy administration. As well, respondents can complete the survey at a time and location that is convenient for them. Respondents are recruited through an e-mail invitation that includes an embedded link to the survey Web site’s URL, which further increases the ease of responding.

The following surveys/processes are Web based or have a Web-based option:

  • General Adoption Assessment Survey (GAAS)

  • Network Survey

  • Child Trauma Partnership Tool (CTPT)

  • National Impact Survey

All of the Web surveys associated with the cross-site evaluation recruit respondents to participate through an e-mail invitation. The e-mail process occurs in four stages: (1) an advance invitation to participate, (2) a formal invitation, which includes the Web site’s URL and unique user name and password, (3) a reminder to all respondents, and (4) a final targeted reminder to nonresponders and those who have only partially completed the survey.
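
As a minimal sketch of how this four-stage schedule might be operationalized, the Python snippet below encodes the stages as data and prints a mailing calendar. The day offsets and launch date are hypothetical assumptions; the supporting statement does not specify the intervals between mailings.

    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class EmailStage:
        name: str
        offset_days: int  # days after survey launch (assumed values)
        audience: str

    # The four recruitment stages described above; offsets are illustrative only.
    STAGES = [
        EmailStage("advance invitation", 0, "all respondents"),
        EmailStage("formal invitation (URL, user name, password)", 7, "all respondents"),
        EmailStage("reminder", 14, "all respondents"),
        EmailStage("final reminder", 21, "nonresponders and partial completers"),
    ]

    launch = date(2009, 6, 1)  # hypothetical launch date
    for stage in STAGES:
        send_on = launch + timedelta(days=stage.offset_days)
        print(f"{send_on}: send {stage.name} to {stage.audience}")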

For the Product/Innovation Development and Dissemination Study, the PDDS is incorporated into the NCTSN’s current quarterly and annual progress reports. Reporting includes completion of the annual progress report form, a Microsoft Word form that is e-mailed to the project director at each center. These completed forms are e-mailed back to the NCCTS Monitoring and Evaluation Team, and the data are shared with the cross-site evaluator. Other data for this study are collected via telephone interviews with collaborative workgroup coordinators and from telephone and in-person interviews with Network and non-Network key informants involved in the development and dissemination of products/innovations selected as case studies. Responses offered during the telephone and in-person interviews are recorded by members of the evaluation team, thereby eliminating the need for respondents to complete or return by mail any questionnaires or surveys. Respondents are, however, provided with the interview questions in advance to facilitate their participation in this study.

A4. Efforts to Identify Duplication

While still understudied, research on the effects of trauma on children built slowly over the 20th century and into the current century. To identify existing data on children’s trauma, the cross-site evaluation team conducted a thorough literature review. As well, the evaluator conducted meetings with experts in the field, including researchers from the NCCTS, and attended a national meeting on trauma. Results of these efforts indicated that while data have been collected on children’s trauma, some aspects of the field have not been researched at all, and others lack large-scale, systematic data collection. This evaluation includes data collection on those areas that have not been studied sufficiently, if at all.

Research that has been conducted includes case studies of children’s reactions to traumatic events, such as surgeries or natural and industrial disasters. These data provided descriptive information about child symptoms that predated, and later correlated to, the diagnostic criteria for PTSD in adults, first outlined in the Diagnostic and Statistical Manual of Mental Disorders (3rd edition) (DSM-III) (American Psychiatric Association, 1980). In the mid-1940s, David Levy (1945), in observing children’s recovery from operations, was perhaps one of the first to suggest that children exhibit traumatic responses similar to those of adults. In 1976, Lenore Terr conducted the first major child trauma research project that was controlled and prospective. Terr (1991) described four major characteristics that she found to be specific to children’s experience of trauma.

Although child trauma research has steadily advanced since Levy’s observations in the 1940s, significant gaps remain, including an understanding of the variability in individual responses to traumatic stressors. Many researchers have recorded differences in children’s responses to traumatic events. The National Comorbidity Survey found that none of the severe traumatic events reported invariably produced PTSD in those exposed, and particular types of traumatic events did not necessarily affect different sectors of the population in the same way (Kessler et al., 1995). Other studies also have noted this variability and identified risk factors that affect the likelihood that children exposed to trauma will develop trauma-related symptoms (Breslau, 2001; Chemtob et al., 1997; Davis, 2000; Earls, Smith, Reich, & Jung, 1988; Lloyd & Turner, 2003; Pfefferbaum, 1997; Pynoos et al., 1987; Terr, 1991).

Although many risk factors have been identified, much work remains to improve the understanding of protective factors and treatments that address the range of children’s individual experiences. A 2000 literature review of PTSD in children observes that “only recently has the mental health community recognized the applicability of diagnostic criteria for PTSD in children and adolescents, including a consideration of age-related features” (Davis, 2000, p. 135).

This evaluation generates data that have not previously been collected, or have only minimally been collected, in the field of children’s trauma. This includes information on the development, dissemination, and use of evidence-based treatments (EBTs) in trauma services; the impact of Network-developed/promoted EBTs on trauma services within and outside the NCTSN; and the processes by which distinct centers that provide and support trauma services collaborate. As well, data on who receives trauma services, the types of services they receive, and the outcomes related to receipt of these services are collected in a systematic manner that yields more extensive, detailed, and consistent information than has previously been obtained.

In sum, existing research and data in the area of child trauma are not sufficient to address the questions posed in this evaluation. For questions related specifically to the functioning and impact of the NCTSN, no data exist. For questions related to descriptive and clinical information on children receiving trauma services, existing data have not been collected consistently enough to be representative and comparable across service environments. While data have been collected on EBTs in general, data do not exist on the development and use of EBTs in treating child trauma, nor specifically on the role of the NCTSN in this area. Thus, this evaluation generates new data rather than reproducing existing data.

A5. Involvement of Small Entities

Most data for this evaluation are collected from service providers, administrators, and researchers affiliated with NCTSN centers, which are public or private agencies that receive funding from the Federal Government and for whom participation in the evaluation is considered to fall within their job responsibilities. Some data are collected from mental health and non–mental health service providers working outside of the NCTSN centers. While most of these data are collected from public agencies, some organizations and individuals providing services to the target population, such as community-based organizations, not-for-profit agencies, or private providers, may qualify as small entities; however, the evaluation is not expected to have a significant impact on them.

A6. Consequences if Information Is Collected Less Frequently

Descriptive and Clinical Outcomes Study. Descriptive data are collected when children and families first enter services. As part of their normal operations, grantees collect these data for multiple purposes, including third-party reimbursement verification and aggregate reporting to various State and local funders. If these data were not collected at entry into services, it would be impossible to identify the characteristics of individual cases and to subsequently analyze the impact that these characteristics have on clinical outcomes.

Outcomes data are collected at entry into services and every 3 months for up to 1 year for a subset of children. Three-month intervals were selected in order to capture changes after initial entry into treatment and to monitor those changes closely after children transition out of intensive treatment or out of treatment altogether. Although many children will experience significant improvement in the first 3 to 6 months of trauma-focused treatment, it is necessary to continue collecting data at 9 and 12 months in order to understand the maintenance of changes across time. Longer and less frequent data collection intervals would miss important changes that are likely to happen with children during their treatment episode or shortly thereafter. It is necessary to have multiple data collection points to effectively monitor these changes in clinical outcomes.

Consumer Satisfaction. Data for the Satisfaction Study are collected one time per client at the end of treatment or at 6 months into treatment (whichever occurs first) from caregivers of children enrolled in the outcome study (i.e., children enrolled into clinical services). Satisfaction data are critical for quality monitoring at the local and national levels, and less frequent collection of this information would not provide the opportunity for timely, data-driven programmatic improvement by the grantees or SAMHSA.

Adoption of Methods and Practice. Data for the GAAS and the Adoption and Implementation Factors Interview (AIFI) are collected annually. The process of local and national grantees adopting products developed by the Network will extend throughout the life of the program. New products will be developed and existing products will be improved during the life of the program. The sites adopting the products will be exposed to new products and will elect to adopt products as they are developed and refined during the life of the program. Multiple data collection points are needed in order to assess the degree of proliferation of Network products and the extent to which products are being utilized over time.

Network Collaboration. Network collaboration data are collected annually for the duration of the evaluation, with each of the collaboration instruments administered in alternate years. The Network Survey was administered in the first and third years and will continue in odd alternating years of the evaluation, while the CTPT was administered during the second and fourth years and will continue in even years. The Network Survey utilizes social network analysis techniques (Wasserman & Faust, 1995) to inquire about the extent to which each NCTSN center interacts with every other center on select key NCTSN activities (governance/decision making, information sharing, coordination of activities, product development, product dissemination and adoption, and training and technical assistance). The CTPT was designed to assess the activities and impact of the NCTSN collaboration structures (workgroups, committees, consortia) in terms of membership activities, accomplishments, formalization, leadership, communication, vision, decision making, resource allocation, and understanding/valuing. It is expected that there will be greater collaboration over time as relationships among NCTSN centers become more interdependent and as the formal workgroups mature and become productive. Therefore, measuring relationships among centers (Network Survey) and the organization and performance of workgroups (CTPT) in alternating years will provide the minimum frequency of data collection required to assess change over time and whether goals of the NCTSI, regarding the vital role of collaboration in information and technology transfer, are being met.
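
For illustration, the sketch below shows the kind of social network computation that can be applied to Network Survey responses, using the networkx Python library to derive whole-network density and degree centrality. The center names and ties are hypothetical, not actual survey data.

    import networkx as nx  # widely used social network analysis library

    # Hypothetical collaboration ties reported between NCTSN centers on one
    # activity (e.g., information sharing); real input would come from the survey.
    ties = [
        ("Center A", "Center B"),
        ("Center A", "Center C"),
        ("Center B", "Center C"),
        ("Center C", "Center D"),
    ]

    G = nx.Graph()
    G.add_edges_from(ties)

    # Density: the proportion of all possible center-to-center ties actually present.
    print(f"Network density: {nx.density(G):.2f}")

    # Degree centrality: which centers are most connected within the Network.
    for center, score in sorted(nx.degree_centrality(G).items(), key=lambda x: -x[1]):
        print(f"{center}: {score:.2f}")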

Provider Knowledge and Use of Trauma-informed Services. TIS Survey data are collected from providers affiliated with the NCTSN through training activities, after center-sponsored training and outreach events. Increased awareness and use of trauma-specific services among child service providers is critical to the overarching mission of the NCTSI to increase the quality of, and access to, care for children who experience trauma. Less frequent data collection would result in the inability to understand the extent to which the Network and its centers are enhancing the understanding and increasing the use of appropriate services for children who experience trauma. It may also result in a sample of training events that is not representative of all NCTSN training activities, leading to potentially biased results. Ongoing data collection is needed to assess the change in the knowledge base and use of trauma-informed services as the Network and its affiliated centers mature.

Product Development and Dissemination. The PDDS has been incorporated into centers’ quarterly progress reports and the NCTSN’s current annual progress reporting system. Interviews with collaborative workgroup coordinators were conducted in years 1 and 3 of the evaluation and will continue in alternating years. Five case studies that will include telephone and face-to-face interviews with individuals involved in the development and dissemination of Network products and innovations will continue to be conducted in alternating years opposite the workgroup coordinator interviews. The schedule described above minimizes respondent burden by limiting the frequency of data collection to only what is necessary to adequately describe and assess the product development and dissemination process within the NCTSN. Multiple data collection points are needed to maintain a current inventory of Network products and to examine how Network strategies and approaches develop over time.

National Impact. To evaluate national impact, the Web-based National Impact Survey is administered annually to assess change in agencies’ policies and practices toward more trauma-informed care. The survey alternates each year between children’s mental health agencies and non–mental health child-serving agencies (e.g., education, child welfare, justice). This data collection schedule provides the minimum frequency of data collection required to assess impact beyond the NCTSN and minimizes burden for respondents. Less frequent data collection would limit the ability to assess the impact of the NCTSN, over time, on trauma-informed care beyond the Network.

A7. Consistency With Guidelines of 5 CFR 1320.5

The data collection fully complies with the requirements of 5 CFR 1320.5(d)(2).

A8. Consultation Outside the Agency

SAMHSA published a notice in the Federal Register on Monday, January 26, 2009 (74 FR 4442), soliciting public comment on this study. SAMHSA received no comments on the planned data collection.

Consultation on the design, instrumentation, data availability and products, and statistical aspects of the evaluation occurred throughout the development of the evaluation design process and throughout the first 3 years of the evaluation. Consultations have been sought from the following:

  • The Federal Government

  • Experts in collaboration

  • Experts in development, dissemination, and adoption

  • Experts in logic modeling

  • Experts in cultural competence

  • Family representatives

  • Family members (i.e., families receiving services in the NCTSN)

  • Network staff

These consultations had several purposes: (1) to ensure coordination and collaboration of the cross-site evaluation with the NCCTS; (2) to ensure the rigor of the evaluation design, the proper implementation of the design, and the feasibility of implementation; and (3) to verify the general relevance of the data to be collected and their specific relevance to families and members of minority groups.

Some of these consultants worked with the cross-site evaluator over the course of several months to design the evaluation and its study components (i.e., the experts in collaboration; logic modeling; and product development, dissemination, and adoption). (Attachment 1.B provides a list of expert consultants.) In addition, Network staff provided feedback and input into the evaluation design during eight site visits to a selection of NCTSN centers. (Attachment 1.E provides a list of sites visited.)

Consultation was also sought upon completion of the draft evaluation design via presentation to various consultant groups and solicitation of structured feedback. For example, the cultural competence consultants who constitute the Cultural Competence Review Committee met with the evaluator on March 17, 2005, to review the draft evaluation design and provide feedback. (Attachment 1.C provides a list of Cultural Competence Review Committee members.) The family representatives and family members attended a Family Review Committee meeting in Atlanta, GA, on March 15 and 16, 2005, at which time they were presented the evaluation design and solicited for targeted feedback. (Attachment 1.D provides a list of Family Review Committee members.) In addition, NCTSN representatives (i.e., NCCTS staff, Network Steering Committee members) provided feedback on the evaluation design and its proposed instruments.

All instruments and guides, with the exception of the TIS Survey and the AIFI, which were developed later, underwent cognitive testing, pilot testing, expert review, or usability testing during July and August of 2005. The feedback from these efforts was incorporated into this supporting statement and the proposed instrumentation of the cross-site evaluation. The TIS Survey and the AIFI were both developed during 2007 and underwent expert review and pilot testing at that time.

Finally, SAMHSA, and in particular the Project Officer for the cross-site evaluation, provided ongoing input and review of all aspects of the evaluation design. The Federal Project Officers assigned to each of the funded centers reviewed the draft evaluation design and provided comprehensive written feedback on each component. (Attachment 1.A provides a list of Federal consultants who reviewed the initial evaluation design.)

All feedback was requested in a systematic way and provided in a structured and written format. The feedback received was considered and included, as appropriate, in the revision of the evaluation design and its associated instruments.

A9. Payment to Respondents

Remuneration is not provided by the cross-site evaluator to respondents for the majority of study components. Many of the respondents who will be providing data work in an NCTSN center and receive wages from the NCTSI grant, which is federally funded. These respondents are not eligible to receive additional remuneration for participating in the evaluation. Study components that do involve remuneration are the trauma-informed services study, the product development and dissemination case studies, the Adoption of Methods and Practices Study, and the Satisfaction Study.

For the satisfaction with direct mental health services survey, participants are asked to self-identify and then participate in the satisfaction survey. In order to promote self-identification and encourage completion of the survey, the cross-site evaluation remunerates caregivers for their time with a $20 money order. This amount of remuneration is intended to achieve response rates of 80% and to appropriately compensate caregivers for their time and contribution.

For the TIS Survey, training participants are thanked for taking the time to complete the survey and are reminded that their participation is critical to expanding the knowledge base related to trauma-informed service provision. In addition, respondents are informed that, having completed the survey, they are now eligible to participate in a lottery drawing for a $50 gift certificate from Amazon.com. To be entered into the lottery, they must complete the contact information form on the last page of the survey. For each training event, each individual who submits a contact information form is entered into the lottery, one winner is randomly drawn, and a $50 gift certificate is mailed or e-mailed to that respondent. The opportunity to participate in the lottery serves as remuneration for completing the TIS Survey. The potential respondents for the TIS Survey are affiliated with Network centers through training activities but are not necessarily receiving wages through the NCTSI grants.

For the product development and dissemination case studies, caregivers participating in the case study interviews are paid $25 for their time. Payment is provided once the participant has agreed and provided consent. Participants can withdraw from the interview at any time and still retain payment.

For the AIFI, staff members who are employed by formerly funded centers (i.e., centers that are no longer funded by the SAMHSA grant) are paid $40 for their time. Participants provide consent just before the start of the interview; following the interview, they are provided with the payment. Participants can withdraw from the interview at any time, refuse to answer some questions, or refuse to be recorded, and still retain payment.

Payment to respondents for participating in descriptive and outcomes data collection is at the discretion of each individual center participating in the NCTSN. Prior to the evaluation, centers collected this information for clinical evaluation purposes only and did not pay respondents. For evaluation data collection, the cross-site evaluation team has strongly recommended that centers pay families $10 for participation in each data collection interval (approximately 45 minutes), resulting in a total of $50 for participation in all five data collection points.

Remuneration is standard practice in this type of longitudinal research to acknowledge participants’ value to the study. It is essential to help maximize participation rates, particularly given the additional time being asked of families who already face multiple challenges and demands on their time in caring for their children. Caregivers and children who participate in the Descriptive and Clinical Outcomes portion of the evaluation are asked to complete more assessments than are ordinarily required in the course of receiving services. Completing the instruments at entry to services and at subsequent follow-up points requires the evaluation participants to spend time away from other activities. The combination of the number of instruments and their periodicity creates a burden on the caregivers and children that exceeds the burden that ordinarily would be placed on them if they were seeking services not associated with this evaluation. This approach of recommending that sites pay the respondents from whom they collect data directly was used in other evaluations cleared by OMB (e.g., OMB Nos. 0930-0171, 0930-0192, 0930-0209, and 0930-0257).

A10. Assurance of Confidentiality

For all studies in the cross-site evaluation, all reports and publications from these data include only group-level analyses that fully protect the privacy of individual participants, and no data have been or will be stored with identifying respondent information.

Descriptive and Clinical Outcomes. This portion of the evaluation requires collecting descriptive and clinical data from children and families. In all grantee sites, data are collected by site staff. These staff members are responsible for developing procedures to protect the privacy of all participants in the evaluation data collection, storage of data, and reporting of all information obtained through data collection activities. These procedures include limiting the number of individuals who have access to identifying information, using locked files to store hard-copy forms (if used), assigning unique code numbers to each participant to ensure anonymity, and implementing guidelines pertaining to data reporting and dissemination.
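
As a generic illustration of the unique-code-number approach described above (not the sites’ actual system), the Python sketch below separates identifying information, held in a restricted crosswalk, from the de-identified records used for analysis and reporting. All names and values are hypothetical.

    import secrets

    # Separate stores: the crosswalk (kept under restricted access) maps study IDs
    # to identifying information; analysis records carry only the study ID.
    crosswalk = {}         # study_id -> identifying info (restricted access)
    analysis_records = []  # de-identified records used for analysis and reporting

    def enroll(name: str, contact: str) -> str:
        """Assign a unique study ID; identifiers are stored only in the crosswalk."""
        study_id = f"P{secrets.token_hex(4)}"  # e.g., 'P3f9a12cd'
        crosswalk[study_id] = {"name": name, "contact": contact}
        return study_id

    pid = enroll("Jane Doe", "555-0100")                          # hypothetical participant
    analysis_records.append({"study_id": pid, "cbcl_total": 70})  # no identifiers stored

    print(analysis_records)  # reports are generated from de-identified records only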

Data from caregivers and youth are collected through interviews by site staff. The content of some questions is sensitive in nature, and some participants may experience psychological or social distress during an interview. The cross-site evaluation team provides guidance to local staff through procedures manuals and training, assisting communities in preparing local interviewers to address respondent distress and other circumstances that may arise during an interview. Local evaluators develop procedures appropriate to local requirements, including guidelines for referral to requested services and for reporting abuse, neglect, and harm to self or others in accordance with local law.

Each grantee implements an active consent procedure that informs participants of the purpose of the evaluation, describes what their participation entails, and addresses the maintenance of privacy as described above. Informed assent is obtained from participating older children and adolescents (ages 7–17), and informed consent is obtained from adolescents who have reached the age of 18 at follow-up data collection. Written informed consent/assent is obtained from children and families at the point of entry into services. Because some children targeted for study recruitment may be victims of ongoing domestic violence whose treatment status is unknown to the perpetrator, special considerations are made around the signing of physical consent forms (e.g., when a signed consent form leaves a “paper trail” that may put the child or other study recruit in harm’s way, verbal consent procedures can sometimes be approved and secured through local institutional review boards [IRBs]) and around methods for contacting the family for follow-up data collection interviews (e.g., using alternative methods of contact or a disguised interviewer identity). Each grantee obtains local IRB approval for the informed consent procedures used in this evaluation. To further protect evaluation participants, all grantees are asked to obtain a Federal Certificate of Confidentiality, authorized by Section 301(d) of the Public Health Service Act, which provides additional protection of participant information from civil and criminal subpoena.

To further protect study participants, the cross-site evaluator obtained a Federal Certificate of Confidentiality, authorized by Section 301(d) of the Public Health Service Act. This certificate provides additional protections of the data from civil and criminal subpoena. Additionally, the cross-site evaluator conforms to all requirements of the Privacy Act of 1974, under the System of Records: Alcohol, Drug and Mental Health Epidemiological, and Biometric Research Data, U.S. Department of Health and Human Services (HHS), #09-30-0036; the most recent publication in the Federal Register occurred on January 19, 1999 (64 FR 2914). Client records at the sites are also covered under this Privacy Act System of Records. In addition, the cross-site evaluator obtained a Federalwide Assurance (FWA), which ensures compliance with U.S. Federal regulations for protection of human subjects in research including the Common Rule (Title 45 Code of Federal Regulations Part 46) and other regulations as applicable. The cross-site evaluator also requests that all grantees obtain an FWA.

Consumer Satisfaction. In order to maintain anonymity, caregivers self-identify and provide contact information to the evaluation contractor’s survey center staff, who oversee this data collection effort. The survey center maintains separate databases for the contact information and the survey data. The survey center sends the cross-site evaluator the data linked to the respondent ID but with no personal identifiers. Every month, Network centers send the evaluator the survey respondent ID from the invitation linked to the ID from the Descriptive and Clinical Outcomes Study, but they do not send any personal identifiers. Including the respondents’ Descriptive and Clinical Outcomes Study ID allows the evaluator to analyze the satisfaction survey data in conjunction with the outcomes data. For example, the outcomes study contains information on demographics and how long the child and family have been receiving services, which can be analyzed to determine whether satisfaction with services is associated with length of time in services, race, gender, age, and income. After the data have been sent to the cross-site evaluator, the survey center destroys the database that contains the contact information. The survey center administers a verbal consent when respondents are contacted, and the survey is conducted only if the respondent consents to participate.

Participants are asked their preferred survey mode (mail or telephone) and are administered the survey according to their preference. Contact information and survey data are maintained in separate databases, linked only by a unique, randomly generated ID. This random ID is used for tracking purposes only, and the link is destroyed as soon as the survey is returned to the evaluator; once the link is destroyed, the study ID can no longer be connected to any personal identifiers. In addition, when surveys are received by the evaluator, each survey is immediately separated from its envelope (which may bear the respondent’s return address).

Adoption of Methods and Practice. Respondents to this survey include direct service providers employed by Network centers; service providers affiliated with the Network through training and outreach activities; centers’ program directors or administrators; and the program evaluator. Consent is obtained from respondents before they complete the Web-based GAAS or participate in the AIFIs. For each data collection effort, the initial recruitment letter explains the survey, including the voluntary nature of survey completion, the anonymity of responses, and respondents’ risks, benefits, and rights, and it advises recipients that by completing and submitting the survey or participating in the interview, they indicate consent. Information about the study and participant rights also is presented at the start of the GAAS and is sent to each interview participant before the interview.

The security and anonymity of data entered and managed on the Web-based GAAS are assured. Access to the GAAS is password protected, and the GAAS uses data encryption to further enhance security and protect privacy. To maintain anonymity of responses, two databases have been created for the GAAS: one to store the identifying information, including name, user ID, and password, and the other database to store the survey responses. The two databases are not linked after the data are collected. While data are being collected, only the system administrator has the key that links the two databases, and this key is destroyed when the data are transmitted to the evaluator.
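
The paragraph above describes a separation-of-data protocol. The following is a minimal sketch, purely illustrative and not the actual GAAS implementation, of how such a design works: identifiers and responses live in separate stores, joined only by a linking key that is destroyed once the data are transmitted.

    # Illustrative sketch only; all names and structures here are hypothetical,
    # not the GAAS system's actual code or schema.
    identifiers = {"user_007": {"name": "Jane Doe", "user_id": "user_007"}}
    responses = {"resp_42": {"q1": 4, "q2": 2}}
    link = {"user_007": "resp_42"}  # held only by the system administrator

    def transmit_and_destroy_link():
        """Hand off responses to the evaluator, then destroy the linking key
        so survey answers can no longer be tied back to identities."""
        data_for_evaluator = dict(responses)  # responses only, no identifiers
        link.clear()
        return data_for_evaluator

    transmit_and_destroy_link()
    assert not link  # after transmission, no join between the two stores is possible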

For the GAAS, respondents are asked to log in using an assigned ID and a randomly generated password provided in the formal invitation. After the respondent logs on to the survey, the identifier database can be flagged to indicate that the respondent has completed the survey.

For the AIFI, in advance of each interview, respondents are provided a confirmation of the date and time of the interview, which includes background information about the purpose of the study component, the purpose of the AIFI telephone interview, the risks and benefits of participating, privacy and consent information, and respondent selection criteria. A consent form is attached to the document describing these points (Attachment 2.C.4). Before conducting the interview, the interviewer reviews the informed consent form with the respondent, verifies that the respondent agrees to the interview, and advises that completing the interview indicates consent.

Network Collaboration. The Network Survey and the CTPT are administered in a Web-based format to individuals directly affiliated with the NCTSN. Respondents to the Network Survey are NCTSN center directors and center associate directors. Respondents to the CTPT are members of the approximately 15 workgroups (a number that could increase to about 25, depending on how many new groups are added each year) that have been assembled by the NCTSN. Full contact information for respondents, including names, addresses, phone numbers, and e-mail addresses, is gathered from the SAMHSA Program Officer and from the NCCTS, which tracks workgroup activities for the Network. Respondents are recruited to participate through an e-mail invitation.

The surveys are administered in alternating years throughout the evaluation, but respondents may differ at each point because of possible changes in center directors or associate directors over time (Network Survey) or changes in workgroup membership over time (CTPT). Passive consent is obtained. The formal invitation explains the survey, including the voluntary nature of survey completion, the privacy and anonymity of responses, and respondents’ risks, benefits, and rights, and it advises the recipient that completion and submission of the survey indicate consent to participate. The invitation also provides contact information in case the survey recipient has questions or desires clarification prior to participation.

Access to the Network Survey and CTPT is password protected, and both surveys use data encryption to further enhance security and protect privacy. To maintain anonymity of responses, two databases have been created for each survey: one to store the identifying information, including name, user ID, and password, and the other database to store the survey responses. The two databases are not linked after the data are collected. While data are being collected, only the system administrator has the key that links the two databases, and this key is destroyed when the data are transmitted to the evaluator.

Respondents are asked to log in using an assigned ID and password provided in the formal invitation. After the respondent logs on to the survey, the identifier database can be flagged to indicate that the respondent has completed the survey.

Provider Knowledge and Use of Trauma-informed Services. The TIS Survey is administered to providers affiliated with NCTSN centers through training activities. The paper survey is distributed to all participants of center-sponsored training and outreach events. A consent form is included as the first page of the TIS Survey; it explains the survey, including the voluntary nature of survey completion, the privacy of responses, and respondents’ risks, benefits, and rights, and it advises the recipient that completion and submission of the survey indicate consent to participate. In addition, as described previously, TIS Survey respondents are invited to provide their contact information if they are interested in participating in the TIS Survey respondent lottery or in authorizing the use of their contact information for future cross-site evaluation surveys regarding the implementation of trauma-informed services. Contact information provided by respondents is collected separately from completed TIS Surveys by the NCTSN center trainer to ensure anonymity. Matching tracking codes are printed on each TIS Survey and its attached contact information form. The code is used for two purposes: (1) to ensure that the lottery winner submitted a TIS Survey and (2) to link TIS Survey data with data collected through other cross-site evaluation surveys designed to assess the implementation of trauma-informed services. Linking respondent participation across surveys provides an opportunity to examine the relationship between respondents’ exposure to NCTSN training and outreach events and the implementation of, or change in, trauma-informed services provision over time. The contact information and TIS Survey data are stored in separate databases and will never be linked.

Product Development and Dissemination. With the exception of the PDDS, which is completed as part of the Network’s regular progress reporting process, informed consent is obtained for each instrument administered for this component of the evaluation and for the audiotaping of any interviews. Identifying information is only used to contact respondents and schedule interviews. Any information used to identify respondents is maintained in a contact database that remains separate from the database that stores data from each completed interview. For each interview, the evaluation team assigns the respondent a nominal ID.

National Impact. The Web-based survey is administered every 12 months as a cross-sectional assessment of agencies’ policies and practices. In alternating years, the survey is administered either to children’s mental health agencies (years 1, 3, 5, etc.) or to child welfare, justice, and education agencies (years 2, 4, 6, etc.). Active consent is obtained at each wave of survey administration from respondents, who are the agencies’ executive directors or directors of children’s services, depending on who is most knowledgeable about their agencies’ policies/practices and relationships with other agencies in the service system.

Full contact information for respondents, including names, addresses, phone numbers, and e-mail addresses, is assembled from the membership rosters of professional organizations representing mental health, child welfare, education, and juvenile justice agencies. The cross-site evaluator establishes cooperative agreements with these professional organizations to access their roster information where possible.

Respondents are recruited to participate through an e-mail invitation. The formal invitation explains the survey, including the voluntary nature of survey completion, the anonymity of responses, and respondents’ risks, benefits, and rights. The invitation also provides contact information in case the survey recipient has questions or desires clarification prior to participation. The second page of the survey contains an informed consent form that asks potential respondents to certify (by checking a space for “agree” or “do not agree”) that they have read the informed consent form, understand its content, and freely agree to participate in the project.

Access to the National Impact Survey is password protected, and the survey uses data encryption to further enhance security and protect privacy. For anonymity of responses, two databases are created for the survey: one stores the identifying information, including name, user ID, and password, and the other database stores the survey responses. The two databases are not linked after the data are collected. While data are being collected, only the system administrator has the key that links the two databases, and this key is destroyed when the data are transmitted to the evaluator.

Respondents are asked to log in using an assigned ID and password provided in the formal invitation. After the respondent logs on to the survey, the identifier database can be flagged to indicate that the respondent has completed the survey.

A11. Questions of a Sensitive Nature

Because this project concerns services to children who have experienced traumatic events and their families, it is necessary to ask questions that are potentially sensitive as part of the Descriptive and Clinical Outcomes Study. However, only information that is central to the study is being sought. Questions address dimensions such as suicidality and other self-injurious behaviors, criminal activity, developmentally inappropriate sexual behaviors, negative feelings, and experience of specific types of traumatic events, such as physical/sexual/psychological maltreatment, natural disasters, or terrorism. The answers to these questions are used to understand who is being served by the NCTSN, to determine baseline status, and to measure changes in these areas experienced after receiving NCTSN services. The measures that contain the sensitive questions are from the Network’s Core Data Set and have been selected by, and used in, the Network prior to the evaluation. Thus, the cross-site evaluation is not introducing new, sensitive domains of inquiry.

A12. Estimates of Annualized Hour Burden

In accordance with the evaluation design, data collection for the 19 CTS, 8 TSA, and 2 NCCTS centers funded in 2005, the 10 CTS and 5 TSA centers funded or refunded in 2007, and the 8 CTS centers funded in 2008 will span the 3 years covered by this reapproval. Centers funded in 2005 will stop their participation in the evaluation at the end of their grant funding period, in September 2009. Similarly, centers funded in 2007 will not participate in data collection after September 2011. Additional centers also may be funded in future years, and they will be incorporated into the evaluation as they are funded. Because of the variability and uncertainty in the number of funded centers in each year, burden estimates are calculated on the basis of 44 centers, as was the case in the original OMB approval package. Based on data collection experience during the first 3 years of this evaluation, we estimate that only 75% of centers are eligible for participation in the Descriptive and Clinical Outcomes and Satisfaction studies, because of variation in programmatic focus across centers. As a result, burden estimates for those two studies are based on an estimate of 33 participating centers.

Table 1 shows the burden associated with the cross-site evaluation during years 4–6 of the evaluation, the period for which OMB clearance is being sought. Burden estimates presented in Table 1 are based on information supplied by various sources. Measures that are newly developed for this evaluation were piloted by the evaluator to determine average burden estimates and have since been implemented in the field, allowing for updated burden estimates. These measures include the GAAS, National Impact Survey, Network Survey, Product/Innovation Development and Dissemination Interview, PDDS, CTPT, and TIS Survey.

The CBCL 1.5-5 and CBCL 6-18 were used in another OMB-approved study (clearance number 0930-0257), and the estimated average burdens used in that study are used here. The Core Clinical Characteristics Forms, the TSCC-A, and the UCLA-PTSD have been used for data collection by the grantees prior to the cross-site evaluation. Estimated burden times for these measures are based on feedback from the grantees and piloting conducted by the evaluator.

The cross-site evaluation will continue to review contributions to the NREPP by grantees. Because these data will be compiled from the registry and not directly from grantees, they will result in no burden to respondents. Clearance for collection of these data is, therefore, not being requested and is not discussed further in this section.

TABLE 1

Estimate of Respondent Burden

Note: Total burden is annualized over the 3-year clearance period. Bracketed numbers refer to the numbered notes following the table.

Instrument | Number of Respondents | Average Number of Responses per Respondent | Hours per Response | Total Burden Hours | 3-Year Average of Annual Burden Hours | Hourly Wage Rate ($) | Total Cost per Year ($)

Caregivers

Child Behavior Checklist 1.5-5/6-18 (CBCL 1.5-5/6-18) | 2,475 [1] | 5 | 0.3 | 4,084 | 1,361 | 10.20 [2] | 13,882
Trauma Information/Detail Form | 2,475 | 5 | 0.2 | 2,723 | 908 | 10.20 | 9,262
Core Clinical Characteristics Form | 2,475 | 5 | 0.4 | 4,950 | 1,650 | 10.20 | 16,830
Youth Services Survey for Families (YSS-F) | 2,475 | 1 | 0.1 | 198 | 66 | 10.20 | 673
UCLA-PTSD Short Form (UCLA-PTSD) | 2,475 | 5 | 0.2 | 2,104 | 701 | 10.20 | 7,150
Case Study Interviews | 10 [3] | 1 | 1.5 | 15 | 5 | 10.20 | 51

Youth

Trauma Symptoms Checklist for Children-Abbreviated (TSCC-A) | 1,881 [4] | 5 | 0.3 | 3,104 | 1,035 | 6.55 [5] | 6,779

Service Providers

Provider Trauma-informed Services Survey (TIS) | 29,250 | 1 | 0.2 | 5,850 | 1,950 | 19.25 [6] | 37,538
General Adoption Assessment Survey (GAAS) Providers | 14,040 | 1 | 0.5 | 7,020 | 2,340 | 19.25 | 45,045
Adoption and Implementation Factors Interview (AIFI) Provider Assessment & Clinical Components | 150 | 1 | 1.0 | 150 | 50 | 19.25 | 963

Project Directors/Principal Investigators

Product/Innovations Development and Dissemination Survey (PDDS) | 44 | 12 | 1.0 | 528 | 176 | 19.25 | 3,388
General Adoption Assessment Survey (GAAS) Administrators | 44 | 3 | 0.5 | 66 | 22 | 19.25 | 424
Adoption and Implementation Factors Interview (AIFI) Administrator Assessment & Clinical Components | 45 | 1 | 1.0 | 45 | 15 | 19.25 | 289
Network Survey | 84 | 1 | 1.0 | 84 | 28 | 19.25 | 539

Other Network Staff

TIS Training Summary Form | 1,463 [7] | 1 | 0.1 | 122 | 41 | 19.25 | 782
Workgroup/Taskforce Coordinator Interview | 15 [8] | 1 | 1.5 | 23 | 8 | 19.25 | 144
Case Study Interviews | 20 [9] | 1 | 2.0 | 40 | 13 | 19.25 | 257
General Adoption Assessment Survey (GAAS) Evaluators | 44 [10] | 3 | 0.5 | 66 | 22 | 19.25 | 424
Adoption and Implementation Factors Interview (AIFI) | 30 [11] | 1 | 1.0 | 30 | 10 | 19.25 | 193
Network Survey | 44 [12] | 1 | 1.0 | 44 | 15 | 19.25 | 289
Child Trauma Partnership Tool (CTPT) | 200 [13] | 2 | 0.8 | 320 | 107 | 19.25 | 2,053

Non-Network Mental Health Professionals

National Impact Survey | 1,600 | 1 | 0.5 | 800 | 267 | 19.25 | 5,133

Non-Network Non–Mental Health Professionals

National Impact Survey | 1,600 | 2 | 0.5 | 1,600 | 533 | 19.25 | 10,267

Non-Network Product Developers

Case Study Interviews | 20 | 1 | 1.5 | 30 | 10 | 19.25 | 193

Total summary | 62,959 | 61 | | 33,996 | | | 487,674
Total annual summary | 20,986 | 20 | | 11,333 | | | 162,558

  1. On average, 75 percent of centers are eligible to participate in the Descriptive and Clinical Outcomes Study (33 of 44 centers). At each of these centers, an average of 25 caregivers will participate in each year.

  2. Assuming that most of the families participating in the evaluation sample fall at or below the 2008 HHS National Poverty Level (U.S. Department of Health and Human Services, 2008) of $21,200 (based on a family of four), the wage rate was estimated as follows: $21,200 (annual family income)/2,080 (hours worked per year) ≈ $10.20 per hour.

  3. One caregiver will participate in each of the 10 case studies that will be conducted during the clearance period.

  4. On the basis of the children enrolled at centers participating in the cross-site evaluation through June 30, 2008, approximately 76% of the children in the evaluation will be between the ages of 7 and 18.

  5. Based on the Federal minimum wage rate of $6.55 per hour.

  6. Assuming that the average annual income across all types of staff/service providers/administrators is $40,000, the wage rate was estimated as follows: $40,000 (annual income)/2,080 (hours worked per year) ≈ $19.25 per hour.

  7. Respondents will be center trainers or evaluation staff. On average, one Training Summary Form is completed for every 20 TIS Surveys (one per training event).

  8. Respondents will be workgroup/taskforce coordinators.

  9. Respondents will be stakeholders.

  10. Respondents will be evaluators.

  11. Respondents will be researchers, supervisors, and administrators.

  12. Respondents will be associate directors.

  13. Respondents will be collaboration structure staff.

As indicated in Table 1, the average total annual burden for data collection is estimated at 11,333 hours. This estimate was derived by calculating the burden for each measure, dividing those numbers by 3 (years of data collection in the cross-site evaluation for which OMB clearance is being requested with this submission), and summing.
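
To make the arithmetic concrete, the sketch below (illustrative only, not part of the approved package) reproduces two rows of Table 1 whose printed figures are exact; other rows use hours-per-response values that are rounded for display in the table, so they will not reproduce to the dollar.

    # Illustrative burden arithmetic for two Table 1 rows; instrument names,
    # counts, and rates are taken directly from the table above.
    rows = [
        # (instrument, respondents, responses per respondent, hours per response, wage $)
        ("Core Clinical Characteristics Form", 2_475, 5, 0.4, 10.20),
        ("GAAS Providers",                    14_040, 1, 0.5, 19.25),
    ]
    for name, n, k, hours, wage in rows:
        total_hours = n * k * hours          # total burden hours over 3 years
        annual_hours = total_hours / 3       # annualized over the clearance period
        annual_cost = annual_hours * wage    # annual respondent cost
        print(f"{name}: {total_hours:,.0f} total hrs; "
              f"{annual_hours:,.0f} hrs/yr; ${annual_cost:,.0f}/yr")

Run as written, this prints 4,950 total hours, 1,650 hours per year, and $16,830 per year for the Core Clinical Characteristics Form, matching Table 1.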

A13. Estimates of Annualized Cost Burden to Respondents

There are no startup, capital, or maintenance costs associated with data collection for respondents. Grantees are collecting the data for the Descriptive and Clinical Outcomes Study as part of their normal operations, and they maintain this information for their own service planning, quality improvement, and reporting purposes. The instruments used in this study are those that constitute the NCTSN’s Core Data Set, and these data have been collected by the grantees prior to implementation of the cross-site evaluation.

Each grantee has been funded, as part of the overall cooperative agreement award, to participate in the cross-site evaluation, with up to 20% of the grant award available for evaluation efforts and data collection. Therefore, no cost burden is imposed on the grantee by this information collection effort. Other costs related to this effort, such as the cost of data collection for studies other than the Descriptive and Clinical Outcomes Study, data analyses, and materials, are costs to the Federal Government.

A14. Estimates of Annualized Costs to the Government

SAMHSA has planned and allocated resources for the management, processing, and use of the collected information in a manner that enhances its utility to agencies and the public. Including the Federal contribution to local grantee evaluation efforts, the contract with the cross-site evaluator, and Government staff to oversee the evaluation, the annualized cost to the Government is estimated at $3,577,321. These costs are described below.

Each grantee is expected to participate in the cross-site evaluation, including collection of information for the Descriptive and Clinical Outcomes Study and the Trauma-informed Services Study. Assuming (1) that 44 centers will participate in the Descriptive and Clinical Outcomes Study or the Trauma-informed Services Study, (2) that each of these 44 centers will have 1.5 full-time equivalents (FTEs) dedicated to evaluation, (3) an average annual salary of $30,000 for evaluation staff, and (4) that the average Federal contribution will be 100%, the annual cost for implementing the cross-site evaluation at the grantee level is estimated at $1,980,000. These monies are included in the cooperative agreement awards.

The original cross-site evaluation contract was awarded to Macro International Inc. for evaluation of all NCTSN centers. The cross-site evaluation contract provides for 1 base year of $1,406,740, with an option to renew for 4 more years. The estimated average annual cost of the contract is $1,558,321. Included in these costs are the expenses related to developing and monitoring the cross-site evaluation, including, but not limited to, the following activities: development of the design, instrument package, and training materials; monitoring of and technical assistance to sites; travel to sites and relevant meetings; and data analysis and dissemination activities. Although the original evaluation contract will expire in September 2009, these same cost estimates are applied for the 3 years of the evaluation spanned by this reapproval.

It is estimated that SAMHSA will allocate 60% of an FTE each year for Government oversight of the evaluation. Assuming an annual salary of $65,000, these Government costs will be $39,000 per year.

A15. Changes in Burden

The estimate of annual burden hours associated with the original 3-year approval period was 17,222 (based on the original OMB approval and a subsequent desk review for the TIS Survey and AIFI). The program is requesting 11,333 annual hours for this submission, a reduction of 5,889 annual hours. This revision reflects several changes: (1) revision of the TIS Survey to remove 4 pages of content; (2) a change in the method of data collection for the PDDS; and (3) a change in the recruitment of provider respondents for the GAAS. In addition, the estimates of the number of respondents to the Descriptive and Clinical Outcomes Study instruments, the TIS Survey, and the GAAS were changed significantly based on data collection experience during the past 3 years. Also, because several of the evaluation instruments are administered in alternating years, some instruments that were administered twice during the original 3-year approval period will be administered only once during this reapproval period (Network Survey and workgroup coordinator interviews), whereas others that were administered once will be administered twice (CTPT, product development case studies). Finally, the TIS interviews and focus groups, which were conducted during the first 3-year approval period to inform the development of the TIS Survey, will not be conducted again during this approval period. Together, these changes explain the difference in burden estimates across the two approval periods.

Program Changes

  • Modifications were made to the TIS Survey after it was approved as an amendment to the original OMB approval. In response to feedback from centers that had administered the survey, the survey was shortened significantly by removing several questions, primarily those focusing on the types of trauma and topics covered in the training. These questions were moved to the Training Summary Form. We also reduced by half our estimate of the number of respondents who will be targeted, based on our knowledge of the percentage of trainings that are relevant for TIS Survey distribution, and we added 122 burden hours associated with center completion of the Training Summary Forms. The net reduction in burden for this study component was 3,859 annual hours.

  • Changes in the Product Development and Dissemination Study include a change in the schedule and method of completing the PDDS. The survey was originally contemplated as a stand-alone instrument administered annually; it was modified to be included as part of the quarterly progress reports and the combined fourth quarter/annual report completed by centers. This resulted in a net increase in burden of 132 hours. Also, as a result of the restructuring of the collaborative workgroups in fiscal year 2005, the number of active workgroups is fewer than the 35 originally anticipated. Estimates in this package are based on the expectation of 15 active workgroups, yielding a reduction in burden of 82 hours.

  • Changes in the Adoption of Methods and Practices Study involve a new method of recruitment for service providers. Originally, providers were recruited through the distribution of postcards following center-sponsored training and outreach events. Providers would then self-identify online, following the postcard instructions. Now, TIS Survey respondents can indicate a willingness to participate in additional cross-site evaluation surveys. This increased pool of identified service providers will serve as potential respondents for the adoption study survey. This change results in an estimated increase of 5,370 burden hours.

A16. Time Schedule, Publication, and Analysis Plans

Time Schedule

The time schedule for continuing the cross-site evaluation is summarized in Table 2. A 3-year clearance is requested for this project.

TABLE 2

Time Schedule

Activity | Date

Receive OMB reapproval for study | April 2009
Continue data collection for centers funded in 2005, 2007, and 2008 | Ongoing
Process and analyze data | Ongoing
Complete data collection for centers funded in 2005 | September 30, 2009
Produce annual report under existing evaluation contract | September 30, 2009, and annually thereafter
Produce final report under existing evaluation contract | September 30, 2009
Begin data collection for new centers | October 1, 2009, and potentially annually thereafter
Complete data collection for centers funded in 2007 | September 30, 2011
Complete data collection for centers funded in 2008 | September 30, 2012



Publication Plan

A final report will be submitted to SAMHSA with anticipated subsequent dissemination to other interested parties, such as researchers, policymakers, and program administrators at the Federal, State, and local levels. Although not required under contract, it is also anticipated that results from this data collection will be published and disseminated in peer-reviewed publications. Examples of journals that may be considered as vehicles for publication include the following:

  • American Journal of Public Health

  • American Psychologist

  • Child Abuse & Neglect

  • Child Development

  • Child Maltreatment

  • Children Today

  • Development & Psychopathology

  • Developmental Psychology

  • Evaluation Quarterly

  • Evaluation Review

  • Journal of Behavioral Health Services Research

  • Journal of Child & Family Studies

  • Journal of Clinical Child & Adolescent Psychology

  • Journal of Consulting & Clinical Psychology

  • Journal of Emotional & Behavioral Disorders

  • Journal of Health & Social Behavior

  • Journal of Mental Health Administration

  • Journal of School Psychology

  • Journal of the American Academy of Child & Adolescent Psychiatry

  • Journal of Traumatic Stress

  • Milbank Memorial Fund Quarterly

  • Social Services Review

  • Trauma, Violence, & Abuse




Data Analysis Plan

All of the data collection and analytic strategies detailed in this package are linked to the evaluation questions. These linkages are shown in Table 3.

TABLE 3

Evaluation Questions, Indicators, Data Sources, and Analysis Techniques

Evaluation Questions

Indicators

Data Sources

Data Analysis

Descriptive and Clinical Outcomes

Who is being served?


  • Gender, race/ethnicity, age, socioeconomic, ZIP code, insurance status

  • Living arrangement, legal guardian

  • Trauma type

  • Presenting problem(s), diagnosis, intake/referral source

  • Risk factors for family and child

  • Core Clinical Characteristics (Baseline Assessment Form)

  • Core Clinical Characteristics (Follow-up Assessment Form)

  • Core Clinical Characteristics (General Trauma Information Form)

  • Core Clinical Characteristics (Trauma Detail Form)

  • Univariate/multivariate analysis

To what extent do children’s outcomes improve over time?

  • Child functioning

  • Child’s emotion and behavior

  • UCLA-PTSD

  • TSCC-A

  • CBCL

  • Univariate/multivariate analysis

  • Reliable Change Index

What services are received?

  • Inpatient and residential services

  • Outpatient therapy

  • Clinicians/providers

  • Techniques and activities

  • Primary treatment(s)

  • Core Clinical Characteristics (Baseline Assessment Form)

  • Core Clinical Characteristics (Follow-up Assessment Form)

  • Core Clinical Characteristics (General Trauma Information Form)

  • Univariate/multivariate analysis

How do services influence outcomes?

  • Inpatient and residential services

  • Outpatient therapy

  • Clinicians/providers

  • Techniques and activities

  • Primary treatment(s)

  • Child functioning

  • Child’s emotion and behavior

  • Core Clinical Characteristics (Baseline Assessment Form)

  • Core Clinical Characteristics (Follow-up Assessment Form)

  • UCLA-PTSD

  • TSCC-A

  • CBCL

  • Hierarchical linear modeling

How do center- and Network-level characteristics influence outcomes across time?

  • Involvement in Network

  • Degree of collaboration with other centers

  • Level of partnership in Network activities

  • Discrete characteristics of centers

  • CTPT

  • Core Clinical Characteristics (Follow-up Assessment Form)

  • UCLA-PTSD

  • TSCC-A

  • CBCL

  • Network analysis

  • Hierarchical linear modeling

Satisfaction Study

How satisfied are consumers with center staff, services provided, progress of their child, service environment, and access to these services?

  • Access

  • Participation in treatment

  • Cultural sensitivity

  • Appropriateness/client satisfaction

  • Perceived outcomes

  • YSS-F

  • Descriptive statistics

  • Univariate/multivariate analysis

Knowledge and Use of Trauma-informed Services

To what extent have Network centers enhanced the trauma-informed services knowledge base?

  • Number of providers trained by Network centers

  • Training satisfaction

  • Ratings of knowledge enhancement due to training

  • TIS Survey

  • Descriptive statistics

  • Univariate/multivariate analysis

To what extent have Network centers enhanced the use of trauma-informed services by Network providers?

  • Providers’ predicted use of information gained through training

  • TIS Survey

  • Descriptive statistics

  • Univariate/multivariate analysis

  • Hierarchical linear modeling

Product/Innovation Development and Dissemination

What products/innovations are being developed by the Network?


  • Clinical interventions

  • Assessment instruments

  • Training materials

  • Consultation

  • Information resources

  • PDDS

  • Workgroup coordinator interview

  • Case study interview

  • Univariate

  • Multivariate

  • Thematic/qualitative

Which specific trauma types are targeted?


  • Diagnoses

  • Trauma focus areas

  • Types of services and interventions

  • PDDS

  • Workgroup coordinator interview

  • Case study interview

  • Univariate

  • Multivariate

  • Thematic/qualitative

Which specific populations are targeted?


  • Service populations

  • Trauma focus areas

  • PDDS

  • Workgroup coordinator interview

  • Case study interview

  • Univariate

  • Multivariate

  • Thematic/qualitative

What is the process for evaluating products/ innovations?


  • Pilot testing, evaluative efforts to assess products/innovations

  • Approaches for determining effectiveness

  • Status of effectiveness assessments

  • PDDS

  • Workgroup coordinator interview

  • Univariate

  • Multivariate

  • Thematic/qualitative

What is the current status of product/innovation development?


  • Number of products developed

  • Quality of products developed

  • Length of time for developing products

  • PDDS

  • Workgroup coordinator interview

  • Case study interview

  • Univariate

  • Multivariate

  • Thematic/qualitative

What Network-developed or adapted products or innovations are being actively disseminated?

  • Products developed and disseminated to other centers/agencies

  • Dissemination materials

  • Awareness of products/innovations

  • PDDS

  • Workgroup coordinator interview

  • Case study interview

  • Univariate

  • Multivariate

  • Thematic/qualitative

What dissemination methods/approaches have been used?

  • Dissemination materials

  • PDDS

  • Workgroup coordinator interview

  • Case study interview

  • Univariate

  • Multivariate

  • Thematic/qualitative

To what extent have dissemination efforts resulted in transporting the intervention to a new setting?

  • Number of agencies/organization/ providers that have adopted product/innovation

  • Awareness of products/innovations

  • PDDS

  • Workgroup coordinator interview

  • Case study interview

  • Univariate

  • Multivariate

  • Thematic/qualitative

What are the factors facilitating or impeding the development and dissemination of products/innovations?


  • Number of centers that have adopted products

  • Awareness of products/innovations

  • Centers that have/have not adopted products

  • PDDS

  • Workgroup coordinator interview

  • Case study interview

  • Univariate

  • Multivariate

  • Thematic/qualitative

What are the best practices within the Network that promote the development and dissemination of products/innovations?

  • Product development and dissemination approaches and strategies

  • PDDS

  • Workgroup coordinator interview

  • Case study interview

  • Univariate

  • Multivariate

  • Thematic/qualitative

Adoption of Methods and Practices

How many Network-generated/supported practices are adopted each year?

  • Number of unique products indicated by all respondents

  • GAAS

  • Univariate analysis

What types of Network-generated/supported practices are most widely adopted within the Network?

  • Distribution of adopted products

  • GAAS

  • Univariate analysis

What actors (managers, clinicians, consumers, etc.) are involved in the adoption process, and what are their characteristics?

  • Distribution and characteristics of type of staff

  • GAAS

  • AIFI

  • Univariate analysis

What organizational culture factors within and external to the center are associated with adoption?

  • Organizational characteristics

  • AIFI

  • Univariate/multivariate analysis

What are the supports within the Network that facilitate the adoption process?

  • Organizational characteristics, organization finance, and resources

  • GAAS

  • AIFI

  • Univariate/multivariate analysis

What are the most common pathways of practice adoption?

  • Time required to adopt practices, stages of practice adoption

  • GAAS

  • AIFI

  • Univariate/multivariate analysis

What aspects of the practices are associated with adoption?

  • Product characteristics, initial product characteristics, revised product characteristics

  • AIFI

  • Univariate/multivariate analysis

How are practices implemented?

  • Distribution of methods of implementation by product type

  • GAAS

  • AIFI

  • Univariate/multivariate analysis

To what degree are practices implemented?

  • Ratings of degree of adoption

  • GAAS

  • AIFI

  • Univariate/multivariate analysis

What are the organizational factors internal and external to the centers that are associated with implementation?

  • Organizational characteristics

  • AIFI

  • Univariate/multivariate analysis

How are adopted practices modified over time?

  • History of implementation

  • AIFI

  • Univariate/multivariate analysis

How are adopted practices sustained over time?

  • Presence of products in milieu over time, history of implementation

  • GAAS

  • AIFI

  • Univariate/multivariate analysis

Network Collaboration

To what extent do NCTSN centers interact, and what is the nature of their interactions?


  • Governance/decision making

  • Information sharing and coordination of activities

  • Product development

  • Product adoption

  • Training and hosting conferences

  • Network Survey

  • Social network analysis

What factors facilitate and inhibit collaboration? What are some of the recommendations to improve collaboration?

  • Open-ended list of factors that facilitate and inhibit collaboration

  • Open-ended recommendations

  • Network Survey

  • Thematic analyses

How are formal NCTSN collaboration structures (such as workgroups, committees, and consortia) organized?

  • Membership activities

  • Formalization

  • Leadership

  • Communication

  • Decision making

  • Resource allocation

  • CTPT

  • Univariate/multivariate analysis

What are the activities of the collaboration structures?

  • Membership activities

  • CTPT

  • Univariate/multivariate analysis

What are the impacts of the collaboration structures?

  • Accomplishments

  • Vision

  • Understanding/valuing

  • CTPT

  • Univariate/multivariate analysis

National Impact

To what extent are agencies familiar with, or do they collaborate with, NCTSN centers?

  • Agencies’ familiarity with NCTSN centers

  • Types of activities in which agencies have collaborated with NCTSN centers

  • National Impact Survey

  • Univariate/multivariate analysis

To what extent do agencies have knowledge about and use trauma-informed service approaches?

  • Knowledge about the consequences of trauma on child development

  • Knowledge about the special treatment needs of children exposed to traumatic experiences

  • Knowledge regarding trauma interventions

  • Extent to which agencies use trauma interventions

  • National Impact Survey

  • Univariate/multivariate analysis

To what extent do agencies have policies and procedures related to children exposed to traumatic experiences?

  • Existence of policies and procedures related to screening, assessing, and treatment

  • Whether information/ knowledge from or collaboration with NCTSN centers contributed to these policies and procedures

  • National Impact Survey

  • Univariate/multivariate analysis

To what extent do agencies provide specialized services for children exposed to traumatic experiences or have plans to develop such services?

  • Provision of specialized services

  • Use of evidence-based treatments

  • Existence of plans for developing specialized services

  • Whether information/knowledge from or collaboration with NCTSN centers contributed to these practices or plans

  • National Impact Survey

  • Univariate/multivariate analysis

To what extent do agencies have the infrastructure to support the use of trauma-informed practices?

  • Use of specialized training materials

  • Type of training materials used

  • Routine collection and management of data related to assessment, treatment, service utilization, and cost

  • Funding mechanisms

  • Advocacy and information dissemination channels

  • Whether information/knowledge from or collaboration with NCTSN centers contributed to these practices

  • National Impact Survey

  • Univariate/multivariate analysis

Are there significant associations between agencies’ exposure to NCTSN centers and their use of trauma-informed policies and practices?

  • Index of “Agency use of Trauma-informed Policies and Practices”

  • Index of “Agency Exposure to NCTSN”

  • National Impact Survey

  • Univariate/multivariate analysis

Are there significant associations between “In-Network” product development/ dissemination and diffusion of trauma-informed services and “Out-of-Network” diffusion of trauma-informed care?

  • Index of “Agency use of Trauma-informed Policies and Practices” (Out of Network)

  • Product development scores (In NCTSI Network)

  • Trauma-informed services scores (In NCTSI Network)

  • National Impact Survey

  • PDDS

  • TIS Survey

  • Univariate/multivariate analysis

Analyses conducted or planned for each of the study components are described below. These analyses are possible for centers that are able to implement the evaluation as designed, including collection of cross-sectional and longitudinal descriptive data on the census of children and families receiving formal trauma-related mental health services from NCTSN centers, the proper recruitment of an adequately sized sample, minimal missing data within and across data collection points, retention of families over time, and adherence to prescribed data collection procedures. In sites with constraints (e.g., insufficient size of target population), analyses will be tailored to meet the needs of the individual site.

Descriptive and Clinical Outcomes. Descriptive statistics are employed to summarize the characteristics of the children and families served in NCTSN-funded centers, both at the center-specific level (within each center) and at the aggregate level (across all centers), to provide a quick snapshot of the basic demographic characteristics of children and families and the clinical and functional status of children when they enter the programs. These data also provide the opportunity to conduct analyses that compare similarities and differences in the target population served across centers. In addition, descriptive statistics (e.g., mean, standard deviation, skewness, kurtosis) are essential for determining whether certain statistical assumptions are met for subsequent multivariate analysis and whether and how statistical adjustment of the data could be made if necessary. The cross-site evaluation team conducts subgroup analyses to assess potential differences in key indicators (e.g., presenting problems) among identified child groups (e.g., by age or gender). As with the descriptive statistics, when sample size permits, subgroup analyses are conducted at the center-specific level as well as at the aggregate level. A core set of key indicators is selected for subgroup analyses at the aggregate level. Additional indicators that are more population specific may be selected for the community-specific subgroup analyses, especially given the diversity in the centers’ target populations.

Multivariate analyses provide a more comprehensive understanding of the different characteristics and diverse needs of the children and families served. Analytic strategies such as cluster analysis and latent class analysis can help to identify patterns among the characteristics of children and families participating in trauma treatment services. Unlike the separate subgroup analyses mentioned above, these types of analyses can statistically identify homogeneous subgroups on the basis of key indicators across multiple domains (e.g., behavioral and emotional problem indicators, functioning indicators), thus providing a more comprehensive picture of the differences (between subgroups) and similarities (within subgroups) in characteristics across multiple domains. These subgroups also may have significant implications for service needs and longitudinal outcomes.
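
As a concrete illustration of this kind of model-based subgrouping, the sketch below fits finite mixture models to simulated data. It is illustrative only: the indicator names are hypothetical, the data are synthetic, and scikit-learn's Gaussian mixtures stand in for the latent class models named above (a closely related technique for continuous indicators).

    # Illustrative only; simulated data and hypothetical indicator names.
    import numpy as np
    import pandas as pd
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "behavior_problems": rng.normal(60, 10, 300),  # e.g., a CBCL-style scale
        "ptsd_symptoms": rng.normal(25, 8, 300),       # e.g., a UCLA-PTSD-style score
        "functioning": rng.normal(50, 12, 300),
    })
    X = df.to_numpy()

    # Fit 2- to 5-class solutions and keep the one with the lowest BIC.
    fits = {k: GaussianMixture(n_components=k, random_state=0).fit(X)
            for k in range(2, 6)}
    best_k = min(fits, key=lambda k: fits[k].bic(X))
    df["subgroup"] = fits[best_k].predict(X)

    # Profile each subgroup across the indicator domains.
    print(df.groupby("subgroup").mean().round(1))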

Because there is a longitudinal study component and centers continue to enroll children into services during the funding cycle, the cross-site evaluation team also conducts analyses to evaluate whether and how the composition of children and families changes over time as centers mature. Change in composition is analyzed by (1) grouping children on the basis of the fiscal year in which they enter the programs and (2) comparing these groups to evaluate the extent to which drift in demographic, clinical, and historical characteristics occurs across the life of the program. These analyses can be conducted using latent class analysis with covariates. The covariate of interest is the grouping of children enrolling in services in different fiscal years, which is used as an indicator of center maturity over time. This allows the team to examine whether and how patterns of demographic and other characteristics change depending on the year children and families enroll in the programs.

The clinical significance of change across time, and its relationship to an individual’s return to the normal range of functioning, has become important to the assessment of the effectiveness of mental health services (Kendall, 1999; Kendall, Marrs-Garcia, Nath, & Sheldrick, 1999). An individual child or family may change statistically from one assessment point to another, but this may not indicate that meaningful clinical change has occurred. To address this issue, the cross-site evaluation team conducts analyses to describe and test for differences in clinically significant change across time.

Reliable Change Index (RCI) scores (Jacobson, Roberts, Berns, & McGlinchey, 1999) are a method for assessing individual change, and they have clear utility in treatment outcome research and direct applicability in the social policy arena. RCI analyses for this evaluation rely on creating an individual metric for change using baseline and follow-up scores and adjusting for the reliability of measurement on outcome measures. These change scores are computed from baseline to each outcome data collection point to evaluate the degree of clinically significant change displayed by each individual. RCI scores can be combined with follow-up cutting scores (e.g., within the clinical or normal range) to determine whether or not children have (1) improved and recovered, (2) improved, (3) remained stable, or (4) deteriorated. Rates of clinically significant change can be modeled against demographic, clinical, and service utilization characteristics to evaluate differential effects. Logistic regression and discriminant function analyses are used to control for covariates before evaluating for differences in change rates.
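
For reference, one standard formulation of the RCI (the Jacobson and Truax approach, shown here for illustration; the exact variant applied in this evaluation may differ) is:

    \mathrm{RCI} = \frac{x_{\mathrm{follow\text{-}up}} - x_{\mathrm{baseline}}}{S_{\mathrm{diff}}}, \qquad
    S_{\mathrm{diff}} = \sqrt{2\,SE^2}, \qquad
    SE = s_{\mathrm{baseline}}\sqrt{1 - r_{xx}}

where s_baseline is the standard deviation of baseline scores, r_xx is the reliability of the outcome measure, and |RCI| values greater than 1.96 indicate change unlikely (p < .05) to result from measurement error alone.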

This evaluation yields data at the individual level (children and their families) and data at the center level. In order to integrate information across different levels of data yielded from this study, hierarchical linear modeling (HLM) techniques will be employed (Bryk & Raudenbush, 1992) as more data are collected. HLM provides improvement in estimating individual effects, an opportunity to model cross-level effects (i.e., individuals within centers, over time), and greater precision in partitioning components of effects across multiple levels. The following provides an illustration of how HLM will be used in the evaluation. The children and families in the longitudinal study are located (or “nested”) within centers. The cross-site evaluation team assumes that children experience an intervention and that, as a result of that intervention, they experience change. It is expected that differential center development (degree of collaboration and participation in the Network) will mediate outcomes. HLM allows the estimation of growth curves (e.g., changes in the level of symptomatology) on the basis of repeated observations. These repeated measures are nested within the individual child. Using this three-level design, HLM permits the cross-site evaluation team to estimate how much of the variance found in the first level or dependent variable (e.g., changes in symptoms) is due to the second (e.g., individual receiving treatment), and how much of the variance can be attributed to the third level (e.g., variations in Network collaboration at the center level). These analyses can only be conducted for children in long-term treatment who have participated in three or more outcome data collection intervals (i.e., complete data at entry and two subsequent 3-month follow-up data collection intervals). Requiring consistent follow-up through multiple data collection intervals (i.e., every 3 months for up to 1 year) will improve the team’s ability to use HLM to investigate the relationships between variables at multiple levels of the Network.
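
An illustrative three-level specification consistent with this description (the notation is generic, and the center-level covariate Collab_j is an assumed stand-in for the Network collaboration index) is:

    Level 1 (occasions within children):  Y_{tij} = \pi_{0ij} + \pi_{1ij}\,\mathrm{Time}_{tij} + e_{tij}
    Level 2 (children within centers):    \pi_{0ij} = \beta_{00j} + r_{0ij}, \qquad \pi_{1ij} = \beta_{10j} + r_{1ij}
    Level 3 (centers):                    \beta_{00j} = \gamma_{000} + u_{00j}, \qquad \beta_{10j} = \gamma_{100} + \gamma_{101}\,\mathrm{Collab}_j + u_{10j}

Here Y_{tij} is the outcome (e.g., a symptom score) at occasion t for child i in center j; the variances of the e, r, and u terms partition outcome variation across the three levels, and \gamma_{101} captures how center-level collaboration relates to children’s rates of change.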

Primary analyses of these data address the program evaluation objectives of the cross-site evaluation. This information has been and will continue to be disseminated via reports to SAMHSA, the NCCTS, and participating centers, as well as through publications designed for policymakers. In addition, analyses of the data will be reported in professional and general-audience journals and periodicals. Data from the Descriptive and Clinical Outcomes Study also may be used in conjunction with other data collected at individual centers to address specific topics of interest to the Network, or across all centers to examine topics to be determined.

Consumer Satisfaction. Data gathered via the Youth Services Survey for Families, a satisfaction measure, are analyzed using descriptive statistics. Survey items are tallied and scored, and these data are compared by demographics of the family consumers, funding cohort, target population, and clinical characteristics of the client. In addition, data analysis techniques that assess satisfaction with services across the developmental years of the centers will be employed as more data are collected. Once sufficient data have been collected, satisfaction also will be analyzed in relation to clinical outcomes and services data to determine whether different levels of satisfaction are associated with different services and outcomes.

Adoption of Methods and Practices. Data analysis of the GAAS is largely descriptive and consists of tabular displays of information. As data are collected over time, it will be possible to use the information to formulate models of adoption penetration rates for certain population segments, centers, or specific products or innovations.

Initial data analysis for the AIFI has been descriptive. In addition, qualitative coding schemes that use evaluative categories or organizational features, or that classify groups of implementation experiences or trajectories, will be developed and used as a basis for analysis.

The knowledge base regarding the program is informed by the literature and the qualitative analyses. As this knowledge base increases, it should be possible to use the data to test hypotheses in multivariate models including repeated measures and hierarchical techniques. These analyses will examine the relationships between factors associated with both adoption and implementation and center-based longitudinal outcomes.

Network Collaboration. This component of the evaluation focuses on measuring the extent and nature of collaboration among all centers. The Network Survey used in this study assesses collaboration by inquiring about the extent to which each NCTSN center interacts with every other center on select key Network activities (governance/decision making, information sharing, coordination of activities, product development, product dissemination and adoption, and training and technical assistance). The survey also contains items concerning factors that facilitate and inhibit collaboration. Using specialized social network analysis software such as UCINET (Borgatti, Everett, & Freeman, 1999), the cross-site evaluation team applies standardized social network analysis methods (Wasserman & Faust, 1995) to analyze the frequency and types of linkages between centers, as well as network characteristics such as centrality, clustering of the most highly interacting players, and gaps in linkages. An index of collaboration is constructed to indicate the strength of collaboration for any one center in the national Network. A repeated measures general linear model is used to analyze change in the frequency and type of linkages in the Network as a whole between the repeated waves of data collection, and between and within the various NCTSN centers.
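As an illustration of the centrality portion of these analyses, the sketch below computes degree centrality and overall density from a small, entirely hypothetical center-by-center interaction matrix. It uses the open-source networkx package as a stand-in for UCINET; the actual analyses use the survey's full matrices.

    import networkx as nx

    # Hypothetical directed ties: entry [i][j] = 1 if center i reports
    # interacting with center j on a given Network activity.
    centers = ["Center A", "Center B", "Center C", "Center D"]
    reported = [
        [0, 1, 1, 0],
        [1, 0, 1, 1],
        [0, 1, 0, 0],
        [1, 1, 0, 0],
    ]

    graph = nx.DiGraph()
    graph.add_nodes_from(centers)
    for i, row in enumerate(reported):
        for j, linked in enumerate(row):
            if linked:
                graph.add_edge(centers[i], centers[j])

    # Degree centrality: each center's share of possible ties; quantities
    # like this underlie a center-level collaboration index.
    for center, score in nx.degree_centrality(graph).items():
        print(f"{center}: {score:.2f}")

    # Density summarizes linkage for the Network as a whole.
    print(f"Network density: {nx.density(graph):.2f}")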

Open-ended questions that ask respondents to list factors that facilitate or inhibit collaboration are analyzed using standard qualitative analysis methods that involve identifying common themes and describing these by frequency and type. Collaboration indices are used in hierarchical statistical models to test the relationships between Network characteristics and other components of the cross-site evaluation, including the extent of participation in formal NCTSN workgroups, levels of trauma-informed services among human services providers, adoption of NCTSN innovations, characteristics of populations served, and longitudinal outcomes.

Formal collaboration structures (workgroups, committees, consortia) are used by the NCTSN as a key method for transferring information and technology related to specific trauma populations or programs. The Child Trauma Partnership Tool (CTPT) has been adapted for this evaluation. The CTPT asks respondents to select the workgroup in which they have been most active during the preceding 12 months and to respond to questions from their perspective in this workgroup. Using a 5-point agreement scale, respondents rate the workgroup’s activities and impact in the following domains: membership activities, accomplishments, formalization, leadership, communication, vision, decision making, resource allocation, and understanding/valuing. Descriptive analyses are conducted by domain for each workgroup and for all workgroups in aggregate. For collaboration structures that are maintained across time, a repeated measures general linear model will be used to identify significant changes in the activities and impact of these formal structures between the repeated waves of data collection.

Provider Knowledge and Use of Trauma-informed Services. Data gathered via the TIS Survey are analyzed using descriptive and inferential statistics. To the extent possible, survey items are tallied and scored, and then compared as a function of funding cohort, target population of the center, and demographic characteristics of the providers. As more data are collected, longitudinal data analysis techniques will be used to assess change in the knowledge base, use, and satisfaction among providers over the developmental lifespan of the centers.

Product Development and Dissemination. Descriptive statistics and thematic qualitative analysis are used to analyze data collected from the PDDS, semistructured interviews, and case studies. Given the repeated measures design, as more data are collected it will be possible to evaluate changes across time in product/innovation development and dissemination activities, both from the perspective of individual centers and from that of collaborative workgroup coordinators. This will depend on the stability of the items and constructs underlying the measures, which will be evaluated at each data collection interval. In addition, data from this component of the evaluation can be included in hierarchical models testing relationships between center-level variation and descriptive and outcome information.

Qualitative analysis methods of collaborative workgroup coordinator interview and case study data are used to evaluate the context of product/innovation development and dissemination. A specific focus of these analyses is to identify center-level best practices that facilitate the development and dissemination of products/innovations.

National Impact. Data from the Web-based National Impact Survey are initially analyzed using descriptive statistics. Internal consistency reliability analysis, as well as analytic techniques to assess validity (i.e., confirmatory factor analysis), was performed prior to further analysis. Survey items are tallied and scored. The key items measuring the dependent variable (i.e., the extent to which agencies use trauma-informed policies and practices) are rated by the respondent as dichotomous values (Yes/No). These dichotomous values are totaled for each agency respondent to produce an index of “Use of Trauma-informed Policies and Practices.” Similarly, items measuring the major independent variable (i.e., whether information/knowledge from or collaboration with NCTSN centers contributed to the agencies’ policies or practices) are also assessed as dichotomous items and totaled for each agency respondent to produce an index of “Total Exposure to the NCTSN.” Data are aggregated at the service sector level (i.e., mental health, child welfare, education, juvenile justice) and at the State level.
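A minimal sketch of this index construction is shown below, using pandas and hypothetical item names; the actual survey items, wording, and scoring details differ.

    import pandas as pd

    # Hypothetical agency responses: 1 = Yes, 0 = No on dichotomous items.
    responses = pd.DataFrame({
        "agency_id":      [101, 102, 103],
        "sector":         ["mental health", "child welfare", "education"],
        "screens_trauma": [1, 0, 1],  # feeds the policies/practices index
        "trains_staff":   [1, 1, 0],
        "uses_materials": [1, 0, 0],  # feeds the NCTSN exposure index
        "collaborates":   [1, 1, 0],
    })

    practice_items = ["screens_trauma", "trains_staff"]
    exposure_items = ["uses_materials", "collaborates"]

    # Total the Yes responses into the two indices described above.
    responses["use_of_tip"] = responses[practice_items].sum(axis=1)
    responses["nctsn_exposure"] = responses[exposure_items].sum(axis=1)

    # Aggregate the indices at the service sector level.
    print(responses.groupby("sector")[["use_of_tip", "nctsn_exposure"]].mean())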

Separate analyses will be conducted for the two sets of respondents (i.e., mental health organizations and other service sectors). Descriptive and inferential statistics are used to compare scores on the index of “Use of Trauma-informed Policies and Practices” over time and as a function of characteristics of the responding organizations (i.e., private or public, major functions of organizations), service sector, State, and exposure to the NCTSN.

Hierarchical statistical models are used to examine associations between scores on the index of “Use of Trauma-informed Policies and Practices” and independent variables such as State, service sector, characteristics of the responding organizations, the extent to which agency respondents have base knowledge/use of trauma-informed care, exposure to the NCTSN, and wave of administration.

A17. Display of Expiration Date

All data collection instruments will display the expiration date of OMB approval.

A18. Exceptions to Certification Statement

This collection of information involves no exceptions to the Certification for Paperwork Reduction Act Submissions. The certifications are included in this submission.

  1. Statistical Methods

B1. Respondent Universe and Sampling Methods

Descriptive and Clinical Outcomes. Descriptive and clinical outcomes data are collected on all children who enter outpatient or inpatient trauma-related mental health services, and a subset of these cases is targeted for subsequent 3-month follow-up intervals, for up to 1 year. Given the diversity of trauma target populations and geographic locations of the CTSs, the cross-site evaluation team enrolls children and families from each CTS and TSA into the outcome study. Although it is difficult to obtain an accurate count across currently funded centers, and the number of children served by each center varies, the number enrolled and followed across time should, on average, total at least 100 children over the 4 years of funding for an individual center (i.e., approximately 25 per year). For centers serving a higher number of children per year, it is recommended that the descriptive and clinical outcome measures be collected on all children at entry into services and that a sampling plan be developed for determining the group of participants who will participate in the longitudinal outcomes study and be followed across time. The cross-site evaluation team provides technical assistance in developing the sampling plan on the basis of the projected numbers of children who will be enrolled into services yearly and monitors the implementation of each sampling plan.

The target for all centers is 25 children per year for 4 years (i.e., a total of 100 children per center over the 4-year course of funding). Of the average 44 centers active during any given year, approximately 75% have been found eligible to participate in the Descriptive and Clinical Outcomes Study based on their grant-funded activities. This target should minimally result in approximately 2,475 participants (Table 4) over the course of the 3 years of data collection for which clearance is being requested. With an assumed follow-up participation rate of 95% at each data collection interval, data would be collected during the clearance period from 2,351 participants at 3 months, 2,234 at 6 months, 2,122 at 9 months, and 2,016 at 12 months. The overall participant retention rate under this design is estimated to be 81%.
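The wave-by-wave figures follow directly from the 95% retention assumption, as this short arithmetic sketch verifies:

    # Projected wave sizes under 95% retention per 3-month wave.
    baseline = 2475
    rate = 0.95
    for wave, label in enumerate(["3 months", "6 months", "9 months", "12 months"], start=1):
        print(f"{label}: {round(baseline * rate ** wave)} participants")
    print(f"Overall retention at 12 months: {rate ** 4:.0%}")  # about 81%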

TABLE 4

Aggregate Minimal Target Levels of Recruitment for the Outcomes Study: Participants by Center Funding and Year

Cohort                           Fiscal Year  Fiscal Year  Fiscal Year  Fiscal Year  Total
                                     2009         2010         2011         2012
2005 centers (19 CTSs, 8 TSAs)        500          n/a          n/a          n/a      500
2007 centers (10 CTSs, 5 TSAs)        275          275          275          n/a      825
2008 centers (8 CTSs)                 150          150          150          150      600
Yearly totals                         925          425          425          150    1,925

Note: The estimates presented in this table are based on centers already funded at the time of the submission of this reapproval package. If additional centers are funded in 2009 and 2010, recruitment numbers would increase in subsequent years. As mentioned above, all burden estimates are based on an average of 44 active centers per year, with an estimated 75 percent of centers participating in the Descriptive and Clinical Outcomes Study.

On an aggregate level, this design provides sufficient power to test hypotheses of interest for demographic and treatment variables and short-term outcomes on a yearly basis and across the multiple years of the evaluation. These analyses could be performed with a three-level HLM, where level 1 would represent linear change in clinical outcomes over time, level 2 would represent the influence of individual child characteristics (e.g., gender, age, race) on variability in change rates, and level 3 would represent the influence of center-level characteristics on variability in change rates. A power analysis was conducted for such a three-level model assuming the inclusion of 33 centers with 100 children per center and five waves of data collection (baseline and 3, 6, 9, and 12 months). Although the actual number of centers participating in each year will vary depending on future grant awards, 33 centers is a good approximation. Parameters were estimated using CBCL Externalizing Problems T-scores as the outcome variable and assuming that 10% of the variation in change rates is between-center variation. The model would have 80% power to detect an effect size of .33 at the .05 level of significance.

A power analysis was also conducted to determine whether sufficient power would exist at the individual center level to detect differences across time for individuals with scores at entry and at 3 months or the end of short-term treatment. Means and standard deviations estimated for the CBCL total score were used to conduct the analysis, with a total of 86 cases evenly split across the two levels of a categorical factor (e.g., gender, trauma type). For the repeated measures difference, a mean T-score of 67 at entry into services and a mean T-score of 62 at 3 months with a within-subjects standard deviation of 8.85 were assumed. For the group difference, mean T-scores of 65.0 and 70.0 with a between-subjects standard deviation of 11.03 were assumed. With a 2 x 2 repeated measures analysis of variance design, power was estimated at 96% to detect a small to medium effect size difference across time at the .05 level of significance, at 84% to detect a small to medium effect size difference for grouping variables, and at 83% to detect a small to medium interaction effect. Sufficient power should therefore exist to detect small to medium effect size differences from pre- to posttest, for grouping variables, and for interactions at the center level.
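The flavor of these calculations can be reproduced with standard power routines. The sketch below uses statsmodels' t-test power functions as a simplified stand-in for the repeated measures ANOVA framework actually used, so its results approximate rather than reproduce the figures reported above; the means and standard deviations are those stated in this section.

    from statsmodels.stats.power import TTestPower, TTestIndPower

    # Within-subjects contrast: entry T-score 67 vs. 3-month T-score 62,
    # within-subjects SD 8.85, 86 cases in total.
    d_within = (67 - 62) / 8.85
    power_within = TTestPower().power(effect_size=d_within, nobs=86, alpha=0.05)

    # Between-groups contrast: group means 65.0 vs. 70.0,
    # between-subjects SD 11.03, 43 cases per group.
    d_between = (70.0 - 65.0) / 11.03
    power_between = TTestIndPower().power(effect_size=d_between, nobs1=43,
                                          ratio=1.0, alpha=0.05)

    print(f"Within-subjects power: {power_within:.2f}")
    print(f"Between-groups power:  {power_between:.2f}")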

Consumer Satisfaction. The satisfaction survey (YSS-F) is conducted with all caregivers who have consented to participate in the Descriptive and Clinical Outcomes Study, which is expected to include 825 respondents per year. Justification for this sample size is provided in the Descriptive and Clinical Outcomes portion of section B1. The survey is administered to each caregiver once, at the point when the child exits services or at 6 months into service delivery (whichever occurs first), for a total of 2,475 respondents during the clearance period.

Adoption of Methods and Practices. The target population for the GAAS includes human service providers of various types (e.g., mental health providers, police, teachers, child welfare workers) affiliated, through employment or training and outreach activities, with each of the NCTSN centers; NCTSN centers’ program directors or principal investigators; and centers’ program evaluators. The respondent universe is the total number of professionals who match this description identified through recruitment efforts each year.

Most of the professionals in the target respondent group are identified through communication with the centers and the Government Project Officer, with one exception. To recruit human service providers affiliated with funded NCTSN centers through training and outreach activities, the cross-site evaluation team implements a recruitment strategy that dovetails with the collection of the TIS Survey. As part of the TIS Survey administration process, described above, survey respondents can choose to submit a contact information form with their completed survey. This form serves two purposes: the respondent is entered into a lottery for a $50 incentive, and the respondent can indicate willingness to be contacted for future cross-site evaluation surveys, such as the GAAS. All TIS Survey respondents who indicate a willingness to be contacted are included in the respondent pool for subsequent administrations of the GAAS. Although the actual number of service provider respondents and funded centers is currently unknown, the cross-site evaluation team estimates that 9,750 service providers will be identified each year through the TIS Survey process, of whom approximately 60% (i.e., 5,850) are likely to indicate a willingness to be contacted (based on data collected from February through July 2008). While all 5,850 of these service providers will be targeted to participate in the GAAS, the team estimates that approximately 20% of the respondents’ contact information, particularly e-mail addresses, will no longer be valid by the time of the annual survey administration. This yields 4,680 providers who will be invited to participate in the GAAS each year. In addition, GAAS data collected from program directors and evaluators are obtained from a census of such personnel at each of the 44 centers.
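The projected pool is a simple chain of these three assumptions:

    # Projected annual GAAS respondent pool from TIS Survey recruitment.
    identified = 9750                   # providers identified via the TIS Survey
    willing = identified * 0.60         # agree to future contact -> 5,850
    still_valid = willing * (1 - 0.20)  # contact info still valid -> 4,680
    print(f"Providers invited to the GAAS each year: {still_valid:.0f}")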

The AIFI is a separate and more intensive data collection process implemented to collect detailed information from a subset (33%) of the funded centers about a subset of practices. A purposive sample of centers is developed on the basis of GAAS data identifying the products most frequently in the process of being adopted, as well as the centers and individuals involved in adopting them. The target population for the AIFI telephone interview includes service providers and administrators employed by NCTSN centers or affiliated through training or outreach activities. The AIFI focuses on products that fall into three broad categories: (1) clinical interventions, (2) assessment measures, and (3) training or technical assistance materials. Three specific products, one from each broad category, are selected as the subject of the AIFI each year. In each year of the evaluation, a maximum of 75 respondents will be recruited to participate in the AIFIs overall, including approximately 25 respondents per AIFI category (i.e., clinical interventions, assessment measures, and training/technical assistance materials). For each of the three types of AIFIs, up to five individuals in an administrative role at the centers will be recruited to participate. Administrators may include the center’s project director (PD), principal investigator (PI), clinical supervisor, or another administrator with direct involvement in the implementation of the product of interest and knowledge about center resources and processes related to the implementation. In addition, for each of the three types of interviews, up to 20 service providers (or other staff involved in conducting training, in the case of the interviews on training materials) will be recruited to participate. The total number of individual AIFI respondents would not exceed 75 (i.e., the sum of 15 administrators and 60 providers or trainers) in any given year of the evaluation (Table 5).

TABLE 5

Respondent Participation by Interview Type

Adoption and Implementation Factors Interview

                                              Clinical        Assessment     Training/TA
                                              Interventions   Measures       Materials
NCTSN centers represented by respondents            5               5              5
Respondents: PDs/PIs or other administrators        5               5              5
Respondents: providers/other Network staff         20              20             20
Total individual respondents                       25              25             25

Total interviews: 75



Network Collaboration. The Network Survey instrument assesses collaboration by measuring the extent to which each CTS and TSA center interacts with every other center on select key Network activities. Therefore, key personnel who have knowledge of their centers’ relationships with all of the other centers are recruited to participate in the survey. This recruitment minimally includes the center director and a center associate director or project coordinator of each currently funded CTS and TSA center (N=88). As the number of alumni centers increases, it is expected that there will be a minimum of 40 alumni centers, from each of which at least one key staff member (the project director) will be recruited (N=40). The cross-site evaluation team recruits the universe of all possible respondents (i.e., two managers from every currently funded center and one from each alumni center, N=128) in order to obtain the most complete picture of the relationships among all of the centers. Omitting any centers from the network analysis could result in an inaccurate portrayal of the characteristics of the Network as a whole. Two respondents from each currently funded center and one from each alumni center maximize the chances that all intra-Network linkages are identified for each center. An 80 to 90% response rate is expected because of the specialized targeting of respondents and the methods used to maximize response rates for this Web-based survey, which include a four-stage approach composed of an advance invitation, a formal individualized invitation, and two follow-up reminders.

The CTPT assesses the activities and impact of the NCTSN’s formal collaboration structures (e.g., workgroups, committees, consortia). According to the most recent NCTSN reports, there are currently approximately 15 collaborative structures, with approximately 200 participating professionals. A listing of members of all formal collaboration structures was assembled by the NCCTS. Workgroups vary in size, depending on their purpose and scope of activities. To obtain the most comprehensive profiles of the various workgroups, the cross-site evaluation team contacts the universe of all members of each workgroup to participate in the survey. A more restricted sampling procedure would not be appropriate because each of the various collaborative structures has a unique purpose. A greater than 80% response rate is expected because of the specialized targeting of respondents and the methods to be used to maximize response rates for this Web-based survey, which includes a four-stage approach composed of an advance invitation, a formal individualized invitation, and two follow-up reminders.

The sample size for both sections of this study component should be sufficient for cross-sectional and longitudinal examinations of the frequency of linkages among centers and characteristics of the Network as a whole (Network Study) and for examination of the aggregated CTPT domains across all workgroups.

Provider Knowledge and Use of Trauma-informed Services. The target population for the TIS Survey includes human service providers of various types (e.g., mental health providers, police, teachers, child welfare workers) affiliated, through training and outreach activities, with each of the NCTSN centers. The respondent universe is the total number of human service providers trained by Network centers each year through relevant training events. All NCTSN centers that provide trainings as part of their NCTSI grant-funded activities will be targeted to distribute the TIS Survey at relevant training events. Although most NCTSN centers are involved in training activities as part of their NCTSI grant, it is anticipated that several centers will focus exclusively on other activities (e.g., service provision and data collection for clinical and evaluative purposes), will not host training events, and therefore will not participate in the distribution of the TIS Survey.

On the basis of a variety of data collection efforts undertaken separately by the NCCTS and Macro International Inc., the estimated average number of individuals trained annually by each NCTSN center is 500. Based on technical assistance contacts with centers, approximately 50% of training activities are estimated to be relevant for TIS Survey distribution (that is, the training addresses trauma-informed services rather than trauma in general), so the estimated number of individuals trained annually by each NCTSN center who will be targeted for TIS Survey administration is 250. In addition, according to data collected through cross-site evaluation monthly reports submitted to Macro by Network centers, as well as other cross-site evaluation information-gathering activities, 39 centers routinely host training events as part of NCTSI grant-funded center activities. Therefore, over the next year of the cross-site evaluation and annually thereafter, the respondent universe for the TIS Survey is estimated to be 9,750 (the product of 39, the total number of centers providing training, and 250, the average number of individuals trained per center annually who match the target respondent group of the TIS Survey).

Product Development and Dissemination. The PDDS is completed as part of centers’ quarterly progress and annual reports by project directors and staff from each center. Because the PDDS items are integrated into the NCTSN’s current required progress reporting process, a 100% response rate is expected. Collaborative workgroup coordinators from each of the estimated 15 active NCTSN workgroups are asked to participate in the workgroup coordinator telephone interview that is conducted in odd years of the evaluation. As part of the case studies conducted in even years of the evaluation, telephone or face-to-face interviews are conducted with key individuals involved in the development and dissemination of Network products and innovations. These individuals are identified following the selection of the products chosen as case studies.

National Impact. In odd years of the cross-site evaluation, the National Impact Survey is administered to organizational members of professional associations representing the mental health sector (including the National Council for Community Behavioral Healthcare, the National Association of County Behavioral Health Directors, and the National Association of State Mental Health Program Directors). The combined membership of these organizations results in a multistate sample of approximately 2,000 individuals.

In even years, the survey is administered to organizational members of professional associations representing child welfare, education, juvenile justice, health care, and crisis response sectors (e.g., the Child Welfare League, the National Association of Public Child Welfare Administrators, the National Association of State Directors of Special Education, the National Center for Juvenile Justice). The total sample anticipated from these listings is approximately 2,000 individuals.

Professional associations are being used as the portal for recruiting participants because they typically have access to the universe, or a representative cross-section, of organizations at State, county, and local levels. For example, the National Association of State Mental Health Program Directors has organizational members representing each State and Territory in the United States. Working in partnership with professional associations also makes organizations more likely to respond, knowing that the survey is endorsed by their professional association. The universe of potential respondents (i.e., the full membership of each professional association in mental health and non–mental health sectors) is recruited to solicit the broadest participation possible. An alternative sampling scheme in which mental health and other agencies across the United States were sampled at State, county, and local levels would be cost-prohibitive.

The survey invitation is directed to the executive directors or directors of children’s services who are knowledgeable about their agencies’ policies/practices and relationships with other organizations in their service systems.

An 80% response rate is targeted. To maximize response rates for this Web-based survey, the cross-site evaluation team is using a four-stage approach composed of an advance invitation, a formal individualized invitation, and two follow-up reminders. Additional strategies include offering respondents alternative ways of responding (i.e., via hard copy or telephone interview) and follow-up telephone contact with nonrespondents. The sample size should be sufficient for cross-sectional and longitudinal analyses of national impact.

B2. Information Collection Procedures

Descriptive and Clinical Outcomes. Data for the descriptive study are collected at entry into services for all children and families in the funded centers. Data for this component are collected by centers’ intake staff, who are trained by the cross-site evaluator to ensure standard collection of these data. To standardize collection across sites, centers use the Core Clinical Characteristics Forms, which were developed specifically for use within the NCTSN and have been used for clinical evaluation purposes. Data collection for the descriptive study component begins in the first year of funding for all centers.

The information collected in these questionnaires contains elements specific to the evaluation. The required descriptive information includes the following:

  • Demographic characteristics of the children and families

  • Presenting problems of the child

  • Insurance information

  • Indicators of severity of problems

  • Services received prior to entry into the program

Two additional instruments, related to refugee children who have experienced traumatic events, may be added to this study component. The NCTSN data core is currently developing two supplemental data collection instruments for use with immigrant children and families receiving services in the NCTSN. The instruments include additional domains/content areas identified by the Network’s Refugee Workgroup as important for meeting the assessment needs of agencies working with refugee children and families. These measures are still in the developmental phase. Once the instruments have been completed and incorporated into the current Core Data Set, the cross-site evaluator will submit them to OMB by memorandum during the clearance period.

Because respondents’ reading levels will vary, the instruments are administered in interview format by site staff. Clinical outcome data are collected from a sample of children (25 per year per center) and their caregivers. Following children and families every 3 months for up to 1 year captures changes in outcome after initial entry into treatment and allows an assessment of longer term impact as the child transitions out of services. The TSCC-A and the UCLA-PTSD are administered in interview format to children 7 years of age and older. The rest of the measures for this study (the CBCL and the Core Clinical Characteristics Forms [Baseline Assessment Form, Follow-up Assessment Form, General Trauma Information Form, and Trauma Detail Form]) are administered to caregivers.

Clinical, intake, and data collection staff collect data in the funded centers. In these sites, the people who collect the data depend on the resources and needs of the sites. In some settings, this includes intake staff devoted to data collection for this project and multiple other projects. Other centers choose to hire flexible part-time staff to collect data specifically for this project. In other settings, clinical staff are detailed to support data collection efforts.

The cross-site evaluator documents and monitors data collection procedures in the funded centers to ensure the greatest possible uniformity in data collection across sites. In addition, evaluation staff and data collectors are trained using standard materials.

Consumer Satisfaction. The YSS-F, which assesses satisfaction with services, is administered to caregivers who have consented to participate in the Descriptive and Clinical Outcomes Study. Survey participants are given the option of having the survey administered by telephone or mail, and administration is conducted by the Macro survey center. The survey is administered at the time the child exits services or at 6 months into services (whichever occurs first). Center staff distribute an invitation to participate to each caregiver enrolled in the longitudinal outcome study. In response to the invitation, potential respondents self-identify and provide contact information via a toll-free number or mail. Network centers send the evaluator the survey respondent ID from the invitation linked to the ID from the Descriptive and Clinical Outcomes Study, but they do not send any personal identifiers.

A mixed-mode approach is used to obtain the data, including mail and telephone versions of the survey. Respondents select their preferred method of survey administration. All administration occurs through the Macro survey center. Phone interviews are implemented using a computer-assisted telephone interviewing protocol, and each number is called up to 10 times. Mail administration occurs through a systematic protocol utilizing Dillman (2000) procedures. The cross-site evaluation team expects to contact approximately 1,100 respondents per year, with an anticipated response rate of 80%. (The Descriptive and Clinical Outcomes section in section B1 provides details on sample size and sampling strategy.)

At the time of survey administration, a consent form is read or provided to the respondent, and data collection continues only if the respondent gives verbal consent (for telephone interviews) or implied consent through completion and return of the survey (for mail surveys). For example, a consent form is included with the hard-copy version of the survey, which explains that returning the survey implies consent (Attachment 2.C.2). The survey, in both telephone and hard-copy formats, is anonymous (as described in section A10, in the Satisfaction Study section).

Adoption of Methods and Practices. For the GAAS, targeted participants include direct mental health service providers employed by NCTSN centers who work on grant activities; human service providers of various types (e.g., mental health providers, police, teachers, child welfare workers) affiliated, through training and outreach activities, with each of the NCTSN centers; NCTSN centers’ program directors or principal investigators; and centers’ program evaluators. Contact information, including current e-mail addresses, for program directors or principal investigators, program evaluators, and service providers employed by the centers is obtained through direct contact with the centers. Contact information for human service providers affiliated with the centers through training and outreach activities is obtained through the Trauma-informed Services Study; specifically, respondents are identified via the TIS Survey contact information form, which captures current contact information, including e-mail addresses.

All GAAS respondents are recruited to participate through an e-mail invitation (Attachment 4.C). The e-mail presentation and process occurs in four stages: (1) an advance invitation to participate, (2) a formal invitation that includes the Web site’s URL and unique user name and password, (3) a reminder to all respondents, and (4) a targeted reminder to nonresponders and those who have only partially completed the survey. The formal invitation explains the survey, including the voluntary nature of survey completion, privacy and anonymity of responses, and the risks, benefits, and rights as respondents. The invitation also advises the recipient that completion and submission of the survey indicate consent to participate. This invitation provides contact information for technical assistance in the case that the survey recipient has questions or desires clarification prior to participation.

To recruit respondents for participation in the AIFIs, the cross-site evaluator makes telephone calls to the project directors of centers targeted for AIFI participation to (1) explain the AIFI’s purpose and procedures, (2) invite the project director’s participation, and (3) ask that the project director identify individual center staff members who should participate. Following these telephone calls, an e-mail is sent to the project director providing background information about the purpose of the Adoption of Methods and Practices Study, the purpose of the AIFI telephone survey, the risks and benefits of participating, privacy and consent information, and respondent selection criteria (Attachment 2.C.4).

If possible, the interview with the project director will be scheduled at that time. If the project director agrees to identify additional staff members who are appropriate to participate, he or she is asked for contact information (telephone and e-mail addresses) for the individuals identified. A contact list of the AIFI respondents is developed. Telephone calls are then made to each of the potential respondents to (1) explain the AIFI’s purpose and procedures and (2) invite the potential respondent’s participation. Following these telephone calls, an e-mail is sent to each of the potential respondents providing background information about the purpose of the Adoption of Methods and Practices Study, the purpose of the AIFI telephone survey, the risks and benefits of participating, and privacy and consent information. For those agreeing to participate, the interview is scheduled.

The AIFI is a semistructured telephone interview limited to 60 minutes that is tailored to each product and the respondent type. (Interview guides are included in Attachment 3.J.) The primary topics of the interview include the means by which the respondent has been introduced to the product, factors that facilitate or hinder adoption and implementation, degree of implementation of the product, extent and nature of adaptation of the product, and approaches to evaluation of implementation processes and client outcomes.

Each of the AIFIs is conducted by telephone by a representative of the cross-site evaluation team at the scheduled time, and each is conducted with one respondent at a time. To promote consistency in the administration of the interview, interviewers are trained in the use of the interview guides, the data collection goals of the study, the products selected, privacy protocols, and data collection techniques. In advance of the interview, interviewers review relevant documents about the centers participating in the interview, including original grant applications and Network reporting forms, such as quarterly progress reports. This review ensures that the interview focuses on new information to be gathered rather than information easily accessed elsewhere. All interviewers are guided in their review of relevant material to facilitate informed discussions.

Interviews are conducted by an interviewer and a note-taker. Interviews are digitally recorded and transcribed to facilitate an accurate record. Interviewers also take written notes during the interviews for subsequent review, analysis, and summary of the information gathered, and they produce a brief summary of notes following each interview to document impressions related to the quality of the interview, initial findings, and the data collection process. Before conducting the interview, the interviewer reviews the informed consent form sent to the respondent in advance and reminds the respondent that completing the interview indicates consent.

Network Collaboration. Data collection with the Network Survey takes place in alternating odd years of the cross-site evaluation. Approximately 128 respondents are included at each measurement point, representing two key staff members from each funded CTS and TSA and one from each alumni center. Data for this portion of the Network Collaboration Study are collected by the evaluator through a Web-enabled survey.

For the CTPT, data collection takes place in alternating even years of the cross-site evaluation. Approximately 200 respondents are included at each measurement point, representing the universe of all members of the formal NCTSN collaboration structures (workgroups, committees, and consortia). Surveys are conducted by the evaluator through a Web-enabled survey.

For both surveys, respondents are recruited to participate through an e-mail invitation (Attachments 4.D and 4.E). The e-mail presentation and process occurs in four stages: (1) an advance invitation to participate, (2) a formal invitation that includes the Web site’s URL and unique user name and password, (3) a reminder to all respondents, and (4) a targeted reminder to nonresponders and those who have only partially completed the survey. The formal invitation explains the survey, including the voluntary nature of survey completion, privacy and anonymity of responses, and the risks, benefits, and rights as respondents. It also advises the recipient that completion and submission of the survey indicate consent to participate (Attachments 2.C.10 and 2.C.11). This invitation also provides contact information if the survey recipient has questions or desires clarification prior to participation.

Provider Knowledge and Use of Trauma-informed Services. Following each center-sponsored training or outreach activity focused on the dissemination of trauma-specific interventions, methods, or information for human service providers, NCTSN center trainers are asked to collect three types of information:

  • Training summary form: Center trainers are asked to complete a brief training summary form (Attachment 3.H) designed to collect information, including the training date, the number of training participants, the topic of the training, and the agency/organization affiliation of center trainees, which will reflect the type(s) of human service providers attending each training event. Because a consistent method for recording this information on a training-by-training basis did not previously exist, the training summary report fulfills a variety of needs among stakeholders (i.e., Network centers, cross-site evaluator, and SAMHSA) for data informing the reach of Network-sponsored training and outreach events.

  • TIS Surveys: Center trainers are asked to distribute the TIS Survey (Attachment 3.G) to all training participants at each training event. Trainers distribute the survey to each participant, and participants are instructed to complete the anonymous survey if they wish and to return it to the trainer. The cover letter to the survey indicates that returning the survey implies consent (Attachment 2.C.5).

  • Contact information forms: As described on the last page of the TIS Survey, the respondent is invited to participate in a lottery drawing for a $50 gift certificate from Amazon.com and to complete the contact information form if interested. The contact information form serves two purposes: it allows respondents to register for participation in the lottery drawing, and it provides an opportunity for respondents to authorize (or refuse to authorize) the use of their contact information in future national evaluation surveys regarding the implementation of trauma-informed services and practices. Center trainers are asked to collect the contact information forms separately from the TIS Surveys to protect privacy.

In summary, for each relevant training event hosted by an NCTSN center, center trainers are asked to collect the three types of information described above, ensuring that the training summary form and the TIS Surveys are collected together. The contact information forms are collected separately, and both sets of information are placed in a preaddressed, premetered envelope and mailed to Macro. The cross-site evaluator develops and provides the necessary materials (i.e., the training summary reports, the TIS Survey with contact information forms attached, premetered mailers, and instructions regarding procedures) to each participating NCTSN center.

Product Development and Dissemination. To ensure that the full range of product/innovation development and dissemination activities is captured, all project directors complete the PDDS as part of the current progress reporting process. At a minimum, these data are accessed by the cross-site evaluation team on a quarterly and an annual basis. With 44 centers completing these reports on a quarterly basis, this results in 176 completed PDDSs per year.

The evaluation team works with the NCCTS to obtain a contact list of current collaborative workgroup coordinators (chairpersons) for interviews that are conducted in odd years of the evaluation. Structured telephone interviews are scheduled with each chairperson to assess product/innovation development and dissemination activities. Estimating that there are approximately 15 active workgroups in the Network, the cross-site evaluation team anticipates conducting 15 structured telephone interviews, one with each workgroup coordinator.

Data informing the case studies are obtained from multiple sources, including document review, findings from the PDDS, NCCTS national liaisons and other representatives, and semistructured stakeholder interviews conducted during site visits. Once case study products are selected, letters are sent to project directors asking for their assistance in identifying appropriate respondents for addressing each of the assessment domains. The semistructured guides include open- and closed-ended questions and probes designed to elicit information pertaining to each of the stated research questions for this component of the evaluation. In addition, the guides end with summary questions designed to give respondents a forum to express any opinions they would like to offer regarding product/innovation development and dissemination. Evaluation staff contact participants to schedule all interviews, and consent is obtained before the interviews begin (Attachments 2.C.6 and 2.C.7).

National Impact. Data collection for this component takes place every 12 months throughout the cross-site evaluation. In odd years of the evaluation, the Web-based National Impact Survey is administered to members of professional associations representing the mental health sector (N=2,000). In even years, the survey is administered to members of professional associations representing child welfare, education, juvenile justice, health care, and crisis response sectors (N=2,000). Therefore, approximately 2,000 respondents are included at each measurement point, with an anticipated response rate of 80%. The primarily Web-based survey methods are supplemented by offering respondents options to complete the survey on paper or through a telephone interview.

Full contact information for respondents, including names, addresses, telephone numbers, and e-mail addresses, is assembled from the membership rosters of the professional associations representing mental health, child welfare, education, juvenile justice, health care, and crisis response agencies. The cross-site evaluation team coordinates with these professional associations to access their roster information.

Respondents are recruited to participate through an e-mail invitation (Attachment 4.F). The e-mail presentation and process occurs in four stages: (1) an advance invitation to participate, (2) a formal invitation that includes the Web site’s URL and unique user name and password, (3) a reminder to all respondents, and (4) a final targeted reminder to nonresponders and those who have only partially completed the survey. The formal invitation (stage 2) explains the survey, including the voluntary nature of survey completion, anonymity of responses, and the risks, benefits, and rights as respondents. This invitation also provides contact information if the survey recipient has questions or desires clarification prior to participation. The second page of the survey contains an informed consent form that asks potential respondents to certify (by checking a space for “agree” or “do not agree”) that they have read the informed consent form, understand its content, and freely agree to participate in the project (Attachment 2.C.12).

Table 6 summarizes the information collection procedures across all studies.

TABLE 6

Procedures for the Collection of Information

Descriptive and Clinical Outcomes

Measure: Core Clinical Characteristics (Baseline Assessment Form)
Indicators: Demographic information; domestic environment; insurance information; indicator of severity of problems; use of other services; problems and symptoms
Data source: Caregiver
Method: Interview
When collected: At entry into services

Measure: CBCL 1.5-5 and CBCL 6-18 (Achenbach, 2001; Achenbach & Rescorla, 2000)
Indicators: Behavioral symptoms; emotional symptoms; social competence
Data source: Caregiver
Method: Interview
When collected: At entry into services and 3, 6, 9, and 12 months after intake

Measure: TSCC-A (Briere, 1996), abbreviated for NCTSN
Indicators: Acute and chronic posttraumatic symptomatology; posttraumatic stress symptoms and symptom clusters; anxiety; depression; anger; dissociation
Data source: Child
Method: Interview
When collected: At entry into services and 3, 6, 9, and 12 months after intake

Measure: UCLA-PTSD (Rodriguez, Steinberg, et al., 1999)
Indicators: Exposure to traumatic events; DSM-IV PTSD symptoms
Data source: Child
Method: Interview
When collected: At entry into services and 3, 6, 9, and 12 months after intake

Measure: Core Clinical Characteristics (Baseline Assessment Form and Follow-up Assessment Form)
Indicators: Inpatient and residential services; outpatient therapy; clinicians/providers; techniques and activities; primary treatment(s)
Data source: Caregiver
Method: Interview
When collected: At entry into services and 3, 6, 9, and 12 months after intake

Measure: Core Clinical Characteristics (General Trauma Information Form and Trauma Detail Form)
Indicators: Trauma type; age experienced; exposure type; chronicity of exposure; setting and perpetrator(s)
Data source: Caregiver
Method: Interview
When collected: At entry into services and 3, 6, 9, and 12 months after intake

Satisfaction Study

Measure: YSS-F
Indicators: Access; participation in treatment; cultural sensitivity; appropriateness/client satisfaction; perceived outcomes
Data source: Caregiver
Method: Telephone or mailed survey
When collected: Ongoing, after 6 months of services or completion of services, whichever comes first

Knowledge and Use of Trauma-informed Services

Measure: TIS Survey
Indicators: Training satisfaction; ratings of knowledge enhancement due to training; providers’ predicted use of knowledge gained through training
Data source: Service providers
Method: Web-based survey
When collected: Ongoing, after center-sponsored training and outreach events

Product/Innovation Development and Dissemination

Measure: PDDS
Indicators: Network products (type, audiences); development and dissemination stages
Data source: Project director/staff
Method: Survey
When collected: Quarterly and as part of the combined fourth quarter/annual report

Measure: Collaborative workgroup coordinator interviews
Indicators: Network products (type, audiences); stage of development and dissemination, facilitators, and barriers; development process; dissemination process; membership and members’ involvement in development and dissemination; role of NCCTS and NCTSN
Data source: Workgroup coordinators (chairpersons)
Method: Telephone interview
When collected: Odd years of the evaluation

Measure: Case study interviews
Indicators: Network products; stage of development and dissemination, facilitators, and barriers; development process; dissemination process; role of caregivers, staff, and Network partners
Data source: Persons involved in product development and dissemination
Method: Telephone or in-person interview
When collected: Even years of the evaluation

Adoption of Methods and Practices

Measure: GAAS
Indicators: Number of unique products indicated by all respondents; distribution of adopted products; distribution and characteristics of type of staff; time required to adopt practices and stages of practice adoption; distribution of methods of implementation by product type; ratings of degree of adoption
Data source: Direct service providers, direct service supervisors, program evaluators, program directors
Method: Web-based survey
When collected: Annually

Measure: AIFI telephone interview
Indicators: Practice implementation history and status; organizational culture and characteristics; resources; internal support infrastructure; Network support; past experience; organizational readiness; staff attitudes (appeal, likelihood of adoption, openness, divergence from current practice); identification of users of products outside the Network
Data source: Direct service providers, direct service supervisors, program evaluators, program directors, non-Network direct service providers, non-Network program directors
Method: Telephone interview
When collected: Annually

Network Collaboration

Measure: Network Survey
Indicators: Frequency and type of linkages among all NCTSN centers in the following areas: governance/decision making; information sharing and coordination of activities; product development; product dissemination and adoption; training and technical assistance; factors that facilitate and inhibit collaboration
Data source: NCTSN center director and center associate director
Method: Web-based survey
When collected: Alternate odd years of the evaluation

Measure: CTPT
Indicators: Activities and impact of the NCTSN formal collaboration structures (workgroups, committees, and consortia) in the following domains: membership activities; accomplishments; formalization; leadership; communication; vision; decision making; resource allocation; understanding/valuing
Data source: All members of formal NCTSN workgroups
Method: Web-based survey
When collected: Alternate even years of the evaluation

National Impact

Measure: Web-based National Impact Survey
Indicators: Agencies’ familiarity with NCTSN centers and types of activities in which agencies have collaborated with NCTSN centers; knowledge about the consequences of trauma on child development, treatment needs, and interventions; extent to which agencies use trauma interventions; existence of policies and procedures related to screening, assessment, and treatment; provision of specialized services and use of evidence-based treatments; existence of plans for developing specialized services; use of specialized training materials and type of training materials used; routine collection and management of data related to assessment, treatment, service utilization, and cost; funding mechanisms; advocacy and information dissemination channels; whether information/knowledge from or collaboration with NCTSN centers contributed to agencies’ trauma-informed policies, programs, and practices
Data source: Executive directors of agency representatives in the mental health, child welfare, education, and juvenile justice sectors
Method: Web-based survey
When collected: Odd years for mental health sector representatives; even years for other child-serving sector representatives

B3. Methods to Maximize Response Rates

Center evaluators are responsible for longitudinal data collection for the Descriptive and Clinical Outcomes Study in their community. The cross-site evaluator provides resources and technical assistance to aid local evaluators in maximizing response rates. This is done by providing the following: (1) a data collection procedures manual, (2) regional and individual site-level trainings, (3) evaluation workshops at annual national meetings, (4) one-on-one contact with cross-site evaluation liaisons, (5) regular teleconferences and site visits throughout the evaluation period, (6) forums for cross-site facilitated discussions, (7) reading materials, and (8) additional guidance and information, as questions arise. In addition, the cross-site evaluator offers support related to participant tracking to ensure that site evaluators are aware when an interview is due for completion.

For data collection efforts with children and caregivers, center evaluators are encouraged to collect extensive contact information from participants in the Descriptive and Clinical Outcomes Study to facilitate reaching them for follow-up data collection interviews. This includes the names, telephone numbers, and addresses of close friends and family members who are likely to know where the participants are if they move. Efforts to contact respondents for follow-up data collection begin at least 1 month before the follow-up interview is due. At the time of follow-up data collection, staff are trained to attempt to contact respondents at different times of the day and week using a variety of methods (e.g., telephone calls, mailed postcards). These efforts continue until contact has been made or it is determined that a family has refused further participation or cannot be found.
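
To make this tracking window concrete, the following is a minimal sketch, in Python, of how a center's participant-tracking system might compute when contact efforts should begin for each follow-up wave. The 3-month wave structure reflects the administration schedule described in section B4 (intake, then every 3 months up to 12 months); the function and field names are hypothetical, not part of the evaluation's actual tracking system.

```python
from datetime import date

from dateutil.relativedelta import relativedelta


def followup_schedule(intake: date) -> list[dict]:
    """Compute follow-up interview due dates and the dates on which
    contact efforts should begin (at least 1 month before each due date).
    Assumes the 3-month follow-up cycle used by the Descriptive and
    Clinical Outcomes Study measures (every 3 months, up to 12 months)."""
    schedule = []
    for months in (3, 6, 9, 12):
        due = intake + relativedelta(months=months)
        schedule.append({
            "wave": f"{months}-month follow-up",
            "begin_contact": due - relativedelta(months=1),
            "interview_due": due,
        })
    return schedule


# Example: a child enrolled on March 1, 2009
for wave in followup_schedule(date(2009, 3, 1)):
    print(wave["wave"], wave["begin_contact"], wave["interview_due"])
```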

The cross-site evaluator encourages centers to use the following strategies in their data collection process in order to increase response rate:

  • Provide an incentive payment to caregivers and youth who participate in the Descriptive and Clinical Outcomes Study, including payments at each data collection point. These incentives will be provided by individual centers rather than the cross-site evaluator.

  • Provide an incentive to caregivers who participate in the Satisfaction Study.

  • Administer the instruments to children and their caregivers at times of their choice, and administer multiple instruments at one time to reduce the number of interviews.

  • Develop a close working relationship between the data collection staff and providers at each center to facilitate tracking.

  • When available, administer instruments in English or Spanish to meet the needs of diverse communities and remove language barriers in completing the surveys.

  • Provide English- and Spanish-speaking interviewers to assist with administration of instruments; for other languages, when possible, link in an online interpreter after the interview has been initiated.

  • Conduct follow-up and informational mailings throughout the study period to maintain contact with study participants.

  • Employ proven tracking techniques (e.g., request address corrections from the post office for forwarded mail, use CD-ROM listings of names and addresses, employ locator services to search for respondents).

  • Provide families and center staff with useful feedback on data obtained through the evaluation activities; this feedback offers insight into the progress and treatment of children in their center and assists them in planning and service delivery.

Data collection for the Provider Knowledge and Use of Trauma-informed Services Study is managed by the cross-site evaluation team, with help from center trainers in distributing and submitting surveys after training events. The cross-site evaluation team assists centers in maximizing response rates by doing the following:

  • Providing an incentive payment to one survey respondent in every training event (selected by lottery).

  • Providing thorough technical assistance and guidance to centers in the administration of the survey. This includes the provision of talking points, which the center trainers can use to introduce the survey after training events.

Data collection in all other studies (i.e., all studies except the Descriptive and Clinical Outcomes Study and the Provider Knowledge and Use of Trauma-informed Services Study) is conducted by the cross-site evaluation team. Efforts to maximize response rates in the remaining studies will include the following:

  • Informing center management and evaluators about the cross-site evaluation components through Network communications and the cross-site evaluation technical assistance trainings. These trainings will include information about the cross-site evaluation components, their importance, and the expectations regarding center participation in the evaluation.

  • Providing ongoing technical assistance in order to identify specific procedures that will improve participation of specific sites in all aspects of the evaluation.

  • Administering Web-based surveys and telephone interviews.

  • Using the Dillman (2000) method for mail and Internet surveys to recruit respondents for all Web-based surveys. This method involves mailing a presurvey notification explaining that recipients will be asked to participate in the survey, followed 1 week later by an invitation containing an incentive and directions for logging on to a Web site to complete the Web-enabled survey. The e-mail presentation and process occur in four stages: (1) an advance invitation to participate, (2) a formal invitation that includes the Web site’s URL and a unique user name and password, (3) a reminder to all potential respondents, and (4) a final targeted reminder to nonresponders or people who have not completed the survey (see the sketch following this list).

  • Sharing, with center management and evaluators, nonidentifying site-specific data with preliminary evaluation results.

  • Incorporating preliminary evaluation findings into technical assistance efforts with sites.
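
As a minimal sketch of the four-stage Dillman-style e-mail sequence described in the list above, the following Python fragment expands a survey launch date into dated mailings. Only the 1-week gap between the advance notice and the formal invitation is specified in the text; the offsets assumed for the two reminder stages are illustrative, as are the function and variable names.

```python
from datetime import date, timedelta

# (stage name, days after launch, audience); timing for the last two
# stages is an assumption, not taken from the supporting statement.
STAGES = [
    ("advance invitation to participate", 0, "all potential respondents"),
    ("formal invitation (URL + unique user name/password)", 7, "all potential respondents"),
    ("reminder", 14, "all potential respondents"),
    ("final targeted reminder", 21, "nonresponders/noncompleters"),
]


def contact_plan(launch: date) -> list[tuple[str, date, str]]:
    """Expand the four-stage e-mail sequence into dated mailings."""
    return [(name, launch + timedelta(days=offset), audience)
            for name, offset, audience in STAGES]


for name, when, audience in contact_plan(date(2009, 4, 6)):
    print(f"{when}: send {name} to {audience}")
```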

It is expected that the PDDS will have a 100% response rate because this measure is integrated into the existing required quarterly and annual progress reporting system employed by the Network.

To maximize the response rate for the Satisfaction Study, respondents with telephone numbers are called up to 10 times to increase the likelihood of reaching them. If contact has not been made after the 10 telephone attempts, a hard copy of the survey is mailed to the respondent, using the Dillman method. The survey also is sent by mail to respondents who do not have a telephone. This mixed-method approach (telephone and mail) with repeated contacts should yield a response rate of at least 80%.
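
A minimal sketch of this decision rule follows; the function and parameter names are hypothetical.

```python
def next_contact_action(has_phone: bool, phone_attempts: int) -> str:
    """Next outreach step under the Satisfaction Study protocol:
    up to 10 telephone attempts, then fall back to a mailed hard-copy
    survey; respondents without a telephone go straight to mail."""
    if has_phone and phone_attempts < 10:
        return f"telephone attempt {phone_attempts + 1} of 10"
    return "mail hard-copy survey (Dillman sequence)"


# Example: nine unsuccessful calls so far, so one attempt remains
print(next_contact_action(has_phone=True, phone_attempts=9))
```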

B4. Tests of Procedures

Descriptive and Clinical Outcomes. The measures for the descriptive and outcomes portion of the evaluation were selected through a participatory process organized by the NCCTS, involving input from funded centers through surveys, conferences, and other activities, as well as the piloting of instruments across the NCTSN. Substantial information supporting the reliability and validity of the CBCL, TSCC-A, and the UCLA-PTSD is already available from the developers of these tools. The Core Clinical Characteristics Forms (Baseline Assessment Form, Follow-up Assessment Form, General Trauma Information Form, and Trauma Detail Form) were devised by the NCCTS to assist with the clinical evaluation of children. These forms are not structured to be amenable to formal psychometric testing. All of the measures for the descriptive and clinical outcomes portion of the evaluation are available in Spanish. Additional details regarding each of the standardized measures follow.

Child Behavior Checklist for Ages 1.5–5

The CBCL 1.5-5 is designed to provide a standardized measure of symptomatology for children ages 1.5–5. The CBCL 1.5-5 has been widely used in mental health services research as well as for clinical purposes. The checklist is a caregiver report of a child’s problems, disabilities, and strengths, as well as parental concerns about the child. Caregivers report on 99 problem items by indicating whether statements describing children are not true, somewhat/sometimes true, or very/often true for their child. Caregivers are also asked three questions that allow them to describe problems, concerns, and strengths for their child. Using a national normative sample and large clinical samples to derive cross-informant syndromes, the checklist assesses children on seven syndromes: (1) emotionally reactive, (2) anxious/depressed, (3) somatic complaints, (4) withdrawn, (5) attention problems, (6) aggressive behavior, and (7) sleep problems. Although it does not yield diagnoses, the CBCL 1.5-5 provides a profile of DSM-oriented scales that “experienced psychiatrists and psychologists from ten cultures rated as being very consistent with DSM diagnostic categories” (Achenbach System of Empirically Based Assessment, 2008a). Additionally, the checklist yields scores that measure children’s internalizing, externalizing, and total problems. The CBCL 1.5-5 is available in English and Spanish.

Achenbach (1991) has reported a variety of information regarding internal consistency, test-retest reliability, construct validity, and criterion-related validity. Good internal consistency was found for the internalizing, externalizing, and total problems scales (α≥.82). The CBCL demonstrated good test-retest reliability after 7 days (Pearson’s r at or above .87 for all scales). Moderate to strong correlations with the Conners Parent Questionnaire and the Quay-Peterson scale (Pearson’s r coefficients ranged from .59 to .88) suggested the construct validity of the CBCL. The CBCL was, for most items and scales, capable of discriminating between children referred to clinics for needed mental health services and those youth not referred (Achenbach, 1991). A variety of other studies also have shown good criterion-related or discriminant validity (e.g., Barkley, 1988; McConaughy, 1993).

Interobserver agreement was evident in a meta-analysis of 119 studies that used the CBCL and the form for adolescents, the Youth Self-Report (YSR). In 269 separate samples, statistically significant correlations (using Pearson’s r) were found among ratings completed by parents, mental health workers, teachers, peers, observers, and adolescents themselves (Achenbach, McConaughy, & Howell, 1987).

The instrument has been nationally normed on a proportionally representative sample of children across income and racial/ethnic groups. Racial/ethnic differences in total and subscale scores of the CBCL disappeared when controlling for socioeconomic status, suggesting a lack of instrument bias related to racial/ethnic differences.

The CBCL provides two broadband scores (i.e., internalizing, externalizing), seven narrow-band scores (e.g., emotionally reactive, withdrawn, aggressive behavior), and a total problems score. Scales are based on ratings of 1,728 children and are normed on a national sample of 700 children. Hand- and computer-scored profiles are available. The scoring programs developed by the authors should be used to generate the scores. All grantees will be provided with a copy of the scoring program and accompanying manual, if they do not already have them. Sites will be able to contact their cross-site evaluation liaisons for more information.

Child Behavior Checklist for Ages 6–18

The CBCL 6-18, formerly the CBCL 4-18, is designed to provide a standardized measure of symptomatology for children ages 6–18. This new version of the checklist has been “updated to incorporate new normative data, include new DSM-oriented scales, and to complement the new preschool forms” (Achenbach System of Empirically Based Assessment, 2008b). The CBCL 6-18 has been widely used in mental health services research as well as for clinical purposes. The checklist is a caregiver report of social competence and behavioral and emotional problems among children and adolescents. It consists of 20 social competence items and 120 behavior problem items, which include 118 specific problems and 2 open-ended items for reporting additional problems. The social competence section collects information related to the child’s activities, social relations, and school performance. The behavior problem section documents the presence of symptoms (e.g., argumentativeness, withdrawal, aggression). Caregivers rate how true each item is for their child now or within the past 6 months using the following scale: 0=not true, 1=somewhat/sometimes true, and 2=very/often true. The CBCL 6-18 is scored on a number of empirically derived factors (Achenbach System of Empirically Based Assessment, 2008b). Although it does not yield diagnoses, the CBCL assesses children’s symptoms on a continuum and provides two broadband (i.e., internalizing and externalizing) syndrome scores, eight cross-informant syndrome scores (e.g., attention problems, rule-breaking behavior, aggressive behavior), six DSM-oriented scales, and percentiles for three competence scales (activities, social, and school). A total problems score can also be generated.

Achenbach (1991) has reported a variety of information regarding internal consistency, test-retest reliability, construct validity, and criterion-related validity. Good internal consistency was found for the internalizing, externalizing, and total problems scales (α≥.82). The CBCL demonstrated good test-retest reliability after 7 days (Pearson’s r at or above .87 for all scales). Moderate to strong correlations with the Conners Parent Questionnaire and the Quay-Peterson scale (Pearson’s r coefficients ranged from .59 to .88) suggested the construct validity of the CBCL. The CBCL was, for most items and scales, capable of discriminating between children referred to clinics for needed mental health services and those youth not referred (Achenbach, 1991). A variety of other studies also have shown good criterion-related or discriminant validity (e.g., Barkley, 1988; McConaughy, 1993).

Interobserver agreement was evident in a meta-analysis of 119 studies that used the CBCL and the form for adolescents, the YSR. In 269 separate samples, statistically significant correlations (using Pearson’s r) were found among ratings completed by parents, mental health workers, teachers, peers, observers, and adolescents themselves (Achenbach et al., 1987).

The instrument has been nationally normed on a proportionally representative sample of children across income and racial/ethnic groups, region, and urban-rural residence. The CBCL 6-18 scoring profile provides raw scores, T scores, and percentiles for three competence scales, total competence, eight cross-informant syndromes, and internalizing, externalizing, and total problems. The cross-informant syndromes scored are (1) aggressive behavior, (2) anxious/depressed, (3) attention problems, (4) rule-breaking behavior, (5) social problems, (6) somatic complaints, (7) thought problems, and (8) withdrawn/depressed. There are also six DSM-oriented scales: (1) affective problems, (2) anxiety problems, (3) somatic problems, (4) attention deficit/hyperactivity problems, (5) oppositional defiant problems, and (6) conduct problems. In constructing the DSM-oriented scales, child psychiatrists and psychologists from 16 cultures rated the consistency of checklist items with DSM-IV categories. Scales are derived from factor analyses of caregiver ratings of 4,994 clinically referred children and are normed on 1,753 children ages 6–18. The scoring programs developed by the authors should be used to generate the scores. All grantees will be provided with a copy of the scoring program and accompanying manual, if they do not already have them. Sites should contact their cross-site evaluation liaisons for more information.

UCLA PTSD Index for DSM-IV

The UCLA-PTSD screens for exposure to traumatic events and for all DSM-IV PTSD symptoms in children who report traumatic stress experiences. The measure yields preliminary PTSD diagnostic information and is keyed to DSM-IV criteria. The UCLA-PTSD can be administered to caregivers; a self-report version of the instrument also exists (Rodriguez et al., 1999). The self-report version is included in the Core Data Set. The instructions and questions should be read aloud to children under the age of 12 or to youth with known reading comprehension difficulties. Children under the age of 7 are not required to complete the form. The UCLA-PTSD is administered at intake and every 3 months, up to 12 months, to all children and adolescents ages 7–18 who are enrolled in the outcome study.



Trauma Symptom Checklist for Children—Abbreviated

The TSCC-A evaluates acute and chronic posttraumatic stress symptoms in children’s responses to unspecified traumatic events across several symptom domains. The TSCC-A is a 44-item self-report measure in which the child indicates how often he/she experiences various thoughts, feelings, and behaviors. The measure provides a means of assessing stress symptoms that do not rise to the level of PTSD diagnosis.

The TSCC-A has been standardized on racially and economically diverse children in urban and suburban environments and normed by age and sex. The instrument yields two validity scales, six clinical scales (anxiety, depression, anger, posttraumatic stress, and two dissociation subscales), and eight critical items. The 10 items related to sexual issues are not included in the abbreviated version of the TSCC (Briere, 1996). The TSCC-A is administered at intake and every 3 months, up to 12 months, to all children ages 8–16 who are enrolled in the outcome study.

Consumer Satisfaction. The YSS-F was piloted with family consumers of the NCTSI and reviewed for appropriateness of questions and response formats. The YSS-F is a consumer satisfaction instrument developed by the Mental Health Statistics Improvement Program, endorsed by the National Association of State Mental Health Program Directors, and currently adopted in roughly 20 States. The survey was born of an initiative sponsored by CMHS and was developed as a collaborative effort by the Children’s Indicator Workgroup of the Sixteen States Study and consumers. The survey instrument is designed to measure select indicators consistent with national standards for children’s mental health services, and the utility, reliability, and validity of the survey are well established. On the basis of reliability analysis from the State Indicator Pilot Project, which evaluated data from Colorado, Kentucky, Oklahoma, Texas, Virginia, and the District of Columbia, Cronbach’s alpha is .725 for the domain measuring access to services, .772 for participation in treatment, .907 for cultural sensitivity of staff, .943 for satisfaction with services, and .905 for perceived outcome of services. Therefore, it will not be necessary to conduct any new tests of the measure. The YSS-F is available in Spanish.
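
The reliability figures above are Cronbach’s alpha coefficients. As an illustration of how such a coefficient is computed from item-level responses, here is a short Python sketch using the standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score); the ratings are invented toy data, not YSS-F responses.

```python
import numpy as np


def cronbachs_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)


# Toy example: five respondents rating a four-item satisfaction domain
scores = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
])
print(round(cronbachs_alpha(scores), 3))
```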

Adoption of Methods and Practices. The GAAS was developed by the cross-site evaluator and, although based on frameworks for similar instrumentation, is designed for the unique requirements of the NCTSI cross-site evaluation. The GAAS was pretested with two centers that agreed to assist in the evaluation of the instrument, and the instrument is pilot tested each year in advance of the administration of the survey. Pretest and pilot test results have been used to refine the instructions and instrumentation.

The AIFI also was developed by the cross-site evaluator. It is less structured than the GAAS and focuses on qualitative data, although it contains some highly structured components, depending on the product being assessed. A similar pretest was performed before its first administration; pretest results were used to obtain an accurate estimate of the length of the interview and to refine the instrument to improve data collection.

Network Collaboration. The Network Survey was developed using standardized social network analysis methods (Wasserman & Faust, 1995) that have been applied extensively in the field of health services research to describe and evaluate collaboration among health and mental health service organizations (Morrissey, 1999; Valente, 1995). Because the survey adheres to conventional social network survey methods, no additional tests of the procedures are needed. The body of the survey asks respondents to select, from a listing of all NCTSN centers, the centers with which they interact in key NCTSN activities related to governance/decision making, information sharing and coordination of activities, product development, product dissemination and adoption, and training and technical assistance.
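
As an illustration of the kind of analysis these roster-style responses support, the sketch below uses the networkx library to represent reported linkages as a directed graph and to compute two standard whole-network summaries. The center names and ties are hypothetical, and this is not the cross-site evaluator's actual analysis code.

```python
import networkx as nx

# Hypothetical roster responses: ("Center A", "Center B") means that
# Center A reports interacting with Center B in a given activity domain
# (e.g., information sharing and coordination of activities).
reported_links = [
    ("Center A", "Center B"),
    ("Center A", "Center C"),
    ("Center B", "Center C"),
    ("Center D", "Center A"),
]

g = nx.DiGraph()
g.add_nodes_from(["Center A", "Center B", "Center C", "Center D"])
g.add_edges_from(reported_links)

# Density: proportion of possible directed ties that are present.
print("density:", nx.density(g))
# In-degree: how often each center is named as a partner by others.
print("in-degree:", dict(g.in_degree()))
```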

The Macro survey instrument, the Child Trauma Partnership Tool (CTPT), was adapted for the present evaluation from the Partner Participatory Assessment Tool, developed by the Centers for Disease Control and Prevention (CDC), to assess the NCTSN formal collaboration structures, such as workgroups, committees, and consortia. The CTPT was originally developed to assess workgroups in a CDC-sponsored program of community research involving universities and community agencies in the development and dissemination of culturally relevant products and messages designed to reduce the incidence of diabetes and its complications. Previous testing of the psychometrics of the instrument showed that 10 of the domains had a coefficient alpha of .700 or above, suggesting that their items were “good” measures of the domain construct; two domains had alpha values between .600 and .699, an adequate score (Dawkins, Chervin, Kelly, Rivera, & Stewart, 2007). The adapted instrument has 56 items. Respondents are asked to select the workgroup in which they have been most active during the preceding 12 months and to respond to the questions from their perspective in that workgroup, using a five-point scale to rate the workgroup’s activities and impact in the following domains: membership activities, accomplishments, formalization, leadership, communication, vision, decision making, resource allocation, and understanding/valuing.

Provider Knowledge and Use of Trauma-informed Services. Data obtained from the knowledge and use of trauma-informed services interviews and discussion groups, conducted in previous years of the evaluation, were used to develop the TIS Survey, a quantitative measure designed to assess the knowledge, attitudes, and behaviors of those providing frontline services to children and adolescents. Input also was obtained from SAMHSA and from experts in the field of trauma-informed services. After the survey was developed, it was pilot tested with a workgroup of center representatives and with cross-site evaluation and SAMHSA staff.

Product Development and Dissemination. The instruments used to describe and assess Network product development and dissemination (i.e., PDDS, workgroup coordinator telephone interviews, and case study in-person and telephone interviews) were specifically developed for the cross-site evaluation. Each instrument has been reviewed internally by cross-site evaluation staff in addition to experts in trauma-informed services within and outside of the NCTSN.

National Impact. The Web-based National Impact Survey was developed specifically for the cross-site evaluation. The instrument has been reviewed by experts in trauma-informed services within and outside of the NCTSN. Pilot testing of the paper instrument and the Web-based instrument prior to the first administration of the survey in 2006 resulted in relatively minor modifications, such as adding some response categories to the items describing agency characteristics and adding two new items related to policies and practices on seclusion and restraint. Testing of the instrument, using the first survey administration data from 2006, showed high internal consistency (Cronbach’s alpha=.89) on scales measuring agencies’ knowledge and use of trauma-informed care, as well as policies and practices supporting trauma-informed care.

B5. Statistical Consultants

The cross-site evaluator has full responsibility for the development of the overall statistical design and assumes oversight responsibility for data collection and analysis for the cross-site evaluation. Training, technical assistance, and monitoring of data collection will be provided by the cross-site evaluator. The following individual is primarily responsible for overseeing data collection and analysis:

Christine Walrath, PhD

Macro International Inc.

116 John Street, Suite 800

New York, NY 10038

(212) 941-5555

The following individuals serve as statistical consultants to this project:

Megan Brooks, MA

Macro International Inc.

3 Corporate Square, Suite 370

Atlanta, GA 30329

(404) 321-3211



Yisong Geng, PhD

Macro International Inc.

3 Corporate Square, Suite 370

Atlanta, GA 30329

(404) 321-3211



John Gilford, PhD

Macro International Inc.

3 Corporate Square, Suite 370

Atlanta, GA 30329

(404) 321-3211



Robert Stephens, MPH, PhD

Macro International Inc.

3 Corporate Square, Suite 370

Atlanta, GA 30329

(404) 321-3211



Bhuvana Sukumar, PhD

Macro International Inc.

3 Corporate Square, Suite 370

Atlanta, GA 30329

(404) 321-3211



Tom Valente, PhD

University of Southern California

School of Medicine, Department of Preventive Medicine

1000 South Fremont Avenue, Building A, Room 5133

Alhambra, CA 91803

(626) 457-6678



Christine Walrath, PhD

Macro International Inc.

116 John Street, Suite 800

New York, NY 10038

(212) 941-5555

The following agency staff member is responsible for receiving and approving contract deliverables:

Jennifer Oppenheim

Public Health Analyst

Division of Prevention, Traumatic Stress and Special Programs

Center for Mental Health Services

Substance Abuse and Mental Health Services Administration

U.S. Department of Health and Human Services

1 Choke Cherry Road, Room 6-1132

Rockville, MD 20857

(240) 276-1862

[email protected]

LIST OF ATTACHMENTS

Attachment 1 Consultation

  1. Federal/National Government Consultants

  2. Expert Methodological Consultants

  3. Cultural Competence Review Committee

  4. Family Review Committee

  5. Site Visit List

Attachment 2 Guidelines for Obtaining Consent and Model Consent Forms

  1. Guidelines for Obtaining Consent

  2. Model Script for Consent to Contact

  3. Model Consent Forms

Attachment 3 Data Collection Instruments

  1. Core Clinical Characteristics Form (Baseline and Follow-up)

  2. Trauma Information/Trauma Detail Form

  3. Child Behavior Checklist 1.5-5/6-18 (CBCL 1.5-5/6-18)

  4. UCLA-PTSD Short Form (UCLA-PTSD)

  5. Trauma Symptoms Checklist for Children-Abbreviated (TSCC-A)

  6. Youth Services Survey for Families (YSS-F)

  7. Provider Trauma-informed Services Survey (TIS)

  8. Trauma-informed Services Training Summary Form

  9. General Adoption Assessment Survey (GAAS)

  10. Adoption and Implementation Factors Interview (AIFI)

  11. Product/Innovations Development and Dissemination Survey (PDDS)

  12. Workgroup/Taskforce Coordinator Interview

  13. PDD Case Study Interview Guide

  14. Network Survey

  15. Child Trauma Partnership Tool (CTPT)

  16. National Impact Survey

Attachment 4 Introductory and Follow-up Letters, E-mails and Phone/URL Scripts

Attachment 5 References Cited in Supporting Statement



1 The fourth quarter progress report is combined with each center’s annual report as one report (4th quarter/annual report).


