Supporting Statement for the National Child Traumatic Stress Initiative Evaluation
Statistical Methods
1. Respondent Universe and Sampling Methods
Below is a summary of the respondent universe and sampling methods for the NCTSI Evaluation, organized into evaluation activities proposed to continue and expanded evaluation activities.
Evaluation Continuation
Under the currently approved OMB clearance for the CDS, descriptive and clinical outcomes data are collected on all children who enter outpatient or inpatient trauma-related mental health services. A subset of these cases is targeted for follow-up assessments at 3-month intervals for up to 1 year, whether or not the client is still receiving services. The revision requests that centers administer the 3-month follow-up assessments to all clients served, rather than a subset of clients, and limit the collection to the period during which the client is receiving treatment. Of the estimated 62 active centers during any given year, we have found that approximately 75% are eligible to participate in the CDS based on their grant-funded activities.
For the TSF, a sampling plan is not necessary, as we are attempting to document every training event provided by funded NCTSI centers. The respondents are trainers who provide trainings for NCTSI centers.
The NCTSI National Reach Survey will be administered to members of professional associations representing the mental health, child welfare, education, juvenile justice, and health care sectors. Before administering the NCTSI National Reach Survey, the OPMR for each NCTSI center will be reviewed to identify state-level organizations with which centers partner, and a list of these organizations will be compiled. NCTSI centers and the NCCTS will also be asked to identify national-level organizations from the various child-serving sectors that should be included in the respondent group for this survey. The national- and state-level organizations selected will then be contacted and asked to identify potential respondents for this survey. An estimated 2,000 individuals will be surveyed. To maximize response rates for this Web-based survey, the NCTSI evaluation team is using a $10 incentive and a four-stage approach composed of an advance invitation, a formal individualized invitation, and two follow-up reminders. Additional strategies include offering respondents alternative ways of responding (i.e., via hard copy or telephone interview) and follow-up telephone contact with nonrespondents.
Evaluation Expansion
The OPMR will be completed as part of centers’ quarterly progress and annual reports by project directors and staff from each center. Because the data are collected through the NCTSI’s current required progress reporting process, a 100% response rate is expected.
For the TSIS, a sampling plan is not necessary, as we are attempting to document every training event provided by funded NCTSI centers. All training participants will be invited to complete the TSIS.
The ETSC Survey will be administered to two broad types of respondent groups (administrators and human service providers) to assess the impact of NCTSI training and other dissemination activities on these groups, particularly the extent to which services have become evidence-based and trauma-informed as a result of the trainings or educational activities. These surveys will be conducted twice over the grant period of each NCTSI center. The inclusion criteria and recruitment methods vary between the two respondent groups:
Administrators: As part of the process of creating the allocation sample, the OPMR and other center data will be used to identify the activities undertaken and services provided by each center, including training activities and other collaborative activities involving child-serving agencies. The NCTSI centers will also be asked directly about such interactions and partnerships and will be asked to identify a contact person working within such agencies. This contact person will then be informed about the purpose of these surveys and asked to identify a suitable administrator. Data from the previous evaluation suggest that NCTSI centers usually work with at least two service systems. Assuming two administrators per NCTSI center (n=62), there will be 126 administrators overall for each administration. In addition, respondents will include administrators from the 62 currently funded NCTSI centers; thus, respondents will total 189 collectively. The survey will be administered in years 1 and 3 of an NCTSI center’s funding.
Human Service Providers: All professionals from child-serving systems who are trained by NCTSI centers (generally service providers of various types, such as mental health providers, child welfare workers, teachers, and health care workers) will be administered the provider version of the ETSC Survey at the end of each training and at 12- and 24-month follow-ups to assess self-perceived increases in knowledge and impacts on behaviors, supervision, consultation, and organizational supports for the effective delivery of evidence-based trauma treatment and trauma-informed practices. Currently, there is no consistently maintained data source tracking trainee contact information or the average number of individuals trained by provider type. To avoid the additional burden that collecting trainee contact information, maintaining records of trainees by provider type, and securing consent-to-contact forms from trainees would place on centers, this study component will use a self-identification process (i.e., the TSIS) to gather the information required to establish a sampling frame, if needed. Respondents are NCTSI center-employed clinicians and center-trained providers. It is estimated that, on average, for each of the 62 centers, four center-employed clinicians and four center-trained providers will participate in this survey, resulting in a total of 504 respondents.
Respondents for the Sustainability Survey consist of project directors and evaluators for currently funded centers and project directors for affiliate centers. All center administrators in these roles will be selected to participate in the studies. The inclusion criteria for currently funded centers will be all current evaluators and project directors from centers funded in 2008, 2009, and 2010. Affiliate participants will include project directors from all active NCTSI affiliate centers, as defined by SAMHSA, from the 2001, 2002, 2003, and 2005 cohorts. The potential number of respondents from currently funded centers will be 2 participants from 62 centers, or 126 respondents. The potential number of respondents from affiliate centers will be 1 respondent per 45 affiliate centers, or a maximum of 45 respondents. The number of respondents for both surveys will be sufficient to support descriptive, bivariate, and multivariate analyses, as well as between- and within-group comparisons.
2. Information Collection Procedures
Evaluation Continuation
CDS data are collected by individuals at the center level who may include trained data collectors or clinicians. Each center receives intensive training from the NCTSI Evaluator to ensure standard collection of these data. Because respondents’ reading levels will vary depending on age and other factors, the instruments can be either self-administered or administered in interview format by center staff, depending on the needs of the client. For example:
The TSCC-A is administered to children ages 8–16
The UCLA-PTSD is administered to children 7 years of age and older
The CDI-2S is administered to children ages 7-17
The rest of the measures for this study (the CBCL, the TSCYC, the PSI-SF, and the Core Clinical Characteristics Forms [Baseline Assessment Form, Follow-up Assessment Form, General Trauma Information Form, and Trauma Detail Form]) are administered to caregivers.
In the case of the TSF, when NCTSI center trainers conduct a training activity, they complete a TSF and submit the data electronically. If the training audience and training topics are appropriate for the NCTSI evaluation, the trainer will also invite the training participants to complete a TSIS (sign-in sheet), which is also submitted to the NCTSI Evaluator.
The NCTSI National Reach Survey will be administered by the NCTSI Evaluator through the NCTSI Evaluator’s online data collection system (see Section A.3 for more detail).
Evaluation Expansion
Similar to the NCTSI National Reach Survey, the OPMR, the ETSC Survey, and the Sustainability Survey will be administered electronically through the NCTSI Evaluator’s online data collection system (see Section A.3 for more detail). The OPMR can be accessed at any time by center administrators, and information is expected to be updated on a quarterly, annual, or one-time basis depending on the type of information being submitted. The ETSC Survey and the Sustainability Survey will be administered electronically by the NCTSI Evaluator on different data collection schedules (outlined in the section above). The Sustainability Survey for Funded Centers is accessible through a Web link that appears in the OPMR, while the Sustainability Survey for Affiliate Centers is a stand-alone Web-based survey. Respondents from funded centers will be invited to participate through the OPMR, while affiliate respondents will be sent an email invitation to participate. Respondents who prefer to submit a paper copy of any of the Web-based surveys will be provided the option of doing so.
Table 6 summarizes the information collection procedures for the forms and surveys included in the NCTSI Evaluation.
Table 6. Procedures for the Collection of Information

| Measure | Indicators | Data Source(s) | Method | When Collected |
|---|---|---|---|---|
| Core Clinical Characteristics (Baseline Assessment Form) | | Caregiver | Interview | At entry into services |
| CBCL 1.5-5 and CBCL 6-18 (Achenbach, 2001; Achenbach & Rescorla, 2000) | | Caregiver | Interview/self-administered | At entry into services and every 3 months through end of treatment |
| TSCYC (Briere, 2005) | | Caregiver of children aged 3 through 7 | Interview/self-administered | At entry into services and every 3 months through end of treatment |
| PSI-SF (Abidin, 1995) | | Caregiver of children aged 12 and under | Interview/self-administered | At entry into services and every 3 months through end of treatment |
| TSCC-A (Briere, 1996; abbreviated for NCTSI) | | Children aged 8-16 | Interview/self-administered | At entry into services and every 3 months through end of treatment |
| UCLA-PTSD (Rodriguez, Steinberg, et al., 1999) | | Children aged 7 and older | Interview/self-administered | At entry into services and every 3 months through end of treatment |
| CDI-2S (Kovacs, 1992) | | Children aged 7 through 17 | Interview/self-administered | At entry into services and every 3 months through end of treatment |
| GAIN-MSS (Dennis, Chan, & Funk, 2006) | | Children aged 12 and older | Interview/self-administered | At entry into services and every 3 months through end of treatment |
| Core Clinical Characteristics (Baseline Assessment Form and Follow-up Assessment Form) | | Caregiver | Interview | At entry into services and every 3 months through end of treatment |
| Core Clinical Characteristics (General Trauma Information Form and Trauma Detail Form) | | Caregiver | Interview | At entry into services and every 3 months through end of treatment |
| EBP and Trauma-informed Systems Change Survey (Administrator Version) | | Administrators of NCTSI centers and other child-serving systems/agencies | Survey (online, telephone, or paper and pencil) | At baseline (year 1 of NCTSI center funding) and follow-up (year 3 of NCTSI center funding) |
| EBP and Trauma-informed Systems Change Survey (Provider Version) | | Providers at NCTSI centers and other child-serving systems/agencies | Survey (online, telephone, or paper and pencil) | At the end of each training and at 12- and 24-month follow-up |
| NCTSI National Reach Survey | | Administrators or agency representatives in the mental health, child welfare, education, and juvenile justice sectors | Web-based survey | Alternating years of the NCTSI evaluation |
| Training Summary Form | | Trainers | Paper and pencil | At completion of all training events |
| Training Sign-In Sheet | Participants provide: | Participants at NCTSI-sponsored trainings | Paper and pencil | At beginning of all training events |
| Sustainability Survey for Affiliate Centers | | Project Director | Web-based survey | Annually |
| Sustainability Survey for Currently Funded Centers | | Project Director, Evaluator | Web-based survey | Annually (via the OPMR) |
| Online Performance Monitoring Report (OPMR) | | Project director/staff | Web-based survey | Quarterly and as part of the combined fourth quarter/annual report |
3. Methods to Maximize Response Rates
Local center staff members are responsible for collecting CDS data in their community. The NCTSI Evaluator provides resources and technical assistance to aid local evaluators in maximizing response rates. This is done by providing the following: (1) a data collection procedures manual, (2) regional and individual site-level trainings, (3) evaluation workshops at annual national meetings, (4) one-on-one contact with NCTSI Evaluation liaisons, (5) regular teleconferences and site visits throughout the evaluation period, (6) forums for NCTSI Evaluator-facilitated discussions, (7) reading materials, and (8) additional guidance and information as questions arise. In addition, the NCTSI Evaluator offers support related to participant tracking to ensure that local data collectors are aware when an interview is due for completion.
The NCTSI Evaluator encourages centers to use the following strategies in their data collection process to increase response rates:
Administer the instruments to children and their caregivers at times of their choice, and administer multiple instruments at one time to reduce the number of interviews.
Develop a close working relationship between the data collection staff and providers at each center to facilitate tracking.
When available, administer instruments in English or Spanish to meet the needs of diverse communities and remove language barriers in completing the surveys.
Provide English- and Spanish-speaking interviewers to assist with administration of instruments; for other languages, when possible, link in an online interpreter after the interview has been initiated.
Conduct follow-up and informational mailings throughout the study period to maintain contact with study participants.
Employ proven tracking techniques (e.g., request address corrections from the post office for forwarded mail, use CD-ROM listings of names and addresses, employ locator services to search for respondents).
Provide families and center staff with useful feedback on data obtained through the evaluation activities that will provide insight into the progress and treatments of children in their center and assist them in planning and service delivery.
Data collection for the other Web-based surveys and forms implemented as part of the NCTSI Evaluation will be managed by the NCTSI Evaluator. The NCTSI Evaluator assists centers in maximizing response rates by:
Providing a modest incentive payment to non-NCTSI survey respondents based on research suggesting that modest noncontingent cash incentives significantly increase survey response rates among mental health professionals (Hawley, Cook, & Jensen-Doss, 2009).
Providing in-depth and ongoing technical assistance and guidance to NCTSI centers to support participation in the evaluation in general and build capacity to utilize the data center and online reporting system provided by the evaluation.
Sharing nonidentifying, site-specific data and preliminary evaluation results with center management and evaluators.
Incorporating preliminary evaluation findings into technical assistance efforts with grantees.
It is expected that the OPMR will have a 100% response rate because this data collection is integrated into the existing required quarterly and annual progress reporting system employed by the Network.
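As a point of reference, and assuming the conventional simplified definition is used (the section above does not specify a formula), the response rates discussed in this section can be expressed as:

```latex
\text{Response rate} = \frac{\text{number of completed instruments or surveys}}
                            {\text{number of eligible respondents invited}} \times 100\%
```

Under this definition, the OPMR's expected 100% response rate reflects the fact that every funded center must submit the report as part of its required progress reporting.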
4. Tests of Procedures
Core Data Set
The CDS measures were selected through a participatory process involving two phases of development: (1) the original development phase in 2003-2004, which was coordinated by the NCCTS and involved input from funded centers through surveys, conferences, and other activities, as well as the piloting of instruments across the NCTSI; and (2) a more recent review in 2010, which was coordinated by the NCTSI Evaluator and conducted by the NCTSI Evaluation Steering Committee, focusing particularly on additional measures relevant to specific subpopulations not addressed by the original CDS assessment. Many of these instruments have also been endorsed by NCTSI workgroups as important to include in the CDS. Substantial information supporting the reliability and validity of the CBCL, TSCC-A, TSCYC, UCLA-PTSD, PSI-SF, GAIN-MSS, and CDI-2S is already available from the developers of these tools. The Core Clinical Characteristics Forms (Baseline Assessment Form, Follow-up Assessment Form, General Trauma Information Form, and Trauma Detail Form) were created by the NCCTS to assist with the clinical evaluation of children; these forms are not structured to be amenable to formal psychometric testing. All of the CDS measures are available in Spanish. Additional details regarding each of the standardized measures follow.
Child Behavior Checklist for Ages 1.5–5
The CBCL 1.5-5 is designed to provide a standardized measure of symptomatology for children ages 1.5–5. The CBCL 1.5-5 has been widely used in mental health services research as well as for clinical purposes. The checklist is a caregiver's report of the child's problems, disabilities, and strengths, as well as parental concerns about the child. Caregivers report on 99 problem items by indicating whether statements describing children are not true, somewhat/sometimes true, or very/often true for their child. Caregivers are also asked three questions that allow them to describe problems, concerns, and strengths for their child. Achenbach (1991) has reported a variety of information regarding internal consistency, test-retest reliability, construct validity, and criterion-related validity. Good internal consistency was found for the internalizing, externalizing, and total problems scales (α≥.82). The CBCL demonstrated good test-retest reliability after 7 days (Pearson’s r at or above .87 for all scales). Moderate to strong correlations with the Connor Parent Questionnaire and the Quay-Peterson scale (Pearson’s r coefficients ranged from .59 to .88) suggested the construct validity of the CBCL. The CBCL was, for most items and scales, capable of discriminating between children referred to clinics for needed mental health services and those youth not referred (Achenbach, 1991). A variety of other studies also have shown good criterion-related or discriminant validity (e.g., Barkley, 1988; McConaughy, 1993).
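For reference, the internal consistency and test-retest statistics cited above are standard psychometric quantities. The formulas below are the conventional definitions of Cronbach's alpha and the Pearson correlation coefficient used in these analyses; they are offered only as background and are not specific to the NCTSI evaluation or its scoring procedures.

```latex
% Cronbach's alpha for a k-item scale, where \sigma_i^2 is the variance of item i
% and \sigma_X^2 is the variance of the total scale score
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^2}{\sigma_X^2}\right)

% Pearson correlation between paired scores x_j and y_j (e.g., the two
% administrations in a test-retest design, or two instruments for convergent validity)
r = \frac{\sum_j (x_j - \bar{x})(y_j - \bar{y})}
         {\sqrt{\sum_j (x_j - \bar{x})^2}\;\sqrt{\sum_j (y_j - \bar{y})^2}}
```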
The instrument has been nationally normed on a proportionally representative sample of children across income and racial/ethnic groups. (Please note that the race variable from the CBCL instrument is not used to score the instrument for the NCTSI evaluation. The race variable from the Core Clinical Characteristics Form is used. Please see Attachment B for more information regarding this.) Racial/ethnic differences in total and subscale scores of the CBCL disappeared when controlling for socioeconomic status, suggesting a lack of instrument bias related to racial/ethnic differences.
The CBCL provides two broadband scores (i.e., internalizing, externalizing), seven narrow-band scores (e.g., emotionally reactive, withdrawn, aggressive behavior), and a total problems score. Scales are based on ratings of 1,728 children and are normed on a national sample of 700 children. Hand- and computer-scored profiles are available. The scoring programs developed by the authors should be used to generate the scores. All grantees will be provided with a copy of the scoring program and accompanying manual, if they do not already have them. Sites will be able to contact their NCTSI Evaluation liaisons for more information.
Child Behavior Checklist for Ages 6-18
The CBCL 6-18, formerly the CBCL 4-18, is designed to provide a standardized measure of symptomatology for children ages 6–18. This version of the checklist has been “updated to incorporate new normative data, include new DSM-oriented scales, and to complement the new preschool forms” (Achenbach System of Empirically Based Assessment, 2008b). The CBCL 6-18 has been widely used in mental health services research as well as for clinical purposes. The checklist is a caregiver report of social competence and behavioral and emotional problems among children and adolescents. It consists of 20 social competence items and 120 behavior problem items, which include 118 specific problems and 2 open-ended items for reporting additional problems. The social competence section collects information related to the child’s activities, social relations, and school performance. The competence items had not been collected as part of the CDS in the past, though many grantees had opted to collect the data for local use. Going forward, the CDS will include these competence items as a measure of resilience, while additional resilience measures are being explored. The behavior problem section documents the presence of symptoms (e.g., argumentativeness, withdrawal, aggression). The CBCL 6-18 is scored on a number of empirically derived factors (Achenbach System of Empirically Based Assessment, 2008b). Although it does not yield diagnoses, the CBCL assesses children’s symptoms on a continuum and provides two broadband (i.e., internalizing and externalizing) syndrome scores, eight cross-informant syndrome scores (e.g., attention problems, depressive mood, conduct problems), six DSM-oriented scales, and percentiles for three competence scales (activities, social, and school). A total problems score can also be generated.
Achenbach (1991) has reported a variety of information regarding internal consistency, test-retest reliability, construct validity, and criterion-related validity. Good internal consistency was found for the internalizing, externalizing, and total problems scales (α≥.82). The CBCL demonstrated good test-retest reliability after 7 days (Pearson’s r at or above .87 for all scales). Moderate to strong correlation with the Connor Parent Questionnaire and the Quay-Peterson scale (Pearson’s r coefficients ranged from .59 to .88) suggested the construct validity of the CBCL. The CBCL was, for most items and scales, capable of discriminating between children referred to clinics for needed mental health services and those youth not referred (Achenbach, 1991). A variety of other studies also have shown good criterion-related or discriminant validity (e.g., Barkley, 1988; McConaughy, 1993).
The instrument has been nationally normed on a proportionally representative sample of children across income and racial/ethnic groups, region, and urban-rural residence. (Please note that the race variable from the CBCL instrument is not used to score the instrument. The race variable from the Core Clinical Characteristics Form is used. Please see Attachment B for more information regarding this.)

The CBCL 6-18 scoring profile provides raw scores, T scores, and percentiles for three competence scales, total competence, eight cross-informant syndromes, and internalizing, externalizing, and total problems. The cross-informant syndromes scored are (1) aggressive behavior, (2) anxious/depressed, (3) attention problems, (4) rule-breaking behavior, (5) social problems, (6) somatic complaints, (7) thought problems, and (8) withdrawn/depressed. There are also six DSM-oriented scales: (1) affective problems, (2) anxiety problems, (3) somatic problems, (4) attention deficit/hyperactivity problems, (5) oppositional defiant problems, and (6) conduct problems. In constructing the DSM-oriented scales, child psychiatrists and psychologists from 16 cultures rated the consistency of checklist items with DSM-IV categories. Scales are derived from factor analyses of caregiver ratings of 4,994 clinically referred children and are normed on 1,753 children ages 6–18. The scoring programs developed by the authors should be used to generate the scores. All grantees will be provided with a copy of the scoring program and accompanying manual, if they do not already have them. Sites should contact their NCTSI Evaluation liaisons for more information.
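The T scores referenced above are standard normed scores with a mean of 50 and a standard deviation of 10 in the normative sample. As a simplified, linear illustration of that metric only (the authors' scoring programs, as noted above, should still be used to generate actual scores), the transformation from a raw score X is:

```latex
% z standardizes the raw score X against the normative sample's mean and SD;
% T rescales z to a mean of 50 and a standard deviation of 10
z = \frac{X - \mu_{\text{norm}}}{\sigma_{\text{norm}}}, \qquad T = 50 + 10z
```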
UCLA PTSD Index for DSM-IV
The UCLA-PTSD screens for exposure to traumatic events and for all DSM-IV PTSD symptoms in children who report traumatic stress experiences. The measure yields preliminary PTSD diagnostic information and is keyed to DSM-IV criteria. The UCLA-PTSD can be administered to caregivers; a self-report version of the instrument also exists (Rodriguez et al., 1999). The self-report version is included in the Core Data Set. The instructions and questions should be read aloud to children under the age of 12 or to youth with known reading comprehension difficulties. Children under the age of 7 are not required to complete the form. The UCLA-PTSD is administered at intake and every 3 months, up to 12 months, to all children and adolescents ages 7–18 who are enrolled in the outcome study.
Trauma Symptom Checklist for Children—Abbreviated
The TSCC-A evaluates acute and chronic posttraumatic stress symptoms in children’s responses to unspecified traumatic events across several symptom domains. The TSCC-A is a 44-item self-report measure in which the child indicates how often he/she experiences various thoughts, feelings, and behaviors. The measure provides a means of assessing stress symptoms that do not rise to the level of PTSD diagnosis.
The TSCC-A has been standardized on racially and economically diverse children in urban and suburban environments and normed by age and sex. The instrument yields two validity scales, six clinical scales (anxiety, depression, anger, posttraumatic stress, and two dissociation subscales), and eight critical items. The 10 items related to sexual issues are not included in the abbreviated version of the TSCC (Briere, 1996). The TSCC-A is administered at intake and every 3 months, up to 12 months, to all children ages 8–16 who are enrolled in the outcome study.
Trauma Symptom Checklist for Young Children
The TSCYC (Briere, 2005) was developed to be the first fully standardized and normed broadband trauma measure for children as young as 3 years of age. Tested by clinicians and researchers throughout North America, the TSCYC is a 90-item caretaker-report instrument with separate norms for males and females in three age groups: 3-4 years, 5-9 years, and 10-12 years. Caretakers rate each symptom on a 4-point scale according to how often the symptom has occurred in the previous month. Unlike most other caretaker-report measures, the TSCYC contains specific scales to ascertain the validity of caretaker reports (Response Level and Atypical Response) and provides norm-referenced data on the number of waking hours the caretaker spends with the child in the average week (0-1 hours to Over 60 hours).
The TSCYC contains eight Clinical scales: Anxiety, Depression, Anger/Aggression, Posttraumatic Stress-Intrusion, Posttraumatic Stress-Avoidance, Posttraumatic Stress-Arousal, Dissociation, and Sexual Concerns, as well as a summary posttraumatic stress scale (Posttraumatic Stress-Total). These scales provide a detailed evaluation of posttraumatic stress, as well as information on other symptoms found in many traumatized children. The PTSD Diagnosis Worksheet incorporates information from the TSCYC to assist the user in evaluating PTSD criteria in younger children and provides a possible PTSD diagnosis in children 5 years of age or older (sensitivity = .72, specificity = .75). The TSCYC is appropriate for English-speaking caretakers, including those who have a relatively low reading level (Flesch-Kincaid score = 6.8).
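The sensitivity and specificity reported for the PTSD Diagnosis Worksheet are standard classification accuracy statistics. For reference, with TP, FN, TN, and FP denoting true positives, false negatives, true negatives, and false positives relative to a criterion diagnosis:

```latex
\text{Sensitivity} = \frac{TP}{TP + FN} = .72, \qquad
\text{Specificity} = \frac{TN}{TN + FP} = .75
```

In other words, the worksheet correctly flags about 72% of children who meet PTSD criteria and correctly rules out about 75% of those who do not.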
Parenting Stress Index Short Form
The Parenting Stress Index (PSI) (Abidin, 1995) is designed for the early identification of parenting and family characteristics that fail to promote normal development and functioning in children, of children with behavioral and emotional problems, and of parents who are at risk for dysfunctional parenting. It can be used with parents of children as young as 1 month. Although its primary focus is on the preschool child, the PSI can be used with parents whose children are 12 years of age or younger. The PSI Short Form (PSI-SF) is a direct derivative of the PSI full-length test. All 36 items on the Short Form are contained on the Long Form with identical wording and are written at a 5th-grade reading level, for parents of children 12 years and younger. The PSI-SF yields a Total Stress score from three scales: Parental Distress, Parent-Child Dysfunctional Interaction, and Difficult Child. Principal components factor analysis with a varimax rotation was conducted, and items were retained based on the criterion of having factor loadings >.4 on only one factor (although some exceptions were made to this criterion). The PSI-SF scales correlate highly with the corresponding full-length PSI scales: Total Stress with Total Stress = .94, Parental Distress with Parent Domain = .92, and Difficult Child with Child Domain = .87.
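As context for interpreting these correlations, the proportion of variance shared between each short-form scale and its full-length counterpart is the square of the correlation:

```latex
r^2_{\text{Total Stress}} = .94^2 \approx .88, \qquad
r^2_{\text{Parental Distress}} = .92^2 \approx .85, \qquad
r^2_{\text{Difficult Child}} = .87^2 \approx .76
```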
Children’s Depression Inventory-2 Short
Modeled on the Beck Depression Inventory and designed for school-aged children and adolescents (ages 7-17 years), the CDI (Kovacs, 1992) is a self-report, symptom-oriented depression scale with a 1st-grade reading level. It has 27 items, each of which consists of three choices. The child or adolescent is instructed to select the one sentence for each item that best describes him/her over the past 2 weeks. The CDI provides a Total score, as well as five empirically developed factor scales that have been normed according to gender and age: Negative Mood, Interpersonal Problems, Ineffectiveness, Anhedonia, and Negative Self-Esteem. The CDI is appropriate to use when factor scale scores are desired, a more complete description of the child's depressive symptoms is needed, or more extensive clinical information is required. The CDI can be used for clinical and research purposes. Because it assesses various areas of functioning, the CDI facilitates the multifaceted evaluation of the child or adolescent. Follow-up administrations can help in the evaluation of remediation programs or in measuring treatment effectiveness. The normative sample used for scoring the CDI was divided into groups based on age (ages 7–11 and 12–17) and gender. The normative sample includes 1,266 public school students (592 boys, 674 girls), 23% of whom were African-American, American Indian, or Hispanic in origin. Twenty percent of the children came from single-parent homes. The internal consistency coefficients range from .71 to .89, and the test-retest coefficients range from .74 to .83 (2- to 3-week interval).
For the Core Data Set, the CDI-2 Short Form will be used. The CDI-2S is an efficient screening measure that contains 12 items and takes about half the time of the full-length version to administer. The CDI-2S has excellent psychometric properties and yields a Total Score that is generally very comparable to the one produced by the full-length version.
Global Appraisal of Individual Needs (GAIN) Modified Short Screener – 5 minutes
The 5-minute GAIN-Short Screener (GAIN-SS) is designed primarily as a screener in general populations, ages 12 and older, to quickly and accurately identify clients who have 1 or more behavioral health disorders (e.g., internalizing or externalizing psychiatric disorders, substance use disorders, or crime/violence problems). It also serves as an easy-to-use quality assurance tool across diverse field-assessment systems for staff with minimal training or direct supervision, and serves as a periodic measure of change over time in behavioral health. For the Core Data Set, the substance abuse scale from the Short Screener will be used, in combination with several GAIN items on types of substances used, to make up the GAIN-MSS.
Dennis, Chan, and Funk (2006) found that, for both adolescents and adults, the 20-item total disorder screener (TDScr) and its four 5-item sub-screeners (internalizing disorders, externalizing disorders, substance disorders, and crime/violence) have good internal consistency (alpha of .96 on the total screener), were highly correlated (r = .84 to .94) with the 123-item scales in the full GAIN-I, had excellent sensitivity (90% or more) for identifying people with a disorder, and had excellent specificity (92% or more) for correctly ruling out people who did not have a disorder. A confirmatory factor analysis of the structure of the GAIN-SS showed that it is also consistent with the full GAIN model after allowing adolescent and adult path coefficients to vary and allowing cross-loading paths between conduct disorder items and crime/violence items.
Other NCTSI Evaluation Forms and Surveys
The NCTSI National Reach Survey and the TSF have been implemented as part of the NCTSI cross-site evaluation in the past; thus, information has been gathered regarding the utility of these resources, the quality of the data collected, and the need for revisions and reframing. With input and feedback from the NCTSI Evaluation Steering Committee, the survey and form were revised, pilot tested with NCTSI staff members, and revised slightly again. Feedback from the pilot testers was used to estimate the average length of time required to complete each data collection.
The OPMR, ETSC Survey, TSIS, and Sustainability Surveys were each newly developed for the revised NCTSI Evaluation based on the stakeholder feedback obtained through the steering committee consultation process. While the Sustainability Survey is entirely new and was added in response to stakeholder requests, the ETSC Survey incorporates prioritized elements of two currently OMB-approved data collection efforts (GAAS and AIFI). The OPMR incorporates elements of five currently OMB-approved cross-site evaluation instruments (PDDS, Network Survey, CTPT, GAAS, and AIFI). Highlights of the elements that have been retained in the OPMR are described in Section A.2.d. These surveys represent a distillation of the items that stakeholders identified as most important based on evaluation priorities, while outdated items from the previous evaluation have been eliminated. Following the development of these forms and surveys, each was pilot tested with center representatives to assess the length of time needed to participate in the data collection and to conduct cognitive testing. This testing resulted in relatively minor modifications, such as adding response categories to some items and simplifying instructions.
5. Statistical Consultants
The NCTSI Evaluator has full responsibility for the development of the overall statistical design and assumes oversight responsibility for data collection and analysis for the NCTSI Evaluation. Training, technical assistance, and monitoring of data collection will be provided by the NCTSI Evaluator. The following individual is primarily responsible for overseeing data collection and analysis:
Christine Walrath, PhD
ICF Macro
116 John Street, Suite 800
New York, NY 10038
(212) 941-5555
The following individuals serve as statistical consultants to this project:
Megan Brooks, MA
ICF Macro
3 Corporate Square, Suite 370
Atlanta, GA 30329
(404) 321-3211
Donna S. Condron, MA
ICF Macro
3 Corporate Square, Suite 370
Atlanta, GA 30329
(404) 321-3211
Yisong Geng, PhD
ICF Macro
3 Corporate Square, Suite 370
Atlanta, GA 30329
(404) 321-3211
Robert Stephens, MPH, PhD
ICF Macro
3 Corporate Square, Suite 370
Atlanta, GA 30329
(404) 321-3211
Bhuvana Sukumar, PhD
ICF Macro
3 Corporate Square, Suite 370
Atlanta, GA 30329
(404) 321-3211
The following agency staff member is responsible for receiving and approving contract deliverables:
Maryann Robinson, R.N., M.S., M.A.
Project Officer
Center for Mental Health Services
Substance Abuse and Mental Health Services Administration
U.S. Department of Health and Human Services
1 Choke Cherry Road, Room 6-1148
Rockville, MD 20857
(240) 276-1883
Any questions related to the documents or the NCTSI evaluation should be directed to the following agency staff member:
Ken Curl, MSW, LCSW-C
Public Health Advisor
Center for Mental Health Services
Substance Abuse & Mental Health Services Administration
1 Choke Cherry Road, #6-1148
Rockville, MD 20857
(240) 276-1779
LIST OF ATTACHMENTS
Attachment A NCTSI Evaluation: Overview of Components and Instruments
Attachment B NICON Screen Shots
Attachment C Core Data Set
Attachment D Evidence-based Practice and Trauma-informed Services Change (ETSC) Survey
Attachment E Online Performance Monitoring Report (OPMR)
Attachment F NCTSI National Reach Survey
Attachment G Training Summary Form
Attachment H Training Sign-in Form
Attachment I Sustainability Survey
Attachment J NCTSI National Reach Survey: Informed consent form
Attachment K NCTSI National Reach Survey: Email invitation
Attachment L Evidence-based Practice and Trauma-informed Services Change (ETSC) Survey: Email invitation
Attachment M Evidence-based Practice and Trauma-informed Services Change (ETSC) Survey: Informed consent form
Attachment N Sustainability Survey: Email invitation
Attachment O Sustainability Survey: Informed consent form