
Evaluation of the Child Welfare Capacity Building Collaborative



OMB Information Collection Request

0970-0576




Supporting Statement

Part B



May 2022




Submitted By:

Children’s Bureau

Administration on Children, Youth and Families

Administration for Children and Families

U.S. Department of Health and Human Services


4th Floor, Mary E. Switzer Building

330 C Street, SW

Washington, D.C. 20201






Part B


B1. Objectives

Study Objectives

The purpose of this information collection is to facilitate, track, monitor, and evaluate the activities of the Center for States, Center for Tribes, and Center for Courts (the Collaborative) as they deliver national child welfare expertise and evidence-informed training and technical assistance services to state, tribal, and territorial public child welfare agencies and Court Improvement Programs (CIPs). The collective goal of the Centers is to build the capacities of child welfare systems to successfully undertake practice, organizational, and systemic reforms necessary to implement federal policies, meet federal standards, and achieve improved outcomes for the children, youth, and families they serve. The Centers are being evaluated through three Center-specific evaluations and a Cross-Center evaluation. The objectives of these four interconnected studies are to understand Center service provision, child welfare agency and CIP utilization of services, service quality and satisfaction with services, collaboration among Centers and with federal staff, and service outcomes.

Generalizability of Results

The Cross-Center and Center-specific studies are intended to present an internally valid description of the services provided by the Collaborative to jurisdictions (states/territories, tribal programs, and CIPs), not to promote statistical generalization to other service populations.

Appropriateness of Study Designs and Methods for Planned Uses

The Cross-Center and Center-specific evaluation designs apply mixed-method, longitudinal approaches to achieve the study objectives described above, consistent with the study design outlined in Supporting Statement A. The studies were carefully orchestrated to build on the wealth of knowledge gained in the prior evaluations of the Collaborative[1] and to apply many of the lessons learned. The study designs are appropriate for the evaluation of the Collaborative in that they capture developmental processes and are well aligned with the Collaborative’s stage of implementation and the Children’s Bureau’s (CB’s) prioritized evaluation questions and needs. The proposed designs answer descriptive evaluation questions with methods and measures that assess common core components and outcomes, applicable across the Collaborative’s diverse service approaches. The methods and measures are carefully sequenced to produce learning about service delivery, outcomes, and change over time. The Cross-Center study features participatory development with the Centers and the CB, and is being implemented collaboratively, which is essential to streamlined and coordinated data collection.

The Center-specific and Cross-Center studies measure interactional processes enacted by many agents in complex and evolving environments at specific points in time, with a cross-section of informants. This context makes it challenging to isolate precise and distinctive impacts of the Collaborative’s service interventions. Therefore, the outcomes that accrue to jurisdictions receiving Center services cannot necessarily be attributed to those services, and this limitation will be noted in written products associated with these studies.

As noted in Supporting Statement A, this information is not intended to be used as the principal basis for public policy decisions and is not expected to meet the threshold of influential or highly influential scientific information.

B2. Methods and Design

The planned methods and design are consistent with those described in the original information collection request.



Target Population

Information is being collected from (1) child welfare and judicial professionals receiving Center services; (2) child welfare agency leaders (i.e., state and tribal child welfare directors or their designees and CIP directors); and (3) Center staff (i.e., Center directors, staff, and consultants).

Sampling

Child welfare and judicial professional respondents are recruited[2] from among those who participate most directly in Center services (i.e., those who use the Centers’ webpages, products, and online courses; participate in virtual or in-person trainings or peer events; and receive or lead brief or intensive tailored services).

For the data collected from agency leaders, we invite responses from all child welfare directors (or designees) from the 50 states, the District of Columbia, and Puerto Rico. Information is also collected from leadership at child welfare agencies in U.S. territories that receive capacity building services from the Center for States (CBCS). The purposive sample of tribal child welfare directors and CIP directors includes only those jurisdictions that received tailored services from the Center for Tribes (CBCT) and the Center for Courts (CBCC), respectively.

Data from Center staff are collected from the directors of each of the three Centers, and from staff and consultants directly involved in the delivery of services.

The research teams use non-probability, purposive sampling to identify potential respondents from the above groups who can provide information on the studies’ key constructs. Because child welfare, judicial professional, and federal staff participants are purposively selected (as described in B4), they are not representative of the population of these groups (with the possible exception of state child welfare directors and Center directors, who will all be invited to provide data).

B3. Design of Data Collection Instruments

Development of Data Collection Instruments

The following instruments are new to this request; all other instruments were previously approved, and their descriptions have not changed:

  • Instrument 31: Cross-Center – Tailored Services Focus Group Guide (for states)

  • Instrument 32: Cross-Center – Tailored Services Focus Group Guide (for CIPs)

  • Instrument 33: Cross-Center – Liaison/Child Welfare Specialist Interview Protocol

  • Instrument 34: Cross-Center – Tailored Services Jurisdiction Staff DEI Interview Protocol

  • Instrument 35: CBCT – Tribal Child Welfare Staff Interview/Focus Group Guide

  • Instrument 36: CBCC – CIP Capacity Building Services Feedback Survey


Cross-Center Instruments. The instruments designed by the Cross-Center team were pilot tested to confirm survey item validity and to identify possible procedural or methodological challenges in need of attention or improvement. Pilot tests were conducted using samples of fewer than 10 informants in positions similar to intended respondents (i.e., former state and tribal child welfare directors, former CIP directors, former child welfare agency personnel, and current Center staff). These tests were instrumental in determining the burden estimates. Following the pilot tests, the instruments were refined to minimize burden and improve utility. The instruments were subject to review and feedback by key stakeholders, including staff from all three Centers. Table B-1 identifies the study objectives (see section B1) served by each instrument.

Two instruments were designed to measure satisfaction with and outcomes of tailored services. The Outcomes of and Satisfaction with Tailored Services Survey (Intensive Services) is based on a previously OMB-approved instrument (see table B-1 for the names and OMB numbers of the respective previously approved instruments). This instrument, which assesses jurisdiction staff perceptions of the outcomes of intensive tailored services and satisfaction with those services, features items measuring capacity increase that are based on the research literature on operationalizing and assessing organizational capacity. For state child welfare respondents only, the survey also includes 21 additional items about the CBCS practice model; the Cross-Center team collects these data on behalf of the CBCS. The Outcomes of Tailored Services Survey (Brief Services) comprises four items measuring outcomes of and satisfaction with brief tailored services, based on a previously approved instrument developed by the CBCS.

The Leadership Interview protocols are based on previously approved and field-tested instruments. These protocols are used by the Cross-Center evaluation team to conduct semi-structured interviews with child welfare and CIP directors regarding their experiences with Center services. For tribal respondents only, the interview protocol also includes 36 additional items about CBCT-specific service provision; the Cross-Center team collects these data on behalf of CBCT.

Two new Cross-Center instruments measure different aspects of collaboration. First, the Collaboration and Communication Survey is used to collect data on collaboration among Centers and with federal staff. The second, the Collaborative Project Team Survey, includes the Collaborative Health Assessment Tool (CHAT), developed by Salignac, Marjolin, Noone, & Carey (2019),[3] and is designed to understand whether collaborative and/or communication teams for specific projects exhibit signs of healthy collaboration. Both collaboration surveys were pilot tested with fewer than 10 respondents for comprehension and burden.

The Tailored Services Team Focus Group Guides (one version for state child welfare teams and another for CIP teams) are used to collect data from select teams of child welfare and CIP staff implementing tailored service projects with Center support. The questions focus on the Change Management Approach used to deliver tailored services and on the perceived effectiveness of those services.

The Liaison/Child Welfare Specialist Interview Protocol is a new protocol used to interview Center service providers, known as Liaisons or Child Welfare Specialists, about how they support Center service delivery. The Tailored Services Jurisdiction Staff DEI Interview Protocol, also newly designed, is used to guide interviews with select teams of state and tribal child welfare staff and CIP staff who received tailored services for projects that include a focus on increasing capacity to address diversity, equity, and inclusion (DEI) in child welfare.

Center for States Instruments. The Event Registration form is based on a previously approved and administered form.[4] It was designed to enable child welfare professionals to register for CBCS events.

Several of the proposed instruments are used to obtain information from child welfare professionals who have participated in CBCS universal service events and peer events. The Brief Event Survey, Event Follow-up Survey, and Event Poll are all based on previously approved and field-tested instruments. Adaptations made to the original versions were intended to refine the survey items, streamline data collection, and reduce respondent burden. The Peer Learning Group Survey is based on a previously approved instrument, adapted to streamline data collection and reduce burden, and the Learning Experience Satisfaction Survey is a consolidation of two previously approved instruments. The Peer-to-Peer Event Survey is based on a previously approved instrument, adapted to focus the items on goals and outcomes of service (e.g., peer-to-peer interaction between jurisdictions).

Other instruments were designed to collect information from child welfare jurisdictions receiving CBCS tailored services. The Tailored Services Brief Project Survey is based on a previously approved instrument, revised by adding additional survey items to better understand project outcomes. The Jurisdiction Interview Protocol is based on previously approved sets of questions. The Longitudinal Ethnographic Sub-study Jurisdiction Interview is a new instrument developed in collaboration with jurisdiction service staff to ensure validity; it has been pilot tested with fewer than 10 respondents for burden and comprehension.

Center for Tribes Instruments. Three data collection tools are used by CBCT to screen and intake tribal child welfare programs requesting services. Two are based on previously approved and field-tested instruments: the Inquiry Form and the Tribal Demographic Survey; the third, the Request for Services Form, is newly developed. The phase 1 and phase 2 versions of the Needs and Fit Exploration Tool are based on previously OMB-approved and field-tested data collection instruments. The Tribal Child Welfare Leadership Academy Training Self-Assessment (pre- and post-versions) and the Universal Services Webinar Feedback Survey are new instruments developed by CBCT content experts; they were piloted with fewer than 10 respondents for usability and comprehension. Finally, two newly developed protocols – the Tribal Child Welfare Jurisdiction Staff Interview protocol and the Tribal Child Welfare Jurisdiction Staff Focus Group guide – will be used to conduct interviews and/or focus groups with tribal child welfare program staff who received tailored services from the Center for Tribes, at the close of those services, for the purposes of program improvement.

Center for Courts Instruments. Three instruments will be used to collect feedback on CBCC services: the CQI[5] Workshop Feedback Survey, the Academy Feedback Survey, and the Court Improvement Program Capacity Building Feedback Survey. The CQI survey and the Academy survey are based on a previously approved and administered survey; burden estimates are based on the average time participants took to complete the surveys in the last use of the instruments. The Court Improvement Program Capacity Building Feedback Survey is a new instrument used to collect data from CIP directors/coordinators about their experiences and satisfaction with Center services, for program improvement purposes.

A fourth data collection instrument, the Pre/Post Academy Assessment, was developed by CBCC content experts as part of an online learning experience that tailored content to the instructional needs of the participants. Burden estimates are based on the average time prior academy participants took to complete the online assessment.

Table B-1. Study Objectives Served by Data Collection Instruments

Study objectives: service provision and utilization; service quality/satisfaction; collaboration; outcomes. Source instruments are noted in parentheses where an instrument was adapted from a previously approved collection.

Cross-Center Evaluation

  • Outcomes of and Satisfaction with Tailored Services Survey (Intensive projects) (adapted from Capacity Survey and Tailored Services Satisfaction Survey [both OMB 0970-0494][6])

  • Outcomes of Tailored Services Survey (Brief projects—tribes and CIPs only) (adapted from CBCS Brief Tailored Services Survey [OMB 0970-0484])

  • Leadership Interview—States and Territories (adapted from Leadership Interview—States and Territories [OMB 0970-0484])

  • Leadership Interview—CIPs (adapted from Leadership Interview—CIPs [OMB 0970-0484])

  • Leadership Interview—Tribes (adapted from Leadership Interview—Tribes [OMB 0970-0484])

  • Collaboration and Communication Survey—Center Staff

  • Collaborative Project Team Survey

  • Tailored Services Team Focus Group Guide (for states)

  • Tailored Services Team Focus Group Guide (for CIPs)

  • Liaison/Child Welfare Specialist Interview Protocol

  • Tailored Services Jurisdiction Staff DEI Interview Protocol

Center for States (CBCS) Evaluation

  • Event Registration (adapted from Webinar Registration [OMB 0970-0494])

  • Brief Event Survey (adapted from Webinars, Events, and In-Person Meetings Satisfaction Survey [OMB 0970-0484])

  • Event Follow-up Survey (adapted from Webinars, Events, and In-Person Meetings Satisfaction Survey and Webpages and Products Satisfaction Survey [both OMB 0970-0484])

  • Event Poll (adapted from Webinars, Events, and In-Person Meetings Satisfaction Survey [OMB 0970-0484])

  • Peer Learning Group Survey (adapted from Center for States Constituency Groups Survey [OMB 0970-0484])

  • Learning Experience Satisfaction Survey (adapted from Learning Experiences Satisfaction Survey—Single and Learning Experiences Satisfaction Survey—Intensive [both OMB 0970-0484])

  • Jurisdiction Interview Protocol (adapted from CBCS Tailored Services Interview [OMB 0970-0494])

  • Tailored Services Brief Project Survey (adapted from CBCS Brief Tailored Services Survey [OMB 0970-0484])

  • Peer-to-Peer Event Survey (adapted from Webinars, Events, and In-Person Meetings Satisfaction Survey [OMB 0970-0484])

  • Longitudinal Ethnographic Sub-study: Jurisdiction Interview

Center for Tribes (CBCT) Evaluation

  • Request for Services Form *NEW*

  • Inquiry Form (adapted from CBCT Contact Form [OMB 0970-0484])

  • Tribal Demographic Survey (adapted from CBCT Demographic Survey [OMB 0970-0484])

  • Needs and Fit Exploration Tool Phase 1 (adapted from CBCT Needs and Fit Exploration Tool Phase 1 [OMB 0970-0484])

  • Needs and Fit Exploration Tool Phase 2 (Process Narrative) (adapted from CBCT Needs and Fit Exploration Tool Phase 2 [OMB 0970-0484])

  • Tribal Child Welfare Leadership Academy Pre-Training Self-Assessment

  • Tribal Child Welfare Leadership Academy Post-Training Self-Assessment

  • Universal Services Webinar Feedback Survey

  • Tribal Child Welfare Staff Interview/Focus Group Guide

Center for Courts (CBCC) Evaluation

  • CQI Workshop Feedback Survey (adapted from CBCC CQI Workshops Survey [OMB 0970-0484])

  • Academy Feedback Survey (adapted from CBCC CQI Workshops Survey [OMB 0970-0484])

  • Pre/Post Academy Assessment

  • CIP Capacity Building Services Feedback Survey

B4. Collection of Data and Quality Control

The following describes data collection and quality control plans for the studies. Plans for previously approved information collection activities have not changed. Details have been added to this section specific to new activities. Please refer to table A-1 in Supporting Statement A for a summary of all data collection details.

Cross-Center Data Collection. All Cross-Center data are collected by representatives of the Cross-Center evaluation team who are trained in survey design/administration and interview techniques. All referenced outreach and recruitment materials can be found in Appendix 3.

Survey Data: These data will be collected online via the Qualtrics platform. Potential respondents will be contacted via an email sent by the Cross-Center team, explaining the purpose of the data collection activity and the voluntary and private nature of participation. The email will include a link to the online survey. Potential respondents will receive up to two reminder emails as needed. To ensure quality, data will be checked for accuracy and completeness.

Interview Data: These data will be collected from Center leadership, child welfare jurisdiction leadership, and federal staff who are identified as appropriate informants. Potential interviewees will first receive a letter from the CB project lead for the Cross-Center evaluation, to let them know they will be contacted by the evaluation team. This will be followed by an email invitation from the Cross-Center evaluation team inviting participation in the interview. Interviews will be conducted by phone and facilitated by trained interviewers from the Cross-Center team. The interviewers will be trained in the interview protocol and knowledgeable about interviewees’ contextual factors (e.g., their jurisdiction/Center/region, the services they received/provided) to ensure consistent facilitation of these interviews. To further ensure data quality, primary interviewers will have a second researcher on the line and/or record the interview (with the consent of the interviewee). All transcripts derived from recorded interviews will be reviewed for accuracy by the interviewers and de-identified before the content is analyzed.

Focus Group Data: These data will be collected from teams of state child welfare and CIP staff who received Center tailored services. (The Cross-Center evaluation team will not conduct focus groups with tribal child welfare program staff; instead, comparable questions are included in the Center for Tribes’ Tribal Child Welfare Staff Interview/Focus Group Guide, and those data will be shared with the Cross-Center evaluation team.) The Cross-Center evaluation team will work with the COR to identify inclusion criteria for a total of 15 tailored service projects. Working with the Center evaluation team, the Cross-Center team will first send an email to jurisdiction leadership (i.e., the director of the state child welfare program or the CIP director/coordinator) to let them know they have been selected for inclusion in the study, to describe the purpose of the evaluation and the focus groups, and to request a phone call to answer questions and determine their willingness to have the evaluation team conduct the focus group with their staff. Focus groups will be held online via videoconferencing (e.g., Zoom). Focus groups will typically include 4-8 jurisdiction staff and will be moderated by a James Bell Associates (JBA) evaluator who has been trained in the focus group protocol and is knowledgeable about the team’s tailored service project. To further ensure data quality, primary moderators will have a second researcher on the line and/or record the session with the consent of the focus group participants. All transcripts derived from recorded focus groups will be reviewed for accuracy by the moderator and de-identified before the content is analyzed.

Center for States Data Collection. Data for the CBCS evaluation are collected by CBCS evaluation staff with expertise in data collection techniques. Potential informants will be public child welfare agency staff who access, use, or participate in Center resources and services. All referenced outreach and recruitment materials can be found in Appendix 4.

Survey, Poll, and Registration Data: Respondents to the Event Registration, Brief Event Survey, Event Follow-up Survey, Event Poll, and Learning Experience Satisfaction Survey will be child welfare professionals who engage directly with services, including those who attend events, access e-learning through the Collaborative’s learning management system (CapLEARN), and/or access resources through the CBCS website. For the Event Registration, participants will be notified of the opportunity to register for each event through marketing and outreach efforts. Respondents to the Event Poll will be recruited via the event platform (e.g., Adobe Connect) immediately following an event and will complete the poll there. Respondents to most surveys will be recruited via email invitations that include a link to the survey in Qualtrics. The exception is the Learning Experience Satisfaction Survey, for which respondents will be recruited directly in CapLEARN and will complete the survey there.

Respondents recruited to participate in the web-based (i.e., Qualtrics) Peer Learning Group Survey will come from among individuals who participate in CBCS peer learning groups. The Center evaluation team will work with Center service delivery staff to identify potential respondents based on group activity level and goals of the peer learning group. Respondents to the Brief Project Survey and Peer-to-Peer Event Survey will be recruited from among those who receive tailored services. The CBCS evaluation team will work with the CBCS service delivery staff to identify and recruit potential respondents, who will be key child welfare staff involved in brief services and peer-to-peer meetings. Once respondents are identified, these surveys will be administered online via Qualtrics as well.

For all surveys and polls, the CBCS evaluation team will conduct internal quality control reviews while creating instruments in the survey software or CapLEARN, to ensure the survey items are free of errors. Collected data will be checked for accuracy and completeness.

Interview Data: Potential respondents to the Longitudinal Ethnographic Sub-study Jurisdiction Interview will come from project teams of the subset of intensive service projects selected for the sub-study. Interviews will primarily be conducted virtually, using conferencing platforms (e.g., Microsoft Teams), or in person in conjunction with service delivery activities. The CBCS evaluation team will train data collectors to ensure they are adequately familiar with the jurisdictions’ contextual factors and with the instrument questions and protocol, so that interviews are facilitated consistently. Evaluators will write field notes during interviews to document key insights. Interviews will be recorded (with interviewee consent) and transcribed; recordings and transcripts will be used to verify the accuracy of the notes and to correct any errors.

Center for Tribes Data Collection. Data for the CBCT evaluation are collected by CBCT evaluation staff who have expertise in culturally appropriate data collection techniques. Potential informants will be tribal child welfare professionals who access, use, or participate in Center resources and services. All referenced outreach and recruitment materials can be found in Appendix 5.

Screening and Intake Data: Data to screen and intake tribal child welfare programs requesting services will be collected from jurisdiction representatives. Request for Services data are provided by child welfare program representatives to the CBCT over the phone or via videoconference; the data are then entered electronically by the CBCT Coordination Specialist. Inquiry Form data and Demographic Survey data are provided verbally or electronically by the jurisdiction requesting services.

Qualitative Assessment Data: Data from the Needs and Fit Exploration Tool-Phase 1 are requested and documented by one of the CBCT’s Child Welfare Specialists via phone or videoconference with the tribal program. The data from the Phase 2 version of the instrument are supplied by staff of the child welfare program and gathered verbally (in person or virtually) and documented by the CBCT’s Child Welfare Specialist and other appropriate subject matter experts or consultants. The information is verified through a debrief with the contract study team and is shared with the Federal Project Officer and Regional Office Specialist.

Survey Data: The Tribal Child Welfare Leadership Academy Training Self-Assessment (pre- and post- versions) and the Universal Services Webinar Feedback Survey are administered electronically by a member of the CBCT evaluation team via an email containing a link to the Qualtrics survey. The CBCT evaluation team will conduct internal quality control reviews while creating instruments in survey software, to ensure the survey items are free of errors. Collected data will be checked for accuracy and completeness.

Interview/Focus Group Data: The Tribal Child Welfare Staff Interview/Focus Group Guide will be used to collect data from tribal child welfare staff who received tailored services. Child welfare project teams will typically have 2-5 members, and these staff will be given the option of participating in a focus group with their team or providing their data in an individual interview. Interviews and focus groups will be conducted via videoconference (e.g., Zoom) or in person onsite, depending on the preference and capacity of the tribe and its local COVID protocols. Interviews will be conducted, and focus groups facilitated, by a member of the Center for Tribes’ evaluation team who is trained in the protocol and very familiar with the tribe’s service project and cultural context. Interviews and focus groups will be audio recorded, and the audio files will be professionally transcribed. Transcripts will be checked for quality and completeness before analysis.

Center for Courts Data Collection. Data for the CBCC evaluation are collected by CBCC evaluation staff with expertise in data collection techniques. Potential informants will be CIP staff and court professionals (e.g., judges, attorneys) who access, use, or participate in Center resources and services. All referenced outreach and recruitment materials can be found in Appendix 6.

Survey Data: Participants in CQI workshops will be invited to complete the CQI Workshop Feedback Survey anonymously at the close of the workshop. Workshop participants may include CIP project team members and other workshop participants (e.g., CB staff). Participants in the CBCC Academy will be invited to take the Academy Feedback Survey at the close of the academy. Participants may include judges, attorneys, other officers of the court, and other participants (e.g., CB staff). For both surveys, when the workshop or academy is held in person, a hard copy survey will be included with meeting materials, and when it is conducted online a designated CBCC team member will provide a link to an online survey at the close of the learning experience.

The CIP Capacity Building Services Feedback Survey will be used to assess the services, products, and capacity building assistance provided by the CBCC to CIPs. The survey will be administered online via SurveyMonkey to CIP directors/coordinators to collect feedback about their CIP’s experiences and satisfaction with the capacity building services received from the CBCC. Potential respondents will receive an email invitation from the CBCC evaluation team, which will include a link to the survey.

The Pre/Post Academy Assessment will be collected by the CBCC from academy participants. Prior to attending the academy, all registrants will receive an invitation to participate in an online learning experience (via CBCC’s academy website), which includes the pre-assessment. Registrants will receive up to two reminder emails prior to the academy if they have not completed the pre-assessment. At the close of the academy, participants will be invited to return to the academy website to complete the post-assessment.

These surveys are designed to limit response options to the provided values for closed-ended (i.e., scaled) questions, which eliminates the possibility of multiple answers to a single item when participants complete the online version. For the hard copy, respondents circle their response, which reduces issues with illegibility. The evaluation team will analyze survey data and will identify and document any issues with data collection quality and consistency, for discussion and resolution with the CBCC team.

B5. Response Rates and Potential Nonresponse Bias

Response Rates

Maximizing response rates is critical to the administration of these data collection efforts. The content and format of the instruments were developed in close consultation with key stakeholders and were informed by previously developed, OMB-approved instruments. Although these data collection activities are not designed to produce statistically generalizable findings and participation is wholly at the respondents’ discretion, response rates are tracked when applicable and possible, for quality improvement purposes.

Data collection strategies that emphasize flexibility, privacy, and a respect for the respondent’s time facilitate timely participation. The following strategies are being implemented to maximize participation in the data collection:

  1. Introduction and notification: For several instruments, respondents are notified in advance about the data collection and its purpose (e.g., through an introductory letter or email) before being asked to participate.

  2. Timing of data collection: Individualized discussions were held with stakeholders to determine optimal periods for data collection, both to minimize respondent burden and facilitate recall and to coordinate the timing of data collection between the Cross-Center and Center-specific evaluation teams.

  3. Pre-interview preparation: A copy of the interview protocol is sent to respondents in advance of interviews. Background information for certain questions is “pre-filled” in the interview protocols using information obtained from semi-annual reports or agency websites and service data from CapTRACK. Prior interviewer knowledge of, or familiarity with, each state’s or tribe’s child welfare system or CIP expedites administration of the interview.

  4. Administration: For surveys, reminder emails are sent (as discussed above) to promote participation and a high response rate. For interviews, evaluators are flexible and accommodating, rescheduling as necessary and setting up interviews to fit participants’ availability.

  5. Alternate response methods: Respondents are given the option to use an alternate method for responding to surveys or interviews, such as submitting a paper version with written responses to questions, if this method helps to increase participation.

  6. Assurances of data privacy: Respondents to all surveys and interviews are assured that reported data are aggregated and not attributable to individuals or specific child welfare jurisdictions or federal offices.

Non-Response

As participants are not randomly sampled and findings are not intended to be representative, non-response bias is not calculated. The Cross-Center team does, however, track refusal rates and refusal demographics to gain an understanding of potential patterns in data collection participation and refusal. For some data collections, respondent demographics will be documented and reported in written materials associated with the data collection.

B6. Production of Estimates and Projections

The data will not be used to generate population estimates, either for internal use or dissemination.

B7. Data Handling and Analysis

Data Handling

The Cross-Center and Center-specific teams will be responsible for collection, storage, and maintenance of their respective data. Exceptions include data collected by the Cross-Center team on behalf of CBCS (via the Outcomes of and Satisfaction with Tailored Services Survey) and CBCT (via the Leadership Interview—Tribes); those data will be passed along to their respective Center evaluation teams for handling and analysis. All sensitive and personally identifiable information will be stored and maintained in accordance with ACF requirements; the Cross-Center and Center-specific teams have capabilities for the safe storage of sensitive information meeting federal guidelines.

Quantitative Data. Across studies, quantitative data (e.g., survey data, registration form data) are typically collected via online survey software (e.g., Qualtrics, SurveyMonkey) or another online platform (e.g., Adobe Connect). Data collection platforms are programmed to limit response options to valid values, to mitigate data entry error. In cases where online data entry by the respondent is not possible (e.g., for tribal or rural respondents or others with limited Internet access), data may be collected by phone or on a hard copy of the form or survey and entered by evaluation staff into the database; in these cases, a second team member reviews and verifies entries and corrects any errors as needed. Data sets will be cleaned, responses will be assessed for completeness, and missing data will be evaluated, addressed where possible, and otherwise assigned consistent numeric codes.
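
To illustrate the kind of cleaning described above, the minimal sketch below (Python with pandas) enforces valid values for scaled items, assesses completeness, and assigns a consistent numeric code to missing data. The column names, the 5-point scale, and the -9 missing-data code are hypothetical illustrations, not taken from the studies’ actual codebooks.

```python
import pandas as pd

MISSING_CODE = -9  # hypothetical consistent numeric code for missing data


def clean_survey_export(path: str) -> pd.DataFrame:
    """Clean a raw survey export: enforce valid values and code missing data."""
    df = pd.read_csv(path)

    # Hypothetical scaled items; treat out-of-range entries as missing.
    items = ["satisfaction", "outcomes", "capacity_change"]
    for col in items:
        df[col] = df[col].where(df[col].isin(range(1, 6)))

    # Assess completeness: flag records missing more than half of the items.
    df["incomplete"] = df[items].isna().mean(axis=1) > 0.5

    # Assign the consistent numeric code to remaining missing values.
    df[items] = df[items].fillna(MISSING_CODE)
    return df
```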

Some data collected by one evaluation team will be shared with other teams. The Cross-Center team will collect data from the Outcomes of and Satisfaction with Tailored Services Survey and securely transfer the data to the three Center-specific teams. The Cross-Center team will also collect data through the Leadership Interviews about CBCT-specific service provision and securely transfer those data to the CBCT evaluation team. The CBCS team will securely transfer data from select items of its Tailored Services Brief Project Survey to the Cross-Center team. The CBCT team will share data from four follow-up items under question 4 of its Tribal Child Welfare Staff Interview/Focus Group Guide; these items focus on jurisdiction staff’s perceptions of capacity change and child welfare practice change that may have resulted from tailored services, and they are comparable to questions in the Cross-Center Tailored Services Focus Group Guide used with project team members from state and CIP projects. The CBCT will collect the responses to those four questions from tribal project teams and share only those data with the Cross-Center team, so that comparable data on perceptions of capacity/practice change are available for cross-Center analysis. Data sharing and usage agreements have been drawn up between the Cross-Center team and the Center-specific teams, detailing the data to be released, data security protocols, and guidelines for data reporting.

Qualitative Data. Qualitative data include the content of interviews and meeting debriefs and data from open-ended fields in surveys. Interview data typically come from professionally transcribed audio recordings. Evaluators also write field notes during interviews to document key insights, and the two data sources are used to verify each other’s accuracy. Brief analytic memos are developed to summarize key findings from each interview and identify items for follow-up. Qualitative data from open-ended fields in surveys are retained verbatim in analysis files (if answers are collected on hard copy or by phone, responses are entered verbatim into the analysis files by an evaluator).

Data Analysis

The Cross-Center and Center-specific teams primarily will be responsible for analysis of their respective data. As noted in Data Handling above, data collected by the Cross-Center team on behalf of CBCS and CBCT will be securely transferred to their respective Center evaluation teams for analysis.

Cross-Center Analyses. Quantitative data collected by the Cross-Center evaluation team through the surveys are cleaned, and nonnumeric data are assigned a numeric code. Missing data are reviewed and addressed when possible; otherwise, missing data are coded with a consistent numeric code. These data are analyzed by generating descriptive statistics (i.e., frequencies, percentages, means), and variations in reported capacity increase will be explored by Center, region (if sample sizes are sufficient), and data collection period. It will also be possible to combine data from the present evaluation and the 2014-2019 evaluation of the Collaborative (see Supporting Statement A) to make comparisons across 10 years of data. As appropriate, the significance of any differences across Centers or regions will be tested using ANOVA. For data from the Outcomes of and Satisfaction with Tailored Services Survey, additional analyses will be conducted to investigate the relationships between various predictors (e.g., type of tailored service, hours of service) and capacity outcomes, using multiple regression models.[7]
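
As a concrete illustration of these steps, the sketch below (Python, using scipy and statsmodels) generates descriptive statistics by Center, tests differences across Centers with a one-way ANOVA, and fits a multiple regression of capacity outcomes on service predictors. The file and variable names are hypothetical; the actual model specifications rest with the Cross-Center team.

```python
import pandas as pd
import scipy.stats as stats
import statsmodels.formula.api as smf

df = pd.read_csv("tailored_services_survey.csv")  # hypothetical analysis file

# Descriptive statistics: counts, means, and spread of capacity increase by Center.
print(df.groupby("center")["capacity_increase"].describe())

# One-way ANOVA: does reported capacity increase differ across the three Centers?
groups = [g["capacity_increase"].dropna() for _, g in df.groupby("center")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")

# Multiple regression: predictors of capacity outcomes
# (e.g., type of tailored service, hours of service).
model = smf.ols("capacity_increase ~ C(service_type) + service_hours", data=df).fit()
print(model.summary())
```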

Transcribed data from the Leadership Interviews, Tailored Services Team Focus Groups, Liaison/Child Welfare Specialist Interviews, and Tailored Services Jurisdiction Staff DEI Interviews are de-identified, cleaned, and finalized, and then uploaded into Dedoose software. Initial codes are developed in relation to key content areas (e.g., collaboration) and sub-topics (e.g., challenges to collaboration) covered in the interviews and applied deductively to the text. A second round of open coding by content area is used to identify themes and patterns within and across respondent types. Analysts then meet to discuss the coding process and emergent themes. Data from any quantitative items in the interviews are coded and entered into SPSS. Those data will be inspected for missing values, and analysts will then examine frequency distributions and variability and prepare appropriate tabulations.
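
The deductive first-pass coding described above is carried out in Dedoose; purely to illustrate the logic of applying an initial code list to excerpts, a minimal keyword-based sketch in Python might look like this (the code list and indicative phrases are hypothetical):

```python
# Hypothetical deductive codebook: content area -> indicative phrases.
CODEBOOK = {
    "collaboration": ["work together", "joint", "coordinate"],
    "collaboration/challenges": ["barrier", "difficult", "conflict"],
}


def apply_deductive_codes(excerpt: str) -> list[str]:
    """Return the codes whose indicative phrases appear in an excerpt."""
    text = excerpt.lower()
    return [code for code, phrases in CODEBOOK.items()
            if any(phrase in text for phrase in phrases)]


print(apply_deductive_codes(
    "It was difficult to coordinate schedules across the Centers."))
# -> ['collaboration', 'collaboration/challenges']
```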

Center for States Analyses. Quantitative data about service satisfaction and outcomes are analyzed by generating descriptive statistics and will mostly be used for internal reporting activities to support CQI and contract monitoring. For the Longitudinal Ethnographic Sub-study Jurisdiction Interview, additional analyses will be conducted to support specific research questions. Inferential statistics will explore possible relationships between key variables (e.g., fidelity ratings and service dosage) and statistically meaningful differences in fidelity ratings over time or across practice model phases.

The Center’s analysis of tailored services outcomes will include qualitative comparative analysis (QCA), used for an exploratory analysis of which contextual factors support the achievement of tailored service outcomes.[8]
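
For illustration only, the truth-table step at the heart of crisp-set QCA can be sketched as below (Python with pandas): each configuration of binary contextual conditions is tabulated with its case count and its consistency (the share of cases in that configuration achieving the outcome). The conditions and data are hypothetical, and a full QCA would add calibration and logical minimization.

```python
import pandas as pd

# Hypothetical crisp-set data: one row per tailored service project, with
# binary contextual conditions and a binary outcome (outcome achieved or not).
cases = pd.DataFrame({
    "leadership_support": [1, 1, 0, 1, 0, 1],
    "staff_stability":    [1, 0, 0, 1, 1, 1],
    "outcome_achieved":   [1, 0, 0, 1, 0, 1],
})

conditions = ["leadership_support", "staff_stability"]

# Truth table: for each configuration of conditions, the number of cases and
# the consistency (share of cases in that configuration achieving the outcome).
truth_table = (
    cases.groupby(conditions)["outcome_achieved"]
    .agg(n_cases="count", consistency="mean")
    .reset_index()
)
print(truth_table)  # configurations with consistency near 1.0 suggest sufficiency
```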

Analyses of data from open-ended survey questions, interviews, and focus groups are designed to produce contextual, in-depth findings that complement the quantitative analyses, including themes related to facilitators of and barriers to services, knowledge transfer and outcome achievement, and suggestions for improvement. Qualitative data are entered into software (e.g., Microsoft Excel, Dedoose) for storage and thematic analysis, which will focus on exploring relationships among themes, including whether specific themes are associated with particular stakeholders, time frames, or other key variables during implementation. Analyses will include documentation of outlying data, such as contradictory data or other divergent participant responses to interview or focus group prompts. Triangulated data will be analyzed to confirm or identify discrepant findings; conflicting findings from different data sources will warrant further investigation.

Center for Tribes Analyses. Analytic approaches for CBCT data will be selected to match program design and evaluation questions, using culturally appropriate and rigorous methods. Qualitative data – the primary type of data used by CBCT – will be recorded and transcribed, then analyzed using ATLAS.ti software following established practices for qualitative data analysis. Analysts will conduct an initial round of coding, refine the code list as needed, then conduct more focused analysis, while allowing themes to emerge based on responses.

The Center does not collect large amounts of quantitative data. The survey data that are collected (via the Universal Services Webinar Feedback Survey and the Leadership Academy Pre/Post Self-Assessment) will typically be analyzed only descriptively. If sufficient data exist for the Leadership Academy Self-Assessment, paired samples t-tests may be conducted to determine whether individuals experienced pre-post changes in knowledge and behavior.
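
A paired samples t-test of this kind could be run as in the minimal sketch below (Python with scipy); the scores shown are illustrative placeholders, not CBCT data.

```python
from scipy import stats

# Hypothetical matched pre/post self-assessment scores for the same
# Leadership Academy participants (one pair per individual).
pre_scores = [3.0, 2.5, 3.5, 2.0, 4.0, 3.0]
post_scores = [3.5, 3.0, 4.0, 3.0, 4.5, 3.5]

# Paired samples t-test: did self-rated knowledge/behavior change pre to post?
t_stat, p_value = stats.ttest_rel(pre_scores, post_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```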

Center for Courts Analyses. The CBCC evaluation team analyzes data from the CQI Workshop Feedback Survey and Academy Feedback Survey following each workshop or academy to inform CQI efforts. In addition, data collected over time will be aggregated to assess the effectiveness of the workshop and academy services. These data will be analyzed by generating descriptive statistics describing participant satisfaction with services, perceptions of knowledge gain, awareness of resources, and other perceived outcomes. Open-ended questions regarding the usefulness of services and areas for improvement will be analyzed using qualitative coding and thematic analysis. The CBCC evaluation team will present the frequency of themes and select quotes to illustrate salient findings (ensuring that presented quotations contain no information that risks the release of identifying information).

For data from the Pre/Post Academy Assessment, a paired samples t-test will be used to determine whether there are significant differences in total score between the pre- and post-versions of the instrument, to assess academy participants’ knowledge gain. Multiple regression analyses may be used to explore whether the presentation format (e.g., virtual vs. in-person) is related to knowledge gain. Open-ended questions that include vignettes will be analyzed using qualitative coding and thematic analysis. The frequency of themes will be presented, along with select (de-identified) quotations that illustrate salient findings. For several open-ended items in the assessment that ask the respondent to read a brief set of facts about a case and describe how they would respond in the context of a court hearing, a coding scale will be applied to identify and analyze response patterns. Findings will be compared between the pre- and post-versions of the instrument.

To analyze data from the CIP Capacity Building Services Feedback Survey, the CBCC evaluation team will generate descriptive statistics to describe participant satisfaction with CBCC services (including tailored, targeted, and universal services); experiences with services; perceptions of the expertise, accessibility, and support of service providers; perceived impact on CIP capacity; and areas for improvement. Open-ended questions regarding the usefulness of services and areas for improvement will be analyzed using qualitative coding and thematic analysis, where appropriate. The CBCC evaluation team will present the frequency of themes and select (de-identified) quotes to illustrate salient findings.

Data Use

Data collected through this proposed information collection will be used by the Centers and the CB to improve the development and delivery of Center services and to assess the impact of services on jurisdictions’ ability to achieve their intended outcomes. Evaluation findings will help inform future decision making about service delivery. Some evaluation findings will also be shared with other providers and service recipients to increase knowledge about technical assistance strategies and approaches. Some of the information collected, and the conclusions derived from its analysis, will be shared with the child welfare field and the public through several outlets to help develop the evidence base for what works in child welfare organizational capacity building.[9]

  • The Cross-Center team will develop two brief evaluation reports in project years 3 and 4 that focus on one or more sources of data or specific evaluation questions. In the last stage of analysis, beginning in project year 5, data will be merged from multiple sources to enable final summative analyses addressing major questions about the cumulative results of the three Centers. At the end of project year 5, a final synthesis report of the findings for all years will be submitted to CB for dissemination to federal, state, tribal, and CIP stakeholders. In addition to the final report, the Cross-Center team will write two peer-reviewed journal articles and conduct three conference presentations by the end of the 5-year project period.

  • The CBCS evaluation team will develop ongoing data reporting and visualization for service planning and improvement. In project year 3, the Center will pilot the outcome analysis for tailored services, and in year 5 it will develop a final summative analysis and report to address the Center’s research questions. The Center will also conduct briefings, conference presentations, and/or other reporting as directed by the CB.

  • The CBCT evaluation team will develop biannual summary reports of findings to share with Center leadership, Cross-Center evaluators, and CB. Additionally, reports are generated as services are provided (e.g., for peer groups, new product evaluations, webinars, and learning events). In project year 5, the team will begin analysis and write-up of the formal final report for CB, addressing the evaluation of the Center’s efforts across all service areas.

  • The CBCC evaluation team will develop three brief evaluation reports in project years 2, 3, and 4 of the 4-year CBCC project period, to summarize findings from the process evaluation and share preliminary findings from one or more sources of data or specific events. In year 4, the evaluation team will aggregate and analyze compiled data, triangulate qualitative and quantitative data, and synthesize findings to address evaluation questions regarding the effectiveness of Center services. Findings will be submitted in the final report at the end of the 4-year project period.

B8. Contact Persons

Cross-Center Evaluation

Dr. James DeSantis, Project Director
[email protected]
James Bell Associates
3033 Wilson Boulevard, Suite 650
Arlington, VA 22201
(703) 247-2628

Center for Tribes

Dr. Suzanne Delap, Lead Evaluator
[email protected]
Butler Institute for Families
University of Denver, Graduate School of Social Work
2148 S. High St.
Denver, CO 80208
(303) 871-6813

Center for States

Ms. Christine Leicht, Lead Evaluator
[email protected]
ICF
9300 Lee Highway
Fairfax, VA 22031
(703) 225-2208

Center for Courts

Dr. Kristen Woodruff, Lead Evaluator
[email protected]
Westat
1600 Research Boulevard, RB 3137
Rockville, MD 20850
(301) 315-5921




Attachments

Appendices

Appendix 1: Legislation

Appendix 2: Evaluation Questions

Appendix 3: Cross-Center Recruitment and Reminder Language

Appendix 4: CBCS Recruitment and Reminder Language

Appendix 5: CBCT Recruitment and Reminder Language

Appendix 6: CBCC Recruitment and Reminder Language

Appendix 7: Cross-Center Recruitment Language

Appendix 8: CBCT Recruitment Language – Tribal Staff Interview-Focus Group

Appendix 9: CBCC Recruitment Language – CIP Survey


Instruments

Instrument 1: Cross-Center – Outcomes of and Satisfaction with Tailored Services

Instrument 2: Cross-Center – Brief Tailored Services Survey

Instrument 3: Cross-Center – Leadership Interview for States and Territories

Instrument 4: Cross-Center – Leadership Interview for CIPs

Instrument 5: Cross-Center – Leadership Interview for Tribes

Instrument 6: Cross-Center – Collaboration and Communication Survey

Instrument 7: Cross-Center – Collaborative Project Team Survey

Instrument 8: removed from ICR

Instrument 9: CBCS – Event Registration

Instrument 10: CBCS – Brief Event Survey

Instrument 11: CBCS – Event Follow Up Survey

Instrument 12: CBCS – Event Poll

Instrument 13: CBCS – Peer Learning Group Survey

Instrument 14: CBCS – Learning Experience Satisfaction Survey

Instrument 15: CBCS – Jurisdiction Interview Protocol

Instrument 16: CBCS – removed from ICR

Instrument 17: CBCS – Tailored Services Brief Project Survey

Instrument 18: CBCS – Peer to Peer Event Survey

Instrument 19: CBCS – Longitudinal Ethnographic Sub-study Jurisdiction Interview

Instrument 20: CBCT – Tribal Request for Services Form

Instrument 21: CBCT – Inquiry Form

Instrument 22: CBCT – Tribal Demographic Survey

Instrument 23: CBCT – Needs and Fit Exploration Tool – Phase 1

Instrument 24: CBCT – Needs and Fit Exploration Tool – Phase 2

Instrument 25: CBCT – Academy Pre-Training Self-Assessment

Instrument 26: CBCT – Academy Post-Training Self-Assessment

Instrument 27: CBCT – Universal Services Webinar Feedback Survey

Instrument 28: CBCC – CQI Workshop Feedback Survey

Instrument 29: CBCC – Academy Feedback Survey

Instrument 30: CBCC – Academy Pre/Post Assessment

Instrument 31: Cross-Center – Tailored Services Focus Group Guide (for states)

Instrument 32: Cross-Center – Tailored Services Focus Group Guide (for CIPs)

Instrument 33: Cross-Center – Liaison/Child Welfare Specialist Interview Protocol

Instrument 34: Cross-Center – Tailored Services Jurisdiction Staff DEI Interview Protocol

Instrument 35: CBCT – Tribal Child Welfare Staff Interview/Focus Group Guide

Instrument 36: CBCC – CIP Capacity Building Services Feedback Survey


[1] This data collection builds on two prior Cross-Center requests that were part of the 2014-2019 evaluations of the Collaborative, OMB Number 0970-0484 (exp. 11/30/22) and OMB Number 0970-0494 (exp. 2/28/23), and one prior Center-specific request for CBCS, OMB Number 0970-0501 (exp. 9/30/23).

[2] Language used to recruit informants to the studies is provided in Appendices 3-6.

[3] Salignac, F., Marjolin, A., Noone, J., & Carey, G. (2019). Measuring dynamic collaborations: Collaborative health assessment tool. Australian Journal of Public Administration, 78(2), 1–23.

[4] Most Center-specific instruments are based on previously approved instruments; burden estimates are based on the estimates for those previous instruments.

[5] Continuous quality improvement.

[6] OMB 0970-0484 expires 11/30/22; OMB 0970-0494 expires 2/28/23; OMB 0970-0501 expires 9/30/23.

[7] If model assumptions are not met for the variables of interest, the Cross-Center team will consider alternative non-parametric models to investigate these relationships.

[8] QCA is a method to analyze the causal contribution of different conditions (e.g., aspects of an intervention, contextual factors) to an outcome of interest. For more information about QCA, see https://www.betterevaluation.org/en/evaluation-options/qualitative_comparative_analysis

[9] As Tribal Nations are sovereign, the CBCT does not plan to release to the public any documentation related to the data collected as part of its evaluation. The data collected are intended for use by the tribal child welfare programs and to improve Center services.

