
Alternative Supporting Statement for Information Collections Designed for Research, Public Health Surveillance, and Program Evaluation Purposes




Building Capacity to Evaluate Child Welfare Community Collaborations to Strengthen and Preserve Families (CWCC) Cross-Site Process Evaluation


OMB Information Collection Request

0970-0541





Supporting Statement

Part B

Approved February 2020

Revised May 2020 (COVID-19 Changes)




Submitted By:

Office of Planning, Research, and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services


4th Floor, Mary E. Switzer Building

330 C Street, SW

Washington, D.C. 20201


Project Officers: Mary Mueggenborg and Laura Hoard


Part B


B1. Objectives

Study Objectives

This information collection is intended to capture data on the design, execution, and sustainability of projects funded by the Children’s Bureau’s Child Welfare Community Collaborations (CWCC) grant initiative. Specifically, our objectives are:


1. To describe, in detailed case studies, the rationale for, planning associated with, activities undertaken, challenges experienced, successes attained, and community served by each CWCC grantee.

2. To document which elements of collaboration each grantee demonstrated, and the extent to which those elements changed over the grant period.

3. To compare and contrast the approaches grantees employed in implementing their grants, to better understand the variety and consistency of community approaches to preventing child abuse and neglect.


Generalizability of Results

This study is intended to present an internally valid description of Child Welfare Community Collaborations grantee program implementation among the first two cohorts of grantees, not to support statistical generalization to other sites or service populations.


Appropriateness of Study Design and Methods for Planned Uses

As noted in Supporting Statement A, this information is not intended to be used as the principal basis for public policy decisions and is not expected to meet the threshold of influential or highly influential scientific information.

Process evaluations typically describe the specific services, activities, policies, and procedures that are developed and implemented through an initiative. This type of evaluation also provides insight about the lifecycle of an initiative from conception to sustainability, including deviations from the plan, changes in the stakeholders involved, and perceived successes and failures. A cross-site process evaluation, which can provide insights about implementation successes and challenges as well as lessons learned across multiple CWCC grantee sites, will help ACF to better understand the factors associated with the successful ongoing implementation of community-based strategies and activities aimed at preventing child abuse and neglect. As such, this evaluation design is appropriate for addressing ACF’s information needs.



B2. Methods and Design

Target Population

The study’s target population is staff members of organizations participating in a Children’s Bureau CWCC grant (both awardees and subawardees/partner organizations). We estimate the population to be approximately 845 people (65 individuals per grant across 13 grantees). This sample of 845 people will be asked to complete the collaboration survey (Instrument 2). A subset of those 845 people will be interviewed using Instruments 4 through 7 (including Instruments 4a and 5a, which collect COVID-19-related information during the relevant time period). We expect each project director to complete Instruments 1 and 3, or, if virtual site visits replace in-person visits, to confirm the information the study team provides via the Appendix F email.


Sampling

We are collecting information from all thirteen Child Welfare Community Collaborations grantees. Each grantee is using a specialized combination of activities to address its specific community context (such as rates of child abuse and neglect, types of local social services, state and local policies, and community size and urbanicity).


We are sampling the individuals associated with the implementation of each CWCC grant. Each grantee will be asked to provide the project with the names and contact information of individuals involved in grant leadership and activity execution. This is a convenience sample; we cannot calculate what percentage of grant-involved individuals each grantee provides contact information for, nor assess the representativeness of that sample. We do, however, expect the sample of grant leadership to be representative of the leadership of all CWCC grants. Please see A12 for a table describing the population for each instrument.



B3. Design of Data Collection Instruments

Development of Data Collection Instrument(s)

After we finalized our research questions, we determined what data collection efforts were needed to answer each question. Initial interview items were developed to capture data about each element of the relevant research questions, and were reviewed by two expert consultants. We then constructed interview protocols containing the relevant items for each type of respondent, and cross-walked the protocols with our research questions and other available data sources, such as the semi-annual progress reports, to remove any items that could be answered from documents or that were not directly related to a research question. Finally, the interview protocols were tested for usability and comprehension with eight people. As a result of our pilot test, we:

  • Added clarifications of certain terminology (e.g., data linking, data sharing) for interviewers to use as needed; and

  • Rephrased one item to solicit an open-ended response rather than a “yes or no” answer.


The online collaboration survey employs a standardized instrument, the Collaboration Assessment Tool (CAT). We first conducted a literature search to identify collaboration instruments used in similar research, and augmented that list with an environmental scan of colleague, grantee, and federal staff knowledge of instruments. We then cross-walked the elements collected by each instrument with the study’s goals, research questions, and grantee approaches. We also looked for instruments that reported reliability and validity information and that asked questions relevant to the types of collaboration efforts we expect CWCC grantees to undertake. After this exploratory work, we collectively agreed that the CAT best meets our needs.


Instruments 4a and 5a are alternate versions of interview protocols 4 and 5, adapted to explicitly collect interviewee experiences related to the COVID-19 pandemic. To accomplish this, our team reviewed the original instruments to identify questions likely to elicit COVID-19-related responses (such as questions about implementation challenges) and added explicit COVID-19 probes.


B4. Collection of Data and Quality Control

The project team (staffed by Abt Associates and Child Trends) will collect all interview data. Interviewees will be identified by each grant’s project director (using the Site Visit Planning Template, Instrument 3). As documented in A2, we expect each project director, or their designee, to spend 2 hours setting up the interview schedule for each site visit. We will cross-check the list of proposed interviewees against our lists of known grant-involved staff. All interviewers will be trained before going on site, with refresher trainings before subsequent visits under the project. Interviews will be conducted onsite during our site visits and digitally audio-recorded. Study leadership will review complete site visit notes for thoroughness, accuracy, and clarity; unclear elements will be shared back with the interviewee for clarification.


The project has revised this methodology for the duration of the COVID-19 pandemic. Until it is safe to conduct in-person site visits, the project will conduct virtual site visits via WebEx phone interviews. We will work with each grant’s project director to identify a list of potential interviewees, and the data collectors will communicate directly with each of them to establish a mutually agreed-upon interview time. Interviews will be conducted using WebEx conference lines, which have audio-recording capability. Instead of conducting up to 9 staff interviews per grantee, we will conduct up to 5 when interviewing by telephone rather than in person.


The project team will collect all collaboration survey data. We will obtain a list of survey invitees from each grant’s project director (using the Survey Invitee Template, Instrument 1). The survey instrument relies on a well-tested, standardized measure of seven collaboration factors, the CAT. The instrument has been programmed into the online platform Survey Gizmo, and project staff have tested the coding, skip patterns, and data summary functions of the survey. Every survey item is a required field, which will minimize missing data. We will ask grantee project directors to send all of their invitees: 1) an introductory message alerting individuals that they will soon receive an invitation to complete the survey, and 2) up to two general follow-up emails asking invitees to complete the survey if they have not already done so and thanking those who have responded. Our system will also resend the survey link up to 3 times to nonrespondents. Finally, the Abt team will call nonrespondents during the final days of the survey period to encourage completion. Templates for these messages appear in Appendices C, D, and E.


Instrument | Type of Respondent | Respondents per Grantee per Data Collection | Data Collections per 3-Year OMB Clearance
Survey Invitee Template (Instrument 1) | Project director or designee | 1 | 3
Online Annual Collaboration Survey (Instrument 2) | All staff at the grantee organization and partner organizations involved in grant activities | 65 | 3
Site Visit Planning Template (Instrument 3) | Project director or designee | 1 | 3
Site Visit Discussion Guide for Project Directors and Leaders from Partner Organizations – Interview #1 (Instrument 4); COVID version (Instrument 4a) | Project director and a sample of leaders of partner organizations | 3 | 1
Site Visit Discussion Guide for Staff from Lead and Partner Organizations – Interview #1 (Instrument 5) | A sample of staff at the grantee organization and partner organizations involved in grant activities | 9 | 1
COVID Site Visit Discussion Guide for Staff from Lead and Partner Organizations – Interview #1 (Instrument 5a) | A sample of staff at the grantee organization and partner organizations involved in grant activities | 5 | 1
Site Visit Discussion Guide for Project Directors and Leaders from Partner Organizations – Follow-Up Interviews (Instrument 6) | Project director and a sample of leaders of partner organizations | 3 | 2
Site Visit Discussion Guide for Staff from Lead and Partner Organizations – Follow-Up Interviews (Instrument 7) | A sample of staff at the grantee organization and partner organizations involved in grant activities | 9 | 2





B5. Response Rates and Potential Nonresponse Bias

Response Rates

We are aiming for an 80% response rate for Instrument 2, our annual collaboration survey (208 of the 260 survey invitees per year for FY18 CWCC grantees, and 468 of the 585 survey invitees per year for FY19 CWCC grantees). Unit response rates will be calculated per fielding, with the number of survey completers in the numerator and the number of active email addresses received from the grantees’ project directors in the denominator. All CAT items are coded as required responses on the survey, so item response rates will equal unit response rates. Based on other surveys we have conducted with similar professional audiences, we believe an 80% response rate is achievable. Our approach to ensuring a strong response rate, described in B4, is multi-faceted.
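
As a worked check of these targets, with the unit response rate defined as survey completers divided by active invitee emails:

\[
\text{response rate} = \frac{\text{survey completers}}{\text{active invitee emails}}, \qquad 0.80 \times 260 = 208 \ \text{(FY18)}, \qquad 0.80 \times 585 = 468 \ \text{(FY19)}.
\]

These invitee counts are consistent with 65 invitees per grant (260 = 4 FY18 grantees × 65; 585 = 9 FY19 grantees × 65).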


We expect a 100% response rate for individuals scheduled for site visit interviews (Instruments 4-7, including Instruments 4a and 5a). As described in A12, interview subjects will be a subset of the survey invitees. During each in-person site visit, we expect to interview, for each grantee, 1 project director, 2 partner organization leaders, and 9 staff across the grantee and partner organizations. During virtual site visits while COVID-19 precautions are in effect, we expect to interview, for each grantee, 1 project director, 2 partner organization leaders, and 5 staff across the grantee and partner organizations.


Nonresponse

Because participants are not randomly selected and the study’s findings will not be generalized, we will employ a complete-case analysis. We will not impute missing data.



B6. Production of Estimates and Projections

Our data are not representative of, and will not be generalized to, community efforts to prevent child abuse and neglect generally; we will not produce estimates or projections. Within the CWCC initiative, however, we are conducting a census, and our findings will generalize to that set of grants. We will share our findings with CWCC grantees and with the general public through a final report and special topic deliverables, such as research briefs or conference presentations. Limitations on generalizability will be noted in publicly shared information.



B7. Data Handling and Analysis

Data Handling

Our project will check data collected through the Survey Invitee Template (Instrument 1) and the Site Visit Planning Template (Instrument 3) against information we have about the grantees from other sources. For example, the project maintains an email listserv for CWCC grantees; we will make sure each email address we have for individuals associated with a grant is represented on Instrument 1. We will also follow up with project directors regarding any “bounce-back” messages we receive when sending out the online survey link. For Instrument 3, the project maintains a list of organizations and leaders identified in grantee applications and Semi-Annual Progress Reports as instrumental to grant activities; we will coordinate with the project director to ensure those leaders are scheduled for interviews.



Qualitative data collected through Instruments 4-7 (including Instruments 4a and 5a) will be compared to data reported in grantees’ Semi-Annual Progress Reports and in other interviews with individuals associated with the grant. Note that these data largely reflect interviewee perceptions of grant activities and are, as such, subjective.



The annual collaboration survey, Instrument 2, will collect quantitative data through the online service Survey Gizmo. Data are automatically coded upon collection, eliminating opportunities for coding errors. The survey requires responses to every item, minimizing errors caused by missing data. Finally, while quantitative, this survey is also subjective, documenting participant perceptions of collaboration activities.



Data Analysis

This study relies largely on qualitative data, with quantitative data produced by the annual collaboration survey (Instrument 2). We will carefully manage and systematically analyze the cross-site process evaluation data to ensure we produce valid and reliable results. To maximize our ability to answer the evaluation’s overarching research questions and related sub-questions, the cross-site process evaluation team will conduct descriptive analyses using interview transcripts and collaboration survey data. These data will yield important details about how the thirteen grantees design and implement their CWCC initiatives.


Qualitative Data: All members of the site visit teams will participate in coding observation forms, interview transcripts, and program documents. We will review these documents to identify themes, patterns, key activities, and practices. The team will hold a pre-analysis meeting to discuss themes and patterns in the data. A senior analyst, supervised by the cross-site process evaluation task lead, will develop a codebook for use in NVivo, and coders will be trained on using the codebook to code interview and observation notes in NVivo.


Each interview document will be coded by two individuals, with a reconciler (the cross-site process evaluation task lead or another senior member of the site visit team) resolving any discrepancies in coding. The cross-site process evaluation task lead will oversee a senior analyst in summarizing the descriptive analyses from the collaboration survey, both by individual grantee site and across the thirteen sites.


We will use content analysis to review the qualitative data collected for this evaluation to determine the presence of specific words or concepts. We will use those findings to enumerate and categorize findings from the grantees’ program documents and from our interview and site visit observation notes. Our inductive methods will allow us to describe the activities and implementation of each grantee, and to make descriptive comparisons across the sites. We will also be able to identify common and unique implementation challenges and lessons learned by grantees from program documents, site visit observations, and interviews.


We will use the grounded theory approach to qualitative analysis, which entails a systematic approach to coding and thematic analysis to search for patterns, exemplary events, key activities, collaborative approaches, practices, goals, and processes recorded in notes from interviews and site visits. Patterns and themes are identified through a simultaneous process of deduction and induction involving the comparison of responses and grouping similar concepts together in categories (e.g., key successes and key challenges in implementation). As additional data are collected over time, initial categories will be elaborated further and augmented.


Consensus coding will be used to yield common themes and threads related to the perceptions and experiences of staff from partner organizations/agencies involved in the implementation of the CWCC initiative. The evaluation team will hold analytic meetings throughout the coding process to review the data, discuss emerging themes, and agree upon coding strategies. Coding categories will be constructed in NVivo at a macro level and refined in preparation for creating an outline to present the individual grantee and cross-site case studies. The team will also explore the use of data visualization tools to convey the most commonly used words/phrases related to implementation processes and activities employed by each grantee’s collaborative.


Quantitative Data: All data collected through the annual collaboration survey will be cleaned and analyzed to generate descriptive statistics (i.e., counts, ranges, frequencies, means, and standard deviations) using Excel or SAS. Analyses of these data will include a detailed summary that utilizes appropriate descriptive statistics for each of the collaboration factors assessed by the Collaboration Assessment Tool (CAT). We will generate an average score for each factor by using the following steps: (1) add together all the ratings for the questions or statements related to each factor; and (2) divide by the total number of ratings for those questions (this number is equal to the number of raters multiplied by the number of questions for the factor).1
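
Expressed as a formula, with \(R\) raters, \(Q_f\) items for collaboration factor \(f\), and \(r_{ij}\) denoting rater \(i\)’s rating on item \(j\) of that factor, the two steps above reduce to:

\[
\bar{s}_f = \frac{\sum_{i=1}^{R} \sum_{j=1}^{Q_f} r_{ij}}{R \times Q_f}
\]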

Each year, survey data will be analyzed and summarized at the individual grantee level. Because the CAT is designed to be administered anonymously, respondents will not be asked to provide their name or the name of their organization on the survey.2 The survey will, however, include a set of optional background questions, including the type of organization the respondent works for, the role they hold within their organization, how long they have been involved in the CWCC initiative, and how frequently they attend collaborative meetings. We will encourage respondents to answer these optional questions; if enough do, the cross-site process evaluation team will be able to conduct subgroup analyses of the survey data based on these background characteristics. If we are unable to conduct subgroup analyses by respondent type, we will use the survey data to report on overall responses to the items within each of the seven CAT factors.

To the extent possible, the cross-site process evaluation team will compare and contrast scores on collaboration factors within each site by organization type (lead organization, direct service provider, child welfare agency, etc.) and by respondent type (e.g., frontline staff vs. manager).3 The team will also compare and contrast scores on collaboration factors across grantee sites to identify common themes related to the organization/group processes employed by the collaborative, the implementation of collaborative activities, and perceptions of progress towards the goals and expected outcomes of the collaborative initiative. Yearly administration of the collaboration survey will allow the team to examine patterns and trends in the data as they evolve over time, at both the individual grantee and cross-site levels, and will provide insight into research questions 1-3.
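
To make the small-cell rule in footnote 3 concrete, below is a minimal, illustrative Python sketch of the subgroup computation. The project itself plans to use Excel or SAS; the DataFrame layout, column names, item names, and function name here are hypothetical.

import pandas as pd

# Illustrative only: the project plans to analyze survey data in Excel or SAS.
# The DataFrame layout, column names, and item names below are hypothetical.

MIN_SUBGROUP_N = 7  # footnote 3: report a subgroup only if it has more than 6 respondents


def factor_scores_by_subgroup(responses, factor_items, subgroup_col):
    """Mean CAT factor score per subgroup, suppressing small cells."""
    scored = responses.copy()
    # Every CAT item is a required field, so a per-respondent mean over the
    # factor's items, averaged within a group, matches the
    # sum-of-ratings / (raters x items) formula above.
    scored["factor_score"] = scored[factor_items].mean(axis=1)
    summary = scored.groupby(subgroup_col)["factor_score"].agg(["count", "mean"])
    # Suppress subgroups with 6 or fewer respondents to protect identities.
    return summary[summary["count"] >= MIN_SUBGROUP_N]


# Example usage: compare a factor's mean score by organization type.
# responses = pd.read_csv("cat_responses.csv")  # hypothetical survey export
# print(factor_scores_by_subgroup(responses, ["item_1", "item_2", "item_3"], "org_type"))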


Data Use

The study data may be archived for restricted use. If they are archived, we will develop a public-facing codebook that includes variable names and describes the major analysis codes used.



B8. Contact Person(s)


Name | Organization | Role on Contract | Phone/Email
Mary Mueggenborg | OPRE, ACF, HHS | Co-Contracting Officer’s Representative | (202) 401-5689, [email protected]
Laura Hoard | OPRE, ACF, HHS | Co-Contracting Officer’s Representative | (202) 401-4561, [email protected]
Allison Hyra | Abt Associates | Project Director | (301) 347-5058, [email protected]
Carolyn Layzer | Abt Associates | Cross-site Process Evaluation Task Lead | (617) 520-3597, [email protected]



Attachments – Added May 2020

Instrument 4a: COVID Site Visit Discussion Guide for Project Directors and Leaders from Partner Organizations – Interview #1

Instrument 5a: COVID Site Visit Discussion Guide for Staff from Lead and Partner Organizations – Interview #1


Appendix C-2: COVID Email from Project Directors to Survey Invitees Introducing the Data Collection Effort

Appendix D-2: COVID Email from Project to Survey Invitees including the Survey Link


Appendix F: Virtual Site Visit Planning Email

Appendix G: COVID Email from Project Directors Inviting Interviewees

Previously Approved Attachments

Instrument 1: Survey Invitee Template

Instrument 2: Online Annual Collaboration Survey

Instrument 3: Site Visit Planning Template

Instrument 4: Site Visit Discussion Guide for Project Directors and Leaders from Partner Organizations – Interview #1

Instrument 5: Site Visit Discussion Guide for Staff from Lead and Partner Organizations – Interview #1

Instrument 6: Site Visit Discussion Guide for Project Directors and Leaders from Partner Organizations – Follow-up Interviews

Instrument 7: Site Visit Discussion Guide for Staff from Lead and Partner Organizations – Follow-up Interviews

Appendix A: Federal Register Notice

Appendix B: Institutional Review Board Approval

Appendix C: Email from Project Directors to Survey Invitees Introducing the Data Collection Effort

Appendix D: Email from Project to Survey Invitees including the Survey Link

Appendix E: Reminder Email(s) from Project Directors to Survey Invitees to Increase Response Rate

1 These steps are also used in analyses of the Wilder Collaborative Factors Inventory.

2 The cross-site process evaluation team will create a unique survey link associated with each grantee’s project to be included in an email sent to each invitee’s email address.

3 To minimize the risk that respondents could be identified in summaries of survey data, analyses by organization and/or stakeholder type will only be conducted if there are more than 6 respondents within each subgroup category.


