
Community Catalyst (CCI) Program Evaluation: The Roles of Libraries and Museums as Enablers of Community Vitality and Co-Creators of Positive Community Change

OMB: 3137-0113



SECTION B. DESCRIPTION OF STATISTICAL METHODOLOGY


Study Design Overview


The evaluation study will assess both the implementation and the outcomes of the 24 grantee projects in IMLS's Community Catalyst Initiative (CCI). Table B.1 (attached as a separate file) shows a crosswalk of the research questions to specific evaluation constructs/indicators and proposed data sources. The table is divided into three major areas of inquiry: (1) questions about changes in museums/libraries and their local partners and communities; (2) questions about the quality of asset-focused, community-driven collaboration strategy implementation; and (3) questions about the contribution of, and relationships between, the asset-focused, community-driven collaboration strategy and outcomes.


To answer the evaluation questions in Table B.1, we propose a mixed-methods evaluation design that includes three sequential stages: (Stage 1) within-project assessment of each of the 24 grantees using descriptive and rubric analysis of administrative, survey, and interview data; (Stage 2) selected case studies for deeper inquiry, testing the links between asset-focused, community-driven collaboration strategies and outcomes using process tracing; and (Stage 3) cross-grantee portfolio analysis and summary using Qualitative Comparative Analysis and mixed-methods thematic analysis across all data sources collected in the earlier stages. The design will be staggered by the two cohorts of grantees (12 project grantees each in FY 2017 and FY 2018). Retrospective data for Cohort 1 and baseline data for Cohort 2 are the primary focus of the first year of data collection; follow-up data to assess change over time in Cohort 2 are the primary focus of the second year.


B.1. Respondent Universe


The 24 CCI projects comprise the universe for this evaluation study. Within this universe, we are collecting data from five sets of individuals.


Set 1: The first respondent set includes core project team members from the 24 CCI-funded projects.i The sample for project team interviews and case studies will be drawn from this universe. Core project teams include the individuals who work most closely on the project, receive technical assistance (TA), and have a sense of accountability toward grant expectations.


Set 2: The second respondent set includes all museum and library organizations involved in the 24 CCI-funded projects. This respondent set will respond to the Museum and Library Survey and includes up to 60 organizations in each of the two cohorts. Many members of this respondent set will overlap with the first respondent set (project team members); we anticipate that this overlap will consist of up to 40 respondents. We will ask respondents representing organizations that fall in both Respondent Sets 1 and 2 to participate in both the survey and the project team interviews because the two instruments address different evaluation questions: the project team interviews address implementation, challenges, and enabling conditions, while the survey addresses changes in capacity and practice.


Set 3: The third respondent set includes all local community partners (non-museum/library) for each CCI-funded project that are involved in implementing the project in some way. These respondents will participate in the Local Community Partner Survey. Local community partners will be classified as "traditional" (other institutions) or "non-traditional" (community organizations, associations, and leaders) partners. Museums and libraries are NOT included in this respondent set. Some members of this respondent set will overlap with the first respondent set (project team members); however, not all of these partners serve on the core project teams. We anticipate that this overlap will consist of up to 30 respondents. We will ask organizations that fall in both Respondent Sets 1 and 3 to participate in both the survey and the project team interviews, which address different evaluation questions: the survey addresses changes in practice and the strength and breadth of local networks, while the project team interviews address implementation, challenges, and enabling conditions.


Set 4: The fourth respondent set comprises community members in all 24 CCI project communities who participated in creating or implementing the CCI project plan or who participated in a CCI project-related activity. Because it is impossible to identify these individuals a priori, and because the number of community members touched by a given CCI project will vary with the project's focus and the size of the community served, the universe is conservatively estimated as the total population likely to be touched by the local CCI project in some way. A sample of these respondents will participate in focus groups as part of the case studies in the eight selected case sites (see Section B.2 below for more detail). We address issues related to potentially sensitive information in Sections A.10 and A.11 of Part A of this narrative.


Set 5: The fifth and final respondent set consists of CCI site consultants (IMLS's third-party TA providers, including members of IMLS's cooperator, ABCD, and individuals from EPA working under an interagency agreement) as well as IMLS staff involved with this initiative. These respondents will participate in consultant interviews and focus groups to assess the contribution of the site consultants. There are approximately 15 individuals in this set.



B.2. Potential Respondent Sampling and Selection Methods


Our proposed sampling and selection methods vary for each respondent set within the universe of 24 local CCI projects described above in B.1.


Set 1: Project team members from 24 CCI-funded projects

We plan to include up to five core members from each of the 24 project teams participating in CCI (24 x 5 = 120 participants: Cohort 1 = 60 individuals interviewed once, and Cohort 2 = 60 individuals interviewed twice, once at baseline and once at follow-up). While some project teams have more than five members, we are limiting group interviews to a maximum of five per team so that participants can contribute responses in the limited interview time and so that responses are not biased by variation in group size. Members will be purposively selected based on (1) knowledge of the work; (2) role/type of partner represented; (3) area of expertise (e.g., community engagement, operations, partnership development); and (4) availability. Each project team interview will include at least one museum or library representative.


Set 2: Museum and library representatives

We plan to administer the Museum and Library Survey to the universe of all museums and libraries involved in the 24 CCI-funded project grants. Working with the project lead and their third-party TA consultant, we will identify which museums and libraries are (1) participating as a project team member or (2) receiving benefit from the local CCI project. For each identified organization, we will work with the local CCI project team to identify an individual at that institution who has been involved in the project and can speak for the organization as a whole. That individual will be the point of contact for completing the survey but may reach out to other members of the organization for responses. The unit of measurement is the organization, not the individual.


We have identified a total of 21 museums/libraries in Cohort 1 and 22 museums/libraries in Cohort 2.


Set 3: Local community partners

We plan to administer the Local Community Partner Survey to the entire universe of project team-identified local partners involved in CCI-funded projects. Working with the project lead and their third-party site consultant, we will identify which local community partners are (1) participating as a project team member or (2) receiving benefit from the local CCI project. For each identified partner organization, we will work with the local CCI project team to identify an individual at that institution who has been involved in the project and can speak for the organization as a whole. That individual will be the point of contact for completing the survey but may reach out to other members of the organization for responses. The unit of measurement is the organization, not the individual.


As referenced in A.12, we have identified 22 partners in Cohort 1 from administrative records and reports, and 31 partners for baseline data collection in Cohort 2 from grant applications and conversations with the third-party TA provider. Because Cohort 2 projects are still running, we do not have an exact number of partners for follow-up data collection, particularly because one of the intents of the CCI project is to expand the number of new and non-traditional partners. Using the change in the number of partners for Cohort 1 as a starting point for estimation, we conservatively estimate that the number of Cohort 2 partners will increase over the two years of the CCI project to approximately 40 total partners. In addition, we anticipate that we might lose up to 10 partners across Cohort 2 grantees from Year 1 to Year 2. Based on these numbers, we conservatively estimate that we will collect data from approximately 100 total partner organizations across Cohorts 1 and 2.


Set 4: Community members

We will work with local project teams to recruit individual community members for a structured focus group during the case study onsite data collection. The evaluation plans to conduct case studies in eight project sites, and we anticipate that each site will include focus groups with up to 10 community members (8 sites x 10 participants per site = 80 participants). We will ask local project teams to nominate ten participants for the focus group and two alternates as replacements if any of the original ten cannot participate. Participants will be identified in consultation with the grantee project team and their third-party TA consultant. Community members will be considered for inclusion if they helped develop the project plan, helped implement it, or participated in a project-generated activity. We will work with site consultants and project teams to intentionally identify participants with a range of reactions to the CCI-funded project and the library/museum, from positive to negative, in order to mitigate positive response bias. All focus group participants must be age 18 or older so that we are not collecting sensitive data from minors. We address issues related to potentially sensitive information in Sections A.10 and A.11 of Part A.


Set 5: CCI consultants

The universe of technical assistance (TA) providers includes IMLS program staff, staff from ABCD (IMLS's cooperator), and staff from EPA (through IMLS's interagency agreement). We plan to interview all site consultants from ABCD and EPA, as well as IMLS program staff, to assess the quality of third-party TA and local CCI implementation; we estimate 15 such individuals in total.


B.3. Response Rates and Non-Response


Table B.3 shows the anticipated response rates for each of the data collection tools included in the burden estimates.


Table B.3 | Anticipated Response Rates for Data Collection

Data Collection | Respondents | Timing of Data Collection | Universe | Anticipated Response Rate
Cohort 1 (C1) Museum/Library/Grantee survey (retrospective) | All participating C1 museums and libraries | April 2019 | N = 21 | 100%
Cohort 2 (C2) Museum/Library/Grantee survey (baseline) | All participating C2 museums and libraries | April 2019 | N = 22 | 100%
C2 Museum/Library/Grantee survey (follow-up) | All participating C2 museums and libraries | April 2020 | N = 22 | 100%
C1 Local Partner survey (retrospective) | All participating partners identified by C1 project teams | April 2019 | N = 22 | 70%
C2 Local Partner survey (baseline) | All participating partners identified by C2 project teams | April 2019 | N = 31 | 70%
C2 Local Partner survey (follow-up) | All participating partners identified by C2 project teams | April 2020 | N = 40 | 70%
C1 Project Team Interviews (retrospective) | All C1 project teams | May/June 2019 | N = 12 teams (up to 60 individuals) | 100%
C2 Project Team Interviews (Year 1) | All C2 project teams | August/Sep 2019 | N = 12 teams (up to 60 individuals) | 100%
C2 Project Team Interviews (Year 2) | All C2 project teams | August/Sep 2020 | N = 12 teams (up to 60 individuals) | 100%
Case Studies | Selected project sites | Oct-Dec 2019 (R1); Oct-Dec 2020 (R2) | N = 8 sites (16 interviews; ~200 focus group participants) | 100%
Provider Interviews | All providers | Apr 2019 (C1); Oct 2019 (C2); Oct 2020 (C2) | N = 15 | 100%


As shown in Table B.3, the evaluation team's goal is to achieve a minimum overall response rate of 80% across the museum/library and local partner surveys, including the Cohort 1 retrospective survey and Cohort 2's baseline and follow-up surveys. This goal follows the OMB-recommended 80% response rate threshold for minimizing non-response bias and is consistent with other similar evaluation efforts we have conducted.


To reach this overall rate of 80%, we expect higher response rates (near 100%) from direct project team members, as they receive grant funding and are expected to participate in evaluation activities. We expect lower response rates from partners who are not receiving grant funding; our previous experience suggests these rates will average around 70%.


We will implement several strategies to maximize survey response rates. First, we will collaborate with the third-party site consultants assigned to each grant project, who are working closely with each project to build capacity and support the local CCI-funded efforts. As part of their ongoing support, site consultants are working with project teams to develop data and local third-party project evaluation capacity. We also expect that site consultants will play an important role in conveying the importance of the study and in helping us obtain support from project team members and community partners. We have included copies of cover letters and reminder messages for the surveys.


Second, to avoid survey fatigue, we have kept the length of the Museum/Library/Grantee Surveys to about 20 minutes and the Local Partner Surveys to about 10 minutes. We will pilot the surveys with site consultants prior to administration to verify survey length and the clarity of items. We also use open-ended items sparingly, spreading them throughout the survey to reduce the cognitive burden associated with responding to these kinds of questions (Couper, Traugott, and Lamias, 2001).


If survey response rates fall below 80%, as we expect might happen with the partners taking the Local Partner survey,1 we will conduct missing-data analyses and correct for non-response bias. Our non-response bias analysis will assess the impact of non-response by comparing the characteristics of survey respondents with the universe of potential respondents on available characteristics (e.g., partner type). The universe of potential organizational respondents will be drawn from the project team-identified contact lists received for each grantee project. Through this comparison, we will determine whether there are meaningful differences between actual and potential survey respondents. Depending on the results, we will determine whether non-response needs to be addressed using additional statistical procedures such as weighting; the application of these methods and all results will be subject to peer review by those at IMLS and on the EST with statistical methodological expertise.
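
For illustration, the respondent-versus-universe comparison could be scripted along these lines (a minimal sketch in Python; the file and column names are hypothetical):

    import pandas as pd
    from scipy.stats import chi2_contingency

    # Hypothetical contact list: every project team-identified partner
    # organization, flagged (True/False) by whether it completed the survey.
    frame = pd.read_csv("partner_contact_list.csv")  # columns: org_id, partner_type, responded

    # Compare partner-type distributions for respondents vs. non-respondents.
    table = pd.crosstab(frame["partner_type"], frame["responded"])
    chi2, p, dof, expected = chi2_contingency(table)
    print(table, f"chi-square p = {p:.3f}", sep="\n")

    # If meaningful differences appear, post-stratification weights can make
    # respondents of each partner type count in proportion to the universe.
    pop_share = frame["partner_type"].value_counts(normalize=True)
    resp_share = frame.loc[frame["responded"], "partner_type"].value_counts(normalize=True)
    weights = pop_share / resp_share  # weight per partner type
    print(weights)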


With respect to attrition in the Cohort 2 follow-up surveys, we will conduct standard analyses to assess the level of selection bias introduced (Hochheimer, Sabo, and Woolf, 2016). These analyses include examining descriptive statistics to determine whether respondents who dropped out differ from those who did not. Depending on the results, we will determine whether the degree of bias needs to be addressed using additional statistical procedures such as weighting; the application of these methods and all results will be subject to peer review by those at IMLS and on the EST with statistical methodological expertise.


B.4. Tests of Procedures and Methods


This section describes the analytic methods we plan to use to assess initiative-level outcomes, document CCI implementation and lessons learned, and evaluate the contribution of an asset-focused, community-driven collaborative approach to outcomes for museums/libraries and local networks and communities. As stated previously in A.2, our proposed mixed-methods design attempts to identify contributions of the CCI approach to these outcomes but is not causal in nature.


Our analytic strategy relies on building a body of evidence from the data collected during the three sequential stages. Table B.4 shows the analytic strategies for each stage, along with the evaluation questions answered and the sources of evidence to be used to answer them.


Table B.4 | Analysis Stages

Stage | Analysis Method | Evaluation Question(s) (from Table A.1) | Data Source/Evidence
Stage 1: Within-grantee analysis | Descriptive analysis of survey data; coding of qualitative data (interviews and documents); application of analytic rubrics; social network analysis | RQ 1-4 and 6-9 | Project work products (asset maps, self-assessments, network data, power ladders); surveys; project team interviews; grantee reports; provider documentation (notes and resources); provider interviews
Stage 2: Case studies | Local theories of change; process tracing; additional qualitative analysis | RQ 5 and 10 | Case study follow-up interviews; facilitated process tracing dialogues; additional documentation (if applicable)
Stage 3: Cross-grantee analysis | Mixed methods thematic analysis; Qualitative Comparative Analysis | RQ 4-5, 9-10, and 11-15 | All Stage 1 and 2 data


Stage 1: Within-Grantee Project Analysis


Analysis in the first stage focuses on within-case (grantee project) analysis for each of the 24 CCI grantees. It combines descriptive statistical analysis of quantitative data, thematic analysis of narrative data in administrative documents and survey questionnaires, a structured coding scheme for qualitative interview data, and the application of a set of analytic rubrics (see below for more detail). A social network analysis is also planned for this stage.


1.A. Analysis of administrative data. As part of their ongoing consultation with project teams, the third-party TA providers will produce a body of work products designed to support capacity building on key aspects of the asset-based community development model. Our within-case analysis will draw on a number of these work products, as follows:

  • Thematically analyzing a sample of consultant notes for each grant project, using the procedures described under 1.C below;

  • Descriptively analyzing the number and types of community assets the project identifies and activates and how those distributions shift over time using asset-mapping tools;

  • Thematically coding strategies and outcomes on local grant projects’ theories of change and how those shift over time; and

  • Summarizing self-assessments and third party consultant assessments of capacity and its development over time.


1.B. Analysis of Survey Data

The analysis of the survey data serves three distinct purposes:

  1. Create quantitative profiles of museums, libraries, and their partners, and conduct descriptive correlational analyses to examine patterns and differences across subgroups;

  2. Construct a social network analysis of museum/library and local network partners; and

  3. Create indicators for input into the analytic rubrics (described below).


Data preparation. Survey responses will be collected using the online tool SurveyMonkey. After data cleaning, we will conduct item analyses on survey scales to assess the internal consistency reliability and the convergent and divergent validity of scale scores. For internal consistency estimates, we will calculate Cronbach's alpha for all survey scales (note that not all survey items are anticipated to load on a composite scale). Scales will be used in further analysis if Cronbach's alpha is .70 or higher (Terwee, Bot, et al., 2003). For scales with alphas below .70, we will conduct further item analysis to refine the scale or will report on individual items.
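
A minimal sketch of the alpha computation (in Python; the example item names in the comment are hypothetical, and any survey analysis package could substitute):

    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """Cronbach's alpha for a set of items scored in the same direction.

        items: one row per respondent, one column per item on a single scale.
        """
        items = items.dropna()
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances / total_variance)

    # Retain a composite scale only if alpha >= .70; otherwise inspect items.
    # alpha = cronbach_alpha(survey[["trust_q1", "trust_q2", "trust_q3"]])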


In addition, we will correlate each scale item with its own scale (removing that item from the scale) to assess convergent validity and will correlate it with other scales from the survey to assess divergent validity. We would expect convergent item-scale correlations to exceed divergent item-scale correlations by at least 1.5x to demonstrate initial construct validity of the scores for our sample (Gliem and Gliem, 2003).
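
A companion sketch of the convergent/divergent check, correlating each item with its own scale total (item removed) and with the other scale totals (the scale definitions are hypothetical, and the sketch assumes at least two scales):

    import pandas as pd

    def item_scale_correlations(df, scales):
        """scales: dict mapping scale name -> list of its item columns."""
        totals = {name: df[cols].sum(axis=1) for name, cols in scales.items()}
        rows = []
        for name, cols in scales.items():
            for item in cols:
                own_total = df[cols].drop(columns=item).sum(axis=1)  # own scale, item removed
                convergent = df[item].corr(own_total)
                divergent = max(df[item].corr(totals[other])
                                for other in scales if other != name)
                rows.append({"scale": name, "item": item,
                             "convergent_r": convergent, "divergent_r": divergent,
                             "ratio": convergent / divergent})
        return pd.DataFrame(rows)

    # Items whose ratio falls below the 1.5x criterion would be flagged for review.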


Descriptive and correlational analyses. We will conduct initial descriptive analysis of survey data to understand response patterns on the items/scales for the whole sample and by specific subgroups. These descriptive analyses will examine means, standard deviations and frequencies for each item for the overall sample. We will also conduct a series of cross-tabs to examine patterns for different categories of respondents, such as museums/libraries, Cohort 1/Cohort 2, and types of partners. We will present these results using a series of data tables and graphical representations.
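
For illustration, these descriptive runs could be scripted along the following lines (a minimal sketch; the file, column, and scale names are hypothetical):

    import pandas as pd

    survey = pd.read_csv("museum_library_survey.csv")  # hypothetical cleaned export

    # Means and standard deviations for each scale, plus item frequencies.
    print(survey[["capacity_scale", "practice_scale"]].agg(["mean", "std"]))
    print(survey["partner_type"].value_counts())

    # Cross-tabs for subgroup patterns, e.g., cohort by institution type.
    print(pd.crosstab(survey["cohort"], survey["institution_type"], normalize="index"))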


Social Network Analysis. On the two surveys, we will be asking museums, libraries and institutional partners how they are connected and about how they interact with one another. These questions focus on the following dimensions:


  • Value of the collaborative network— includes power and influence, level of involvement, and resource contribution of network organizations;

  • Trust—including reliability, openness to discussion/compromise, and support of the collaborative mission; and

  • Network characteristics—including density of the network (degree of cohesion/interconnectivity in the network), degree of centralization (the degree to which activity is centered in a few organizations [high] or spread across organizations [low]), and size of the network.


To analyze these data, we will use a network mapping tool, Kumu,ii to calculate network scores for the dimensions described above and to create visual maps of the local networks.
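
Kumu computes these scores interactively; purely as an illustration of the underlying measures, the same network statistics can be sketched with the open-source networkx library (the edge list here is hypothetical):

    import networkx as nx

    # Hypothetical ties reported on the surveys: one edge per pair of
    # organizations that interact.
    edges = [("Museum A", "Library B"), ("Library B", "Partner C"),
             ("Museum A", "Partner C"), ("Partner D", "Library B")]
    G = nx.Graph(edges)

    size = G.number_of_nodes()
    density = nx.density(G)  # cohesion: ties present / ties possible

    # Freeman degree centralization: 1.0 if all activity centers on one
    # organization (a star), near 0 if activity is evenly spread.
    degrees = [d for _, d in G.degree()]
    n, max_deg = size, max(degrees)
    centralization = (sum(max_deg - d for d in degrees) / ((n - 1) * (n - 2))) if n > 2 else 0.0

    print(f"size={size}, density={density:.2f}, centralization={centralization:.2f}")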


Indicators for Analytic Rubrics. The survey items are designed to provide indicators of CCI implementation and outcomes that museums, libraries, and partners are experiencing. These indicators will be used in applying the analytic rubrics (described below).


1.C. Analysis of Qualitative Interview Data


Data preparation and coding. To prepare interview and document data for analysis, we will create a codebook guided by the evaluation questions. The codebook will contain codes that are identified a priori to align to the analytic rubrics, as well as emergent codes based on analysis of the data to capture important themes. Each code in the codebook will include a brief definition of the code as well as examples. We will refine the codebook based on its application to the first few sites coded.


To ensure the coding is rigorous and reliable, we will conduct inter-rater reliability training with all members of the team. During the training, the team will engage in iterative rounds of coding the same data sources, using discussion to resolve differences. Coders will be deemed reliable when they reach 80% agreement with the other team members. We anticipate two to three cycles of training to reach this level of inter-rater reliability.


We will conduct calibration checks at the half-way point of coding. Calibration checks require a second coder to code a sample of documents and/or transcripts already coded, and to calculate inter-rater agreement (IRA) for the doubly coded data. If IRA remains 80% or higher, no further action will be taken. If IRA is less than 80%, additional calibration training will be conducted until the coders reach 80%.
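
The agreement statistic itself is simple; a minimal sketch follows (the code assignments are hypothetical, and a chance-corrected statistic such as Cohen's kappa could supplement it):

    def percent_agreement(coder_a, coder_b):
        """Share of doubly coded excerpts assigned the same code by both coders."""
        assert len(coder_a) == len(coder_b), "coders must rate the same excerpts"
        matches = sum(a == b for a, b in zip(coder_a, coder_b))
        return matches / len(coder_a)

    # Recalibrate if agreement on the doubly coded sample falls below .80:
    # ira = percent_agreement(first_coder_codes, second_coder_codes)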


Coders will code across data sources within a grantee to maximize consistency within cases. To organize the large amount of data coded in this process, we will use Dedoose,iii an online qualitative analysis tool, to code, store, and organize the data.


1.D. Analytic Rubrics

To compare implementation of asset-focused, community-driven collaboration practices and principles across all 24 grantee projects, we will use an analytic rubric that includes evaluative criteria, quality definitions for those criteria at different levels of achievement, and a scoring strategy. The rubric will allow grantees to be compared on common indicators and will support judgments about how successfully key asset-focused, community-driven collaboration concepts were implemented. Its analytic value lies in balancing site differences in types of outcomes, capacity, and implementation elements while avoiding exclusive reliance on holistic qualitative assessment and comparison of cases. Because the rubric is clearly defined, each grantee project will be analyzed against the same criteria and compared at that level, limiting the risk of variable rigor and precision in the analysis.


The analytic rubric will be developed based on rubrics we have used in assessing other collective community transformation initiatives, as well as on the literature around community engagement, capacity building, and collective action (Stachowiak, Lynn, Akey, and Gase, 2018). The rubric will include a set of constructs with clearly defined, measurable indicators and a set of performance levels, ranging from low to high, for each construct. Data from surveys, interviews, and documents will provide the evidence sources for completing the rubric. The team will develop a set of descriptors and examples for each construct assessed on the rubric. Table B.5 shows potential dimensions to be assessed, their purpose, and a summary of indicators for each.


Table B.5 | Asset-focused, community-driven collaborative practice and principles rubric dimensions

Rubric Dimension | Intended Purpose | Example Indicators
Collaboration and partnership | Assesses the degree to which the local CCI initiative is building and strengthening community partnerships and networks in service of promoting authentic asset-focused, community-driven collaboration | Depth and strength of partnerships and network; effectiveness of partnership efforts; shared vision and approach to work among community partners
Asset-focused, community-driven collaboration implementation | Assesses the degree to which different elements of asset-focused, community-driven collaboration are being implemented or manifested in local community programs | Co-creation with community partners; authentic asset-focused, community-driven collaboration
Power dynamics | Assesses the key aspects of shifting power and inclusion that CCI is intended to build in museums, libraries, and community partners | Orientation toward shifting power dynamics; capacity to engage diverse stakeholders; inclusive practice
Sustained practice and systems changes | Assesses evidence that asset-focused, community-driven collaboration practices and principles are being integrated into organizational culture beyond the project and that systems are being created or refined to support community change | Organization-wide policies and practices; explicit leadership commitment; formal collaborations, programs, and interventions

The process of conducting within-case analysis using the rubric will mirror the process for coding the qualitative data described above. First, we will conduct iterative calibration training to ensure analysts complete the rubric reliably, using the indicators and criteria for each dimension. We will also conduct a mid-point review and recalibrate if needed. As a final step, we will conduct a team review of all rubric scores; this process involves discussing rubric ratings, coming to consensus, and assigning a final score. Significant changes to scores will be documented.


At the end of this analysis process, we will have developed rich within-case summaries and indicators, which will feed into the portfolio-level analyses in Stage 3. These analyses will also serve as the foundation for selecting grantee communities for case studies.


1.E. Thematic Analysis

Some evaluation questions and data elements are designed to elicit emergent themes rather than to elucidate constructs identified a priori. Examples of these emergent constructs might include challenges or unintended consequences of asset-focused, community-driven collaboration work. To allow for these emergent themes, we will add codes to our codebook to capture them.


1.F. Network Analysis

One of the intended outcomes for CCI is that the initiative fosters a learning community that continuously shares information, lessons learned, and new ideas. The evaluation aims to understand the extent to which members of the network exchange learnings that have a real impact on their CCI project or on other projects involving the library/museum. A systematic network analysis will be conducted using the network analysis software Kumu, which calculates different dimensions of networks (Acosta, Chandra, et al., 2015). The data for this analysis will come from a group activity facilitated by the third-party TA providers at the annual grant convenings, in which grantees identify the relationships and interactions they have with other grantees in their cohort. Table B.6 shows the dimensions of the network that will be assessed.


Table B.6 | Grantee Cohort Network Characteristics

Network Dimension | Analysis | What Analysis Says
Network density | Number of connections in the cohort divided by the total number possible, computed (a) for any type of connection in the cohort and (b) by type of connection | The extent to which cohort members are learning from each other and the extent to which the learning was used to support their own CCI project or another community-change effort a library/museum is involved in
Types of connections | Shared information; applied information to own CCI project; applied information to work beyond CCI | The extent to which learning is applied to CCI work and transferred to other non-CCI community engagement efforts




The activity will also elicit additional implications of network learning and partnerships through discussion of the mapping exercise. A thematic analysis of the third-party TA providers' summary notes will identify stories of learning that had an impact on a CCI project or on another community-change effort a library/museum is involved in, as well as the aspects of the CCI initiative that contributed most to network learning and partnerships. In addition, for Cohort 2 the social network analysis will include a comparison of changes over time.


Stage 2: Case Studies

Our analytic approach in Stage 2 has four parts. First, we will use the rubric ratings from the Stage 1 within-case analyses as an input into selecting the eight case study sites (see B.2 above for a description of case study site selection). Second, once sites are selected, we will use the within-case analyses to develop local theories of change. Third, we will update the within-case analyses to reflect any additional data collected during the case study process. Finally, we will conduct process-tracing analyses to identify contributions of CCI to outcomes for museums, libraries, and local communities.


2.A. Creating local theories of change

As a first step in preparing for the case study site visits, we will review and refine local theories of change (TOCs) for each of the eight project sites. These local TOCs were created with the TA consultants in asset-based community development to guide the projects' efforts. They will align with the initiative-level TOC (referenced in Part A; the TOC is a separate attached document) but will include local examples of strategies and outcomes. Local TOCs provide the foundation for the hypotheses to be tested during process tracing analysis and are the material that case study respondents will react to during data collection. We will review the local TOC with each site and, in collaboration with the site leads and TA consultants, refine it to more accurately reflect how the project team views the CCI work in their community. The refined TOC is embedded in the case study data collection protocols.


2.B. Updating coding of qualitative data and analytic rubrics

After case study data collection, we will update the analytic rubric scores and qualitative analysis for the eight case sites based on new data. Our coding and analysis procedures will mirror those described above in Stage 1.


2.C. Process tracing

One of the questions this study seeks to answer is whether there is a direct relationship between CCI capacity building and implementation and the contribution of asset-focused, community-driven collaboration efforts to outcome changes for museums/libraries and local networks and communities. We will answer this question by applying process tracing methodology, which explores competing hypotheses about plausible explanations for the causes of a given outcome (see Table B.7; Beach, 2018; Bennett and Checkel, 2014). The hypotheses include both the contribution of asset-focused, community-driven collaboration and other drivers of change identified and prioritized by case study sites. The analysis assesses, and where possible quantifies, the degree of contribution that can be connected to each hypothesis or cause.


Specifically, we will use each project's local theory of change to first identify salient, plausible explanations for the outcomes. These potential explanations will then be vetted against multiple sources of data to assess the extent to which each hypothesis is or is not supported by the available evidence. The data for process tracing will come from the documents, interviews, and surveys in the first evaluation stage, as well as the additional data collected during case study site visits. Table B.7 shows the different kinds of hypotheses we might expect to assess.


Table B.7 | Types of Hypotheses Analyzed in Process Tracing

  • CCI grantee supports lead to increased capacity of grantee project teams
  • Increased capacity for asset-focused, community-driven collaboration leads to practice changes
  • Asset-focused, community-driven collaboration changes lead to stronger local ecosystems
  • Asset-focused, community-driven collaboration changes lead to organizational culture and systems changes beyond the grant-funded project
  • Local ecosystem changes lead to systems changes that are community-led or co-created
  • Systems changes lead to changes in community social well-being


One benefit of process tracing is that it standardizes how the strength of hypothesized relationships is assessed, based on two facets: the certainty, or necessity, of the hypothesized relationship, and the uniqueness, or sufficiency, of the elements of the relationship in fully explaining the outcome. Each hypothesis is evaluated against these two criteria of necessity and sufficiency based on the strength of the available evidence (see Table B.8 for the levels of evidence strength).


Table B.8 | Levels of Inferential Strength Assessed through Process Tracing

Level of Inferential Strength | Strength of Evidence
The hypothesis is plausible but neither proven nor disproven | Evidence is suggestive of a relationship but insufficient to draw a definitive conclusion about its contribution to the outcome relative to rival explanations
The hypothesis is certain but not unique | Evidence is sufficient to conclude a relationship exists, but not to rule out the possibility that the outcome would also have occurred due to rival explanations
The hypothesis is plausible and unable to be explained by a rival explanation | Evidence is sufficient to conclude that a relationship exists and that the outcome would not have occurred due to rival explanations
The hypothesis is deemed "doubly decisive" | Evidence provides high certainty of contribution and there is no alternative explanation; this level of strength is extremely unlikely for complex systems change initiatives


Using the refined theory of change and the data from the structured focus groups with project teams and key community partners conducted during case study site data collection, we will highlight the hypotheses respondents identify as important drivers of change, including drivers other than asset-focused, community-driven collaboration. This updated theory of change will then be translated into a set of hypothesis statements and a narrative describing how change occurs. The analyst will then review the data and evidence to make three judgments about each hypothesis: Is the hypothesis a necessary explanation for change? Is it sufficient? And what is the strength of evidence for each judgment?


Each hypothesis will be rated as "necessary" or not (i.e., were its elements required for change to have occurred?) and "sufficient" or not (i.e., is the achievement of the first part of the hypothesis enough to fully explain the achievement of the second part?). Evidence and rationale will be provided to explain the ratings. Analysts will also rate the plausibility that an alternative scenario provides a stronger explanation of the relationship than the site hypothesis, again providing evidence and rationale. Each hypothesis will also be rated high, medium, or low for the strength of the relationship based on all the data, considering the strength of evidence, the ability to triangulate data, and the strength of the alternative. Finally, in addition to rating each hypothesis, analysts will make a summary judgment about the strength of the relationship between asset-focused, community-driven collaboration and local outcomes.


To assure that the hypotheses for individual cases are reliably rated, the analysis team will collectively vet the entire set of hypothesis ratings as a group, resolving discrepancies through discussion and consensus. Once the set of hypotheses is vetted, the analysis will move to examining patterns in hypotheses across sites. In this cross-site analysis, we will look at the strength of relationships across different types of hypotheses as well as for the theory of change overall. To the degree that there is consistency in high ratings for CCI-related hypotheses and in low ratings for alternative explanations, we can draw stronger conclusions about how asset-focused, community-driven collaboration contributed to local outcomes. To the degree that CCI-related relationships are not stronger than alternative explanations, or there is significant variation in the CCI-related hypotheses, there is less evidence for the unique contribution of asset-focused, community-driven collaboration.
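
As an illustration of how the vetted ratings could be tabulated for this cross-site pattern analysis (a sketch; the hypothesis labels and rating values are hypothetical):

    import pandas as pd

    # One row per (site, hypothesis) judgment from the vetted rating set.
    ratings = pd.DataFrame([
        {"site": "Site 1", "hypothesis": "capacity -> practice change",
         "necessary": True, "sufficient": False, "strength": "high", "rival_stronger": False},
        {"site": "Site 2", "hypothesis": "capacity -> practice change",
         "necessary": True, "sufficient": False, "strength": "medium", "rival_stronger": True},
    ])

    # Distribution of evidence strength by hypothesis type across case sites.
    print(pd.crosstab(ratings["hypothesis"], ratings["strength"]))

    # Proportion of sites where each hypothesis was judged necessary,
    # sufficient, or bettered by a rival explanation.
    print(ratings.groupby("hypothesis")[["necessary", "sufficient", "rival_stronger"]].mean())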


Stage 3: Cross-Grantee Analysis

The purpose of Stage 3 analyses is to look across all data collected in Stages 1 and 2 and to build on the within-case analyses to draw conclusions at the cross-grantee level about asset-focused, community-driven collaboration, the outcomes that have been achieved, and the contribution of the approach to those outcomes. Our analytic approach for Stage 3 falls into two areas:


  • Mixed methods thematic analysis of rubric and qualitative data to unpack variation in themes; and

  • Qualitative comparative analysis to triangulate process tracing results from Stage 2 across the entire set of 24 grantees.


Mixed Methods Thematic Analysis

To draw out important themes across grantee projects from interviews, documents, surveys, and rubrics, we plan to conduct a two-step descriptive analysis following Stachowiak, Lynn, Akey, and Gase (2018). In the first step, the data are coded categorically based on the asset-focused, community-driven collaboration rubric and descriptive characteristics (such as museum/library, size of network, and focus area); these data will be explored using quantitative techniques (chi-square tests and k-means cluster analysis) to identify patterns in relationships, or themes, across grantee projects. In the second step, we will examine patterns in the rubric and conduct additional thematic analyses of the qualitative data with respect to their alignment with the evaluation questions. We plan to select a subset of the exploratory relationships for deep-dive analysis of the qualitative data, focusing on themes within the data associated with those relationships, including exemplars and exceptions. These additional thematic analyses will be used to draw lessons learned and implications across the 24 grantee projects about asset-focused, community-driven collaboration, implementation, challenges and barriers, and enabling conditions.
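
A sketch of the first, exploratory step (the file, variable, and rubric-dimension names are hypothetical):

    import pandas as pd
    from scipy.stats import chi2_contingency
    from sklearn.cluster import KMeans

    grantees = pd.read_csv("grantee_rubric_scores.csv")  # one row per grantee project

    # Chi-square test of association between categorical descriptors, e.g.,
    # institution type and a high/low collaboration rubric rating.
    table = pd.crosstab(grantees["institution_type"], grantees["collab_rating"])
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi-square = {chi2:.2f}, p = {p:.3f}")

    # k-means clustering on the numeric rubric dimensions to surface groups
    # of grantees with similar implementation profiles.
    dims = ["collab_score", "implementation_score", "power_score", "systems_score"]
    km = KMeans(n_clusters=3, n_init=10, random_state=0)
    grantees["cluster"] = km.fit_predict(grantees[dims])
    print(grantees.groupby("cluster")[dims].mean())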


Qualitative Comparative Analysis

To draw conclusions about the contribution of asset-focused, community-driven collaboration to museum, library, and local community outcomes, we will conduct Qualitative Comparative Analysis (QCA), a means of analyzing the causal contribution of different conditions (e.g., aspects of an intervention and the wider context) to an outcome of interest (Ragin, 2009; Rihoux, 2006; Schneider and Wagemann, 2010, 2012).


We have selected QCA for analyzing causal complexity using qualitative data; the units of analysis are the 24 projects, or "cases." Our proposed use of QCA in this study is confirmatory in nature: we will use the results of the within-case analysis in Stage 1 and the process tracing in Stage 2, along with the CCI theory of change, to set up our initial QCA analyses. QCA produces tests of "necessity" and "sufficiency" for the linkages in the CCI theory of change, both for individual links and for the theory overall.
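
To make the logic of these tests concrete, a crisp-set sketch follows (the condition and outcome indicators are hypothetical, coded 0/1 per case from the Stage 1 and 2 analyses):

    import pandas as pd

    # 24 rows: one per grantee project, with 0/1 condition and outcome codes.
    cases = pd.read_csv("qca_case_conditions.csv")

    def sufficiency_consistency(df, condition, outcome):
        """Of cases exhibiting the condition, the share also exhibiting the outcome."""
        present = df[df[condition] == 1]
        return (present[outcome] == 1).mean()

    def necessity_consistency(df, condition, outcome):
        """Of cases exhibiting the outcome, the share also exhibiting the condition."""
        achieved = df[df[outcome] == 1]
        return (achieved[condition] == 1).mean()

    # e.g., test one link in the CCI theory of change:
    # sufficiency_consistency(cases, "strong_collaboration", "systems_change")
    # necessity_consistency(cases, "strong_collaboration", "systems_change")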



B.5. Contact Information for Statistical or Design Consultants


The contact information for the individuals responsible for the statistical design of the current study is:


ORS Impact

Terri Akey, Ph.D., [email protected]

Jennifer Beyers, Ph.D., [email protected]


IMLS

Marvin Carr, Ph.D., [email protected]

Matthew Birnbaum, Ph.D., [email protected]




1 We anticipate potential challenges in achieving 80% response rates for Cohort 1 local partner surveys because the projects will be completed by the time data collection occurs.

i IMLS has funded 24 Community Catalyst grants in FY 2017 (12) and FY 2018 (12). Grants range from 12 months to 24 months.

ii https://kumu.io/jeff/social-network-analysis

iii https://www.dedoose.com




