Evaluation of the Communities for Immunity Project
SUPPORTING STATEMENT B
Description of Statistical Methods
Study Design Overview
The proposed evaluation study will examine implementation and outcomes of the Communities for Immunity (C4I) project, led by the Institute of Museum and Library Services (IMLS) and the Centers for Disease Control and Prevention (CDC) and administered by the Association of Science and Technology Centers (ASTC). C4I supports libraries, museums, and their community partners in engaging local communities with the aim of increasing vaccine confidence and ultimately improving community vaccination rates. The project also aims to increase libraries' and museums' organizational capacity to partner in addressing critical local issues.
The purpose of this evaluation is to provide insights into C4I project management, project implementation, and the outcomes supported by the project. This is not an audit of awardees or their individual performance. The evaluation will be guided by these primary objectives:
Assess whether and the degree to which project design enables effective performance measurement and evaluation.
Characterize C4I funded projects (including in terms of target audiences, partnering strategies, and engagement strategies) and IMLS and ASTC efforts to engage awardees in a community of practice.
Evaluate project implementation and whether and how awardee activities influenced participants' vaccine confidence and vaccine-seeking behaviors.
Evaluate changes in awardees’ and participants’ understanding of awardee organizations and their partners as trusted sources for timely, relevant information and community resources.
B1. Respondent universe and selection methods
The evaluation study will use a mix of secondary and primary data collection methods to study project activities funded in Round 1. The primary data collection methods include a survey of awardees/partners, a survey of participants, and semi-structured interviews with awardees/partners and participants. C4I is a one-time project to address an urgent national need; these data collections have not been conducted previously, so no prior response rates are available.
The universe for Round 1 data collection includes two types of respondents: (1) awardee organization staff and partner staff associated with the 52 Round 1 awards, and (2) participants in activities of an estimated 42 of the 52 Round 1 projects (i.e., excluding an estimated 10 projects whose activities will conclude prior to the start of data collection and/or whose activity types make participant data collection infeasible). Exhibit 3 provides the universe of awardee, partner, and participant respondents, the number of respondents that will be selected to participate in each data collection activity, and the expected response rates.
Exhibit 3. Universe of respondents and sample selection
Data collection activity | Universe of respondents | Sample | Expected response rate | Expected respondents
Awardee/Partner Survey | Estimated 1 C4I project lead from each awarded institution plus an average of 2 partner leads for each of the 52 Round 1 awards (156 awardee and partner leads total) | Census of 156 awardee and partner leads | 90 percent | 140 people
Participant Survey | All participants in an estimated 42 of 52 projects (those expected to have a feasible means of collecting participant data and with project activities still occurring during data collection) | Convenience sample of an average of 60 participants per project in 42 projects (2,520 participants total) | 40 percent (average of 24 people per project in 42 projects) | 1,032 people
Awardee and Partner Staff Interviews | Estimated 3 project leads (1 awardee, 2 partners) for each of an estimated 10 projects that have participant contact information (30 people total) | Estimated 3 people associated with each of 2 purposively sampled projects (6 people total) | 100 percent | 6 people
Participant Interviews | Estimated 50 participants with contact information for each of the 10 (of 52) projects that have such information (500 people total) | 8 participants in activities of each of 2 purposively sampled projects (16 people total) | 50 percent | 8 people
Awardee/partner survey universe
SRI will administer the awardee/partner survey to the project lead at each participating organization: an estimated average of three organizational leads (one awardee and two partners) for each of 52 projects, or 156 people total. Conducting a census of all project leads will aid SRI in characterizing project implementation and outcomes across Round 1 awards and support grouping of projects by salient characteristics, such as lead institution type, target audience characteristics, engagement strategies, changes in participant vaccine confidence, alignment of activities to organizations' missions, and levels of trust between participating organizations and participants.
Participant survey universe
SRI estimates that participant surveys can be administered in about 42 of 52 C4I projects (about 80%). In approximately 10 of the 52 Round 1 projects, project activities will conclude prior to the start of data collection and/or the type of project activities makes data collection infeasible. The universe is all participants across the 42 projects not excluded for either reason; however, quantifying the total number of participants is difficult because activities vary greatly across projects, from distribution of informational bookmarks and yard signs, to exhibits and events, to webinars and television segments. Accordingly, project target audiences range in size from a few hundred people to mass audiences.
Universes for awardee/partner and participant interviews
SRI will conduct limited interviews in Round 1 to gather in-depth awardee and participant perspectives that elucidate survey findings and, with a formative lens, to inform Round 2 interview sampling and procedures. SRI will sample for interviews at the project level, since interviewing both project leads and participants in the same project will enable triangulation of different perspectives on the same engagement strategies and project activities. To do this, SRI will rely on project leads' ability to contact participants.
This logistical constraint narrows the possible universe to people associated with an estimated 10 of 52 projects with such contact information (e.g., contact information for people who pre-registered for an online event or for a known set of parents who attended a project reunion), based on SRI's review of Round 1 applications.1 SRI estimates an average of three project leads associated with each of these 10 projects (30 people total: 1 awardee and 2 partners for each of 10 projects). Since known audiences (e.g., alumni of a certain project, people who pre-register for an event advertised to members) will be at the smaller end of the range stated in awarded Round 1 applications, SRI estimates the 10 projects will have contact information for approximately 50 people each, or 500 people total across the 10 projects.
B2. Potential Respondent Sampling and Selection
Different methods will be used to sample participants for surveys and interviews, as described below. Stratification will not be used during sample selection.
Awardee/Partner Survey Sample
While the timing and nature of project activities will make it impossible to collect participant survey data for some projects, this limitation does not apply to the project leads, whom SRI can still contact during the award period even if project activities have ended. The awardee/partner survey will therefore be a census of project leads (awardee and partner organization staff). Staff will be identified using the research team's database of awarded Round 1 applications, which will contain awardee contact information from ASTC administrative records and partner contact information provided by awardees. A screening question will be used to ensure that the survey is completed by a person knowledgeable about the award and the project it funded.
The survey will be administered online using SRI's Qualtrics survey software, with access to data limited to the SRI evaluation team. SRI will alert awardees and partners about the survey by posting a message in the project's online community and will send unique links to all leads via email. The survey is designed to be completed online and is user-friendly on a range of devices (computer, tablet, smartphone); a PDF version will be available for download for informational purposes only. In applicable circumstances, non-respondents will receive an automated follow-up email after one week, two weeks, and three weeks, and the survey will close after one month. SRI will also post periodically in the online community to thank those who have already responded and encourage others to respond.
Participant Survey Sample
In a very small number of projects, the participant sample will be a census of registered participants (e.g., a survey link sent to the email addresses of all people who pre-registered for a webinar or to a known audience invited to an event). In most projects, however, awardees will approach participants as they are able, for example as participants exit physical events such as fairs or exhibits. SRI will provide guidance for awardees to approach every third or fourth participant in an attempt to randomize the sample, following a common practice in visitor studies; however, sampling will depend on staff availability and will not occur continuously through all open event or exhibit hours. Because awardees will approach people during times they have staff available to do so, and may not be able to approach visitors at regular intervals (e.g., when all participants exit an event in a short time), SRI considers that, overall, projects will use an opportunity or convenience sample.
SRI will also program this survey in Qualtrics and will provide each awardee with a QR code and survey link to share with participants following C4I activities or events. Awardee or partner staff will invite participants to complete the online survey by scanning the QR code (for participants at physical events or exhibits) or following the link (for participants in virtual or digital activities). The survey will be available in up to seven languages and will be very brief to enable wide linguistic accessibility and encourage response. SRI will provide one link per project, which enables linking of participant responses to projects but not of an individual survey response to an individual respondent. This is an advantage when collecting data on a potentially sensitive topic such as attitudes toward the COVID-19 vaccine.
Interviews
The research team plans to purposively sample project leads and participants associated with two Round 1 projects. SRI will conduct interviews with staff responsible for coordinating the awarded project and organizing project activities, events, and resources.
Projects will be purposively sampled for interviews from among projects that have participants' contact information. Among the estimated ten projects that meet that first criterion, SRI will consider award amount, types of engagement activities offered, and project lead engagement in the C4I community of practice in making a final selection, with the goal of describing practices and partnership models that are likely to positively influence vaccine confidence and are potentially replicable elsewhere.
Awardee interview sample
Once a project has been selected, SRI will request to speak with the project leads (up to four people per project; an average of three). If no such individual exists, the organization will be marked as "unable to interview – no knowledgeable respondent at organization." If the ideal interviewee has left the organization, the same note will be recorded, as their experience outside the organization may have affected their perception of the project or organization.
Participant interview sample
In Round 1, SRI will ask awardee staff of the two projects sampled for interviews to identify eight community members who participated in awarded project activities, with the aim of completing interviews with four people per project who reflect a range of demographic characteristics and participation types. SRI will classify as non-responsive participants who do not respond, or whom the team is unable to schedule, after three attempted contacts, and will continue to sample until all interviews are completed.
Survey data will be stored in SRI's secure Qualtrics platform and exported to a secure database. Logging in to both of these tools requires multi-factor authentication. They will be accessible only by members of the research team directly involved in analysis of survey data.
The research team will conduct interviews by video (Zoom) or phone, per interviewee preference, and will take notes on a protocol in OneNote and record audio using Zoom or OneNote (for phone interviews). Audio and notes files will be named using participant ID numbers assigned in the project databases and stored on SRI's secure SharePoint site. Access to interview data will be limited to the research team. All sensitive data will be saved to an encrypted network drive, with access limited to SRI staff with a need to work with raw data. Access will be available only on site or via secure remote access, through password-protected computers.
Other notes
Statistical methodology for stratification and sample selection
Not applicable
Degree of accuracy needed for the purpose described in the justification:
For projects that are able to collect participant data and that have large target audiences, SRI will provide guidance to reduce survey burden (Appendix H). For example, projects with large target populations will need no more than 100 survey responses to estimate project-level confidence intervals of plus or minus 10 percentage points. Given SRI's experience with convenience samples in similar contexts, we expect the average number of participant respondents per site to be no more than 24.
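To illustrate the arithmetic behind the 100-response guidance, the minimal Python sketch below solves the standard normal-approximation margin-of-error formula for the required sample size, assuming a 95% confidence level and a worst-case proportion of 0.5 (the function name and defaults are illustrative, not part of Appendix H):

```python
import math

def n_for_margin(margin: float, p: float = 0.5, z: float = 1.96) -> int:
    """Sample size needed for a +/- `margin` interval around a proportion
    `p`, at the confidence level implied by `z` (1.96 for 95%),
    ignoring any finite-population correction."""
    return math.ceil((z / margin) ** 2 * p * (1 - p))

# A 10-percentage-point margin with worst-case p = 0.5:
print(n_for_margin(0.10))  # 97, consistent with the ~100-response guidance
```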
Unusual problems requiring specialized sampling procedures:
SRI excluded 10 projects from the universe of projects able to collect participant data due to the timing and nature of project activities. SRI also expects most projects will use convenience sampling to collect participant survey data, based on project staff’s availability and ability to intercept participants as they leave events.
B3. Response Rates and Non-Response
Awardee/partner surveys
SRI estimates a 90% response rate for the awardee/partner survey based on its experience with similar project-level and site-level data collection in evaluations of federally funded programs and the use of well-tested strategies to improve response rates.2 Strategies to maximize response rates include: online survey design that is user-friendly across device types, for respondent convenience and increased accessibility; effective communication before the survey (including in the online community of practice) to prepare respondents for participation; assurance that only de-identified, aggregated data will be shared; ongoing response tracking; and targeted email follow-up with non-respondents (enabled by unique links). Additionally, the research team will use skip logic so that awardees see only questions that are relevant to them. Reminders will be sent to those who have not responded after one week, two weeks, and again after three weeks.
SRI will collaborate with ASTC to inform potential respondents about the surveys. Where possible, and with support from IMLS, ASTC, and other stakeholder organizations, we will share our findings with awardee project teams to provide a national perspective on their successes, the strategies they devised for mitigating challenges, and reported changes in staff and organizational capacity. In SRI's experience, announcing in advance that national findings will be shared back with project teams, and then following through, engages respondents as active participants not only in the national evaluation itself but also in subsequently using findings to inform improvement in future efforts.
Participant surveys
Due to variation in local awardee activities, target populations, and awardees' ease and familiarity with survey data collection, SRI is unable to use a single approach to sampling across awardee sites. Rather, sampling for the participant survey will be driven by awardee activity types and staff capacity. As described above, SRI expects that most awardees will use convenience sampling. The benefits of this approach include expedited data collection, ease of collection, and cost effectiveness. Although convenience samples introduce the possibility of data that cannot be generalized to the broader population because of potential sampling bias, SRI considers this approach sufficient for documenting associations between awardee activities and respondents' self-reported outcomes among the survey sample, particularly when results are triangulated with findings from other data collection activities.
For virtual events, awardees may survey the universe of participants, sharing a link to the survey with all participants and encouraging them to complete the survey as the event concludes. Surveying all participants avoids sampling bias. While SRI hopes that the very brief survey form will facilitate higher-than-typical response rates in these instances, it still expects low response rates based on experience: typically very few people (as few as 5-10% in some cases) complete a survey emailed after an experience, compared with the higher response rates possible when people are invited to complete the survey in person, in the moment, and participants have no incentive to complete the survey.
SRI will use as many of the same techniques noted above as feasible for improving response rates for the participant survey, with additional attention to simple phrasing of survey questions and use of simple scales to reduce the cognitive load and literacy level needed to complete the survey quickly. SRI will additionally provide the survey in other languages identified by project teams to improve cultural responsiveness and accessibility. The research team will also guide awardees to assure participants that the survey is anonymous and that their confidentiality and privacy will be maintained.
To support awardees in collecting high-quality participant survey data locally, SRI will provide guidance along with the unique QR codes and links generated for each awardee (Appendix H) and will offer to meet with project leads to review the guidance and answer questions. SRI will also provide an email address and phone number that awardees can use to ask questions.
Because responses will not be associated with specific individuals, follow-up emails targeting only non-respondents will not be possible. SRI also acknowledges that it will be difficult to track response rates for surveys administered in person; doing so would require awardee staff to keep detailed records of the participants approached to complete the survey, which may not be feasible given the range in size and capacity of participating organizations and the nature of activities.
Interviews
SRI expects a nearly 100% response rate for awardee interviews and a 50% response rate for participant interviews. Only sites that agree to participate and have participant contact information will be included in the final interview sample. Sharing expectations up front will give sites the opportunity to determine whether they can commit time to interviews and whether they can identify potential community members for participation.
Because SRI will select only two awarded project sites for interviews in Round 1, it will not announce the interviews to the whole community of practice for all 52 awarded projects; rather, it will send invitations to participate by email, with a follow-up email one week later and a phone call if needed (see Appendix F). We will invite respondents in each group to interview on a rolling basis and will keep sampling until we reach the target response for each group.
To promote responsiveness among participants sampled for interviews, SRI will provide a $50 incentive. SRI will also offer to conduct interviews in Spanish and French (two of the languages into which the participant survey will be translated) as well as English to facilitate participation by people from a wider range of target audiences.
B4. Tests of Procedures and Methods
This section explains the analytic methods we plan to use to address the evaluation questions. Our design consists of mixed qualitative and quantitative methods, leveraging administrative data, publicly available data, survey data, and interview data across the project's key stakeholder groups. Our analytic methods focus on deriving the best information from the most effective sources. As mentioned above, the sources include:
C4I awarded project applications and final project reports
Awardee/partner survey
Participant survey
Interviews with people associated with two projects (awardee/partner project leads and participants)
Data from online community of practice hosted on the Higher Logic platform
Website usage data (Google Analytics)
Social media data (use of hashtags)
Analysis of survey and administrative data
Data preparation. Survey responses will be collected using the online tool Qualtrics. The evaluation team will examine the data for odd patterns (e.g., straight-lining), incomplete surveys, and multiple submissions from the same respondent. Such responses will be examined and eliminated on a case-by-case basis, in light of the total number of survey responses SRI receives. Administrative data from the application review will be collated to provide descriptive characteristics of awardees, both as possible covariates in analyses of change and to provide a picture of the awardee population.
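A minimal sketch of what such screening might look like in practice, assuming a Qualtrics CSV export; the file name, column names, and thresholds below are hypothetical illustrations, not SRI's actual cleaning rules:

```python
import pandas as pd

# Hypothetical export of survey responses; all column names are illustrative.
df = pd.read_csv("participant_survey_export.csv")

likert_items = ["q1", "q2", "q3", "q4", "q5"]  # assumed item names

# Flag straight-lining: identical answers across all Likert items.
df["straight_lined"] = df[likert_items].nunique(axis=1) == 1

# Flag largely incomplete surveys: more than half of items missing.
df["incomplete"] = df[likert_items].isna().mean(axis=1) > 0.5

# Flag possible duplicate submissions (same response pattern within a project).
df["duplicate"] = df.duplicated(subset=["project_id"] + likert_items, keep="first")

# Flagged cases are reviewed before any are dropped; exclusion is case by case.
flagged = df[df[["straight_lined", "incomplete", "duplicate"]].any(axis=1)]
print(f"{len(flagged)} of {len(df)} responses flagged for review")
```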
Descriptive and categorical analyses. The research team will use similar analysis approaches for the participant and awardee surveys. SRI will analyze results after the surveys close and expects to complete preliminary analysis within approximately two weeks of that time. Survey estimates will be descriptive in nature and will include means and percentages, both overall and broken out by key awardee and participant characteristics (e.g., institution type, participant race and ethnicity, participant activity type). Particularly for participant survey results, which will be based primarily on convenience samples, SRI will take care to underscore that results may not be representative of the broader population, given possible differences between survey respondents and non-respondents; the nature of the survey respondents will be described. SRI will provide confidence intervals around survey estimates (percentages and means) to represent uncertainty with respect to sampling error. Given SRI's estimated participant sample of 1,032 people (see Exhibit 3), this would generate a 95% confidence interval of roughly plus or minus three percentage points for total estimates. SRI will also acknowledge the limitations of survey data with respect to non-representativeness, highlighting the inability of sample surveys to adequately account for non-sampling errors, such as non-response bias, and underscoring the lack of representativeness of convenience sample surveys.
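For illustration, a minimal sketch of the normal-approximation interval underlying the plus-or-minus-three-point figure, assuming a worst-case proportion of 0.5 (function and variable names are illustrative):

```python
import math

def proportion_ci(p_hat: float, n: int, z: float = 1.96):
    """Normal-approximation confidence interval for a proportion
    (z = 1.96 for a 95% confidence level)."""
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half_width, p_hat + half_width

# Worst-case p_hat = 0.5 with the estimated 1,032 participant respondents:
low, high = proportion_ci(0.5, 1032)
print(f"95% CI: {low:.3f} to {high:.3f}")  # roughly +/- 3 percentage points
```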
Comparative analyses. SRI will generate tables of unweighted frequencies or means for all survey questions. Key results will be disaggregated by relevant site and participant subgroups. In making any statistical comparisons, SRI will use t-tests to compare means, z-tests to compare proportions, and chi-square tests to compare distributions of survey outcomes across key subgroups.
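A sketch of what these comparisons could look like using standard Python statistical libraries; all data values and subgroup labels below are hypothetical:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.proportion import proportions_ztest

# Illustrative data; real analyses would draw these from the survey files.
museum = np.array([4, 5, 3, 4, 4, 5, 2, 4])    # hypothetical Likert scores
library = np.array([3, 4, 4, 2, 3, 5, 3, 3])

# t-test for a difference in means between two subgroups
t_stat, t_p = stats.ttest_ind(museum, library, equal_var=False)

# z-test for a difference in proportions (e.g., share reporting higher confidence)
z_stat, z_p = proportions_ztest(count=np.array([45, 30]), nobs=np.array([100, 90]))

# chi-square test for differing distributions of a categorical outcome
table = np.array([[20, 15, 10],    # rows: subgroups; columns: response categories
                  [12, 18, 14]])
chi2, chi_p, dof, expected = stats.chi2_contingency(table)

print(f"t: p={t_p:.3f}  z: p={z_p:.3f}  chi-square: p={chi_p:.3f}")
```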
Analysis of qualitative survey data. To analyze qualitative data from open-response items in the awardee/partner survey, researchers will use code-based and word-based text analysis and thematic coding. Researchers will identify themes and sentiments among the awardees and subgroups of interest (e.g., library awardees or museum awardees). The findings of this analysis will be reported in narrative form with some statistical references as needed. The Round 1 participant survey contains no open-ended items.
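As one illustration of word-based text analysis, the sketch below tallies word frequencies across open responses to suggest candidate themes for coding; the responses and stop list are invented for the example and do not represent actual survey data:

```python
import re
from collections import Counter

# Hypothetical open-ended responses from the awardee/partner survey.
responses = [
    "Partnering with the county health department built trust quickly.",
    "Our museum exhibit sparked conversations about vaccine safety.",
    "Trusted community partners made outreach possible.",
]

stopwords = {"the", "with", "our", "about", "made"}  # illustrative stop list

tokens = []
for text in responses:
    tokens += [w for w in re.findall(r"[a-z']+", text.lower()) if w not in stopwords]

# Simple word-frequency view to surface candidate themes for thematic coding.
print(Counter(tokens).most_common(5))
```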
Reporting. Preliminary findings from the awardee/partner and participant surveys will be presented to ASTC, IMLS, and other stakeholders as part of quarterly briefings. Survey findings will also be triangulated with findings from other data sources in integrated analysis in the final evaluation report (a draft of which is due to IMLS in early July 2022 with a final, publishable version available by late July 2022), as well as in interim products such as blog posts or social media posts.
Qualitative analysis of interview and open-ended survey data
Working from a matrix that maps evaluation topics to instruments and items, SRI will develop a thematic coding scheme to aid in analyzing interview data. SRI will train analysts to use the coding scheme and conduct double-coding exercises on at least 25% of interviews to ensure high inter-rater reliability. Researchers will review interview transcripts to associate excerpts with codes, grouping interview data across interviews thematically in ways aligned to the research questions and ordered to enable triangulation across data sources. By this process, they will identify patterns, themes, and outliers with regard to the scheme.
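The source does not name a specific inter-rater reliability statistic; one common choice for double-coding exercises is Cohen's kappa, sketched below with invented coding decisions:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical double-coding results: whether each of 12 excerpts received
# a given thematic code (1) or not (0), as judged by two analysts.
coder_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
coder_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # low values would flag codes for reconciliation
```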
Researchers will then synthesize the data, first by code, then across groups of codes, to arrive at propositions and draft claims to be evaluated in light of findings from other data sources. Interim analysis memos will document the team’s thinking at this stage. In integrated analysis, researchers will then associate thematically linked findings across data sources and make evaluative statements about the nature of the findings overall and in comparison by data source and across subgroups of data.
The analysis of open-response items from the awardee/partner survey follows the approach described above under Analysis of qualitative survey data. Integrated findings will be reported in briefings and in written form as described above.
B5. Contact information
Matthew J. Birnbaum, PhD, Supervising Social Scientist in IMLS's Office of Research and Evaluation, will be the IMLS point of contact for SRI, the contractor with primary responsibility for the C4I project evaluation. Kea Anderson, PhD, is the Principal Investigator, and Dan Princiotta, PhD, is the quantitative lead. Dr. Princiotta and team member Ms. Kelsey Cooper will lead survey administration and analysis, while Dr. Anderson and Ms. Allison Milby will lead the conduct and analysis of interviews. The team will coordinate throughout to achieve accurate and comprehensive interpretation of study results. Dr. Princiotta will additionally conduct the quantitative analysis of national datasets, and Ms. Cooper will lead the analysis of other website data, pending availability. Ms. Annie Walker and Dr. Johannes Strobel will advise on data collection and analysis approaches throughout. Exhibit 4 lists the information requested for the staff responsible for collecting and analyzing the study data.
Exhibit 4. Staff responsible for collecting and analyzing study data
Name | Project role | Organization | Phone number
Matthew J. Birnbaum | Supervising Social Scientist | IMLS | 202-653-4647
Kea Anderson | Project Director/Principal Investigator | SRI | 703-247-8568
Daniel Princiotta | Senior Researcher/Quantitative Lead | SRI | 301-785-7149
Allison Milby | Researcher | SRI | 646-923-5791
Kelsey Cooper | Policy Research Analyst | SRI | 703-247-8568
Mindy Hsiao | Research Associate | SRI | 408-813-7211
Annie Walker | Data Scientist | SRI | 703-247-8568
Johannes Strobel | Principal Education Researcher | SRI | 703-247-8568
1 This may also bias the perspectives gathered, in that participants who already have a relationship with the hosting institution may be more likely to trust that institution. SRI will aim to mitigate this bias in Round 2 by including an item in the participant survey that asks for the email addresses and phone numbers of respondents who indicate they are willing to be contacted for a follow-up interview.
2 SRI achieved a 95% response rate in a survey administered to a random sample of school district program coordinators in the Study of Experiences and Needs of Rural Education Achievement Program Grantees. SRI, with its partner Education Development Center (EDC), achieved a 100% response rate for a survey administered to 30 public media stations with Ready To Learn grants as part of EDC and SRI's jointly led Ready To Learn research. Both studies were funded by the U.S. Department of Education.