
Communities for Immunity Evaluation (Round 2)

OMB: 3137-0129


Evaluation of the Communities for Immunity (C4I) Project

SUPPORTING STATEMENT B

Description of Statistical Methods 

Study Design Overview  

The proposed descriptive study is an independent evaluation of the Communities for Immunity (C4I) project, led by the Institute of Museum and Library Services (IMLS) and the Centers for Disease Control and Prevention (CDC) and administered by the Association for Science and Technology Centers (ASTC). C4I supports libraries, museums, and their partners to engage local communities with the aim of increasing vaccine confidence and ultimately improving community vaccination rates. The project also aims to increase libraries’ and museums’ organizational capacity to partner in addressing critical local issues. SRI International will lead the study.

This new emergency clearance request is for the study design and instruments that SRI proposes to use in evaluating C4I Round 2 award activities. Most of the proposed instruments are the same in Round 2 as in the Round 1 data collection, which OMB approved on November 15, 2021, under OMB Control Number 3137-0129. In consultation with OMB, IMLS has submitted two separate clearance requests to accommodate two sequential timelines and some variation in approach. Data collection activities for Round 1 were scheduled for November through December 2021, and those for Round 2 are scheduled for January through April 2022. Very brief surveys were appropriate for all Round 1 projects, which had a lower per-project funding ceiling, whereas the higher per-project funding ceiling in Round 2 is likely to result in more in-depth project activities that warrant slightly expanded data collection efforts.

This evaluation is a descriptive study that aims to provide insights into local C4I project management and implementation, and into awardee, partner, and participant self-reports of their attitudes and beliefs related to vaccines and the role of libraries and museums in their communities. This is not an audit of awardees or their individual performances. The evaluation will be guided by these primary objectives:

1. Characterize C4I funded projects (including in groupings of similar target audiences, organizational partnering strategies, and engagement strategies)

2. Describe participants’ perceptions of vaccine confidence and plans to seek vaccines

3. Describe participants’ attitudes towards and understanding of awardee organizations and their partners as trusted sources for timely, relevant information and community resources

4. Characterize awardee satisfaction with the C4I project, in terms of its alignment to their organizational and community needs, staff capacity, and their views of project success

5. Describe project leaders’ reported attitudes, capacity, knowledge, and strategies for undertaking similar efforts for community improvement

The data sources for the evaluation are document review, project administrative data, surveys of awardees and their partners, surveys of project participants, and limited follow-up interviews with awardees, their partners, and participants.

This study is descriptive in nature in part because a pre/post design is not aligned with the C4I Program design. In SRI’s experience collecting data in community-facing settings, intercepting potentially vaccine-hesitant visitors to a vaccine information session or other event with a request to take a pre-survey hinders the ability to welcome visitors in a culturally responsive manner and build needed trust. For these reasons, SRI designed a survey to be administered at the end of a visitor’s experience that asks them to report their attitudes after participating. SRI took a similar approach in designing the awardee/partner survey, since activities will already be underway in some projects prior to the start of data collection.1

Further, the study has no true baseline measure of participants’ attitudes about COVID-19 vaccines. Because COVID-19 vaccines have been widely available to nearly everyone ages 12 and older for more than six months, the evaluation will use participants’ vaccine status as a proxy for vaccine hesitancy. For example, if survey respondents report they got a first vaccine shot at a C4I event, it is far more likely that they felt confident to get the shot after participating than that they had been unable to access the vaccine until the C4I event. (People must be at least 18 years old to consent to participate in data collection.) Parent and caregiver reports of the vaccine status of their 5- to 11-year-old children cannot serve as a proxy for hesitancy in the same way, because the vaccine for this age group was authorized much more recently, in late October 2021; however, gathering parent and caregiver reports of whether their children are vaccinated or whether they plan to have them vaccinated, in conjunction with reports about their own vaccine status, can still provide valuable information about hesitancy.

B1. Respondent Universe and Selection Methods

The evaluation study proposes to use a mix of secondary and primary data collection methods to study project activities funded in Round 2. The primary data collection methods include a survey of awardees/partners, a survey of participants, and semi-structured interviews with awardees/partners and participants. C4I is a one-time project to address an urgent national need; these data collections have not been conducted previously for C4I Round 2 projects and the related Round 1 data collection is still underway, so response rates achieved are not yet available.

The universe for Round 2 data collection includes two types of respondents: (1) awardee organization staff and partner staff associated with an estimated 52 Round 2 awards, and (2) participants in funded projects with activities that make participant data collection feasible. (Based on review of Round 2 project plans, SRI estimates that two projects will not be able to collect participant survey data due to the nature of their activities.) Exhibit 3 provides the universe of awardee, partner, and participant respondents, the number of respondents that will be selected to participate in each data collection activity, and the expected response rates.

SRI anticipates administering shorter awardee/partner surveys in about 32 projects (with shorter participant surveys in the 30 of those projects where participant data collection is feasible) and administering longer surveys in about 20 projects, to be selected from those that have higher funding ceilings ($25,000 or greater) and offer more in-depth or longer participant experiences, with consideration for selecting projects with a range of target audiences reflective of the overall demographics of Round 2 award target audiences. The longer surveys require approximately five more minutes to complete, for a total of 15 rather than 10 minutes for both the awardee/partner survey and the participant survey.

Exhibit 3. Universe of Respondents and Sample Selection

Data collection activity | Universe of respondents | Sample | Expected response rate | Expected respondents
Awardee/Partner Survey (longer) | Estimated 1 C4I project lead from each awarded institution and an average of 2 partner leads for an estimated 20 projects that offer more in-depth activities (60 awardee and partner leads total) | Census of 60 awardee and partner leads associated with 20 projects | 90 percent | 54 people
Awardee/Partner Survey (shorter) | Estimated 1 C4I project lead from each awarded institution and an average of 2 partner leads for each of the other estimated 32 Round 2 awards (96 awardee and partner leads total) | Census of 96 awardee and partner leads associated with 32 projects | 90 percent | 86 people
Participant Survey (longer) | All participants in an estimated 20 of an estimated 52 total projects (projects that offer more in-depth or longer participation) | Convenience sample of an average of 60 participants per project in 20 projects (1,200 participants total) | 40 percent (average of 24 people per project in 20 projects) | 480 people
Participant Survey (shorter) | All participants in an estimated 30 of 52 projects (those expected to have a feasible means of collecting participant data but not offering longer or in-depth participation) | Convenience sample of an average of 60 participants per project in 30 projects (1,800 participants total) | 40 percent (average of 24 people per project in 30 projects) | 720 people
Awardee and Partner Staff Interviews | Estimated 3 project leads (1 awardee, 2 partners) for each of an estimated 20 projects that will administer the longer participant survey (60 people total) | Estimated 3 people associated with each of 5 purposely sampled projects (15 people total) | 100 percent | 15 people
Participant Interviews | Estimated 50 people who indicate willingness to be interviewed in the longer participant survey and provide valid contact information | Estimated 18 people who participated in projects of greatest interest for interviews | 70 percent | 13 people
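The expected-respondent counts in Exhibit 3 follow from multiplying each sample size by its expected response rate and rounding to the nearest whole person. A minimal arithmetic check, written in Python purely for illustration (values copied from the exhibit):

    # Expected respondents = sample size x expected response rate (Exhibit 3 values).
    rows = {
        "Awardee/Partner Survey (longer)": (60, 0.90),
        "Awardee/Partner Survey (shorter)": (96, 0.90),
        "Participant Survey (longer)": (1200, 0.40),
        "Participant Survey (shorter)": (1800, 0.40),
        "Awardee and Partner Staff Interviews": (15, 1.00),
        "Participant Interviews": (18, 0.70),
    }
    for activity, (sample, rate) in rows.items():
        print(f"{activity}: {round(sample * rate)} expected respondents")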

Longer and Shorter Awardee/Partner Survey Universe

SRI will administer the awardee/partner surveys to the project lead at each participating organization—an estimated average of three organizational leads, one awardee and two partners in each project. To gather richer information where pertinent but keep survey burden proportionate to awardee funding amounts, SRI will field a slightly longer awardee/partner survey for leads associated with an estimated 20 projects that have higher funding amounts and offer more intensive or longer participant activities, and a shorter survey for leads of the other 32 projects.

The universe for the longer awardee/partner survey is all 60 leads associated with 20 awards with greater funding amounts or more intensive activities. The universe for the shorter awardee/partner survey is all 96 leads associated with the other 32 projects. Conducting a census of all project leads will aid SRI in characterizing projects across Round 2 awards, and will support grouping of projects by salient characteristics, such as lead institution type, target audience characteristics, engagement strategies, changes in participant vaccine confidence, alignment of activities to organizational missions, and levels of trust between participating organizations and participants.

Longer and Shorter Participant Survey Universes

SRI estimates that participant surveys can be administered in about 50 of the estimated 52 C4I projects to be awarded in Round 2. In two of 52 Round 2 projects, the type of project activities offered may make participant data collection infeasible. Further, SRI expects that the higher per-project funding ceiling in C4I’s Round 2 will correlate in some projects—an estimated 20 of 52—with more in-depth or longer participation opportunities. SRI intends to field a slightly longer survey for participants in these 20 projects, and to field the same short survey used in Round 1 for participants in the remaining 30 of 52 projects in which participant data collection will be feasible.

The universe for the longer participant survey is all participants in the estimated 20 projects with more in-depth or longer participation opportunities. The universe for the short participant survey is all participants in the estimated 30 other projects in which participant data collection will be feasible. Quantifying the total number of participants is difficult because activities vary greatly across projects, from activities such as distribution of informational bookmarks and yard signs to exhibits or events to webinars or television segments. Accordingly, project target audiences range in size from a few hundred people to large and mass audiences.

Universes for Awardee/Partner and Participant Interviews

SRI will conduct limited interviews in Round 2 to gather in-depth awardee and participant perspectives to elucidate survey findings. SRI prefers to sample for interviews at the project level since interviewing both project leads and participants in the same project will enable triangulation of different perspectives on the same engagement strategies and project activities. However, in Round 2, rather than rely on awardees’ abilities to contact potential participants for interviews (as was the case in Round 1), SRI will ask participants in the longer survey if they are willing to be contacted for a follow-up interview and, if they indicate yes, to provide contact information. Therefore, the ability to sample project leaders and participants from the same projects will depend on the availability of willing participants in those projects as indicated in participant surveys.

This logistical constraint narrows the possible universe to people associated with the 20 projects that will administer the longer participant survey. The universe for awardee/partner interviews is an estimated three project leads associated with each of these 20 projects (60 people total). The universe for participant interviews is an estimated 50 people who indicate in the survey willingness to participate in interviews and provide valid contact information.

B2. Potential Respondent Sampling and Selection

Different methods will be used to sample participants for surveys and interviews, as described below. Stratification will not be used during sample selection.

Longer Awardee/Partner Survey and Shorter Awardee/Partner Survey Samples

The awardee/partner survey will be a census of project leads (awardee and partner organization staff), with approximately 60 project leads (associated with 20 projects) receiving the longer survey and 96 project leads (associated with the other 32 projects) receiving the shorter survey. Projects will be selected for the longer or shorter survey, and staff will be identified, using the research team’s database of awarded Round 2 applications. The database will contain awardee contact information from ASTC administrative records and partner contact information provided by awardees. A screening question will be used to help ensure that a person knowledgeable about the award and the project it funded is completing the survey.

Awardee/Partner Survey Administration

The surveys will be administered online using SRI’s Qualtrics survey software. SRI will alert awardees and partners to the surveys by posting messages in the C4I online community and will send unique links to all leads via email. The surveys are designed to be completed online and are user-friendly from a range of devices (computer, tablet, smartphone); a PDF version will be available for download for informational purposes only. In applicable circumstances, non-respondents will receive an automated follow-up email after one, two, and three weeks, and the surveys will close one week after the last C4I project activities conclude. SRI will also post periodically in the online community to thank those who have already responded and encourage others to respond.

Longer Participant Survey and Shorter Participant Survey Samples

For most Round 2 C4I projects (an estimated 30 of 52 funded projects), SRI will field a brief participant survey. SRI will field a slightly longer participant survey in an estimated 20 of 52 projects that will offer longer or more in-depth participation opportunities. The nature of project activities will make data collection infeasible in an estimated two of 52 projects.

Sampling for both Round 2 participant surveys will be the same as in Round 1. In a very small number of projects, the participant sample will be a census of registered participants (e.g., a survey link sent to all emails of people who pre-registered for a webinar or to a known audience invited to an event). In most projects, however, awardees will approach participants when feasible, as they exit physical events such as fairs or exhibits. SRI will provide guidance for awardees to approach every third or fourth participant in an attempt to randomize the sample, following a common practice in visitor studies; however, sampling will depend on staff availability and will not occur continuously through all open event or exhibit hours. Because awardees will approach people only during times when staff are available and may not be able to approach visitors at very regular intervals (e.g., when all participants exit an event in a short time), SRI considers overall that projects will use an opportunity or convenience sample, as illustrated in the sketch below.
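The every-third-or-fourth-visitor guidance amounts to simple systematic selection. The following sketch (hypothetical function name and interval; Python for illustration only) shows the intended pattern; in practice, staffing gaps will interrupt any fixed interval, which is why the resulting samples are treated as convenience samples:

    # Schematic of the every-nth-visitor intercept guidance. The interval of 3
    # is illustrative; actual intercepts depend on staff availability.
    def select_intercepts(exiting_visitors, interval=3):
        """Yield every interval-th exiting visitor as a survey invitation target."""
        for position, visitor in enumerate(exiting_visitors, start=1):
            if position % interval == 0:
                yield visitor

    visitors = [f"visitor_{n}" for n in range(1, 13)]
    print(list(select_intercepts(visitors)))  # visitor_3, visitor_6, visitor_9, visitor_12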



Survey Administration

SRI will also program the participant surveys in Qualtrics. SRI will provide each awardee with a QR code and survey link to share with participants following C4I activities or events. Awardee or partner staff will invite participants to complete the longer or shorter online survey by scanning the QR code (for participants at physical events or exhibits) or following the link (for participants in virtual/digital activities). Surveys will be available in multiple languages, depending on the translation needs indicated by project leads, and will feature plain, direct language to enable wide linguistic accessibility and encourage response. SRI will provide one link per project to enable linking of participant responses to projects but generally will not be able to link an individual survey response with an individual respondent—an advantage when collecting data on a potentially sensitive topic such as attitudes towards the COVID-19 vaccine. SRI will, however, be able to link survey responses to individuals who indicate willingness to participate in interviews and provide contact information. SRI will store PII separately from survey data files and will destroy it once interviews are complete.

Interviews

The research team plans to purposely sample project leads and participants associated with five projects in Round 2. SRI will conduct interviews with staff responsible for coordinating the awarded project and organizing project activities, events, and resources.

Projects will be purposely sampled for interviews from among those that administer the longer survey and in which some participant survey respondents indicate willingness to be interviewed. Among the projects that meet these criteria (a subset of the estimated 20 longer-survey projects), SRI will consider award amount, types of engagement activities offered, geographic location, target audience, and project lead engagement in the C4I community of practice in making a final selection, with the goal of describing a range of practices and partnership models.

Sampling for participant interviews from among survey respondents who indicate willingness to be interviewed may bias the interview sample. For example, it is possible that the opportunity to be interviewed may appeal more to people who have stronger views about COVID-19 vaccines (either for or against) than most of the broader population. SRI will take this limitation into account in reporting.

Awardee Interview Sample

Once a project has been selected, SRI will request to speak with the project leads (up to four people per project; an average of three). If no such individual exists, the organization will be marked as “unable to interview – no knowledgeable respondent at organization.” If the ideal interviewee has left the organization, the same note will be recorded, because experience outside the organization may have affected that person’s perception of the project or organization.

Participant Interview Sample

From the five projects sampled for interviews, SRI will contact up to three people from each project who provided contact information in their survey response (15 people), with the aim of completing interviews with all of them. SRI will exclude as non-responsive any participant who does not respond or cannot be scheduled after three attempted contacts and will continue to sample until all interviews are completed.



Data Management and Storage

Survey data will be stored in SRI’s secure Qualtrics platform and exported to a secure database. Logging in to each tool requires multifactor authentication. These systems will be accessible only to members of the research team directly involved in the analysis of survey data.

The research team will conduct interviews by video (Zoom) or phone per interviewee preference, take notes on a protocol in OneNote, and record audio using Zoom or OneNote (phone interviews). Audio and notes files will be named using participant ID numbers assigned in the project databases and stored on SRI’s secure SharePoint site. Access to interview data will be limited to the research team. All sensitive data will be saved to an encrypted network drive, with access limited to SRI staff with a need to work with raw data. Access will only be available on site or via secure remote access, through password-protected computers.

Other Notes

Statistical Methodology for Stratification and Sample Selection

Not applicable

Degree of Accuracy Needed for the Purpose Described in the Justification

For projects able to collect participant data that have large target audiences, SRI will provide guidance to reduce survey burden (Appendix I). Given SRI’s experience with convenience samples in similar contexts, SRI believes the average number of participant respondents per site will be no more than 24.

Unusual Problems Requiring Specialized Sampling Procedures

SRI anticipates excluding approximately 2 of an estimated 52 total projects from the universe of projects able to collect participant data due to the timing and nature of project activities. SRI also expects most projects will use convenience sampling to collect participant survey data, based on project staff’s availability and ability to intercept participants as they leave events.

B3. Response Rates and Non-Responses

Awardee/Partner Surveys

SRI estimates a 90% response rate for the awardee/partner surveys based on its experience with similar project-level and site-level data collection in evaluations of federally funded programs, and on the use of well-tested strategies to improve response rates.2 Strategies to maximize response rates include: online survey design that is user-friendly across device types for respondent convenience and increased accessibility; effective communication before the survey to prepare respondents for participation, including in the online community of practice; assurance that only de-identified, aggregated data will be shared; ongoing response tracking; and targeted email follow-up with non-respondents (enabled by unique links). Additionally, the research team will use skip logic to present awardees with only the questions that are relevant to them. Reminders will be sent to respondents who have not responded after one, two, and three weeks.

SRI will collaborate with ASTC to inform the potential respondents of the surveys. Where possible and with support from IMLS, ASTC, and other stakeholder organizations, SRI will share findings with awardee project teams to provide a national perspective on their successes, strategies they devised for mitigating challenges, and reported changes in staff and organizational capacity. In SRI’s experience, announcing in advance that national findings will be shared back with project teams—and then following through—engages respondents as active participants not only in participating in the national evaluation, but also in subsequently using findings to inform improvement in future efforts.

Participant Surveys

Due to variation in local awardee activities, target populations, and awardees’ ease and familiarity with survey data collection, SRI is unable to use a single approach to sampling across awardee sites. Rather, sampling for the participant survey will be driven by awardee activity types and staff capacity. As described above, SRI expects that most awardees will use convenience sampling. The benefits of this approach include expedited collection of data, ease of collection, and cost effectiveness. Limitations include potential sampling bias, which can make findings non-generalizable to the broader population.

For virtual events, awardees may use a universe sample, sharing a link to the survey with all participants and encouraging them to complete the survey as the event concludes. Surveying all participants avoids sampling bias. While SRI hopes that the very brief survey form will facilitate higher-than-typical response rates in these instances, it still expects low response rates, based on experience (typically very few people complete a survey emailed after an experience, as few as 5-10% in some cases, compared with higher response rates when people are invited to complete the survey in person in the moment) and on the lack of incentive for participants to complete the survey.

SRI will use as many of the same techniques noted above as feasible for improving response rates for both participant surveys, with additional attention to simple phrasing of survey questions and use of simple scales to reduce the cognitive load and literacy level needed to complete the survey quickly. SRI will additionally provide the survey in other languages identified by project teams to improve cultural responsiveness and accessibility. The research team will also guide awardees to assure participants that the survey is anonymous (except in cases where respondents opt to provide their name and contact information related to willingness to participate in interviews) and that their confidentiality and privacy will be maintained.

To support awardees in collecting high-quality participant survey data locally, SRI will provide guidance along with the unique QR codes and links generated for each awardee (Appendix I) and will offer to meet with project leads to review the guidance and answer questions. SRI will also provide an email address and phone number that awardees can use to ask questions.

Because participant surveys will be administered using one common Qualtrics link for all participants associated with a given project, follow-up emails to only individual non-respondents will not be possible. SRI also acknowledges that it will be difficult to track response rates for surveys administered in person; doing so would require awardee staff to keep detailed records of participants approached to complete the survey, which may not be feasible given the range in size and capacity of participating organizations and the nature of activities.

Interviews

SRI expects nearly a 100% response rate for awardee interviews and a 70% response rate for participant interviews. Because SRI will select only five awarded projects for interviews in Round 2, it will not announce interviews to the whole community of practice for all awarded projects but rather will send invitations to participate by email, with a follow-up email one week later and phone call if needed (see Appendix F). SRI will invite respondents in each group to interview on a rolling basis and will keep sampling until it reaches the target response for that group, as feasible within the evaluation timeframe and budget.

To promote responsiveness among participants sampled for interviews, SRI will provide a $50 gift card incentive. SRI will also offer to conduct interviews in Spanish and French (two of the languages into which the participant survey will be translated) as well as English to facilitate participation by people from a wider range of target audiences.

B4. Tests of Procedures and Methods

This section explains the different analytic methods we plan to use to address the evaluation questions. Our design consists of mixed qualitative and quantitative methods, leveraging administrative data, publicly available data, survey data, and interview data across the project’s key stakeholder groups. Our analytic methods focus on deriving the best information from the most effective sources. As mentioned above, the sources include:

  • C4I awarded project applications and final project reports

  • Awardee/partner survey

  • Participant survey

  • Interviews with people associated with five projects (awardee/partner project leads and participants)

  • Data from the online community of practice hosted on the Higher Logic platform

  • Website usage data (Google Analytics)

  • Social media data (use of hashtags)

Analysis of Survey and Administrative Data

Data preparation. Survey responses will be collected using the online survey tool Qualtrics. The evaluation team will examine the data for odd patterns (e.g., straight-lining), incomplete surveys, and multiple submissions from the same respondent, as illustrated in the sketch below. Flagged responses will be examined and eliminated on a case-by-case basis, taking into account the total number of survey responses SRI receives. Administrative data from the document review will be collated to provide descriptive statistics on the awardees, to supply possible covariates for analyses, and to provide a picture of the awardee population.
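As an illustration of these data-preparation checks, the following sketch flags straight-lined, incomplete, and duplicate responses in a hypothetical Qualtrics export (pandas; the file and column names are assumptions for illustration, not actual C4I field names):

    import pandas as pd

    # Load a hypothetical Qualtrics CSV export; names are illustrative only.
    df = pd.read_csv("participant_survey_export.csv")
    likert_items = [c for c in df.columns if c.startswith("q")]  # assumed item columns

    # Flag straight-lining: identical answers across all scale items.
    df["straight_lined"] = df[likert_items].nunique(axis=1) == 1

    # Flag incomplete responses: more than half of the items left blank.
    df["incomplete"] = df[likert_items].isna().mean(axis=1) > 0.5

    # Flag potential multiple submissions: identical answers within a project.
    df["duplicate"] = df.duplicated(subset=["project_id"] + likert_items, keep="first")

    # Flagged cases are reviewed individually rather than dropped automatically.
    flagged = df[df[["straight_lined", "incomplete", "duplicate"]].any(axis=1)]
    print(f"{len(flagged)} of {len(df)} responses flagged for review")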

Descriptive and categorical analyses. The research team will use similar analytical approaches for the participant and awardee surveys. SRI will analyze results after the surveys close and expects to complete preliminary analysis within approximately two weeks of that time. Survey estimates will be descriptive in nature and include means and percentages, both overall and broken out by key awardee and participant characteristics (e.g., institution type, participant race and ethnicity, participant activity type). Particularly for participant survey results based primarily on convenience samples, SRI will take care to underscore that results may not be representative of the broader population, given possible differences between survey respondents and non-respondents. The nature of the survey respondents will be described. SRI will provide confidence intervals around survey estimates (percentages and means) to represent uncertainty with respect to sampling error, as sketched below. SRI will also acknowledge limitations of survey data with respect to non-representativeness, highlighting the inability of sample surveys to adequately account for non-sampling errors, such as non-response bias, and underscoring the lack of representativeness of convenience sample surveys.
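For the confidence intervals around survey estimates, a minimal sketch using invented counts (a normal-approximation interval for a proportion and a t-interval for a mean; none of the figures are evaluation results):

    import numpy as np
    from scipy import stats

    # Hypothetical example: 132 of 480 longer-survey respondents report a
    # first dose at a C4I event; estimate the proportion with a 95% CI.
    successes, n = 132, 480
    p_hat = successes / n
    z = stats.norm.ppf(0.975)
    margin = z * np.sqrt(p_hat * (1 - p_hat) / n)  # normal-approximation interval
    print(f"proportion: {p_hat:.3f}, 95% CI: ({p_hat - margin:.3f}, {p_hat + margin:.3f})")

    # 95% CI for a mean (e.g., a 1-5 agreement scale) from sample statistics.
    mean, sd, m = 4.1, 0.9, 480
    half_width = stats.t.ppf(0.975, df=m - 1) * sd / np.sqrt(m)
    print(f"mean: {mean:.2f}, 95% CI: ({mean - half_width:.2f}, {mean + half_width:.2f})")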

Comparative analyses. SRI will generate tables of unweighted frequencies or means for all survey questions. Key results will be disaggregated by relevant site and participant subgroups. In making any statistical comparisons, SRI will use t-tests for comparisons of means, z-tests for comparisons of proportions, and chi-square tests for comparing distributions of survey outcomes by key subgroups, as sketched below.
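A sketch of the three comparison tests named above, applied to invented subgroup data (scipy and statsmodels; none of the numbers reflect actual findings):

    import numpy as np
    from scipy import stats
    from statsmodels.stats.proportion import proportions_ztest

    # t-test comparing mean scale scores for two hypothetical subgroups
    # (e.g., library-led vs. museum-led projects); data are invented.
    library_scores = np.array([4.2, 3.8, 4.5, 4.0, 3.9])
    museum_scores = np.array([3.6, 4.1, 3.7, 3.9, 3.5])
    t_stat, t_p = stats.ttest_ind(library_scores, museum_scores)

    # z-test comparing two proportions (e.g., share reporting increased confidence).
    counts = np.array([96, 84])   # "yes" responses in each subgroup
    nobs = np.array([240, 240])   # subgroup sizes
    z_stat, z_p = proportions_ztest(counts, nobs)

    # Chi-square test of a categorical outcome distribution across subgroups.
    table = np.array([[60, 120, 60],   # rows: subgroups; columns: categories
                      [45, 130, 65]])
    chi2, chi_p, dof, expected = stats.chi2_contingency(table)

    print(f"t-test p={t_p:.3f}, z-test p={z_p:.3f}, chi-square p={chi_p:.3f}")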

Analysis of qualitative survey data. Qualitative data from open-response survey items will be analyzed using the code-based and word-based text analysis and thematic coding described under Qualitative Analysis of Interview and Open-ended Survey Data below.

Reporting. Preliminary findings from the awardee/partner and participant surveys will be presented to ASTC, IMLS, and other stakeholders as part of quarterly briefings. Survey findings will also be triangulated with findings from document review, analysis of project administrative data, and findings from interviews in integrated analysis in the final evaluation report (a draft of which is due to IMLS in early July 2022 with a final, publishable version available by late July 2022), as well as in interim products such as blog posts or social media posts. SRI will take into careful consideration the limits of the evaluation design in how it articulates findings and will make study limitations clear in reporting. For example, SRI will make clear that the evaluation design does not support any statements regarding impact.

Qualitative Analysis of Interview and Open-ended Survey Data

Working from a matrix that maps evaluation topics to instruments and items, SRI will develop a thematic coding scheme to aid in analyzing interview data. SRI will train analysts to use the coding scheme and conduct double-coding exercises on at least 25% of interviews to help ensure high inter-rater reliability (see the sketch below). Researchers will review interview transcripts to associate excerpts with codes, grouping interview data across interviews thematically in ways aligned to the research questions and ordered to enable triangulation across data sources. Through this process, they will identify patterns, themes, and outliers relative to the coding scheme.
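One common statistic for checking agreement in double-coding exercises is Cohen’s kappa; the choice of statistic here is illustrative, not prescribed above. A minimal sketch with invented codings (scikit-learn):

    from sklearn.metrics import cohen_kappa_score

    # Two analysts apply the same thematic code (1 = present, 0 = absent)
    # to the same interview excerpts; the codings here are invented.
    coder_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
    coder_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1]

    # Kappa adjusts raw agreement for chance agreement; values near 0.8 or
    # above are commonly read as strong inter-rater reliability.
    print(f"Cohen's kappa: {cohen_kappa_score(coder_a, coder_b):.2f}")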

Researchers will then synthesize the data, first by code, then across groups of codes, to arrive at propositions and draft claims to be evaluated in light of findings from other data sources. Interim analysis memos will document the team’s thinking at this stage. In integrated analysis, researchers will then associate thematically linked findings across data sources and make evaluative statements about the nature of the findings overall and in comparison by data source and across subgroups of data.

To analyze qualitative data from open-response survey items, researchers will use code-based and word-based text analysis and thematic coding. Researchers will identify themes and sentiments among the awardees and subgroups of interest (e.g., library awardees or museum awardees). The findings of this analysis will be reported in narrative form with some statistical references as needed. Integrated findings will be reported in briefings and in written form as described above.

B5. Contact Information 

Matthew J. Birnbaum, PhD, Supervising Social Scientist in IMLS’s Office of Research and Evaluation, will be the IMLS point of contact for SRI, the contractor with primary responsibility for the C4I Project evaluation. Dr. Birnbaum will direct federal government oversight of the evaluation. Kea Anderson, PhD, is the SRI Principal Investigator, and Dan Princiotta, PhD, is the quantitative lead. Dr. Princiotta and team member Kelsey Cooper will lead survey administration and analysis, while Dr. Anderson and Allison Milby will lead and analyze interviews. The team will coordinate throughout to achieve accurate and comprehensive interpretation of study results. Dr. Princiotta will additionally conduct the quantitative analysis of national datasets, and Ms. Cooper will lead the analysis of other website data, pending availability. Annie Walker and Johannes Strobel will advise on data collection and analysis approaches throughout. Exhibit 4 lists the information requested for the staff responsible for collecting and analyzing the study data.

Exhibit 4. IMLS staff responsible for evaluation oversight and SRI staff responsible for collecting and analyzing study data 

Name | Project role | Organization | Phone number
Matthew J. Birnbaum | Supervising Social Scientist | IMLS | 202-653-4647
Kea Anderson | Project Director/Principal Investigator | SRI | 703-247-8568
Daniel Princiotta | Senior Researcher/Quantitative Lead | SRI | 301-785-7149
Allison Milby | Researcher | SRI | 646-923-5791
Kelsey Cooper | Policy Research Analyst | SRI | 703-247-8568
Mindy Hsiao | Research Associate | SRI | 408-813-7211
Annie Walker | Data Scientist | SRI | 703-247-8568
Johannes Strobel | Principal Education Researcher | SRI | 703-247-8568



1 Lam & Bengo (2003) note that such a retrospective approach can obviate several types of potential bias inherent in pre/post designs; however, their study also found that respondents may over-estimate change when retrospective measures do not ask them to first estimate a baseline prior to the intervention. We will take this limitation into account in analysis and reporting. See Lam, T. C. M., & Bengo, P. (2003). A Comparison of Three Retrospective Self-Reporting Methods of Measuring Change in Instructional Practice. American Journal of Evaluation, 24(1), 65–80. https://doi.org/10.1016/S1098-2140(02)00273-4

2 SRI achieved a 95% response rate in a survey administered to a random sample of school district program coordinators in the Study of Experiences and Needs of Rural Education Achievement Program Grantees. SRI with its partner Education Development Center (EDC) achieved a 100% response rate for a survey administered to 30 public media stations with Ready To Learn grants as part of EDC and SRI’s jointly led Ready To Learn Research. Both studies were funded by the U.S. Department of Education.
