Supporting Statement for Paperwork Reduction Act Submission
Evaluation of the African American History and Culture Grantmaking Program
OMB # 3137-XXXX

Part B. Collections of Information Employing Statistical Methods
Study Design Overview
This proposed evaluation study will assess the performance of the African American History and Culture
(AAHC) grant program of the Institute of Museum and Library Services (IMLS). The AAHC program
provides grants to projects that aim to nurture museum professionals, build institutional capacity, and
increase access to museum and archival collections at African American museums and Historically Black
Colleges and Universities (HBCUs) 1 (see Appendix E for the program Logic Model).
The purpose of this evaluation is to provide insight into AAHC program management and grantee
outcomes supported by the program. These insights will help IMLS and other stakeholders better
understand the AAHC program’s contributions and enable IMLS to make program improvements to help
it better meet the program’s legislative objectives. 2 The study is designed around three primary research
objectives:
1. Develop a thorough understanding of the eligible population of African American museums and
Historically Black Colleges and Universities (HBCUs), the characteristics of successful grantees
compared with this wider population of institutions, and what potential steps could be taken to
expand the pool of eligible, prospective grantees.
Research questions:
a. What is the universe of African American museums and HBCUs? What share have
participated in the AAHC program?
b. What is the universe of government and philanthropic funders of African American
museums at the national level?
c. Are there any key factors that distinguish between grantees and nonapplicants?
d. Are there any key factors that distinguish between applicants and grantees?
e. What might IMLS do to better connect to all potential applicants? What tradeoffs might
it need to make to achieve this result given any other competing objectives?
f. How has this program adapted to the evolving needs of eligible organizations?

1 For the purposes of this study, HBCUs are defined as the 101 institutions listed as an Accredited HBCU by the US Department of Education's National Center for Education Statistics (https://nces.ed.gov/COLLEGENAVIGATOR/?s=all&sp=4&pg=1).
2 As outlined in the program's enabling legislation: The National Museum of African American History and Culture Act (2003), 20 U.S. Code § 80r–5, B.

2. Evaluate the performance of the AAHC program, including understanding the role of the AAHC
in enhancing grantee capacity, an analysis of portfolio performance, an understanding of AAHC
grant projects, administrative practices, and the potential for further embedding evaluation into
AAHC grantmaking efforts.
Research questions:
a. How has the IMLS AAHC program made a difference in the capacity of the nation's
African American museums and HBCUs?
b. Are there certain parts of the AAHC grant portfolio that have performed better? For
instance, how has the performance of small grants for small institutions compared to
large grants for large institutions?

c. How have IMLS administrative practices for the AAHC program influenced:
i. Participation of applicants? Why?
ii. Grantees’ implementation of project awards? Why?
d. How can program evaluation be a more integral part of the grant program and not an
optional or additional feature for grantees?
e. What has happened to the museums (and staff) that received funding in the early
iteration of the program? What have they accomplished post-grant?
3. Interpret the programmatic evaluation’s findings in relation to AAHC’s goals, as outlined in its
enabling legislation.
Research questions:
a. How has the AAHC grant program, now in its 13th year, performed overall in meeting its
legislative goals?
b. What are the barriers to the AAHC program achieving better outcomes?
The study will use a mix of secondary and primary data collection methods to answer these research
questions (Appendix G). The primary data collection methods include a survey of recent (post-2014)
grantees; a survey of eligible nonapplicants; and semi-structured interviews of past grantees, applicants
never awarded a grant, other funders, other key stakeholders (e.g., academics), and IMLS staff.
The evaluation team will overlay a contribution analysis 3 on the evaluation; its goal is to provide
credible evidence of the influence of programs or initiatives. Since this is a retrospective
study, and the AAHC grant program involves many actors and influences, a contribution analysis will
allow the Urban Institute team to examine the contributions of the AAHC program and
answer questions about what seemed to work and why.

3 Kane, Robin, Carlisle Levine, Carlyn Orians, and Claire Reinelt. 2017. "Contribution Analysis in Policy Work: Assessing Advocacy's Influence." Washington, DC: Center for Evaluation Innovation. https://www.evaluationinnovation.org/wp-content/uploads/2017/11/Contribution-Analysis_0.pdf

B1. Respondent Universe
There are two respondent universes.
Universe one is composed of organizations that are eligible for an AAHC program grant award:
museums focused on African American life, art, history, and/or culture; museum service
organizations with the same foci; and HBCUs. 4 This eligible universe has been divided into three
groups:
• Grantees (those who have received at least one grant from the AAHC program);
• Applicants (those that have applied at least once and have never received a grant); and
• Nonapplicants (those that have never applied).
The grantee group is further divided into three cohorts to better target data collection and
analysis:
• Cohort 1 (grantees that only won a grant(s) prior to 2014);
• Cohort 2 (grantees that only won a grant(s) in 2014 or later); and
• Cohort 3 (grantees that won grants in both periods).
These Cohorts are mutually exclusive (i.e., a grantee that received a grant in 2010 and 2019 will
only be mapped to Cohort 3). There are 27 distinct grantees in Cohort 1, 42 in Cohort 2, and 32
in Cohort 3.
To control for memory bias and staff turnover, we intend to primarily survey and interview
respondents with recent experiences -- collecting information from applicants post-2014 and
grantees in Cohorts 2 and 3, as well as limiting responses of nonapplicants to inquiries about
their current perspectives. 5 However, we also plan to interview some grantees from Cohort 1
(target of 5 respondents) and Cohort 3 (target of 8 respondents) to generate some insights on
longer term contributions of their grants.
Universe two is composed of supporting entities in the African American history and culture
museum landscape: other funders, other key stakeholders, and IMLS staff. Individuals
representing funders and other key stakeholders may be affiliated, either presently or in the
past, with an eligible organization, and this will be asked and recorded.
Full eligibility requirements can be found in the “Museum Grants for African American History and Culture: FY2020 Notice of
Funding Availability,” pp. 7-8, https://www.imls.gov/sites/default/files/fy20-oms-aahc-nofo.pdf.
5 The evaluation team chose to use 2014 as a dividing line when grouping grantees with data collection focused on the last five
complete classes of grantees (grants are awarded annually). A year is considered “complete” if all grantees will have had at
least 9 months of implementation5 by the time of data collection. By the estimated start of data collection (April 2020), some
grantees in the FY19 grant class will qualify for this but some, which have grant start dates of September 1, 2019, will not
qualify by this measure until June 2020.
4

3

Grantee Survey
The grantee survey will be administered to all organizations that received at least one grant in
Fiscal Years 2014-2019 (Cohorts 2 and 3). Using IMLS administrative records of grantees, we
estimate that this includes 74 distinct grantees (organizations that received more than one
grant during the period will only receive one survey request). We anticipate a 75 percent
response rate (approximately 56 respondents). Within these grantee organizations, we will
request that the individual most familiar with the grant and the project that it funded complete
the survey. In most cases, we expect this will be an Executive Director or other senior staff or
volunteer leader (e.g., Project Director). Our analysis will consider a variety of variables and
characteristics when analyzing results by subpopulation including location (region, state, and
rural vs urban); date of founding (i.e., age of organization); type of organization; size of
organization; past AAHC and other IMLS grant program application success; affiliation with
programs and organizations like the Association of African American Museums (AAAM); and
other relevant factors that emerge as we continue to analyze the population.
Nonapplicant Survey
Nonapplicants will be identified through a database of eligible organizations developed by the
research team (using a variety of publicly available information, such as accreditation and
affiliation with organizations like AAAM) and cross-referenced against IMLS administrative data to confirm they
have not applied. While we are validating this number through continued research of secondary
data sources, we estimate that there are approximately 300 eligible nonapplicant organizations.
The survey to nonapplicants will include a brief screen to confirm responding organizations are
nonapplicants. We will attempt to survey the entire population. While we will utilize techniques
to improve response rates (e.g., adopt language that is concise and emphasizes benefits and
recognizes their limited time commitment, employ follow-up procedures for nonrespondents,
and leverage social media of partners like AAAM), we anticipate a relatively low response rate
due to limited incentives to participate and the lack of a personal connection to the program.
Specifically, we estimate approximately 30 to 50 percent (or 90 to 150 organizations) will
complete the survey.
Note on Applicants
There are 22 distinct post-2014 applicants (i.e., eligible entities that submitted at least 1
application in 2014-2019 but which have never won an AAHC grant). We will interview a
subset rather than survey all of them because of the type of information we want to collect
while managing project resources. Because of recall bias, we are only planning to interview
post-2014 applicants.
Interviews
We plan to complement the surveys and secondary data collection with up to 50 stakeholder
interviews with key groups in Universe 1 and Universe 2 to gain deeper, customized insights on
the AAHC program. Universe 1 interviewees (grantees and applicants) will be
identified using the database developed by the research team, drawing from IMLS
administrative records, with 5 grantees from Cohort 1, 20 from Cohort 2, and 8 from Cohort 3
as well as 5 post-2014 applicants. Interviewees will be selected randomly for Cohort 1 grantees
and applicants. Interviewees from Cohort 2 and 3 grantees will be drawn from those that
completed the survey and which, by the Urban Institute’s determination from the survey and
other available material (e.g., grantee project reports), have particularly strong perspectives on
the contributions of the AAHC grant to their project grant’s outcomes.
For Universe 2, the Urban Institute evaluators will interview four funders and five “other
stakeholders”. The funders will be identified through the grantee survey 6, conversations with
Urban Institute’s subject matter expert for this evaluation, IMLS staff, and a literature review.
The five “other stakeholders” comprise field experts, including administrators and
academics, who are not currently affiliated with another organization in our sample. They will
be identified primarily through a literature review and conversations with IMLS staff. For
example, representatives from the National Museum of African American History and Culture
and the Association of African American Museums are likely candidates for this category based
on prior conversations. Urban Institute will develop a list of funders and other stakeholders,
confirm the validity of this list with Urban Institute’s subject matter expert for this evaluation,
and share the list with IMLS for feedback. In order to limit the evaluation’s bias, IMLS’s
feedback will be limited to identifying those who have a clear conflict of interest or who are not
the most appropriate respondent at their organization.
B2. Potential Respondent Sampling and Selection Methods
Grantee survey
Overview: The Urban Institute will field the Grantee Survey to the universe of organizations that
received an AAHC grant from 2014-2019 and have completed at least 9 months of
implementation work at the time of the survey in order to ensure accuracy in survey results (given
recall bias and high probability of staff turnover). The research team developed a survey
instrument that draws on and adapts tested questions from other surveys to understand the
role of the grant program in enhancing grantee capacity (Research Objective 2). 7 This survey
will ask questions that aim to capture the activities and short-term outcomes of recent
grantees. A screening question will be used to ensure a knowledgeable person (about the grant
and the project that it funded) is completing the survey. The survey is designed to be
completed by the individual at the organization who led project implementation (i.e., the
principal investigator), as they will be most familiar with project goals and outcomes.

6 The grantee survey (question 14) probes respondents about their awareness of sources of funding similar to the AAHC grant. This approach for identifying funders is intended to uncover unique, interesting, or widely recognized funding sources.
7 Questions related to Research Objective 2 draw from the language and/or format of questions in the Center for Effective Philanthropy's Grantee Perception Report® (GPR). The GPR is a widely used grantee survey driven by extensive research and analysis; hundreds of funders have used it and its comparative data to make choices about how to use their resources to achieve intended results. See https://cep.org/assessments/grantee-and-applicant-perception-reports-3/
Survey Administration: The survey will be administered online using Qualtrics survey software
with access to data limited to the Urban Institute evaluation team. Grantees will receive an
initial letter via email from IMLS explaining the evaluation and the request for data collection
(grantees in this sample will receive one letter that mentions the survey and potential future
interview opportunity). Approximately one week later they will be contacted by Urban Institute
via email and invited to take the survey using an individualized link. The survey is designed to be
completed online, accessible through multiple platforms and devices, such as computer, tablet,
and smart phone; a PDF version will be available for download for informational purposes only.
The survey is designed to take approximately 30 minutes to complete. Non-respondents will
receive an automated follow-up email after one week and after three weeks, and the survey
will close after one month.
Data Management and Storage: While the survey is being fielded, completed and partially
completed surveys will be stored on the Qualtrics secure site. Access to the survey by
respondents will be through a link with a unique ID provided in the email invitation. Once the
survey has been completed, respondents will no longer have access. Access to survey data by
Urban Institute staff will be password-controlled and limited to those staff involved in fielding
the survey and who have signed a confidentiality agreement. All survey data will be saved to an
encrypted network drive, with access limited to Urban Institute staff who need to work with
raw data and who have signed the confidentiality agreement. Access will only be available
on-site or via secure remote access, through password-protected computers.
Analysis: Survey results will be analyzed following completion of the survey, and preliminary
analysis is expected to be completed approximately 2 weeks after the end of the survey data
collection. The research team will examine the data for incorrectly completed responses,
odd patterns (e.g., straight-lining), incomplete surveys, and multiple submissions from the
same respondent. These responses will be examined and eliminated on a case-by-case basis,
informed by the total number of survey responses we receive. The open-ended questions will be
reviewed to determine respondents' understanding of and engagement with the question. A
summary will be provided upon survey completion with tables of frequencies for all survey
questions. Results from the Grantee Survey will be presented in a written interim report
presenting findings from the primary data collection (due to IMLS in July 2020). Findings from
the Grantee Survey will also be used as inputs into the final evaluation report (a draft of which
is due to IMLS in September 2020 and a final, publishable version of the report will be made
available in November 2020).
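
To make the screening step concrete, the following is a minimal sketch in Python (pandas) of how such checks might be implemented; the column names (resp_id and the Likert-item columns) are hypothetical placeholders rather than fields from the actual instrument.

import pandas as pd

def screen_responses(df: pd.DataFrame, likert_cols: list) -> pd.DataFrame:
    """Flag straight-lining, incompletes, and duplicate submissions for review."""
    out = df.copy()
    # Straight-lining: identical answers across every Likert-scale item.
    out["flag_straightline"] = out[likert_cols].nunique(axis=1) == 1
    # Incomplete: fewer than half of the items answered.
    out["flag_incomplete"] = out[likert_cols].notna().sum(axis=1) < len(likert_cols) / 2
    # Multiple submissions: repeat appearances of the same respondent ID.
    out["flag_duplicate"] = out.duplicated(subset="resp_id", keep="first")
    return out

As described above, flagged rows would be reviewed and removed case by case rather than dropped automatically.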
For closed-ended survey questions (25 questions or sub-questions), we will determine the total
number of respondents for the survey and establish a response rate for all the subgroups of
interest. Researchers will categorize responses to these closed-ended survey questions into
groups based on the options we have selected and report the proportions of each response for
each question. To do this, researchers will first examine the survey data to evaluate the raw
percentages for each of the survey questions. Then, we will filter the results by cross-tabulating
subgroups. 8 The percentages will show how many people gave each answer as a proportion of
the number of people who answered the question. We will examine the responses by subgroup
and interrogate the data to determine whether there are any correlations among the variables
of interest. Researchers will report the findings from the closed-ended survey analysis in
narrative and quantitative forms with accompanying data visualizations.
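
As a minimal sketch of this closed-ended analysis, assuming a hypothetical data export with invented column names (first_time_grantee, q10_requirements_difficulty), the raw percentages and one subgroup cross-tab could be produced in Python with pandas:

import pandas as pd

df = pd.read_csv("grantee_survey.csv")  # hypothetical export of survey responses

# Raw percentages: each answer as a share of those who answered the question.
freq = df["q10_requirements_difficulty"].value_counts(normalize=True)

# Cross-tab by subgroup: perceived difficulty for first-time vs. repeat grantees,
# row-normalized so each subgroup's answers sum to 100 percent.
xtab = pd.crosstab(df["first_time_grantee"], df["q10_requirements_difficulty"],
                   normalize="index")
print(freq)
print(xtab)

The same pattern extends to the other crosstabs of interest described in footnote 8.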
In order to effectively analyze the open-ended survey responses (5 questions or sub-questions),
researchers will use a systematic process to organize the responses and highlight meaning. To
do this, researchers will organize and classify this qualitative data through code-based and
word-based text analysis, thematic coding, and reorganizing the text using NVivo. 9
Researchers will analyze the open-ended questions to identify themes and sentiments among
the grantees and other subgroups of interest. The findings of this analysis will be reported in
narrative form with some statistical references as needed.
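
As a minimal sketch of the intercoder-reliability tally described in footnote 9, with two coders and invented codes and excerpts for illustration, agreements can be counted by directly comparing the codes applied to the same excerpts:

# Codes applied by two coders to the same five pre-test excerpts (illustrative).
coder_a = ["barriers", "capacity", "funding", "capacity", "barriers"]
coder_b = ["barriers", "funding", "funding", "capacity", "barriers"]

agreements = sum(a == b for a, b in zip(coder_a, coder_b))
percent_agreement = agreements / len(coder_a)
print(f"{agreements}/{len(coder_a)} excerpts coded identically ({percent_agreement:.0%})")

Disagreements (the second excerpt here) would be discussed by the coding team and used to refine the codebook before full coding proceeds.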
Other Notes: The Grantee Survey does not require sample selection or specialized sampling
procedures because it will be fielded to all entities in Cohorts 2 and 3. Progress on the survey
administration and analysis will be provided to the IMLS COR via bi-weekly updates and to the
entire IMLS evaluation team during monthly check-ins. This survey will only be administered
once.
• Statistical methodology for stratification and sample selection: N/A
• Estimation procedure: N/A
• Degree of accuracy needed for the purpose described in the justification: N/A
• Unusual problems requiring specialized sampling procedures: We excluded some subgroups
of the population from the survey component of this study design because we assessed the
potential limitations of time and memory factors on survey responses. We also used
specialized sampling to include only respondents who have completed at least nine months of
implementation, to ensure that respondents can address all the survey questions.
• Any use of periodic (less frequent than annual) data collection cycles to reduce burden:
N/A

8 We expect some crosstabs to compare respondents based on how many grants they received and their perceptions of grant requirements (for example, a crosstab might find that first-time grantees find grant requirements more difficult to comply with than repeat grantees). Additional crosstabs of interest might be the extent to which the AAHC has allowed specific project goals to be achieved and the number of times the project has received AAHC funding; awareness of other funding sources and whether organizations would have been able to implement the project; and suggestions for IMLS to improve the AAHC program and activities and projects that are consistent with the goals of AAHC but not eligible for funding. Some key independent variables of interest are the number of AAHC awards grantees have received, the length of time grantees have been a part of the AAHC program (at least nine months of programmatic implementation time), whether grantees are museums or HBCUs, etc.
9 Intercoder reliability is an important component of the content analysis process for open-ended survey responses. To ensure that the interpretation of the content is trustworthy and valid, we will develop a codebook that describes each code, with a clear definition and an example quote from the data. Using the pre-test data we collect, we will generate a document that includes sample open-ended responses, and each member of the coding team will code the responses based on the codebook. Agreements and disagreements between coders will be tallied for each pre-test participant's responses by directly comparing the codes applied to the same excerpts.
Nonapplicant survey
Overview: To capture and understand basic characteristics of eligible nonapplicants, including,
importantly, reasons they may not have applied to the AAHC program in the past, the
evaluation team will also conduct a short survey of these nonapplicants. The survey will be sent
to the full population of eligible nonapplicants (entities that are eligible to apply for the AAHC
grant based on criteria set out by IMLS but which have never applied). This population is
estimated to include approximately 300 entities. Given the estimated survey response rate of
30 to 50 percent (see Section B3), we anticipate 90 to 150 organizations will complete the
survey, which will provide sufficient information to inform the evaluation. A screening question
will ensure that only those organizations that have never applied to the AAHC program before
and are eligible to do so are completing the survey.
Survey Administration: The survey will be administered online using Qualtrics survey software.
Nonapplicants will be invited to complete the survey by email from the Urban Institute,
explaining the purpose of the survey and use of information collected. This email will include a
link to the survey. The survey is designed to be completed online, accessible through multiple
platforms, such as computer, tablet, and smart phone. The survey is designed to take
approximately five minutes to complete. Non-respondents will receive an automated follow-up
email after one week and after three weeks, and the survey will close after one month. The
evaluation team will also work with external partners, including AAAM, to promote the survey
via their social media platforms.
Data Management and Storage: While the survey is being fielded, completed and partially
completed surveys will be stored on the Qualtrics secure site. Access to the survey by
respondents will be through a link provided in the email invitation. Once the survey has been
completed, respondents will no longer have access. Access to survey data by Urban Institute
staff will be password-controlled and limited to those staff involved in fielding the survey and
who have signed the confidentiality agreement. All survey and other sensitive data will be
saved to an encrypted network drive, with access limited to Urban Institute staff with a need to
work with raw data who have signed the confidentiality agreement. Access will only be
available on-site or via secure remote access, through password-protected computers.
Analysis: Survey results will be analyzed following completion of the survey and preliminary
analysis is expected to be completed approximately 2 weeks after the end of the survey data
collection. A summary will be provided upon survey completion with tables of frequencies for
all survey questions. Results from the Nonapplicant Survey will be presented in a written interim
report presenting findings from the primary data collection (due to IMLS in July 2020). Findings
from the Nonapplicant Survey will also be used as inputs into the final evaluation report (a draft
of which is due to IMLS in September 2020; a final, publishable version will be made available
in November 2020).
Other Notes: The Nonapplicant Survey does not require sample selection or specialized
sampling procedures because it will be fielded to the full universe (as closely as our research
can identify that universe). Progress on the survey administration and analysis will be
provided to the IMLS COR via bi-weekly updates and to the entire IMLS evaluation team during
monthly check-ins. This survey will only be administered once.
• Statistical methodology for stratification and sample selection: N/A
• Estimation procedure: N/A
• Degree of accuracy needed for the purpose described in the justification: N/A
• Unusual Problems requiring specialized sampling procedures: N/A
• Any use of periodic (less frequent than annual) data collection cycles to reduce burden:
N/A
Interviews
Overview: The Urban Institute will conduct up to 50 interviews with respondents from five
different groups (funders, applicants, grantees (with three sub-groups), IMLS staff, and other
stakeholders) to collect key evaluation data, per the breakdown in Table B.2. The research
team will, through outreach and follow-up, identify the appropriate personnel at each
respondent organization to participate in the interview.
Outreach to each target organization will be written to identify the appropriate interviewee as
follows:
• In Universe 1, interviews will be conducted with individuals at grantee organizations
who led project implementation (i.e., the principal investigator), as they will be most
familiar with project goals and outcomes. Interviewees at applicants will be the
individual who led the application process. Should the target interviewee be
unavailable, the research team will identify another individual at the organization who is
similarly knowledgeable about the project/application. If no such individual exists, the
organization will be marked as “Unable to interview – no knowledgeable respondent at
organization.” If the ideal interviewee has left the organization, the same note will be
recorded, as their experience external to the organization may have affected their
perception of the project or organization.
• In Universe 2, interviewees at funders and other stakeholders will be those at their
organization with the greatest familiarity with IMLS and funding for African American
museums. If no such individual exists, the organization will be marked as “Unable to
interview – no knowledgeable respondent at organization.”


TABLE B.2
Types of questions by interviewee type

Funders (target: 4 respondents)
• Their priorities/focus within the space
• Awareness of other funders and pools of eligible grantees
• Perception of challenges faced by grantees (including availability and type of funding)
• Trends in the space

Applicants (target: 5 respondents)
• Level of awareness of the AAHC program and its objectives
• Recollection of the application process
• Organizational needs
• Other funding sought and won

Grantees: pre-2014 (target: 5 respondents)
• Outcomes attributed to the AAHC program
• Trajectory of project/organization post-grant (if applicable)

Grantees: 2014-2019 (target: 20 respondents)
• Experience related to the application process (if recent)
• Identified capacity challenges

Grantees: long-term (target: 8 respondents)
• Funding picture and outlook

IMLS Staff (target: 3 respondents)
• Evolution of the AAHC program
• Management of the AAHC program and interaction with grantees
• Landscape of eligible entities, other funders, and challenges within the field

Other key stakeholders 10 (target: 5 respondents)
• Perceived impact of the AAHC program
• Opportunities for improvements
• Perspectives and thoughts on the AAHC program and its impact
• Landscape of eligible entities, other funders, and challenges within the field

Consent: Through phone and email contact, identified staff members will be asked to
participate in the interview. Prior to data collection, potential participants will receive a letter
from IMLS (Appendix A), followed by an email from the Urban Institute (Appendix D),
encouraging their participation. Respondents will be told that their participation is voluntary
but important to the evaluation and that their responses will be anonymous, with data
reported in aggregate. At the start of the interview, respondents will be asked to provide their
informed consent. The only exception to data aggregation is if the Urban Institute evaluation
team believes that, based on responses, a quote from an individual respondent provides
needed clarity or illumination; in such cases, the Urban Institute evaluation team will seek
permission from that individual to use the quote.

10 This group includes key partners at organizations and associations such as the National Museum of African American History and Culture and the Association of African American Museums, as well as academics and other leaders in the field, some of whom may have served as AAHC application reviewers in the past or do so currently.
Interview Administration: The interviews will be conducted at a time convenient for each
respondent. They are expected to take one hour or less. The interviews will be semi-structured
with a set of questions and prompts (Appendix D), customized and tailored to the responding
organization, used to guide the conversation. These questions are designed to elicit feedback
under the areas in Appendix E in order to help address the evaluation’s research questions.
Data Management and Storage: Access to interview data by Urban Institute staff will be
password-controlled and limited to those staff involved in fielding the interviews who have
signed the confidentiality agreement. All interview data will be saved to an encrypted network
drive, with access limited to Urban Institute staff who need to work with raw data and who
have signed the confidentiality agreement. Access will only be available on-site or via secure
remote access, through password-protected computers.
Analysis: Notes taken during the interviews will be analyzed by Urban Institute staff using NVivo
coding software or an excel matrix. The Urban Institute evaluation team will develop a coding
structure with a priori codes based on the research questions and protocol. The research team
will meet to discuss emerging themes throughout the data collection period. The team will then
code the interview transcripts, adding emergent codes for themes not captured by the a priori
codes. After coding, queries will be used to pull key information from the transcripts, which
will be summarized in the report. Interview data will be analyzed on a rolling basis, and analysis
is expected to be completed approximately 2 weeks after the end of the interview data
collection period. Results from the interviews will be presented in a written interim report
presenting findings from the primary data collection (due to IMLS in July 2020). Interview
findings will also be used as inputs into the final evaluation report (a draft of which is due to
IMLS in September 2020; a final, publishable version will be made available in November 2020).
Other Notes: The sample targets identified in Table B.2 and Table B.3 were selected to provide a
reasonably representative perspective on each respondent group, with weighting towards
collecting insights from grantees given the evaluation's emphasis on the AAHC program's
contributions to grantee outcomes. Progress on the interview administration and analysis will
be provided to the IMLS COR via bi-weekly updates and to the entire IMLS evaluation team
during monthly check-ins. Each interview will only be administered once.
B3. Response Rates and Non-Responses
Grantee survey
Given the recent – and in some cases ongoing – relationship between this population and the
AAHC program, the survey’s use of introductory emails from IMLS, the clear potential benefit
for them to participate, and planned follow-up emails and calls to encourage participation of
non-respondents, we anticipate a high response rate of approximately 75 percent, 11 resulting in
approximately 56 completed surveys.
The proposed survey will be fielded to the full universe of respondents and will not be based on
sampling. The accuracy and reliability of the survey data will be a function of adequate response
rates and data quality. Prior to survey launch, IMLS will send an introductory Respondent
Contact Letter to all respondents at the start of the survey period to encourage participation
(Appendix B), which will be followed by personalized outreach via email from the Urban
Institute.
The research team will employ a variety of techniques to ensure the highest possible response
rate, including: survey design that allows respondents to use various types of electronic devices
to complete the survey; effective communication before the survey to prepare respondents for
participation; assurance that only de-identified, aggregated data will be shared; survey
reminders throughout the fielding period; ongoing response tracking; and email follow-up with
non-responders. Reminders will be sent to respondents that have not responded after one
week and again after three weeks. At the three-week point, we will also call respondents and, if
they choose, we will conduct the survey by phone. Completed surveys from the late-to-complete
cohort (and those using the phone survey) will be logged, and responses will be
compared with all other completed surveys to test for significant differences.
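
One way such a comparison might be run, sketched here assuming a categorical survey item and a flag marking late or phone completes (both column names invented), is a chi-square test of independence:

import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("grantee_survey.csv")  # hypothetical export with a wave/mode flag

# Contingency table: completion wave/mode vs. answers to one survey item.
table = pd.crosstab(df["late_or_phone"], df["q10_requirements_difficulty"])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # a small p-value suggests the groups differ

Given the modest sample, expected cell counts may be small, in which case an exact test (e.g., Fisher's) may be preferable to the chi-square approximation.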
To measure the extent of response bias for each survey, respondents and non-respondents will
be compared based on key organizational characteristics such as budget size, governance, type
of organization, geographic region, staff size, etc. If the characteristics of respondents are
significantly different from the rest of the respondents, adjustments to the estimates through
weighting may be used, and the results of the testing will be reported. Because the entire
population of interest, which is not large, will be surveyed, and because the expected pool of
non-respondents is small, the team anticipates efforts to increase response rates overall will be
more effective than post-survey adjustments for producing reliable data.
Nonapplicant survey
The proposed survey will be fielded to the estimated full universe and will not be based on
sampling. Unlike the grantee survey, the research team anticipates low response rates due to
unfamiliarity with IMLS and the AAHC program and fewer incentives to participate. Based on
past experience, we estimate a response rate of approximately 30 to 50 percent.
11 This is anticipated to be much higher than the average response rate for computer-based surveys, which generally hovers around 20 to 30 percent (see Manfreda, Katja Lozar, Michael Bosnjak, Jernej Berzelak, Iris Haas, and Vasja Vehovar. 2008. "Web Surveys versus Other Survey Modes: A Meta-Analysis Comparing Response Rates." International Journal of Market Research, 50(1), 79–104. https://doi.org/10.1177/147078530805000107). We estimate a much higher response rate for this survey based on past Urban Institute experience conducting similar evaluations of recent grantees as well as the techniques described in this Part B for managing nonresponse (drawing from, among others, National Research Council. 2013. Nonresponse in Social Science Surveys: A Research Agenda. Washington, DC: The National Academies Press. https://doi.org/10.17226/18293).

We will employ a variety of techniques to ensure the highest possible response rate, including:
survey design that allows respondents to use various types of electronic devices to complete
the survey; effective communication before the survey to prepare respondents for
participation; and assurance that only de-identified, aggregated data will be shared.12 Because
responses will not be associated with individual organizations (in part not to discourage
participation), follow-up emails to non-respondents will not be possible. Instead, the
research team will utilize external partnerships, including the social media accounts of the
Association of African American Museums, to encourage participation.
We are not using this survey to produce population counts. Instead, the survey is being used to
understand reasons why eligible institutions may not apply to the AAHC program. Therefore,
while the response rate is less than optimal, we still anticipate we will obtain valuable
information on nonapplicants' perceptions of the program and potential barriers to applying.
Nonetheless, we will employ a nonresponse methodology to address potential nonresponse
bias issues with this group.
If nonapplicants who do not participate differ from those who do, our estimates may be subject
to some bias. The size of the potential bias depends on both how much the non-participants
differ and the response rate. Using administrative data, we will compare the nonapplicants that
respond to those that do not (e.g., on budget size, staff size, location, type of museum, etc.),
and if there are significant differences, we will adjust our estimates by applying a
post-stratification weighting adjustment that gives our sample the same demographic make-up
as the overall population, reducing the potential for nonresponse bias.
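
A minimal sketch of such a post-stratification adjustment, with illustrative strata and population shares standing in for values that would come from the administrative database:

import pandas as pd

# Known population shares by stratum from the administrative data (illustrative).
pop_share = pd.Series({"small museum": 0.5, "large museum": 0.2, "HBCU": 0.3})

resp = pd.read_csv("nonapplicant_survey.csv")  # hypothetical respondent file
resp_share = resp["stratum"].value_counts(normalize=True)

# Weight = population share / respondent share, so overrepresented strata are
# weighted down and underrepresented strata are weighted up.
resp["weight"] = resp["stratum"].map(pop_share / resp_share)

# Example weighted estimate (assumes aware_of_aahc is coded 0/1):
aware_share = (resp["aware_of_aahc"] * resp["weight"]).sum() / resp["weight"].sum()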

Interviews
Response rates are expected to vary across different interviewee groups. We will recruit
interviewees on a rolling basis until we have reached our target for that group, totaling 50
across all groups. (Details on how we will identify respondents are provided in Section B1,
above).
To encourage participation, IMLS will send an introductory Respondent Contact Letter to all
selected informants at the start of the data collection to encourage participation (Appendix A).
Invitations to participate in an interview will be sent by the Urban Institute within one week of
that outreach. The Urban Institute will send a follow-up email 1 week later and send a second
email, accompanied by a phone call, 2 weeks after the interview invitation, as needed.
Anticipated response rates are indicated in Table B.3. We will invite respondents in each group
on a rolling basis and will keep sampling until we reach the target responses for that group.
12 These techniques draw from Urban Institute experience fielding web-based surveys (without monetary incentives) as well as evidence on improving response rates. See, for example, National Research Council. 2013. Nonresponse in Social Science Surveys: A Research Agenda. Washington, DC: The National Academies Press. https://doi.org/10.17226/18293

Table B.3

Interview Response Rate Estimates

Respondent type                    Universe        Target      Anticipated response    Estimated individuals needed for outreach
                                                   responses   rate (includes          to achieve target (target responses /
                                                               attrition) 13           anticipated response rate)
Applicants (2014-2019)             22              5           60%                     9
Grantees: pre-2014 (Cohort 1)      27              5           50%                     10
Grantees: 2014-2019 (Cohort 2)     42              20          90%                     23
Grantees: long-term (Cohort 3)     32              8           90%                     9
Funders                            8 (estimate)    4           75%                     6
IMLS Staff                         3               3           100%                    3
Other key stakeholders             20 (estimate)   5           75%                     7
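
The table's final column is the arithmetic stated in its header; a quick sketch confirming the figures (target responses divided by the anticipated rate, rounded up to whole individuals):

import math

# (target responses, anticipated response rate) per respondent type, from Table B.3.
targets = {
    "Applicants (2014-2019)": (5, 0.60),
    "Grantees: pre-2014 (Cohort 1)": (5, 0.50),
    "Grantees: 2014-2019 (Cohort 2)": (20, 0.90),
    "Grantees: long-term (Cohort 3)": (8, 0.90),
    "Funders": (4, 0.75),
    "IMLS Staff": (3, 1.00),
    "Other key stakeholders": (5, 0.75),
}
for group, (target, rate) in targets.items():
    # Round up: outreach is to whole individuals.
    print(f"{group}: {math.ceil(target / rate)}")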

B4. Tests of Procedures and Methods
Prior to OMB submission, the Grantee Survey was subjected to several rounds of internal
testing using Urban Institute and IMLS personnel, including senior survey methodologists. It
was reviewed by three experts in the field, similar to but distinct from those in our target
sample. This review was designed to troubleshoot technical issues and reduce the time burden
on respondents. The reviewers' comments led to the revision and consolidation of several
questions to improve the survey's clarity and shorten completion time.
13 Response rates were estimated using the following logic and evidence:

Grantees and applicants: The CEP reviews of American foundations tend to have a 40% response rate for grantees; this low response is associated with the communication challenges many organizations face. Based on our history of high response rates for active grantees in other evaluations, and the fact that this is an interview request with active follow-up, we expect a higher response rate in this project, particularly for more recent cohorts.

Funders: The Center for Effective Philanthropy reported sending surveys in 2006 to 163 CEOs of foundations that had used CEP's Grantee Perception Report, asking about the types of support they provide to grantees and why; about half responded. Given that this outreach will be more targeted, to a more bespoke set of funders who are likely to have heard of IMLS and the AAHC program, we anticipate a higher response rate. We also draw on the fact that we are asking for interviews and not survey responses, which we think may be more successful since it gives funders an opportunity to talk about their own work and their perspectives on the field.

Other stakeholders: There is relatively limited evidence to reliably estimate response rates for field experts. Estimates are based on existing Urban Institute projects interviewing key stakeholders (see, for example: https://www.huduser.gov/portal/sites/default/files/pdf/PayforSuccess.pdf), consultation with Urban Institute's Subject Matter Expert and Urban's senior survey methodologist, and the fact that interviewees will likely have good familiarity with IMLS and the AAHC program. As such, we anticipate a relatively high (75%) response rate for this group.

B5. Contact Information.
a. The agency responsible for receiving and approving contract deliverables is:
Office of Digital and Information Strategy
Institute of Museum and Library Services
955 L’Enfant Plaza North, SW
Suite 4000
Washington, DC 20024
Person Responsible: Matthew Birnbaum, PhD, Supervisory Social Scientist,
[email protected] (202-653-4760)
b. The organization responsible for survey design, data collection, and data analysis is:
The Urban Institute
500 L’Enfant Plaza, SW
Washington, DC 20024
Persons Responsible:
Shena Ashley, PhD, Principal Investigator, Urban Institute, [email protected] (202-261-5725)
LesLeigh Ford, PhD, Primary Data Collection Lead, Urban Institute, [email protected] (202-261-5843)
Matthew Eldridge, MSc, Project Manager, Urban Institute, [email protected] (202-261-5707)
