The analysis plan for this study is designed to address the evaluation mandates outlined in Section A.2. We propose to collect data from non-federal respondents through two surveys and a series of focus groups.
Set 1: Audience Survey consists of two administrations of the survey, the first in May and the second in October 2021. The survey will engage the REALM project’s audience of LAM organizations on OCLC’s contact list. The method of analysis will be the same for both administrations, combining quantitative scoring of the perceived value and appropriateness of information shared by the project with qualitative responses that probe the rationale for those viewpoints and requests for improvement.
Set 2: Museum and Archive Representative Focus Groups will be conducted in response to observations by members of the Steering Committee and Working Groups of lower-than-expected engagement by these segments of the LAM community with the REALM project’s disseminated information. Intended to uncover the reasons for reduced engagement, the focus groups will involve members of the museum and archive communities whom members of the Steering Committee and Working Groups have identified as key informants for their communities. Thematic analysis will be conducted to identify themes that emerge from the collaborative discussion.
B.1. Respondent Universe
Set 1: Audience Survey Participants
The universe for this set includes LAM organizations that have accessed or used information derived from the REALM Project, including laboratory test results, literature reviews, or toolkits. The universe consists of 9,854 subscribers. Based on information provided by OCLC:
7,466 organizations primarily identify as libraries
1,678 organizations primarily identify as archives
473 organizations primarily identify as museums
227 organizations did not identify with any of the three options.
All 50 states are represented, as well as the District of Columbia and the Commonwealth of Puerto Rico
As explained in Section B.2, we will draw a stratified sample of the subscribers based on primary identification as a library, archive, or museum and on geographic area.
Set 2: Museum and Archive Representative Focus Groups
Members of the museum and archive communities have not engaged with the REALM Project as expected. As such, a special series of focus groups has been developed to dig deeper into the question of why. The universe for this set includes museum and archive organizations in the United States that either have or have not engaged with the REALM Project. Up to 36 individuals will participate in one of three focus groups of up to twelve participants each. A participant pool will be developed through recommendations from members of the Working Groups, Steering Committee, OCLC, and IMLS staff. Individuals will then be classified by association with either an archive or a museum, their geographic location, and whether they have engaged with REALM Project information. Our final selection of 36 individuals will be made from this pool purposefully, looking to engage those most knowledgeable about their representative communities, their needs, their interests, and their willingness to engage. Because all museums and about one third of the archive universe will be engaged in the Set 1 surveys, there may be some overlap between Sets 1 and 2.
B.2. Sampling and Selection Methods
Our proposed sampling and selection methods vary for each respondent set within the universe of participants described in Section B.1.
Set 1: Audience Survey Participants
We plan to use a stratified random sample drawn from OCLC’s contact list of 9,854 subscribers.
After eliminating organizations represented by members of the Steering Committee and Working Groups, we will stratify the list first by primary identification and then by the five geographic regions of the United States (Alaska and Hawaii grouped with the West, the District of Columbia with the Northeast, and the Commonwealth of Puerto Rico with the Southeast). We will sample up to the following totals, distributed evenly across the geographic regions:
750 libraries, or approximately one tenth of the library population within the universe
500 archives, or approximately one third of the archive population in the universe
All museums, to reach as broadly as possible and ensure the best representation
All organizations that did not primarily identify with any of the LAM designations
This would result in a total sample of 1,996 organizations, with one respondent for each. This will ensure that the voice of each respondent group within the LAM community is heard through the survey, providing the ability to capture the nuanced viewpoints of each sub-group rather than letting one group’s voice outweigh the others. To further validate that the contact list represents the majority of users of REALM Project information, we will ask OCLC to compare their contact list against participant lists from REALM webinars, assessing whether there is a near-perfect overlap of participants and OCLC contacts or whether a meaningful percentage of webinar participants are not on the OCLC contact list.
We recognize that there is a broader LAM community beyond membership on the contact list. The samples drawn are not intended to wholly represent the larger LAM community, but rather focus on those that we expect will access REALM Project information and can provide an informed assessment of its relevance, value, and usefulness to their organizations.
Table B.2.1. Set 1 Sampling
| Organization Type | West | Southwest | Midwest | Northeast | Southeast |
| --- | --- | --- | --- | --- | --- |
| Libraries | 150 | 150 | 150 | 150 | 150 |
| Archives | 100 | 100 | 100 | 100 | 100 |
| Museums | ~94 | ~94 | ~94 | ~94 | ~94 |
| Other | ~55 | ~55 | ~55 | ~55 | ~55 |
Note: Numbers of museums and other institutions by region are estimates, as crosstabs of organization type by region were not available at the time of writing.
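For illustration, the following is a minimal sketch of the stratified draw described above and in Table B.2.1, assuming the OCLC contact list is exported as a CSV file with hypothetical columns "org_type" (library, archive, museum, other) and "region" (one of the five regions); the file name, column names, and random seed are assumptions, not project specifications.

```python
import pandas as pd

# Hypothetical export of the OCLC contact list after removing organizations
# represented on the Steering Committee and Working Groups.
contacts = pd.read_csv("oclc_contacts.csv")

# Per-region targets from Table B.2.1; museums and "other" organizations
# are taken in full, per the plan above.
per_region_targets = {"library": 150, "archive": 100}

frames = []
for (org_type, region), stratum in contacts.groupby(["org_type", "region"]):
    n = per_region_targets.get(org_type)
    if n is None:
        frames.append(stratum)  # keep every museum and "other" organization
    else:
        # Guard against strata smaller than the per-region target.
        frames.append(stratum.sample(n=min(n, len(stratum)), random_state=42))

sample = pd.concat(frames)
print(sample.groupby(["org_type", "region"]).size())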
Set 2: Museum and Archive Representative Focus Groups
Archives and museums may not have engaged at the level envisioned when the REALM Project was started. As such, based on conversations with OCLC, we propose to conduct a series of focus groups specifically targeting members of the museum and archive communities. The universe for this set is all museums and archives in the United States. Cognizant of the enormity of this group and the inappropriateness of drawing a random sample from it, we will work with OCLC and the museum and archive representatives who have been participating in the Steering Committee and Working Groups to identify individuals who can appropriately represent the sectors. We expect our expert members of the Steering Committee and Working Groups to identify the key informants for their sectors. We will then stratify the sample based on whether each individual has engaged with the REALM Project. A total of three focus groups will be conducted, each comprising up to twelve individuals and requiring a total of 36 potential participants, half of whom will have engaged with the Project and half of whom will not. Our intent is to leverage their knowledge, needs, and experience to help OCLC and IMLS better understand why members of these sectors might not be engaging with the information provided by the Project.
B.3. Expected Response Rates
Set 1: Audience Survey Participants
Prior to the onset of the COVID-19 crisis, we would have anticipated a 40% response rate among LAM members that receive communication from OCLC. Similar surveys of similar groups achieve a 25% to 30% response rate.1 Our confidence in a higher response rate was based on our experience with organizations that are invested in the work. Member organizations of the LAM community are interested in learning what the risks are and what they can do to mitigate the effects of COVID-19 in their spaces. This level of interest and involvement makes them more likely to participate in a survey assessing the quality of the information and to provide feedback on what can be done to better meet their needs.2 However, our experience with online surveys this past year has reduced our expectation to about a 30% response rate; we have seen decreases in survey participation even among individuals who are deeply engaged with a topic. To achieve the 30% rate, we will work with OCLC to reach out to their community contacts and explain the importance of the survey. We will also send periodic reminders, which have been shown to increase response rates.
While the response rate is important, we find a focus on rates alone to be somewhat spurious. Rather, we focus on representativeness,2 looking to ensure, to the best of our ability, that the responses captured represent the viewpoints of the LAM community. To this end, our few organizational demographic questions (e.g., institution’s sector, geographic area) will aid us in determining whether we have a representative sample of the community, ensuring that representatives of libraries, museums, and archives are captured and their viewpoints reflected in our analysis.
To reduce survey burden, we are keeping each survey to less than 30 minutes. The surveys will also be shared with OCLC, IMLS, and the Steering Committee prior to administration to ensure that the questions are clear and appropriate. Our own experience is echoed by Saleh and Bista,3 who found a similar response rate when shorter surveys of similar structure and perceived importance were presented to a population of individuals with some interest in the topic.
Should we find that our response rate is lower than expected and/or that certain portions of the community are underrepresented, we will conduct a comparative analysis using late respondents as a proxy for those that might not have responded, comparing their responses against those of earlier participants. Our intent is to uncover any response patterns that might cause or relate to reluctance to respond. Such issues could include strong negative views of the research, low perceived usefulness of the research, or communication issues around the information found within the toolkit.
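As an illustration of the late-respondent proxy analysis described above, the sketch below splits responses at the median submission date and tests whether a key item’s score distribution differs between early and late waves. The file name, the "submitted" timestamp, and the item column "q9a" are hypothetical placeholders.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical cleaned response file with a submission timestamp and
# already-coded Likert items.
responses = pd.read_csv("survey_responses.csv", parse_dates=["submitted"])

# Treat respondents after the median submission date as "late" -- a rough
# proxy for those who did not respond at all.
cutoff = responses["submitted"].median()
responses["wave"] = np.where(responses["submitted"] > cutoff, "late", "early")

# Test whether the score distribution on a key item differs by wave; a small
# p-value hints that non-respondents may hold different views.
table = pd.crosstab(responses["wave"], responses["q9a"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")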
Set 2: Museum and Archive Representative Focus Groups
We expect the response rate for this group to be around 50%. While we will leverage personal connections with members of the various Working Groups and the Steering Committee, these individuals, some of whom will have had no experience with the REALM Project, will have less invested in the Project and its success. In Fincham’s meta-analysis, the researcher found that prospective participants who are less interested in the results of a study are less likely to participate.4 We are balancing that challenge against the personal relationships that REALM Project group members have with prospective participants.
Much as with our efforts for Set 1, we will assess those sampled for representativeness, looking to include representatives of various types of museums and archives, geographic regions, and levels of exposure to REALM Project materials. Should an area be under-represented, we will work to replace the missing individuals with others sharing similar organizational demographics to ensure that each voice receives equal weight.
B.4. Analytic Methods
This section explains the different analytic methods we plan to use to address the following evaluative questions:
1. How has the LAM community responded to the research test results? To what extent have the research findings been valuable to the LAM community?
2. What was the quality of the LAM-specific research information? To what extent did perceived quality of the research information differ by sector (i.e., libraries, archives, museums)?
3. How did the results of the research and the availability of the toolkit shift LAM institutions’ practices?
4. How effective has the REALM program been in disseminating the information? What channels were most or least effective?
5. To what extent do LAM institutions feel the toolkit provides them with more sector-relevant information than received from other sources?
B.4.1. Evaluation Questions Addressed
Set 1: Audience Survey Analysis
The two survey administrations are intended to address all five main evaluation questions. Individual survey questions are mapped to the numbered evaluation questions in Table B.4.1.a.
Table B.4.1.a.

| Evaluation Question | Survey Questions |
| --- | --- |
| 1 | Q9a, c, d, & e; Q13a, c, d, & e; Q18a, c, d, e, & f; Q25a, c, d, & g |
| 2 | Q9a & b; Q13a & b; Q18a & b; Q25a & b |
| 3 | Q10, Q14, Q16, Q19, Q26 |
| 4 | Q6, Q7, Q8, Q11, Q22–Q24, Q25e & f |
| 5 | Q9d, Q13d, Q18d |
Set 2: Museum and Archive Representative Focus Group Analysis
The focus groups are intended to address the overall evaluative questions as they apply to the sub-set of archives and museums. Focus group questions are mapped to the numbered evaluation questions in Table B.4.1.b.
Table B.4.1.b.

| Evaluation Question | Focus Group Questions |
| --- | --- |
| 1 | Q4, Q5, Q7 |
| 2 | Q4, Q7 |
| 3 | Q6 |
| 4 | Q2, Q3 |
| 5 | Q2, Q4 |
B.4.2. Data Analysis
Our evaluation design consists of mixed qualitative and quantitative questions, requiring different forms of analysis.
B.4.2.a. Quantitative Data Analysis
Both survey and administrative data require quantitative data analysis. Our process includes the following.
Data preparation. Survey responses will be collected using the online tool SoGoSurvey. The data will be cleaned and coded; for example, Likert-type survey responses will be coded to ordinal values for statistical analysis. Administrative data will be collected from a representative from OCLC; this data will be assessed for completeness, cleaned, and replacement data sought if necessary.
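For illustration, the following is a minimal sketch of the Likert coding step. The response labels and item names ("q9a", etc.) are assumptions for the example, not the actual REALM instrument wording.

```python
import pandas as pd

# Hypothetical mapping from Likert labels to ordinal values.
likert_map = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neither agree nor disagree": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

responses = pd.read_csv("survey_responses.csv")
for item in ["q9a", "q9b", "q9c"]:  # hypothetical item names
    # .map() leaves unmatched text as NaN, flagging entries that need cleaning.
    responses[item] = responses[item].map(likert_map)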
Descriptive and categorical analyses. We will conduct initial descriptive analysis of the survey to understand response patterns across items for the whole sample and by sub-community (e.g., libraries, archives, museums). These descriptive analyses will examine means and distributions of scores on Likert-type scales for each item. We will also conduct a series of crosstabs to examine patterns for different sub-communities. We will present these results using a series of data tables and graphical representations.
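A minimal sketch of these descriptive and crosstab analyses follows, assuming items are already coded 1–5 and a hypothetical "sector" column records each respondent’s sub-community.

```python
import pandas as pd

responses = pd.read_csv("coded_responses.csv")
items = ["q9a", "q9b", "q9c"]  # hypothetical item names

# Means and score distributions per item, overall and by sub-community.
print(responses[items].describe())
print(responses.groupby("sector")[items].mean())

# Distribution of one item's scores within each sub-community.
print(pd.crosstab(responses["sector"], responses["q9a"], normalize="index"))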
Comparative analyses. We will conduct a statistical power analysis prior to engaging in any comparative analyses to ensure that the requirements for valid comparison are met. We will also use these analyses to identify which statistical methods should be applied. For example, nonparametric analyses (e.g., the Kruskal-Wallis rank sum test, chi-square) and the Student’s t-test5 are more resistant to the effects of non-normal distributions than the F-test used in a standard ANOVA. Should no comparative analyses be possible, the descriptive and categorical analyses will be presented across the sub-communities.
Our assessment of statistical power for the dataset will include tests for skewness (clumping of scores at the low or high end of scales) and kurtosis (tightness of variance). If the variance among the sub-community scores is small enough to allow it, we will conduct comparative analyses assessing likely differences among the sub-communities.
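The sketch below illustrates these distribution checks and one nonparametric comparison; the column names "q9a" and "sector" are hypothetical.

```python
import pandas as pd
from scipy.stats import skew, kurtosis, kruskal

responses = pd.read_csv("coded_responses.csv")
scores = responses["q9a"].dropna()

# Skewness flags clumping at either end of the scale; kurtosis flags
# unusually tight (or heavy-tailed) variance.
print(f"skew = {skew(scores):.2f}, kurtosis = {kurtosis(scores):.2f}")

# Kruskal-Wallis rank sum test comparing sub-communities without assuming
# normally distributed scores.
groups = [g["q9a"].dropna() for _, g in responses.groupby("sector")]
stat, p = kruskal(*groups)
print(f"H = {stat:.2f}, p = {p:.3f}")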
B.4.2.b. Qualitative Data Analysis
Survey and focus group derived data require qualitative data analysis. Our process includes the following.
Data preparation and coding. The process of data preparation and coding differs for survey data versus focus group data.
Survey – Open-ended responses from surveys will be maintained in the associated data array and coded into a separate categorical variable that can be used for both comparative and descriptive analyses. We will employ an inductive coding process in which codes are developed as they emerge from the data. We will conduct a calibration check halfway through the coding process, testing for interrater reliability (IRR). If IRR is less than 80%, the coders will review the statements where their coding differed and reconcile the differences, coming to agreement on the proper code. At that point, an official codebook will be generated that associates exemplar statements with each code. Previously miscoded content will be recoded, and all subsequent coding will follow the codebook.
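For illustration, the calibration check described above can be computed as simple percent agreement between two coders; the code labels below are invented for the example.

```python
# Hypothetical code assignments by two coders for the same five statements.
coder_a = ["barrier", "value", "access", "value", "barrier"]
coder_b = ["barrier", "value", "value", "value", "barrier"]

# Simple percent agreement; 80% is the threshold named above.
agreements = sum(a == b for a, b in zip(coder_a, coder_b))
irr = agreements / len(coder_a)
print(f"percent agreement = {irr:.0%}")  # 80% here: at the reconciliation threshold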
Focus group data – Transcripts will be made of the focus group sessions. Much like the survey coding process, we will use an inductive coding process with the following steps:
Initial coding – The coders will read through the statements for each question within each transcript to develop a general sense of what respondents have shared, developing overall notes and broad categories of codes for the statements.
Specific coding and categorization – Based on the broad categories, the coders will reread the narrative responses to each question and refine the codes. These codes will be tied to specific target issues, such as satisfaction with a process or the value of content shared by OCLC.
Thematic analysis. Our evaluation questions and tools are designed to elicit emergent themes rather than elucidate constructs identified a priori. Starting from the codes generated during preparation and coding, our team members will develop larger categories of thought, or overarching themes. We will look for commonality of statements across participants, pulling the different perspectives and stories together into more coherent statements of agreement. These statements will be generated not only across the focus groups as a whole but also at meaningful sub-group levels, capturing differing or unique views on an evaluative question by participant organizations’ demographics (e.g., rationale for not reviewing REALM Project information, value of the information, recommendations for future dissemination).
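A minimal sketch of rolling granular codes up into themes and checking their prevalence by sub-group follows; the codes, themes, and column names are invented for the example.

```python
import pandas as pd

# Hypothetical file of coded statements with columns "code" and "sector".
coded = pd.read_csv("coded_statements.csv")

# Map granular codes to broader themes developed by the team (illustrative).
theme_map = {
    "hard_to_find": "dissemination",
    "webinar_timing": "dissemination",
    "sector_relevance": "value of information",
}
coded["theme"] = coded["code"].map(theme_map)

# Theme prevalence overall and within each sub-group surfaces the points of
# agreement and divergence described above.
print(coded["theme"].value_counts())
print(pd.crosstab(coded["sector"], coded["theme"]))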
PPG
Ijeoma Ezeofor, PhD, MPH, [email protected]
Charles Gasper, MS(R), MS, [email protected]
Beth Cain, MA, [email protected]
OCLC
Sharon Streams, [email protected]
Mercy Procaccini, [email protected]
IMLS
Matthew Birnbaum, PhD, m[email protected]
1 Meta-analysis in Fincham, J. E. (2008). Response rates and responsiveness for surveys, standards, and the journal. American Journal of Pharmaceutical Education, 72(2). doi:10.5688/aj720243.
2 Cook, C., Heath, F., & Thompson, R. L. (2000). A meta-analysis of response rates in web- or internet-based surveys. Educational and Psychological Measurement, 60(6), 821–836.
3 Saleh, A., & Bista, K. (2017). Examining factors impacting online survey response rates in educational research: Perceptions of graduate students. Journal of MultiDisciplinary Evaluation, 13(29). ISSN 1556-8180.
4 Fincham, J. E. (2008); see note 1.
5 Student’s t-tests are considered highly robust even where non-normal data are present; however, the method is suspect for skewed data, where chi-square analysis will be used if needed. See Havlicek, L. L., & Peterson, N. L. (1974). Robustness of the t test: A guide for researchers on effect of violations of assumptions. Psychological Reports, 34(3c); Šimkovic, M., & Träuble, B. (2019). Robustness of statistical methods when measure is affected by ceiling and/or floor effect. PLOS ONE (https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0220889); and Posten, H. O. (1984). Robustness of the two-sample t-test. In D. Rasch & M. L. Tiku (Eds.), Robustness of Statistical Methods and Nonparametric Statistics (Theory and Decision Library, Series B, Vol. 1). Springer, Dordrecht (https://doi.org/10.1007/978-94-009-6528-7_23).