Museums for Digital Learning Project Evaluation
Part B. Description of Research/Statistical Methodology
Newfields and IMLS are interested in documenting the implementation of the tool and gaining a general understanding of the experiences of all participants. The results of this evaluation are intended to benefit the museum field as well as the public.
The main evaluation questions:
What elements of the platform support stakeholders? What types of changes are needed for the product to be scaled up to a larger group of museums and educators?
Museum Stakeholders:
To what extent does the platform allow museums to easily create and configure content for collections-based educational materials?
Does the platform increase institutions’ ability to serve K-12 educator needs?
Educator Stakeholders:
Has this platform increased educators' knowledge, access, and ability to use museum-based collections?
Do educators feel this platform supports and enhances learning within the classroom? If yes, in what ways?
Project Team:
How effectively have the team members collaborated in designing and then testing the product?
How and in what ways does the collaborative process support each stakeholder's needs?
Has the team articulated shared goals, expectations, and understanding of roles and process?
In what ways could the process of co-creation be improved?
B.1. Respondent Universe
The MDL team consists of staff from three museum partners (Newfields, History Colorado, and The Field Museum), ten K-12 educators from across the nation, and up to ten additional museum partner sites that will join in Year Two.
This evaluation includes collecting data from three distinct audiences that will be involved with MDL:
The collaborative, cross-institutional team across all three partnership sites with ten (10) educators embedded as co-creators for the project;
The ten K-12 educators included in the design and assessment phase of the digital tool;
Museum professionals from up to ten additional museum partner sites introduced in Year Two of the project.
In order to understand the collaborative, co-created process between the museum professionals and K-12 educators participating in MDL platform development, formative evaluation data will be collected from participants via the following methodologies: museum professionals and educators (survey); educators (think-alouds and semi-structured interviews); and additional museum partner participants (survey). See Table 1 for a summary.
Table 1: Data collection summary

Participant group | Methodology | Sample size
Museum staff and K-12 educators | Collaboration Survey: Instrument #1, T1 | n = 22 (12 museum staff participants and 10 educators)
K-12 educators | Think-Alouds and Semi-Structured Interviews: Instrument #2, T1 | n = 10
Museum staff and K-12 educators | Collaboration Survey: Instrument #1, T2 | n = 22 (12 museum staff participants and 10 educators)
K-12 educators | Classroom Impact Survey: Instrument #3 | n = 10
K-12 educators | Think-Alouds and Semi-Structured Interviews: Instrument #2, T2 | n = 10
Additional museum partner professionals | Follow-Up Questionnaire: Instrument #4 | n = up to 24 (average of 2.4 museum professionals per partnership for up to 10 partnership museums)
The main research questions, paired with their data sources:
What elements of the platform support stakeholders? What types of changes are needed for the product to be scaled up to a larger group of museums and educators? Data sources: Think-Alouds with Educators, T1 and T2; Collaboration Survey T2, Questions 10-14; Additional Museum Partners Questionnaire.
How effectively have the team members collaborated in designing and then testing the product? Data sources: Collaboration Survey T1 and T2, Questions 5-8; Classroom Implementation Questionnaire.
B.2. Potential Respondent Sampling and Selection Methods
Our design for the MDL project is mixed methods, combining quantitative and qualitative elements across multiple points in time to reduce bias. While the sample size is quite small for quantitative methods, the change in scores across time will allow us to examine strengths and weaknesses within the project and give the team feedback on areas to improve.
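To illustrate how that change in scores might be examined, the sketch below computes per-item means at T1 and T2 and the difference between them. It is a minimal sketch only: the file names and column labels (participant_id, q1...qN) are hypothetical placeholders rather than part of the approved instruments, and with n = 22 the change scores are read descriptively, not inferentially.

import pandas as pd

# Each row is one participant; columns q1..qN hold 1-7 Likert ratings.
# File names and columns are hypothetical placeholders.
t1 = pd.read_csv("collaboration_survey_t1.csv").set_index("participant_id")
t2 = pd.read_csv("collaboration_survey_t2.csv").set_index("participant_id")

items = [c for c in t1.columns if c.startswith("q")]

# Mean rating per item at each deployment, and the change between them.
summary = pd.DataFrame({
    "t1_mean": t1[items].mean(),
    "t2_mean": t2[items].mean(),
})
summary["change"] = summary["t2_mean"] - summary["t1_mean"]

# Items with the largest negative change flag areas where the team
# may need feedback; read descriptively given the small sample.
print(summary.sort_values("change"))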
The following data collection procedures will be implemented:
1. Collaboration Survey:
This instrument is based on other validated instruments, such as the Wilder Collaboration Factors Inventory, and has been modified for deployment in cross-institutional informal learning projects. A series of Likert-type rating scales allows us to measure key factors known to influence collaboration success, using a scale of 1 to 7. This survey will be conducted twice to monitor collaboration strength during the project: once at the conclusion of the grant's Phase 1, before K-12 educators return to their classrooms (target: August 2019), and again at the conclusion of the design phase (target: March 2020).
Participants will receive an email invitation to complete the survey, which will take approximately 30 minutes to complete. Participants will also be instructed that they can request a paper version of the survey by replying to the email; a paper-based survey and self-addressed stamped envelope will then be sent to the requesting participant, and paper surveys will be entered into the system by project staff. Whether completing an online or paper survey, participants will be reminded a maximum of two times via email to complete the survey. Once the survey data has been analyzed, it will be destroyed.
2. Think-Aloud Protocols paired with semi-structured interviews for educators:
Wikipedia defines a think-aloud protocol as "a type of protocol used to gather data in usability testing in product design and development, in psychology and a range of social sciences...Think-aloud protocols involve participants thinking aloud as they are performing a set of specified tasks. Participants are asked to say whatever comes into their mind as they complete the task. This might include what they are looking at, thinking, doing, and feeling. This gives observers insight into the participant's cognitive processes rather than only [their visible actions]." We find the think-aloud protocol particularly helpful in understanding user misperceptions and confusion points within digital tools.
Think-aloud protocols paired with semi-structured interviews are particularly well suited to collecting feedback on the MDL platform because we will be able to collect live feedback that can be paired with video footage. Our experience with platform and software testing has shown that a significant amount of feedback is given while a user goes through the actual site contribution process or search. We find that post-use surveys and questionnaires are inadequate for assessing software feature success or failure points: issues arise that the user may not deem important but that are critical from a developer perspective, and the number of small suggestions or errors may be beyond what a user can recall for a survey completed after the fact.
Think-alouds, in which a user comments on their thoughts and actions while using the software, followed by a short interview, give us richer and more complete data on their interaction. Adding interview questions to the think-alouds gives us the same rigorous and comprehensive feedback as a survey while being efficient with educators' time.
For this project, we will use screen-recording methods during a telephone interview so the team can see the exact screens the user interacts with along with their commentary. Participants will receive an email invitation to participate in the interview, which will be scheduled for a time requested by the educator. Participation in this study requires no hardware or materials beyond what is normally available in a work or home setting: access to a telephone, a laptop or desktop computer, and an internet connection. The interview will take approximately 60 minutes to complete. Once the interview is transcribed by a professional transcription service, the audio and video data will be destroyed.
3. Follow-up questionnaire for the K-12 educators based on their observations of MDL use in their classrooms:
This questionnaire will focus on actual implementation and impact from the educator perspective.
Educators will receive an email invitation asking them to complete the questionnaire online. The survey will take approximately 30 minutes to complete. Educators will also be instructed that they can request a paper version of the survey by replying to the email. Whether completing an online or paper survey, participants will be reminded a maximum of two times via email to complete the survey. Once the survey data has been analyzed, it will be destroyed.
4. Follow-up questionnaire for the professionals from the up to ten additional museum partner sites:
The goal is to measure ease of use in contributing content via the template created for the digital tool. This questionnaire will focus on actual implementation and impact from the museum partner perspective, and is designed to surface recommendations for future implementation and future collaborations.
An email invitation will be sent to museum participants to complete the survey online. The survey will take approximately 30 minutes to complete. Additional museum partners will also be instructed that they can request a paper version of the survey by replying to the email. Whether completing an online or paper survey, participants will be reminded a maximum of two times via email to complete the survey. Once the survey data has been analyzed, it will be destroyed.
B.3. Response Rates and Non-Responses
HG&Co estimates a response rate of at least 85% of the total participating members, with a target of 100% response rate.
Based on the proposed plan and the evaluation team's past experience with data collection related to collaboration, we do not expect non-response to be an issue for this study. Our experience is that professionals whose work is directly linked to the intended implementation have a very high response rate. An overview of the evaluation plan will be provided to all participants, and adequate time for completion will be provided for the K-12 educators. Additional incentives for survey completion or interview participation will not be provided.
B.4. Tests of Procedures and Methods
Survey instruments for this evaluation have been adapted from the Wilder Collaboration Factors Inventory, a widely used collaboration assessment that measures how well a collaboration is functioning and gathers insights from team members on ways to improve it.
The Wilder Collaboration Factors Inventory consists of 20 factors, informed by empirical studies, designed to measure successful collaborations formed by nonprofit organizations, government agencies, and other organizations. It is a tool designed for collaborative groups to help identify strengths and weaknesses with respect to key factors that influence collaborative success, such as whether the collaboration environment includes concrete and attainable goals, mutual understanding and respect, and whether the members of the collaboration have a stake in the process and outcomes. We believe that an adaptation of this survey will result in an accurate measure of the collaboration process, as well as provide additional recommendations about how the process can improve.1
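As a rough illustration of how factor-level results from the adapted inventory might be summarized, the sketch below averages item ratings into factor scores and flags tentative strengths and weaknesses. The factor-to-item mapping and the cutoffs shown are assumptions made only for this illustration; they are not taken from the published Wilder instrument, which uses its own scale and interpretation guidance.

import pandas as pd

# 1-7 Likert item responses; file and column names are hypothetical.
responses = pd.read_csv("collaboration_survey_t1.csv")

# Hypothetical mapping of survey items to two of the twenty factors.
factors = {
    "mutual_respect": ["q1", "q2", "q3"],
    "concrete_attainable_goals": ["q4", "q5"],
}

for name, items in factors.items():
    # Mean across a factor's items per respondent, then across respondents.
    score = responses[items].mean(axis=1).mean()
    # Assumed cutoffs for a 1-7 scale, chosen only for illustration.
    label = "strength" if score >= 5.5 else ("concern" if score <= 4.0 else "borderline")
    print(f"{name}: {score:.2f} ({label})")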
Data gathered during the creation of the MDL platform is for usability purposes. The evaluators will review all of the testers' comments in context and use that information to create a list of recommended changes. Because usability testing can surface significant errors through use by a single user, each data point is useful. No statistical analysis will be completed.
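One way such a list of recommended changes could be assembled is sketched below: coded usability observations from the think-aloud sessions are tallied and ordered by how often each was reported. The coding scheme, field names, and example observations are hypothetical, not the project's specified tooling.

from collections import Counter

# Each tuple: (screen or feature, observed issue) coded by the evaluators.
# The entries below are invented examples for illustration.
observations = [
    ("search", "filter labels unclear"),
    ("upload", "template field order confusing"),
    ("search", "filter labels unclear"),
]

# Even a single report can matter in usability testing, so every issue
# is kept; the list is simply ordered by observation frequency.
tally = Counter(observations)
for (screen, issue), count in tally.most_common():
    print(f"{screen}: {issue} (observed {count}x)")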
The MDL remains within the pilot phase during this grant, so use of the system will be limited to recruited museums and K-12 educators. Because we will be working with a limited set of museums and educators, the sample is too small for inferential statistical analysis. We will present some descriptive statistics and case study findings to frame a discussion of impact.
B.5. Contact Information for Statistical or Design Consultants
Project Director: Kate Haley Goldman, Principal, HG&Co
Project Lead: Rosanna Flouty, Managing Director, HG&Co
Expert Consultant: Leslie Kadish, Research Associate, HG&Co
Federal Contact: Helen Wechsler, Office of Museum Services, Institute of Museum and Library Services
Federal Contact: Matthew Birnbaum, Office of Impact Assessment and Learning, Institute of Museum and Library Services
1 Adapted from Mattessich, P. W., Murray-Close, M., & Monsey, B. R. (2001). The Wilder Collaboration Factors Inventory: Assessing your collaboration's strengths and weaknesses. Saint Paul, MN: Fieldstone Alliance.