National Center for Education Statistics (NCES)



School Survey on Crime and Safety (SSOCS) 2018

Usability Testing





Volume I – Supporting Statement

OMB #1850-0803 v.208







September 2017



Appendixes:

Appendixes A-J SSOCS 2018 Usability Testing Recruitment Materials

Appendixes K-L Usability Testing Protocol & Questionnaire for User Interaction Satisfaction (QUIS)

Appendixes M-N SSOCS 2018 Principal Invitation Letter with Log-in & Questionnaire

This material is being submitted under the generic National Center for Education Statistics (NCES) clearance agreement (OMB# 1850-0803), which allows NCES to conduct various procedures (e.g., focus groups, cognitive interviews, usability testing) to test new methodologies, question types, or delivery methods to improve survey and assessment instruments as well as study procedures, resulting in improved data quality, utility, and participant experience.

Background

The School Survey on Crime and Safety (SSOCS), a nationally representative survey of elementary and secondary schools, is one of the nation’s primary sources of school-level data on crime and safety in public schools. Conducted by NCES, within the U.S. Department of Education, SSOCS has been administered six times—covering the 1999–2000, 2003–04, 2005–06, 2007–08, 2009–10, and 2015–16 school years. A seventh administration is planned for the 2017–18 school year. SSOCS is the only recurring federal survey that collects detailed information on the incidence, frequency, seriousness, and nature of violence affecting students and school personnel, as well as other indices of school safety from the school’s perspective. As such, SSOCS fills an important gap in data collected by NCES and other agencies. Topics covered by SSOCS include school programs and practices, parent and community involvement, school security staff, school mental health services, staff training, limitations on crime prevention, the type and frequency of crime and violence, and the types of disciplinary problems and actions used in schools. SSOCS questionnaires are completed by school principals or other school personnel designated by the principal as the person who is “most knowledgeable about school crime and policies to provide a safe environment.”

The first six SSOCS administrations were conducted by mail, with telephone and e-mail follow-up. The 2017–18 data collection (SSOCS:2018) will be conducted mainly by mail, with telephone and e-mail follow-up, but will also include a modal experiment with a web-based data collection instrument. Developing a web-based data collection instrument is in direct response to feedback received during the SSOCS:2018 cognitive laboratory interviews conducted in fall 2016 (OMB #1850-0803 v.171), which indicated that respondents would be more likely to respond to a web-based version of the survey. The web test treatment (approximately 1,150 randomly selected schools) will be evaluated against the control group, which will follow the traditional SSOCS data collection mode, using paper questionnaires (approximately 3,650 randomly selected schools). The web test treatment schools will be given the option to respond by paper during follow-up mailings later in the data collection.

Study Rationale

This request is to conduct usability testing in fall 2017 to refine the functionality of the web instrument for the 2017–18 data collection. Volunteers will be asked to complete the survey using the web-based instrument and will be asked questions about their experience. Usability testing has been conducted for other NCES surveys in recent years, including the ED School Climate Surveys (EDSCLS), the National Teacher and Principal Survey (NTPS), and the National Household Education Surveys (NHES). NCES is authorized to conduct this survey by the Education Sciences Reform Act of 2002 (ESRA 2002, 20 U.S.C. §9543).

The objective of the proposed usability testing is threefold. First, testing will identify design inconsistencies and problem areas within the web application (e.g., navigation errors, insufficient instructions, improper button usage). Second, testing will exercise the application under controlled test conditions with representative users. Data on completion time and missing items will be used to assess whether usability goals regarding an effective, efficient, and well-received user interface have been achieved. Third, NCES will establish baseline user performance and user-satisfaction levels for the application to support future usability evaluations.

Specifically, the usability testing interviews will focus on the following:

Program/Interface Design:

  1. Ease of navigating within and between items (e.g., selecting/inputting responses, previous and next buttons)

  2. Expectations of how the web-interface features should function (e.g., radio buttons, write-in text boxes)

  3. Helpfulness of alerts (e.g., errors)

  4. Engagement in the survey (e.g., ease or frustration with features and items)

  5. Design inconsistencies and usability problem areas within the application

Completion of Specific Tasks:

  1. Ease of logging into the instrument

  2. Ease of using the Frequently Asked Questions (FAQ) function

  3. Ease of exiting and returning to the instrument

This document describes the proposed usability testing methods to be conducted, the sample and recruitment of participants, the data collection process, the analysis plans, the hourly burden, and the cost of usability testing. A report highlighting the key findings will be prepared as the primary deliverable from this study. The usability interviews are designed to result in an improved web survey that will be easy for respondents to use, reduce response burden, and yield accurate information.

Usability Testing

Usability testing will explore the interaction that respondents have with the prototype of the SSOCS web instrument. The testing will take place after the application has been developed by the U.S. Census Bureau and reviewed for quality assurance by NCES and its SSOCS contractor, the American Institutes for Research (AIR).

Methods

In each usability testing interview, trained interviewers will provide an overview of the survey platform and then ask participants to complete the survey. All interviews will be conducted using a structured protocol (Appendix K), and observers will take notes on what participants say. The sessions will also be audio- or video-recorded so that researchers can refer back to participants’ feedback. Participants will be asked to complete the survey using a desktop or laptop computer and the web browser they would most likely use when completing the actual survey.

As participants work through the survey, interviewers will occasionally interrupt them with requests to complete certain tasks in order to test specific interaction elements or features created for the platform (e.g., re-accessing the survey after taking a break). The participants’ success or difficulty in completing assigned tasks will be analyzed to determine which information or control elements are missing or insufficient to allow successful completion of anticipated user tasks.

Interviewers will use two methods to collect information from respondents: think-aloud interviewing and verbal probing techniques (also known as concurrent and retrospective recall probing, respectively). With think-aloud interviewing, respondents are explicitly instructed to think aloud (i.e., describe what they are thinking) as they work through the survey and the tasks. During the think-aloud process, interviewers will not provide any assistance to the participants or answer any questions directly related to the tasks. In the event that a participant cannot move forward for a prolonged period of time or asks for help, the interviewer will only respond with a generic prompt, such as “Is there anything else you can do to help you move forward?”

As participants work through the web survey and “think aloud,” the interviewer will periodically interrupt them to probe for more information on specific situations encountered. With verbal probing techniques, the interviewer asks probing questions, as necessary, to clarify points that are not evident from the “think-aloud” process or to explore additional issues that have been identified a priori as being of particular interest. Interviewers will use specific probes, such as “Are any of the buttons or links on this page confusing?”

Some observations of participants’ behavior in the usability testing may also be noted by interviewers as supplementary information. Behavioral observations may include nonverbal indicators (e.g., perceived frustration or engagement) and interactions with the assigned task (e.g., ineffective or repeated actions suggestive of usability issues). Interviewers will be trained to distinguish between situations in which the user is having difficulty with a question’s content versus a question’s presentation or navigation. Interviewers will be given instructions to record behavior when the respondent navigates to selected pages or when a certain event occurs (e.g., a respondent receives logic violation text).

It is expected that the data collection platform and interview protocol will evolve during testing. This research will be iterative, meaning that the protocol design and probes may change throughout the testing period based on problems identified during the interviews.

Measurement

Usability testing will evaluate participants’ efficiency, accuracy, and satisfaction while completing the web survey. Efficiency will be measured by survey completion time and interviewer observations of behaviors, such as delayed item response time and respondent frustration (if any). Accuracy will be assessed based on whether participants are able to successfully complete the tasks assigned to them. Satisfaction will be assessed based on subjective satisfaction ratings through a modified version of the Questionnaire for User Interaction Satisfaction (QUIS).

QUIS is a survey that was developed by a multidisciplinary team of researchers in the Human-Computer Interaction Lab (HCIL) at the University of Maryland, College Park, to assess users' subjective satisfaction with specific aspects of the human-computer interface. QUIS successfully addresses the reliability and validity problems found in other satisfaction measures, creating a measure that is highly reliable across many types of interfaces. It has been used during usability testing for the National Teacher and Principal Survey and the National Household Education Surveys.
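
To make these measures concrete, the following minimal Python sketch shows one way session-level results could be tabulated. The field names and the rating values are assumptions for illustration, not the actual SSOCS usability data layout or QUIS scoring.

    # Illustrative only: the field names and rating values are assumptions
    # for this sketch, not the actual SSOCS data layout or QUIS scoring.
    from statistics import mean

    sessions = [
        {"participant_id": "P01", "completion_minutes": 55,
         "tasks_completed": 3, "tasks_assigned": 3, "quis_ratings": [8, 7, 9]},
        {"participant_id": "P02", "completion_minutes": 72,
         "tasks_completed": 2, "tasks_assigned": 3, "quis_ratings": [6, 5, 7]},
    ]

    # Efficiency: average survey completion time across participants.
    avg_minutes = mean(s["completion_minutes"] for s in sessions)

    # Accuracy: share of assigned tasks completed successfully.
    task_success = (sum(s["tasks_completed"] for s in sessions)
                    / sum(s["tasks_assigned"] for s in sessions))

    # Satisfaction: mean of each participant's average QUIS-style rating.
    avg_satisfaction = mean(mean(s["quis_ratings"]) for s in sessions)

    print(f"Mean completion time: {avg_minutes:.1f} minutes")
    print(f"Task success rate: {task_success:.0%}")
    print(f"Mean satisfaction rating: {avg_satisfaction:.1f}")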

Interview Length

Given the number of SSOCS survey items to be tested (Appendix N), it is estimated that each usability interview will last up to 90 minutes, providing enough time for respondents to complete the entire questionnaire (SSOCS has an estimated response time of 53 minutes), complete the assigned tasks, complete the QUIS survey, and discuss any issues, as needed, with the interviewer.

Sample and Recruitment Plans

NCES has contracted with AIR to conduct the SSOCS usability testing from October 2017 through December 2017. The participants will include elementary, middle, and high school principals or the persons they have designated as “the person most knowledgeable about school crime and policies to provide a safe environment.” The participants will be recruited primarily from metropolitan areas near AIR offices (such as the District of Columbia, Chicago, San Mateo, Boston, and Austin metro areas) to maximize scheduling and interviewing efficiency and flexibility. A remote interview option will be available for principals not located near AIR offices; it will be used if recruitment efforts near AIR offices do not provide enough participants or if those recruited do not reflect the diversity of U.S. public schools.

On behalf of NCES, AIR will recruit approximately 10 participants who represent a range of school characteristics, including schools in urban and suburban areas, schools serving different grade levels, and schools with varying enrollment sizes. Note that while the sample will include a mix of characteristics, the results will not explicitly measure differences by these characteristics.

AIR will use multiple outreach methods and resources to recruit participants, such as contacts with schools and community organizations and advertisements in newspapers and on the Internet. Potential respondents will be contacted by e-mail and phone, and AIR will confirm that interested individuals are eligible to participate. AIR will look up interested participants’ schools using the NCES Public School Search tool to confirm that each school is a regular traditional public or public charter school. During the recruitment phone call (Appendixes C-D), AIR will ask whether the potential participant has (a) access to a laptop or desktop computer and (b) at least one full year of experience at the school. To help meet the goal of sample diversity across school characteristics, AIR will also use the NCES Public School Search tool to record the characteristics of each participant’s school, as sketched below.
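
A minimal sketch of this screening logic appears below, with hypothetical function and category names (they do not represent an actual AIR or NCES tool).

    # Illustrative only: a hypothetical screening helper; the function name
    # and category strings are assumptions, not an actual AIR or NCES tool.
    ELIGIBLE_SCHOOL_TYPES = {"regular public", "public charter"}

    def is_eligible(school_type: str, has_computer: bool,
                    years_at_school: float) -> bool:
        """Apply the three screening criteria described above."""
        return (school_type in ELIGIBLE_SCHOOL_TYPES
                and has_computer
                and years_at_school >= 1)

    print(is_eligible("regular public", True, 2))   # True
    print(is_eligible("private", True, 5))          # False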

AIR’s recruitment experiences with the cognitive interviews for SSOCS:2016 and SSOCS:2018 indicated that principals and primary administrative staff are a hard-to-reach population; thus, increased time and effort will be dedicated to meeting recruitment targets. For the selected participants, AIR will confirm the interview date, time, and location via e-mail and phone. Participants will complete consent forms at the time of the interview, and, as a thank-you for their time and participation, they will receive a gift card (see the Estimate of Costs for Recruiting and Paying Respondents section later in this document). Recruitment materials are provided in Appendixes A-J.

Data Collection Process

Usability testing participants will be given the option of participating in an interview in person or remotely. In-person interviews will be conducted primarily in schools located in metropolitan areas near AIR offices (such as the District of Columbia, Chicago, San Mateo, Boston, and Austin metro areas). Participants who request that the usability testing be conducted remotely will be asked to share their desktop screen through video-conferencing software such as GoToMeeting. Remote participants will be encouraged to share video of themselves via webcam, when possible.

Remote interviews will encourage participation by offering a flexible option that may better fit into participants’ busy schedules. Furthermore, remote interview participants may be recruited from anywhere in the United States; thus, they will help NCES and AIR meet their goal of recruiting schools that reflect varying sociodemographic characteristics (in terms of racial makeup, percent of students eligible for free or reduced-price lunch, etc.). Given the diversity of in-person interview locations as well as the option of remote interviews, NCES and AIR expect that those who participate will better represent the target population of SSOCS-sampled schools than if all participants were recruited from the same region or city.

This data collection process is consistent with the cognitive interviews that were conducted for SSOCS:2016 and SSOCS:2018, both of which utilized a multi-mode/locale strategy. By using remote cognitive interviews, we were able to recruit participants from different regions and locales of the country (extending beyond the cities and states in which AIR offices are located), resulting in a more geographically representative sample than would have been feasible with in-person interviews only. At the same time, AIR recruiters and interviewers were able to be more flexible in meeting the needs of participants. We anticipate similar benefits from utilizing both in-person and remote options for the usability interviews in fall 2017.

Analysis Plans

For the usability testing, the unit of analysis will be each screen that respondents encounter as they complete the web survey. Information gathered from the usability interviews will be used to refine development of the final web survey. The results of the usability interviews will include the following information about the questionnaire items and web platform:

  • think-aloud verbal reports;

  • responses to generic questions about respondents’ reactions to the assigned tasks;

  • responses to targeted questions specific to an item or task;

  • additional comments volunteered by participants; and

  • any observations noted by interviewers.

After the NCES contractor’s interviewer notes are checked for quality, researchers will identify common trends in users’ experiences as well as difficulties that users encountered. This approach will help ensure that data are analyzed in a thorough and systematic manner that enhances the identification of problems with SSOCS web items and tasks. The usability interview analyses will be summarized in a report highlighting the key findings. The report will also provide recommendations for addressing any identified issues.
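
As a minimal sketch of how this screen-level analysis could be organized, the following Python snippet tallies hypothetical coded interviewer observations by screen and by issue type; the screen names and issue codes are invented for illustration.

    # Illustrative only: screen names and issue codes are invented; they do
    # not come from the SSOCS instrument.
    from collections import Counter

    # Each coded observation pairs the screen where it occurred with an
    # issue category assigned during quality-checked note review.
    observations = [
        ("login_screen", "navigation"),
        ("login_screen", "instructions"),
        ("item_12_screen", "navigation"),
        ("faq_screen", "instructions"),
        ("item_12_screen", "navigation"),
    ]

    issues_per_screen = Counter(screen for screen, _ in observations)
    issues_per_type = Counter(code for _, code in observations)

    for screen, count in issues_per_screen.most_common():
        print(f"{screen}: {count} coded issue(s)")
    for code, count in issues_per_type.most_common():
        print(f"{code}: {count} occurrence(s)")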

Assurance of Confidentiality

Respondents will read a confidentiality statement and sign a consent form before interviews are conducted (see Appendix H). They will be notified that their participation is voluntary and that all of the information they provide may be used only for statistical purposes and may not be disclosed, or used, in identifiable form for any other purpose except as required by law (20 U.S.C. §9573 and 6 U.S.C. §151).

The interviews will be audio- or video-recorded solely for research purposes. NCES staff may listen in or may observe some usability interviews. Participants will be assigned a unique identifier (ID), which will be created solely for data file management purposes (i.e., to keep all participant materials together). A participant’s ID will not be linked to his or her name in any way or form, and the only identification included in the audio or video files will be the participant ID. The recorded files will be secured for the duration of the study—with access limited to key NCES and AIR project staff—and will be destroyed after the usability testing report is finalized.

Estimate of Hourly Burden

We estimate that the initial contact of each potential participant will require 3 minutes (or 0.05 hours) and that 12 potential participants will need to be contacted in order to identify each eligible and willing participant. Thus, approximately 120 contacts will be needed to yield 10 participants. The usability interviews are expected to last approximately 90 minutes. Table 1 provides all burden estimates for this study.
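
The arithmetic behind Table 1 can be verified with a short sketch; the variable names are ours, for illustration only.

    # Recomputing the figures reported in Table 1.
    contacts_per_participant = 12   # contacts needed per eligible participant
    participants = 10
    contact_minutes = 3             # minutes per initial contact
    interview_hours = 1.5           # 90-minute usability interview

    contacts = contacts_per_participant * participants        # 120
    recruitment_hours = contacts * contact_minutes / 60       # 6.0
    interview_burden_hours = participants * interview_hours   # 15.0
    total_burden_hours = recruitment_hours + interview_burden_hours  # 21.0

    print(contacts, recruitment_hours, interview_burden_hours, total_burden_hours)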

Table 1. Estimate of hourly burden for SSOCS:2018 recruitment and usability interviews

Activity                                                      | Number of Respondents | Number of Responses | Burden Hours per Respondent | Total Burden Hours
Recruitment (initial contact and confirmation of eligibility) | 120                   | 120                 | 0.05                        | 6
Usability interviews                                          | 10                    | 10                  | 1.5                         | 15
Total                                                         | 120                   | 130                 | -                           | 21

Estimate of Costs for Recruiting and Paying Respondents

To encourage principals’ participation in the usability interviews and to thank them for their time and effort, all participants will be offered an incentive. Those participating in an in-person interview will receive the incentive at the time of the interview, while those participating in a remote interview will be mailed the incentive after completing the interview.

Given the difficulty of recruiting principals, the need to recruit a diverse group of 10 principals, and the estimated 90-minute length of the usability interview, contacted principals will be offered the same incentives as in the SSOCS:2018 cognitive interviews conducted in 2016 (OMB# 1850-0803 v.171): a $50 gift card for interviews conducted remotely (via screen sharing and telephone or video conference) and a $75 gift card for interviews conducted in person at the school.

Estimate of Cost Burden and Cost to Federal Government

There are no direct costs to respondents to participate in this study. The estimated cost to the federal government for the usability testing—including development of a detailed protocol, OMB clearance materials, recruitment, data collection, data analysis, and preparation of a report on the findings—is $67,000.

Schedule

Recruitment will begin upon OMB approval. Table 2 provides the SSOCS:2018 usability testing activities schedule.

Table 2. Schedule of high-level activities for SSOCS:2018 usability testing

Activity                                              | Start Date     | End Date
Recruitment and scheduling of participants            | September 2017 | 11/15/17
Conduct usability testing (data collection)           | October 2017   | 12/06/17
Draft report (including findings and recommendations) | December 2017  | 12/11/17
Final usability testing report                        | 12/11/17       | 12/13/17

NOTE: There is some flexibility in the dates for these activities, and they may be changed at the discretion of project staff.

