
OMB Control No. – 0693-0043 – NIST Generic Clearance for Usability Data Collections



NIST Comparative Usability of Robotic Control Systems for Manufacturing Based Human Robot Interaction Survey


FOUR STANDARD SURVEY QUESTIONS



1. Explain who will be surveyed and why the group is appropriate to survey.


The goal of this survey is to capture the usability of robotic platforms for a group of individuals who are representative of the small and medium enterprise manufacturing workforce. In 2014 the U.S. Bureau of Labor Statistics reported that about 12,188,300 people were working in manufacturing [1], and in 2012 about 5,086,905 were working in small and medium enterprises [2]; it was additionally reported that 73% of the overall manufacturing workforce was male and 27% was female [3]. Minors under the age of 18 are not considered in this study because they are less likely to be employed full time by a manufacturing company. Employees in the manufacturing industry may or may not have been previously exposed to robotic technologies and their control systems: at best, a worker is already trained in robotic technologies and is an expert; at worst, a worker may never have interacted with an industrial robot and is a beginner. To achieve accurate representation, participants must therefore be over the age of 18 and selected with experience levels with robotic interfaces ranging from beginner to expert. For each pair of interfaces on which we collect comparative data, we will survey between 20 participants (4 female identifying participants, 16 male identifying participants) and 33 participants (7 female identifying participants, 26 male identifying participants) to achieve a confidence interval of 22% and 17% respectively. We will conduct the survey for up to 4 pairs of interfaces, giving us a total of 80 to 132 participants with an overall estimated confidence interval between 11% and 8%.
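
The sample sizes above are consistent with the standard margin-of-error formula for a proportion at 95% confidence using the most conservative proportion of 0.5. The sketch below assumes that formula (the exact method used by the study is not stated in this document) and approximately reproduces the quoted figures.

# Margin-of-error sketch, assuming the standard formula for a proportion
# at 95% confidence (z = 1.96) with the most conservative p = 0.5; the
# exact method used by the study is not stated in this document.
import math

Z_95 = 1.96   # two-sided z-score for 95% confidence
P = 0.5       # most conservative proportion estimate

def margin_of_error(n: int) -> float:
    """Return the +/- margin of error (as a fraction) for a sample of size n."""
    return Z_95 * math.sqrt(P * (1 - P) / n)

for n in (20, 33, 80, 132):
    print(f"n = {n:3d}: +/- {margin_of_error(n):.1%}")
# n =  20: +/- 21.9%
# n =  33: +/- 17.1%
# n =  80: +/- 11.0%
# n = 132: +/- 8.5%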


2. Explain how the survey was developed including consultation with interested parties, pre-testing, and responses to suggestions for improvement.


To ascertain the experience level of the user, there are two 4-point multiple choice scale questions on robotic experience and procedural-thinking experience. This method is used in other similar studies so that effectiveness can be compared in relation to experience level. There are two NASA Task Load Index (NASA-TLX) style questionnaires, one per interface, to determine interface effectiveness in terms of user effort. The NASA-TLX questionnaire was developed by the Human Performance Group at the NASA Ames Research Center over 3 years and more than 40 laboratory trials [4], and it is widely used in human factors research to determine user-perceived workload. After consulting with other members of the study, we determined that the NASA-TLX was the best choice for this study because we could use the individual slider questions, in addition to the overall TLX scores, to analyze each interface's effect on each factor. Additionally, there are two multiple choice questions on which interface the user thought was easier to learn to use and which interface the user would choose to use in a practical setting. Originally there were four multiple choice questions that asked whether the user felt more frustrated using interface 1 than using interface 2 and whether they felt more efficient using interface 2 than using interface 1, but these were eliminated after pre-testing to reduce the time burden, since similar data points can be derived from the raw values of the NASA-TLX questions. After discussing security issues with one of the participating investigators, we determined that the survey would be better served as a paper survey than as the previous Google Forms online survey, as the data would not be processed externally and we could better ensure the privacy of the participants.

After pre-testing of the survey, we determined that, on average, a participant will spend around 1 minute reading instructions, 7 seconds per multiple choice question, and about 10 seconds per scalar question. Overall, there are 4 multiple choice and 12 scalar questions, giving an estimated time of 3 minutes and 28 seconds for an individual to complete the survey.
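
As a check on the arithmetic, a minimal sketch of the time-burden estimate, using the pre-testing figures quoted above:

INSTRUCTION_TIME = 60     # seconds to read instructions
MC_TIME = 7               # seconds per multiple choice question
SCALAR_TIME = 10          # seconds per scalar question
N_MC, N_SCALAR = 4, 12    # question counts from the survey

total = INSTRUCTION_TIME + N_MC * MC_TIME + N_SCALAR * SCALAR_TIME
print(f"{total // 60} minutes {total % 60} seconds")  # 3 minutes 28 seconds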


3. Explain how the survey will be conducted, how customers will be sampled if fewer than all customers will be surveyed, expected response rate, and actions your agency plans to take to improve the response rate.


The survey will be conducted during a research study appointment after the individual has completed their interaction with the robot interfaces; all participants in the study will be surveyed. We expect a 100% response rate, as each participant will have agreed to complete the survey before being scheduled for their research study appointment.


4. Describe how the results of the survey will be analyzed and used to generalize the results to the entire customer population.


The data will be analyzed in four groups derived from the responses to the experience questions. Participants who selected either “no experience” or “beginner” will be sorted into the “novice” group for that question, and those who selected either “intermediate” or “advanced” will be sorted into the “expert” group for that question. Using these groups, we will be able to determine which factors contribute to the usability of the interface across experience levels within the industry.
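
A minimal sketch of this grouping rule follows; the response labels are the four scale options quoted above, while the field names are illustrative rather than taken from the actual survey form.

NOVICE_RESPONSES = {"no experience", "beginner"}
EXPERT_RESPONSES = {"intermediate", "advanced"}

def experience_group(response: str) -> str:
    """Map a 4-point experience response to its analysis group."""
    label = response.lower()
    if label in NOVICE_RESPONSES:
        return "novice"
    if label in EXPERT_RESPONSES:
        return "expert"
    raise ValueError(f"unexpected response: {response!r}")

# Participants are grouped separately for each experience question.
participant = {"robotic_experience": "Beginner", "procedural_experience": "Advanced"}
print({q: experience_group(r) for q, r in participant.items()})
# {'robotic_experience': 'novice', 'procedural_experience': 'expert'}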

For each expertise group, the average response will be calculated for each question. Additionally, for each group the mean, median, mode, and standard deviation will be calculated for the overall NASA-TLX scores and for the individual NASA-TLX factor scores. NASA-TLX scores will be calculated from each set of 6 scalar questions using the framework provided by the NASA-TLX guidebook [5].
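
A sketch of these per-group statistics, assuming the raw (unweighted) TLX convention of averaging the six subscale ratings; the weighted scoring procedure from the TLX manual [5] is not shown, and the ratings below are illustrative only.

import statistics

SUBSCALES = ("mental", "physical", "temporal", "performance", "effort", "frustration")

def raw_tlx(ratings):
    """Overall raw TLX score: mean of the six subscale ratings (0-100 scale)."""
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

def describe(values):
    """Mean, median, mode, and standard deviation for one group of scores."""
    return {
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        "mode": statistics.mode(values),
        "stdev": statistics.stdev(values) if len(values) > 1 else 0.0,
    }

# Illustrative ratings for two participants in one expertise group.
group_ratings = [
    {"mental": 55, "physical": 20, "temporal": 40, "performance": 30, "effort": 50, "frustration": 25},
    {"mental": 60, "physical": 25, "temporal": 45, "performance": 35, "effort": 55, "frustration": 30},
]
overall_scores = [raw_tlx(r) for r in group_ratings]
print(describe(overall_scores))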



References

[1] United States Bureau of Labor Statistics, "Employment Projections by Major Industry Sector," Web. 26 Jan. 2017.

[2] A. Caruso, "Statistics of U.S. Businesses Employment and Payroll Summary: 2012," Economy-Wide Statistics Briefs, U.S. Census Bureau, February 2015.

[3] G. Carrick and J. Proctor, "2015 Women in Manufacturing Study: Exploring the Gender Gap," Deloitte.com, 2015.

[4] S. G. Hart and L. E. Staveland, "Development of NASA-TLX (Task Load Index): Results of Empirical and Theoretical Research," in P. A. Hancock and N. Meshkati (Eds.), Advances in Psychology, Vol. 52, North-Holland, 1988, pp. 139-183.

[5] NASA, NASA Task Load Index (TLX) v. 1.0 Manual, 1986.
