TY - GEN
T1 - Customized Handling of Unintended Interface Operation In Assistive Robots
AU - Gopinath, Deepak
AU - Javaremi, Mahdieh Nejati
AU - Argall, Brenna
N1 - Funding Information:
This material is based upon work supported by the National Science Foundation under Grants IIS-1552706 and CNS-1544741, and U.S. Office of Naval Research under the Award Number N00014-16-1-2247. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or the U.S. Office of Naval Research.
Publisher Copyright:
© 2021 IEEE
PY - 2021
Y1 - 2021
N2 - We present an assistance system that reasons about a human's intended actions during robot teleoperation in order to provide appropriate modifications to unintended behavior. Existing methods typically treat the human and control interface as a black box, assume the measured user input is noise-free, and use this signal to infer task-level human intent. We recognize that the signal measured through the interface is masked by the physical limitations of the user and the interface they are required to use. With this key insight, we model the human's physical interaction with a control interface during robot teleoperation and explicitly distinguish between interface-level intended and measured physical actions. By reasoning over the unobserved intentions using model-based inference techniques, our assistive system provides customized modifications to a user's issued commands. We validate our algorithm both in simulation and with a 10-person human subject study in which we evaluate the performance of the proposed assistance paradigms. Our results show that the assistance paradigms significantly reduced task completion time, number of mode switches, cognitive workload, and user frustration, and improved overall user satisfaction.
AB - We present an assistance system that reasons about a human's intended actions during robot teleoperation in order to provide appropriate modifications to unintended behavior. Existing methods typically treat the human and control interface as a black box, assume the measured user input is noise-free, and use this signal to infer task-level human intent. We recognize that the signal measured through the interface is masked by the physical limitations of the user and the interface they are required to use. With this key insight, we model the human's physical interaction with a control interface during robot teleoperation and explicitly distinguish between interface-level intended and measured physical actions. By reasoning over the unobserved intentions using model-based inference techniques, our assistive system provides customized modifications to a user's issued commands. We validate our algorithm both in simulation and with a 10-person human subject study in which we evaluate the performance of the proposed assistance paradigms. Our results show that the assistance paradigms significantly reduced task completion time, number of mode switches, cognitive workload, and user frustration, and improved overall user satisfaction.
UR - http://www.scopus.com/inward/record.url?scp=85125461461&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85125461461&partnerID=8YFLogxK
U2 - 10.1109/ICRA48506.2021.9561096
DO - 10.1109/ICRA48506.2021.9561096
M3 - Conference contribution
AN - SCOPUS:85125461461
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 10406
EP - 10412
BT - 2021 IEEE International Conference on Robotics and Automation, ICRA 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2021 IEEE International Conference on Robotics and Automation, ICRA 2021
Y2 - 30 May 2021 through 5 June 2021
ER -