TY - GEN
T1 - Bounding Perception Neural Network Uncertainty for Safe Control of Autonomous Systems
AU - Wang, Zhilu
AU - Huang, Chao
AU - Wang, Yixuan
AU - Hobbs, Clara
AU - Chakraborty, Samarjit
AU - Zhu, Qi
N1 - Publisher Copyright:
© 2021 EDAA.
PY - 2021/2/1
Y1 - 2021/2/1
N2 - Future autonomous systems will rely on advanced sensors and deep neural networks for perceiving the environment, and then utilize the perceived information for system planning, control, adaptation, and general decision making. However, due to the inherent uncertainties of the dynamic environment and the lack of methodologies for predicting neural network behavior, the perception modules in autonomous systems often cannot provide deterministic guarantees and may sometimes lead the system into unsafe states (e.g., as evidenced by a number of high-profile accidents involving experimental autonomous vehicles). This has significantly impeded the broader application of machine learning techniques, particularly those based on deep neural networks, in safety-critical systems. In this paper, we discuss these challenges, define open research problems, and introduce our recent work on developing formal methods for quantitatively bounding the output uncertainty of perception neural networks with respect to input perturbations, and on leveraging such bounds to formally ensure the safety of system control. Unlike most existing works that focus only on either the perception module or the control module, our approach provides a holistic end-to-end framework that bounds the perception uncertainty and addresses its impact on control.
AB - Future autonomous systems will rely on advanced sensors and deep neural networks for perceiving the environment, and then utilize the perceived information for system planning, control, adaptation, and general decision making. However, due to the inherent uncertainties of the dynamic environment and the lack of methodologies for predicting neural network behavior, the perception modules in autonomous systems often cannot provide deterministic guarantees and may sometimes lead the system into unsafe states (e.g., as evidenced by a number of high-profile accidents involving experimental autonomous vehicles). This has significantly impeded the broader application of machine learning techniques, particularly those based on deep neural networks, in safety-critical systems. In this paper, we discuss these challenges, define open research problems, and introduce our recent work on developing formal methods for quantitatively bounding the output uncertainty of perception neural networks with respect to input perturbations, and on leveraging such bounds to formally ensure the safety of system control. Unlike most existing works that focus only on either the perception module or the control module, our approach provides a holistic end-to-end framework that bounds the perception uncertainty and addresses its impact on control.
UR - http://www.scopus.com/inward/record.url?scp=85108270661&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85108270661&partnerID=8YFLogxK
U2 - 10.23919/DATE51398.2021.9474204
DO - 10.23919/DATE51398.2021.9474204
M3 - Conference contribution
AN - SCOPUS:85108270661
T3 - Proceedings - Design, Automation and Test in Europe, DATE
SP - 1745
EP - 1750
BT - Proceedings of the 2021 Design, Automation and Test in Europe, DATE 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2021 Design, Automation and Test in Europe Conference and Exhibition, DATE 2021
Y2 - 1 February 2021 through 5 February 2021
ER -