TY - GEN
T1 - End-to-end uncertainty-based mitigation of adversarial attacks to automated lane centering
AU - Jiao, Ruochen
AU - Liang, Hengyi
AU - Sato, Takami
AU - Shen, Junjie
AU - Chen, Qi Alfred
AU - Zhu, Qi
N1 - Funding Information:
We gratefully acknowledge the support from NSF grants CNS-1839511, CNS-1834701, IIS-1724341, CNS-1850533, CNS-1929771, CNS-1932464, USDOT grant 69A3552047138 for CARMEN UTC (University Transportation Center), and ONR grant N00014-19-1-2496.
Publisher Copyright:
© 2021 IEEE.
PY - 2021/7/11
Y1 - 2021/7/11
N2 - In the development of advanced driver-assistance systems (ADAS) and autonomous vehicles, machine learning techniques based on deep neural networks (DNNs) have been widely used for vehicle perception. These techniques offer significant improvements in average perception accuracy over traditional methods; however, they have been shown to be susceptible to adversarial attacks, where small perturbations in the input may cause significant errors in the perception results and lead to system failure. Most prior works addressing such adversarial attacks focus only on the sensing and perception modules. In this work, we propose an end-to-end approach that addresses the impact of adversarial attacks throughout the perception, planning, and control modules. In particular, we choose a target ADAS application, the automated lane centering system in OpenPilot, quantify the perception uncertainty under adversarial attacks, and design a robust planning and control module accordingly based on the uncertainty analysis. We evaluate our proposed approach using both a public dataset and a production-grade autonomous driving simulator. The experimental results demonstrate that our approach can effectively mitigate the impact of adversarial attacks and achieve 55%-90% improvement over the original OpenPilot.
AB - In the development of advanced driver-assistance systems (ADAS) and autonomous vehicles, machine learning techniques based on deep neural networks (DNNs) have been widely used for vehicle perception. These techniques offer significant improvements in average perception accuracy over traditional methods; however, they have been shown to be susceptible to adversarial attacks, where small perturbations in the input may cause significant errors in the perception results and lead to system failure. Most prior works addressing such adversarial attacks focus only on the sensing and perception modules. In this work, we propose an end-to-end approach that addresses the impact of adversarial attacks throughout the perception, planning, and control modules. In particular, we choose a target ADAS application, the automated lane centering system in OpenPilot, quantify the perception uncertainty under adversarial attacks, and design a robust planning and control module accordingly based on the uncertainty analysis. We evaluate our proposed approach using both a public dataset and a production-grade autonomous driving simulator. The experimental results demonstrate that our approach can effectively mitigate the impact of adversarial attacks and achieve 55%-90% improvement over the original OpenPilot.
UR - http://www.scopus.com/inward/record.url?scp=85118894989&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85118894989&partnerID=8YFLogxK
U2 - 10.1109/IV48863.2021.9575549
DO - 10.1109/IV48863.2021.9575549
M3 - Conference contribution
AN - SCOPUS:85118894989
T3 - IEEE Intelligent Vehicles Symposium, Proceedings
SP - 266
EP - 273
BT - 32nd IEEE Intelligent Vehicles Symposium, IV 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 32nd IEEE Intelligent Vehicles Symposium, IV 2021
Y2 - 11 July 2021 through 17 July 2021
ER -