Abstract
Neural networks are increasingly applied to control and decision making for learning-enabled cyber-physical systems (LE-CPSs). They have shown promising performance without requiring the development of complex physical models; however, their adoption is significantly hindered by concerns about their safety, robustness, and efficiency. In this work, we propose COCKTAIL, a novel design framework that automatically learns a neural network-based controller from multiple existing control methods (experts), which may be either model-based or neural network-based. In particular, COCKTAIL first performs reinforcement learning to learn an optimal system-level adaptive mixing strategy that combines the underlying experts with dynamically assigned weights, and then conducts teacher-student distillation with probabilistic adversarial training and regularization to synthesize a student neural network controller with improved control robustness (measured by a safe control rate metric with respect to adversarial attacks or measurement noises), control energy efficiency, and verifiability (measured by the computation time for verification). Experiments on three non-linear systems demonstrate significant advantages of our approach on these properties over various baseline methods.
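To make the first stage of the described approach concrete, below is a minimal sketch, not the paper's implementation, of a system-level adaptive mixing controller that combines expert outputs using state-dependent weights produced by a learned policy. All class, function, and variable names here are hypothetical, and the reinforcement learning that would train the weight policy is omitted.

```python
# Hypothetical sketch of adaptive expert mixing: a weight policy maps the
# current state to one score per expert, the scores are normalized with a
# softmax, and the applied control is the weighted sum of expert controls.
import numpy as np


def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    z = z - np.max(z)
    e = np.exp(z)
    return e / np.sum(e)


class AdaptiveMixingController:
    """Mixes expert controllers with dynamically assigned weights.

    `experts` is a list of callables mapping a state vector to a control
    vector; `weight_policy` maps a state vector to unnormalized scores
    (one per expert). In the paper's setting such a policy would be
    trained with reinforcement learning, which is not shown here.
    """

    def __init__(self, experts, weight_policy):
        self.experts = experts
        self.weight_policy = weight_policy

    def control(self, state):
        weights = softmax(self.weight_policy(state))            # state-dependent weights
        expert_controls = np.stack([f(state) for f in self.experts])
        return weights @ expert_controls                        # weighted mixture of experts


# Toy usage with two hand-written "experts" for a 1-D control input.
if __name__ == "__main__":
    experts = [
        lambda x: np.array([-1.5 * x[0]]),           # e.g., a proportional (model-based) expert
        lambda x: np.array([np.tanh(-2.0 * x[0])]),  # e.g., a saturated, NN-like expert
    ]
    W = np.array([[0.5], [-0.5]])                    # placeholder linear scoring policy
    controller = AdaptiveMixingController(experts, lambda x: W @ x)
    print(controller.control(np.array([0.3])))
```

The resulting mixed (teacher) controller could then serve as the target of a teacher-student distillation step, as the abstract describes, with adversarial training applied to the student network.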
Original language | English (US) |
---|---|
Title of host publication | 2021 58th ACM/IEEE Design Automation Conference, DAC 2021 |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 397-402 |
Number of pages | 6 |
ISBN (Electronic) | 9781665432740 |
DOIs | |
State | Published - Dec 5 2021 |
Event | 58th ACM/IEEE Design Automation Conference, DAC 2021 - San Francisco, United States; Duration: Dec 5 2021 → Dec 9 2021 |
Publication series
Name | Proceedings - Design Automation Conference |
---|---|
Volume | 2021-December |
ISSN (Print) | 0738-100X |
Conference
Conference | 58th ACM/IEEE Design Automation Conference, DAC 2021 |
---|---|
Country/Territory | United States |
City | San Francisco |
Period | 12/5/21 → 12/9/21 |
Funding
We gratefully acknowledge the support from NSF grants 1834701, 1839511, 1724341, 2038853, 2048075, 2008827, 2015568, 1934931, and ONR grant N00014-19-1-2496, Simons Institute (Theory of Reinforcement Learning), Amazon, J.P. Morgan, and Two Sigma.
ASJC Scopus subject areas
- Computer Science Applications
- Control and Systems Engineering
- Electrical and Electronic Engineering
- Modeling and Simulation