TY - GEN
T1 - Towards verification-aware knowledge distillation for neural-network controlled systems
T2 - 38th IEEE/ACM International Conference on Computer-Aided Design, ICCAD 2019
AU - Fan, Jiameng
AU - Huang, Chao
AU - Li, Wenchao
AU - Chen, Xin
AU - Zhu, Qi
N1 - Funding Information:
ACKNOWLEDGMENT This work was supported in part by the DARPA BRASS program under agreement number FA8750-16-C-0043, by NSF under award numbers 1834701, 1834324, 1839511, 1724341, and 1646497, and by the Air Force Research Laboratory (AFRL).
Publisher Copyright:
© 2019 IEEE.
PY - 2019/11
Y1 - 2019/11
N2 - Neural networks are widely used in applications ranging from classification to control. Although these networks are composed of simple arithmetic operations, the presence of nonlinear activation functions makes them challenging to formally verify for properties such as reachability. In this paper, we observe that the Lipschitz continuity of a neural network not only plays a major role in constructing reachable sets for neural-network controlled systems but can also be systematically controlled during training. We build on this observation to develop a novel verification-aware knowledge distillation framework that transfers the knowledge of a trained network to a new, easier-to-verify network. Experimental results show that our method can substantially improve the reachability analysis of neural-network controlled systems for several state-of-the-art tools.
AB - Neural networks are widely used in applications ranging from classification to control. Although these networks are composed of simple arithmetic operations, the presence of nonlinear activation functions makes them challenging to formally verify for properties such as reachability. In this paper, we observe that the Lipschitz continuity of a neural network not only plays a major role in constructing reachable sets for neural-network controlled systems but can also be systematically controlled during training. We build on this observation to develop a novel verification-aware knowledge distillation framework that transfers the knowledge of a trained network to a new, easier-to-verify network. Experimental results show that our method can substantially improve the reachability analysis of neural-network controlled systems for several state-of-the-art tools.
UR - http://www.scopus.com/inward/record.url?scp=85077786637&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85077786637&partnerID=8YFLogxK
U2 - 10.1109/ICCAD45719.2019.8942059
DO - 10.1109/ICCAD45719.2019.8942059
M3 - Conference contribution
AN - SCOPUS:85077786637
T3 - IEEE/ACM International Conference on Computer-Aided Design, Digest of Technical Papers, ICCAD
BT - 2019 IEEE/ACM International Conference on Computer-Aided Design, ICCAD 2019 - Digest of Technical Papers
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 4 November 2019 through 7 November 2019
ER -
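
N1 - Editor's note: the following is a minimal, hypothetical Python/PyTorch sketch of the kind of verification-aware distillation the abstract describes, not the authors' implementation. A smaller student controller is trained to imitate the teacher's outputs while a product-of-spectral-norms upper bound on the student's Lipschitz constant is penalized, making the distilled network easier for reachability tools to analyze. The network sizes, the regularization weight lam, and the random sampling of controller inputs are illustrative assumptions.

# Hypothetical sketch (not from the paper): distill a trained teacher
# controller into a smaller student while regularizing an upper bound on
# the student's Lipschitz constant.
import torch
import torch.nn as nn

def lipschitz_upper_bound(net: nn.Sequential) -> torch.Tensor:
    # For a feed-forward network with 1-Lipschitz activations (e.g. tanh),
    # the product of per-layer spectral norms upper-bounds the Lipschitz constant.
    bound = torch.tensor(1.0)
    for layer in net:
        if isinstance(layer, nn.Linear):
            bound = bound * torch.linalg.matrix_norm(layer.weight, ord=2)
    return bound

def distill_step(teacher, student, states, optimizer, lam=0.1):
    # Match the teacher's control outputs (regression loss) and penalize the
    # student's Lipschitz bound; lam trades imitation accuracy for verifiability.
    with torch.no_grad():
        target = teacher(states)
    pred = student(states)
    loss = nn.functional.mse_loss(pred, target) + lam * lipschitz_upper_bound(student)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Illustrative usage on randomly sampled controller inputs in [-1, 1]^4.
teacher = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
student = nn.Sequential(nn.Linear(4, 16), nn.Tanh(), nn.Linear(16, 1))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
for _ in range(1000):
    distill_step(teacher, student, torch.rand(256, 4) * 2 - 1, opt)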